APPROACHES TO BOOTSTRAPPING VOLUME 1
LANGUAGE ACQUISITION & LANGUAGE DISORDERS
EDITORS Harald Clahsen University of Essex
Lydia White McGill University
EDITORIAL BOARD Melissa Bowerman (Max Planck Institut für Psycholinguistik, Nijmegen) Katherine Demuth (Brown University) Nina Hyams (University of California at Los Angeles) William O’Grady (University of Hawaii) Jürgen Meisel (Universität Hamburg) Mabel Rice (University of Kansas) Luigi Rizzi (University of Siena) Bonnie Schwartz (University of Durham) Antonella Sorace (University of Edinburgh) Karin Stromswold (Rutgers University) Jürgen Weissenborn (Universität Potsdam) Frank Wijnen (Utrecht University)
Volume 23
Jürgen Weissenborn and Barbara Höhle (eds.) Approaches to Bootstrapping. Phonological, lexical, syntactic and neurophysiological aspects of early language acquisition. Volume 1.
APPROACHES TO BOOTSTRAPPING PHONOLOGICAL, LEXICAL, SYNTACTIC AND NEUROPHYSIOLOGICAL ASPECTS OF EARLY LANGUAGE ACQUISITION VOLUME 1
Edited by
JÜRGEN WEISSENBORN BARBARA HÖHLE University of Potsdam
JOHN BENJAMINS PUBLISHING COMPANY AMSTERDAM/PHILADELPHIA
The paper used in this publication meets the minimum requirements of American National Standard for Information Sciences — Permanence of Paper for Printed Library Materials, ANSI Z39.48-1984.
Library of Congress Cataloging-in-Publication Data

Approaches to bootstrapping : phonological, lexical, syntactic and neurophysiological aspects of early language acquisition / edited by Jürgen Weissenborn, Barbara Höhle.
p. cm. -- (Language acquisition & language disorders, ISSN 0925-0123 ; v. 23-24)
Includes bibliographical references and index.
1. Language acquisition. 2. Language awareness in children. I. Weissenborn, Jürgen. II. Höhle, Barbara. III. Series.
P118.A66 2000
401'.93--dc21 00-058560
ISBN 90 272 2491 9 (Eur.) / 1 55619 992 9 (US) (v. 1 ; alk. paper)
ISBN 90 272 2492 7 (Eur.) / 1 55619 993 7 (US) (v. 2 ; alk. paper)

© Copyright 2001 - John Benjamins B.V.
No part of this book may be reproduced in any form, by print, photoprint, microfilm, or any other means, without written permission from the publisher.
John Benjamins Publishing Co. • P.O. Box 36224 • 1020 ME Amsterdam • The Netherlands
John Benjamins North America • P.O. Box 27519 • Philadelphia PA 19118-0519 • USA
Table of Contents

Introduction  vii

Part I: Early Word Learning and its Prerequisites

Bootstrapping from the Signal: Some further directions
Peter W. Jusczyk  3

Contributions of Prosody to Infants' Segmentation and Representation of Speech
Catharine H. Echols  25

Implicit Memory Support for Language Acquisition
Cynthia Fisher & Barbara A. Church  47

How Accessible is the Lexicon in Motherese?
Nan Bernstein Ratner & Becky Rooney  71

Bootstrapping a First Vocabulary
Lila Gleitman & Henry Gleitman  79

Infants' Developing Competence in Recognizing and Understanding Words in Fluent Speech
Anne Fernald, Gerald W. McRoberts & Daniel Swingley  97

Lemma Structure in Language Learning: Comments on representation and realization
Cecile McKee & Noriko Iwasaki  125

Part II: From Input Cues to Syntactic Knowledge

Signal to Syntax: Building a bridge
LouAnn Gerken  147

A Reappraisal of Young Children's Knowledge of Grammatical Morphemes
Roberta Michnick Golinkoff, Kathy Hirsh-Pasek & Melissa A. Schweisguth  167

Predicting Grammatical Classes from Phonological Cues: An empirical test
Gert Durieux & Steven Gillis  189

Pre-lexical Setting of the Head-Complement Parameter through Prosody
Maria Teresa Guasti, Anne Christophe, Brit van Ooyen & Marina Nespor  231

Discovering Word Order Regularities: The role of prosodic information for early parameter setting
Barbara Höhle, Jürgen Weissenborn, Michaela Schmitz & Anja Ischebeck  249

On the Prosody/Lexicon Interface in Learning Word Order: A study of normally developing and language-impaired children
Zvi Penner, Karin Wymann & Jürgen Weissenborn  267

Index  295
Introduction
Jürgen Weissenborn & Barbara Höhle
University of Potsdam
There is growing consensus that by the age of three children have acquired the basic phonological, morpho-syntactic, and semantic regularities of the target language, irrespective of the language or languages to be learned and of the modality in which learning takes place, i.e., spoken or signed language. Evidence is also accumulating that the schedule for the major milestones of language development, in the productive as well as the receptive mode, is largely identical from language to language (for a detailed overview see Jusczyk 1997). How is this early learning, or bootstrapping, into the native language possible? The notion of bootstrapping implies that the child, on the basis of already existing knowledge and information processing capacities, can make use of specific types of information in the linguistic and non-linguistic input in order to determine the language-particular regularities which constitute the grammar and the lexicon of her native language. Depending on the type of information the child makes use of, we can distinguish prosodic, lexico-semantic, conceptual, morpho-syntactic, and pragmatic bootstrapping. The central assumption behind the bootstrapping approach is that there is a systematic relationship between properties of the input at one level of representation, to which the child already has access, and another level of representation. Examples are the intensively studied parallelism between prosodic and syntactic structure, and that between lexico-semantic and syntactic structure (e.g., Gleitman 1990; Pinker 1994). In other words, the child makes use of the regularities that characterize the interface, i.e., the interaction, between different linguistic and non-linguistic domains of representation. A problem with this strategy is, as has repeatedly been pointed out, that the parallelism between levels of representation is only partial (e.g., Selkirk 1984; Jackendoff 1997).
The child must thus use other means to solve the problems that result from this type of discrepancy. It could be
that the child makes use of different types of information in order to overcome these difficulties (Hirsh-Pasek & Golinkoff 1996; Mattys, Jusczyk, Luce & Morgan 1999; Morgan, Shi & Allopenna 1996). Other questions related to the process of bootstrapping are whether and how the bootstrapping strategies and their interrelation may change during development. Such a change is to be expected given the constantly increasing knowledge of the child in the linguistic and non-linguistic domains. For example, the growing lexicon of the child, especially in the domain of the closed-class, functional vocabulary (which in languages like English, French or German constitutes about 50% of the lexical tokens of any given text), should considerably facilitate and enhance the lexical (e.g., word segmentation and categorization) and syntactic (e.g., determination of syntactic boundaries) bootstrapping capacities of the child because of the distributional properties of these items. That is, from a very early stage the child should be able to apply — albeit only to a certain extent — an adult-like top-down parsing strategy (e.g., Höhle & Weissenborn 2000). We may thus have to reckon with a constantly changing hierarchy of bootstrapping strategies. The extent to which these changes may result in the attrition of bootstrapping capacities that are no longer in use is not clear. The best-known evidence for changes in the sensitivity of the child to distinctions in the input is the attested restriction of the child's segmental discrimination capacities to the phonological contrasts of the target language (Werker & Lalonde 1988).
In addition to depending on the perceptual and representational capacities of the child in the different linguistic and non-linguistic domains, the success of bootstrapping strategies will also depend on the availability of information processing capacities like memory and attention, which are necessary to integrate the information extracted from the input into the learning mechanisms. Thus, rule formation on the basis of distributional learning probably puts particular demands on memory because of the necessity to keep track of the relevant co-occurrence relations. The existence of such frequency effects in prelinguistic children points to the importance of memory processes (e.g., Jusczyk, Luce & Charles-Luce 1994). Consequently, changes in the bootstrapping capacities of the child may also be the result of changes in her information processing capacities, i.e., changes in memory and attentional resources, such as changes in (short-term) auditory memory or in the capacity of the child to coordinate her eye gaze with that of the caretaker (e.g., Adams & Gathercole 1995; Baldwin 1995). In order to understand the acquisition process, it is crucial to ask to what extent (and how) the child uses the information accessed in the input, via bootstrapping mechanisms, in her rule-learning mechanisms. The fact that the child is
sensitive to a certain property of the input which may be relevant, from a theoretical perspective, for the acquisition of a particular aspect of linguistic knowledge does not yet mean that she actually uses this information to acquire that knowledge. Last but not least, we also have to reckon with the fact that all the capacities and related processes mentioned so far may be affected by changes in the biological, neurophysiological environment in which they are embedded, and which in turn will also be affected by the perceptual and cognitive processes it supports. Thus, the assumption that certain processes, like the processing of closed-class functional elements, are subject to an increasing degree of automatization may be the expression of changes in the underlying brain structure (e.g., Friederici 1995). Another effect of maturational processes in the brain may be the existence of critical periods for the acquisition of specific aspects of linguistic knowledge. The main aim of the present collection of studies is to contribute to the clarification and understanding of the questions and issues mentioned above. One important aspect is the interdisciplinary and cross-linguistic approach taken. Apart from experimental studies, the study of the acquisition of different languages which differ only minimally in some well-defined respect is a powerful tool for collecting evidence about the structure and interaction of bootstrapping mechanisms. The present studies should both challenge and stimulate efforts in related areas of research which are only marginally represented in these two volumes, such as the increasingly active field of the modelling of acquisition processes, the study of the interaction between general cognitive and linguistic development, the reflection on general models of language development, and especially the study of developmental language disorders.
If, as we mentioned at the beginning, and as shown pervasively in research on language acquisition in recent years, the decisive steps into language are taken during the first two years of life, on the basis of the powerful bootstrapping capacities displayed by the child, then it seems promising to investigate the hypothesis that deficiencies in these bootstrapping capacities contribute largely to the emergence of developmental language disorders. For the study of the origin of language disorders, the importance of getting a clearer picture of the contribution of the different bootstrapping mechanisms and their interactions in normal language development is becoming more and more clear. As mentioned before, the relative strength of the contribution of the different bootstrapping strategies to the extraction of language-specific regularities from the input seems to change over time. What we do not yet know is how
much development differs across subjects and how much deviation from the general course is tolerable without constituting a risk for successful language acquisition. In order to find out where the potential risks for the emergence of language disorders lie, it is necessary to compare the language development of unimpaired and language-impaired children over time. Initial results from current longitudinal studies of impaired and unimpaired language acquisition point to the fruitfulness of this approach (e.g., Benasich 1998; Lyytinen 1997). Longitudinal data are also needed to answer the question of which of the child's early linguistic and non-linguistic capacities underlying the bootstrapping mechanisms are innately determined and which are rather the result of epigenetic processes. The papers contained in the two volumes are organized into five chapters. Chapter one concentrates on the prerequisites of early word learning. In his paper Jusczyk discusses the beginnings of word segmentation abilities at around 7 to 8 months of age. He presents evidence that English children mainly use prosodic cues, with an initial preference for trochaic rhythmical patterns, but also benefit from phonotactic constraints, allophonic cues and distributional regularities from very early on. Furthermore, he reviews findings on the detection of function words in the input as an aid for the development of syntactic knowledge. Echols reports further evidence for a trochaic segmentation strategy in English children. Moreover, she argues that infants are especially sensitive to perceptually salient syllables in the speech stream. Besides stressed syllables, final syllables have a high degree of perceptual saliency. She presents findings according to which final-syllable lengthening is more pronounced in child-directed speech than in adult-directed speech.
This fits with production patterns: stressed and final syllables are more likely to be included in the speech of one-word speakers than unstressed non-final syllables. This saliency pattern could also contribute to the tendency to extract trochaic feet from the input. Fisher and Church discuss another open question with regard to lexical processing: namely, how the initially rather poor word recognition abilities of young children develop into the efficient and rapid recognition skills found in adults. Differences in processing as well as in lexical representations are discussed as potential sources of these differences between children and adults. In a series of experiments the authors found evidence that the basic word identification processes of preschoolers resemble those of adults. On the basis of these findings it is argued that the learning mechanisms children use to create lexical phonological representations are the same as those that create long-term auditory word priming in adults, i.e., a mechanism that continually updates the representations of the sounds of words to reflect ongoing auditory experience. Bernstein Ratner and Rooney provide evidence that certain structural properties of child-directed speech facilitate the early stages of word learning, especially the segmentation of the speech input into word-like units. Their analysis of 10,000 utterances spoken to children between 13 and 20 months of age shows several features that might assist children in solving the segmentation problem, namely a high proportion of very short utterances with many repetitions of lexical items and syntactic frames. Along with the demonstrated abilities of young children to use input information, these specific input characteristics might support early language acquisition. With the study by Gleitman and Gleitman the focus of the discussion shifts to the semantic aspects of the acquisition of the lexicon: they ask how word meanings are learned and how word meanings function in the semantics of sentences. They argue that one potential source for the learning of word meanings lies in the child's capacity to match the occurrence of words with the scenes and events that accompany them in adult-to-child interactions. Furthermore, within the syntactic bootstrapping account, language-internal contextual information is assumed to provide another powerful source of information on word meaning. Experiments with adults reveal that these different sources of information might be relevant for the acquisition of the meaning of different word classes: given only the extralinguistic context of a word's use (video scenes without sound), adult subjects were much better at identifying the meanings of nouns than of verbs. Verb identification improved when subjects were given sentence structures in which only the grammatical morphemes appeared and all lexical morphemes were replaced by nonsense syllables.
In language acquisition, these different information sources for different word classes might be related to the initial dominance of nouns in children's production. Fernald, McRoberts and Swingley focus on the developmental changes in word comprehension during the second year of life. They report findings that the speed and accuracy of recognizing familiar words increase significantly within this period and that children from 18 months on already show the features of incremental processing that are also found in adults. They argue that these changes may reflect changes in the nature of lexical representations as well as changes in general perceptual and cognitive processing abilities. McKee and Iwasaki argue in a similar direction on the basis of production data. Within the framework of a model of lemma-driven syntactic processing, they point out that the misuse and omission of closed-class elements in children's production data may have several causes: it could either result from
incomplete linguistic knowledge or from a deficient processing system that translates this underlying knowledge into actual utterances. A critical feature for distinguishing between these alternatives is the consistency with which a pattern of misuse appears: a deficient processing system allows for more variability than a lack of linguistic knowledge. Based on data on the acquisition of Japanese, they show the relevance of this criterion. Chapter two focuses on the development of early syntactic knowledge. In the first paper Gerken argues that one of the main tasks of future research is to build a bridge between input features and the acquired system in the domain of syntax. She focuses on the question of which input cues might help the child to detect phrase and clause boundaries in order to find out about syntactic structure and syntactic categories. Besides mentioning prosodic cues, she draws attention to the importance of the processing of grammatical morphemes, which could signal phrase and clause boundaries and could also be used to assign a syntactic category to adjacent words. She points out that the recent findings on the richness of the signal and the high sensitivity of infants to distributional properties of the input should shed new light on the discussion of the logical problem of the acquisition of syntax. Golinkoff, Hirsh-Pasek and Schweisguth follow Gerken's line, arguing that the sensitivity to grammatical morphemes may contribute in important ways to the acquisition of syntax. They report findings of an experiment that supports the assumption of an early sensitivity to grammatical morphemes: children between 18 and 20 months of age react differently to correctly inflected verb forms than to verbs with a "wrong" inflectional ending or a nonsense syllable replacing the inflection. A slightly different perspective on possible input cues to the acquisition of syntactic categories is taken in the paper by Durieux and Gillis.
They discuss several phonological features of a word itself that could be used to predict its syntactic category. They show that the integration of several phonological cues (among them stress, length, and vowel and consonant quality) leads to good predictions of the syntactic category for English as well as Dutch words. But it is still an open question whether infants can benefit from these cues in natural language acquisition. Within the framework of the parameter-setting model for the acquisition of syntax, Guasti, Nespor, Christophe and van Ooyen argue that children use the correlation of prosodic and syntactic structure — especially the rhythmic pattern within the intonational phrase — to find out whether their target language is head-initial or head-final. Following this idea, Höhle, Weissenborn, Schmitz and Ischebeck present
the results of a series of studies on the sensitivity of German children to word order regularities. They found clear prosodic differences between sentences involving head-complement constructions and sentences involving head-modifier constructions. This may help children to discriminate between complements and modifiers. Furthermore, they present evidence that children of 20 to 21 months of age may discriminate grammatical from ungrammatical word order if the difference in grammaticality correlates with differences in prosody. Penner, Wymann and Weissenborn discuss an apparent asymmetry in the speech of children learning German between systematic violations of the canonical strong-weak pattern in speech production and target-consistent word order, which is assumed to be acquired on the basis of knowledge of the stress pattern of the target language. They explain the delay at the production level by the fact that intricate interface data force the child to resort to intermediate underspecified representations of phonological phrases. Chapter three focuses on the interaction between prosodic and morpho-syntactic factors in the development of linguistic knowledge. Demuth reports an account of syllable omission and the development of grammatical morphology in the early mono- and multimorphemic utterances of a Spanish child on the basis of a theory of Prosodic Constraints. She shows that these constraints are different from those found in English. The main result is that the appearance of grammatical morphology depends on the level at which grammatical morphemes are prosodified, with lower-level elements being acquired before higher-level elements. She concludes by pointing out possible implications of her approach for the study of individual differences, for the identification of children at risk of language delay, and for a more general constraint-based approach to language acquisition.
In a similar vein, Lleó shows in her contribution that the fact that Spanish determiners are acquired well before determiners appear in the language of German-speaking children is explained by the different prosodic structures of the article in the two languages. These prosodic differences explain why the Spanish article already appears with single nouns, whereas in German the article is first realized within larger structures. These results provide further evidence for the importance of the prosody-syntax interface for the acquisition of grammatical knowledge. This importance is confirmed by the findings of the study by Freitas, Miguel and Hub Faria on the acquisition of codas in European Portuguese. They show that the acquisition of elements of syllabic structure like codas may differ depending on the grammatical features encoded by them in the target language. Thus, codas with fricatives encoding plural are acquired earlier than
one would expect on the basis of prosodic factors alone. This finding opens up new perspectives on the intricate interaction of different linguistic levels in development, and especially draws attention to the fact that from very early on abstract grammatical features must be taken into account. Fikkert discusses data from the development of the prosodic structure of monomorphemic and compound nouns in Dutch. In this domain, contrary to a widely held view, it is not the case that simple structures are acquired earlier than complex ones. What she observes instead is that the acquisition of compounds guides the child in the acquisition of monomorphemic words consisting of more than one foot. Her analysis is formulated in terms of a parameter-setting approach which assumes that parameters are set from an initial unmarked (default) value to the marked value when the required evidence is encountered in the input. In his paper Lebeaux develops an account of how the properties of telegraphic speech in children can be explained as the result of a prosodic-syntactic tree mapping at the phonology-syntax interface. More specifically, he argues that telegraphic speech is derived as a consequence of the child computing structure with two representations: the syntactic one and the prosodic one. The child attempts to find the maximal alignment of these two structures by factoring out the discrepancies introduced by generalized transformations operating on identical phonological and syntactic kernel structures. Peters proposes a model for the development of distinct closed-class lexical elements in English from an initially undifferentiated single protomorpheme occupying grammatical positions, which the child is assumed to discover on the basis of their prosodic characteristics.
The subsequent differentiation of this protomorpheme into three distinct classes (catenatives, auxiliaries, and modals) is the result of a gradual process of specification on the basis of growing information from phonological, semantic, and syntactic properties of the input. In the last paper of this section Strömqvist, Ragnarsdóttir and Richthoff show, on the basis of a particular cross-linguistic approach, namely the within-language-group comparison (Danish, Icelandic, Swedish), that subtle differences in the configuration of function words in terms of frequency, stress, word order, and ambiguity have an impact on the course and structure of acquisition. They provide evidence that the child starts with stressed, more concrete (e.g., deictic) elements which may serve as templates for the acquisition of unstressed, functionally different (e.g., expletive) forms, instantiating the developmental principle that new functions are first expressed by old forms. Chapter four deals with neurophysiological aspects of language acquisition.
Molfese, Narter, van Matre, Ellefson and Modglin give an overview of changes found in ERP patterns to linguistic stimuli in infancy and early childhood. In the domain of sound discrimination, ERPs reflect behavioral findings very closely, including categorical perception and the emergence of the discrimination of different speech cues at different times. Changes observed during early language development include changes in the temporal as well as the topological features of the ERPs. If words are used as stimuli, ERPs reflect whether the child rates the words as known or unknown. Furthermore, the paper discusses findings that ERPs may be used as a predictor of later language development: longitudinal data suggest that children who differ in their language abilities at three or five years of age already differ in their ERPs to speech at birth. Friederici and Hahne focus on ERP components that correlate with the processing of syntactic information. They report findings that adult-like, temporally distinct ERP patterns to semantic and syntactic violations can already be found in children from 6 years on, but that especially the component related to a first-pass syntactic parsing mechanism is slowed down in children. On the basis of a three-stage model of language comprehension they argue that the parsing routines of children are similar to those used by adults but have not yet reached the highly automatic status found in adults. St. George and Mills take a closer look at correlations between changes in ERP patterns and changes in word knowledge. They report that the vocabulary spurt goes hand in hand with dramatic changes in the topology of the ERP patterns for known and unknown words. They recorded ERP responses to open- and closed-class items during the second to fourth year of life, linking the acquisition of lexical knowledge to the acquisition of syntax.
While initial responses to open- and closed-class items are the same, at around 28 to 30 months of age the ERPs start to differ for the two classes, with greater lateralization to the left hemisphere for the closed class than for the open class. This difference is even larger for older children. Furthermore, the appearance of these changes seems to be linked to language abilities and not to chronological age. Chapter five groups together studies offering additional perspectives on language acquisition, addressing questions of methodology, the nature of linguistic primitives, and the development of bird song as compared to human language acquisition. Plunkett summarizes the recent contributions of cognitive neuroscience, experimental psycholinguistics, and neural network modelling to our understanding of how brain processing, neural development, genetic programmes, and the environment interact in language acquisition, focussing on the areas of early speech perception, word recognition and the acquisition of inflectional morphology. Each area demonstrates how linguistic development can be driven
by the interaction of general learning mechanisms, highly sensitive to particular statistical regularities in the input, with a richly structured environment. Bierwisch addresses the question of whether the primitives of linguistic knowledge, i.e., phonetic, semantic, and formal morpho-syntactic features, are a prerequisite or a result of the acquisition process. He concludes that they must basically be considered derived categories which emerge from the accommodation of actual data according to general principles of representation provided by Universal Grammar, which may be interpreted as genetically fixed dispositions. On the basis of an analysis of trajectories of song development in nightingales, Hultsch and Todt provide evidence that, in addition to interactional variables and a predisposition for sensitive phases, the development of bird song shares learning mechanisms with human language development, such as the hierarchical organization of memory, the chunking of information into distinct units (e.g., songs vs. sentences), and the sensitivity to contextual factors. These similarities have to be contrasted with the structural differences between bird song, which allows only a limited number of meaningful elements, and human language, which provides the speaker with the possibility of an unlimited number of novel meaningful utterances.
Acknowledgments

The preparation of these volumes has been made possible by the German National Science Foundation (DFG) and the Berlin-Brandenburg Academy of Science (BBAW) through the funding of a workshop in September 1996 within the framework of the research groups on "Formal Models of Cognitive Complexity" and "Rule Knowledge and Rule Learning", respectively. Special thanks go to Michaela Schmitz for editorial help, and to Susan Powers, Caroline Fery and Derek Houston for their assistance in the reviewing process.
References

Adams, A. M. and Gathercole, S. 1995. “Phonological working memory and speech production in preschool children”. Journal of Speech and Hearing Research 38: 403–414.
Baldwin, D. A. 1995. “Understanding the link between joint attention and language”. In Joint Attention: Its Origins and Role in Development, C. Moore and P. J. Dunham (eds.). Hillsdale, NJ: Lawrence Erlbaum.
INTRODUCTION
Benasich, A. A. 1998. “Temporal integration as an early predictor of speech and language development”. In Basic Mechanisms in Cognition and Language, C. von Euler, I. Lundberg and R. Llinas (eds.). Amsterdam: Elsevier.
Friederici, A. 1995. “The temporal structure of language processes: Developmental and neurophysiological aspects”. In Biological and Cultural Aspects of Language Development, B. M. Velichkovsky and D. M. Rumbaugh (eds.). Princeton: Princeton University Press.
Gleitman, L. 1990. “The structural sources of verb meaning”. Language Acquisition 1: 3–55.
Hirsh-Pasek, K. and Golinkoff, R. M. 1996. The Origins of Grammar: Evidence from Early Language Comprehension. Cambridge, Mass.: The MIT Press.
Höhle, B. and Weissenborn, J. 2000. “The origins of syntactic knowledge: Recognition of determiners in one-year-old German children”. In Proceedings of the 24th Annual Boston University Conference on Language Development, S. C. Howell, S. A. Fish and T. Keith-Lucas (eds.). Somerville: Cascadilla Press.
Jackendoff, R. 1997. The Architecture of the Language Faculty. Cambridge, MA: MIT Press.
Jusczyk, P. W. 1997. The Discovery of Spoken Language. Cambridge, MA: The MIT Press.
Jusczyk, P. W., Luce, P. A. and Charles-Luce, J. 1994. “Infants’ sensitivity to phonotactic patterns in the native language”. Journal of Memory and Language 33: 630–645.
Lyytinen, H. 1997. “In search of precursors of dyslexia: A prospective study of children at risk for reading problems”. In Dyslexia: Biology, Cognition and Intervention, M. Snowling and C. Hulme (eds.). London: Whurr Publishers.
Mattys, S., Jusczyk, P. W., Luce, P. A. and Morgan, J. L. 1999. “Phonotactic and prosodic effects on word segmentation in infants”. Cognitive Psychology 38: 465–494.
Morgan, J. L., Shi, R. and Allopenna, P. 1996. “Perceptual bases of rudimentary grammatical categories: Toward a broader conceptualization of bootstrapping”. In Signal to Syntax, J. L. Morgan and K. Demuth (eds.). Mahwah: Lawrence Erlbaum.
Pinker, S. 1994. “How could a child use verb syntax to learn verb semantics?”. Lingua 92: 377–410.
Selkirk, E. O. 1984. Phonology and Syntax: The Relation between Sound and Structure. Cambridge, Mass.: MIT Press.
Werker, J. and Lalonde, C. 1988. “Cross-language speech perception: Initial capabilities and developmental change”. Developmental Psychology 24: 672–683.
Part I. Early Word Learning and its Prerequisites
Bootstrapping from the Signal
Some further directions

Peter W. Jusczyk
Johns Hopkins University
When Kemler Nelson, Hirsh-Pasek and I first began to explore the possibility that infants might use information in the speech signal as an aid to discovering the underlying grammatical organization of their native language, we focused primarily on potential markers of syntactic boundaries (Hirsh-Pasek, Kemler Nelson, Jusczyk, Wright Cassidy, Druss & Kennedy 1987; Jusczyk, Hirsh-Pasek, Kemler Nelson, Kennedy, Woodward & Piwoz 1992; Kemler Nelson, Hirsh-Pasek, Jusczyk & Wright Cassidy 1989). Indeed, much of the discussion at the “Signal to Syntax” conference centered on the extent to which a grasp of the prosodic organization of utterances could provide the learner with useful cues to their syntactic organization (Morgan & Demuth 1996). Nevertheless, even at that meeting, some consideration was also given to the way that other kinds of information in the speech signal could help learners in discovering the syntactic organization of their native language (Gerken 1996; Jusczyk & Kemler Nelson 1996; Morgan, Allopenna & Shi 1996). One new avenue that we had just begun to pursue at that time concerned the ability of infants to segment words from fluent speech contexts. An ability to recognize words in fluent speech and to determine how these words are distributed is potentially useful in discovering the syntactic organization of utterances, especially in a language like English, where word order is used to convey syntactic relations. Even in a language such as Polish, in which word order is relatively free and inflections are used to mark syntactic relations, an ability to recognize and detect the occurrence of a given morphemic stem is likely to be an important step in determining the distributional properties of the inflections. In what follows, I review what we have learned about how infants begin to segment words from speech, and then discuss several different lines of investigation
that begin to relate word segmentation abilities to the earlier findings on bootstrapping from the speech signal.
The beginnings of word segmentation

Early research on infant speech perception addressed many important issues such as (1) infants’ capacities to discriminate subtle phonetic contrasts (Eimas 1974, 1975; Eimas & Miller 1980b; Eimas, Siqueland, Jusczyk & Vigorito 1971; Trehub 1973, 1976); (2) their abilities to compensate for variability in speech produced by differences in speaking rates or talker’s voice (Eimas & Miller 1980a; Kuhl 1980, 1983); and (3) the role of experience in the development of speech perception (Best, McRoberts & Sithole 1988; Lasky, Syrdal-Lasky & Klein 1975; Streeter 1976; Werker & Lalonde 1988; Werker & Tees 1984). However, because typical infant test procedures allowed only for the presentation of very short stimuli (Gottlieb & Krasnegor 1985), the issue of how and when infants begin to segment speech was not seriously addressed until procedures that allowed for presentation of long samples of speech were developed (Fernald 1984; Hirsh-Pasek et al. 1987; Kemler Nelson et al. 1995). Several years ago, Dick Aslin and I discussed how we might adapt the Headturn Preference Procedure for studying word segmentation abilities of infants (Jusczyk & Aslin 1995). The method that we devised was inspired by consideration of some paradigms used in research with adults, such as priming and word spotting. We decided to familiarize infants with a pair of words for some period of time and then present them with a series of passages, some of which contained the familiarized target words, and others of which did not. We hypothesized that if the infants recognized the familiarized target words in the passages, they might listen longer to these than to the ones without the target words. We had a female talker record four different 6-sentence test passages.
Each passage included a particular target word in each sentence, although its sentential position varied: in two of the sentences it was near the beginning, in two others it was in the middle, and in the remaining two it was at the end. A typical passage is the following:

The feet were all different sizes. This girl has very big feet. Even the toes on her feet are large. The shoes gave the man red feet. His feet get sore from standing all day. The doctor wants your feet to be clean.
After the passages had been recorded, we asked the same talker to record 15 different tokens of each of the four target words, “cup”, “dog”, “feet”, and
“bike”. We had chosen these monosyllabic words as our targets because they had clear onsets and offsets and because they differed in vowel quality. Our aim was to determine first whether infants had any capacity to detect these simple and distinctive targets. We began by testing 7.5-month-olds. We selected this age because previous work had suggested that sensitivity to native language sound patterns increases between 6 and 9 months of age (Jusczyk, Cutler & Redanz 1993; Jusczyk, Friederici, Wessels, Svenkerud & Jusczyk 1993). Each infant was familiarized with two target words (e.g. “cup” and “dog”) on alternating trials of the Headturn Preference Procedure until he or she accumulated at least 30 s of listening time to each word. Then the infant heard four blocks of test trials. In each block of trials, all four passages were played in random order and the infant’s listening times to each passage were recorded. Several interesting findings emerged from this initial investigation. First, we found that 7.5-month-old English-learners did listen significantly longer to the passages containing the target words that they had been familiarized with. Hence, the infants displayed some ability to detect the occurrence of these targets in fluent speech contexts. Second, by comparison, when tested under the same conditions, 6-month-olds did not listen longer to the passages containing the target words. Third, the ability of 7.5-month-olds to segment the sound patterns of the target words from fluent speech did not depend on hearing the words first in isolation. Thus, in one experiment, 7.5-month-olds were familiarized with a pair of passages first, then received blocks of test trials in which on each trial they heard tokens of one of the four words (“cup”, “dog”, “feet”, or “bike”). Despite the fact that the passages contained many other words besides the targets, the infants evidently detected that certain sound patterns were recurring in the passages.
Specifically, during the test phase, they listened significantly longer to the words that corresponded to the targets in the familiarization passages. Finally, there was some evidence that the infants were responding to the whole words in the passages rather than to just a salient feature such as vowel quality. In particular, in one experiment, 7.5-month-olds were familiarized with pairs of nonsense words (e.g., “tup” and “bawg” or “zeet” and “gike”) that were phonetically very similar to the words that recurred in the test passages (i.e. “cup”, “dog”, “feet” and “bike”). If the infants were simply responding to a salient property of the words, such as their vowel quality or syllable rhyme, one might expect them to listen longer to the passages containing the words that matched the familiarized target items in these properties. However, this was not the case. Infants familiarized with “tup” and “bawg” were no more apt to listen to the “cup” and “dog” passages than they were to listen to the “feet” and “bike”
passages. More recently, Ruth Tincoff and I found the same pattern of results when we familiarized 7.5-month-olds with items that differed only by their final consonants (e.g. “cut” and “dawb” or “feek” and “gipe”) from the words recurring in the test passages. Consequently, it appears that infants are not simply extracting a salient property of the familiarized item, but rather encoding something more like a detailed representation of the sound patterns of these targets. Before considering other findings on the early word segmentation abilities of infants, it is worth noting that we are not claiming that these sound patterns are attached to any specific meanings at this stage of development. Rather, what the infant appears to be doing is detecting recurring sound patterns that are possible words in the language. Indeed, the process may well become more complicated as the infant’s task enlarges to not only extracting such patterns but also attaching coherent meanings to each during on-line speech processing (Fernald, McRoberts & Herrera in press; Stager & Werker 1997). However, at least one element required for comprehending words in utterances, the process of finding familiar sound patterns in fluent speech contexts, does appear to begin early in the second half of the first year.
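The familiarization-then-test logic used throughout these studies can be summarized schematically. The sketch below is not the actual laboratory software; the function names, the data layout, and the simple difference score are illustrative assumptions based only on the procedure as described above (30 s of accumulated listening per word, then a comparison of listening times to familiar versus novel passages):

```python
def familiarization_complete(trial_times, criterion=30.0):
    """The familiarization phase ends once the infant has accumulated
    `criterion` seconds of listening time to EACH word.
    `trial_times` maps each word to its list of per-trial listening times."""
    return all(sum(times) >= criterion for times in trial_times.values())

def familiarity_preference(listening_times, familiar_words):
    """Mean listening time to passages containing familiarized targets,
    minus mean time to the remaining passages. A positive difference is
    taken as evidence that the targets were recognized in fluent speech."""
    fam = [t for w, t in listening_times.items() if w in familiar_words]
    nov = [t for w, t in listening_times.items() if w not in familiar_words]
    return sum(fam) / len(fam) - sum(nov) / len(nov)
```

On this scheme, the 7.5-month-olds' result corresponds to a reliably positive preference score, and the 6-month-olds' result to a score near zero.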
The robustness of early word segmentation abilities

Although it is interesting that English-learning 7.5-month-olds show some capacity to recognize the sound patterns of familiarized words in the idealized conditions of the laboratory, one might wonder about the extent to which infants can draw on this capacity under the noisier and more distracting conditions that normally occur during language learning. In particular, the infant often hears speech in the presence of many distractions, including bickering siblings, appliance noises, and competing voices on radio and TV. To understand the role that newly developing word segmentation abilities may play in language acquisition, it would be helpful to have some idea of how effective these abilities are in conditions that approximate those that language learners are likely to experience. We have begun to explore how some of these factors affect infants’ abilities to detect the occurrence of familiarized words in fluent speech. In one set of studies, Rochelle Newman and I explored how the presence of speech from a competing voice affects infants’ abilities to encode information about the sound patterns of repeated words. To explore this issue, we modified the word detection procedure by introducing a competing voice during the familiarization period. In our experiments, the competition came in the form of a pre-recorded male voice reading, with little expression, the methods section of
an infant testing procedure. The speech from this male voice was blended with that of the female voice used in the familiarization phase of Jusczyk and Aslin (1995). For example, on a given familiarization trial, while the male voice was reading, the female voice was uttering one of the two target words. The speech from the two talkers was blended in such a way as to ensure that the target words always occurred while the male talker was speaking, and not during silent periods between utterances. We also blended the male and female voices at three different signal-to-noise ratios. In one condition, the female voice was 10 dB louder than the competing male voice; in another condition, the female voice was 5 dB louder than the male voice; and in a third case (0 dB), the two voices were equally loud. Next, we needed a way to indicate to the infants that they should attend to the female voice. For this purpose, all the infants saw a short videotape of a puppet show featuring the female talker. The hope was that infants’ prior experience with this voice in an interesting setting might make them more prone to attend to this voice than to the male one. After watching the videotape in an adjacent room, English-learning 7.5-month-olds were taken into the testing room and then the familiarization period began. As in the Jusczyk and Aslin (1995) studies, the infants were familiarized with two different words (e.g. “feet” and “bike”) on alternating trials until they accumulated 30 s of listening time to each one. However, this time the competing male voice was present during the familiarization period. Once familiarization was completed, the test phase began. This phase was identical to the one used by Jusczyk and Aslin. That is, there was no competing voice present. Instead, the infants heard the female talker producing each of the four test passages.
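Blending two voices at a target signal-to-noise ratio amounts to scaling one signal relative to the other before summing them, since a decibel difference is 20 times the log of the amplitude ratio. The sketch below illustrates that arithmetic only; it is not the authors' actual mixing procedure, and the use of RMS amplitude as the measure of loudness is an assumption:

```python
import math

def rms(samples):
    """Root-mean-square amplitude of a sequence of samples."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def mix_at_snr(target, distractor, snr_db):
    """Scale the distractor so that the target is snr_db decibels louder
    in RMS terms (0 dB = equally loud), then sum the two signals."""
    gain = rms(target) / (rms(distractor) * 10 ** (snr_db / 20))
    return [t + gain * d for t, d in zip(target, distractor)]
```

At 0 dB the distractor is rescaled to exactly the target's RMS level, which is the condition in which the infants' encoding of the targets broke down.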
Our results indicated that infants did have some ability to encode the target words spoken by the female even in the presence of the competing male voice. In particular, in the conditions in which the female voice was either 10 dB or 5 dB louder than the male voice, the infants listened significantly longer to the passages containing the familiarized targets during the test phase. In contrast, no significant listening differences among the test passages were observed after the 0 dB familiarization condition. Thus, the presence of the distracting male voice in the 0 dB condition appears to have interfered with the infants’ encoding of the target words during familiarization. Still, the overall performance of the 7.5-month-olds in attending to the information produced by the target voice suggests that infants do have some capacity to deal with interference from competing voices. In fact, the infants’ performance is particularly encouraging considering that by blending the voices, we removed the kinds of localization cues that might further aid in distinguishing the voices in a typical home environment.
In addition to demonstrating that infant word segmentation abilities can tolerate noisy input conditions, we have also explored whether their early representations of word sound patterns are robust in other ways. For example, do infants form any long-term representations of these sound patterns, and to what extent can they generalize representations of words produced by one talker to those produced by another talker? Both of these abilities are critical for developing the kind of lexicon that could support word recognition in fluent speech contexts. With respect to the issue of how long-lasting infants’ representations of the sound patterns of words are, Elizabeth Hohne and I found some suggestive evidence in a study that examined 8.5-month-olds’ memory for words that they heard frequently in stories that were repeated to them on ten occasions during a two-week period (Jusczyk & Hohne 1997). The infants heard audiotaped versions of the same three stories on each occasion. Two weeks after they had last heard the stories, they were brought to the laboratory and tested on lists of words. Half of the lists contained words that had occurred frequently in the stories; the other half were composed of foil words which were matched to the story words in terms of their phonetic properties and frequency of occurrence within the language. The infants listened significantly longer to the story words than to the foil words. By comparison, a control group of infants who had not heard the stories showed no preference for either type of list. Hence, it appears that it was the first group’s prior experience with the words in the stories that led to their preferences for these words over the foil words. Although these results suggest that infants are engaging in some long-term storage of the sound patterns of words, the testing paradigm does not permit any inferences about memory for any particular word, because infants’ responses were measured to whole lists of words.
Consequently, to better understand what infants encode about individual words, we have modified the word detection paradigm by familiarizing infants with a pair of words on one day and then testing them on the passages 24 hours later (Houston, Jusczyk & Tager 1997). Under these conditions, 7.5-month-olds perform about as well as they do under immediate test conditions. That is, they listen significantly longer to the passages containing the target words that they had been familiarized with on the previous day. It does appear that 7.5-month-olds are beginning to engage in some long-term storage of the sound patterns of words that they hear frequently addressed to them. Evidence from another on-going investigation suggests that infants have some ability to generalize from one talker’s production of these words to those of another talker. In particular, when infants are familiarized with tokens of
words produced by one female talker, and then tested on passages produced by a different female talker, they still listen significantly longer to the passages containing the familiarized target words. Similarly, 7.5-month-olds generalize from familiarized target words produced by one male talker to ones in passages produced by a different male talker. However, 7.5-month-olds appear to have difficulty in generalizing their representations of words from a female talker to those of a male talker, and vice versa. By comparison, 10.5-month-olds who were familiarized with target words produced by a female talker listened significantly longer to a male talker’s passages containing the target words than to passages without the targets. Hence, the ability to generalize from the sound patterns of words produced by one talker to those of another is limited in 7.5-month-olds, but appears to develop to more adultlike levels over the course of the next three months. In summary, 7.5-month-olds’ abilities to segment sound patterns of words from fluent speech are robust in several respects. Infants at this age display some ability to extract information about the sound patterns of words even under noisy input conditions. They also appear to engage in some long-term encoding and storage of sound patterns that occur frequently in speech directed to them. In addition, even at these early stages of extracting and storing information about the sound patterns of words, they show at least some limited ability to generalize from tokens produced by one talker to those produced by another talker.
What cues do infants use to locate word boundaries?

The studies reviewed to this point indicate that English-learning 7.5-month-olds have some ability to segment words in fluent speech. However, these studies do not address the issue of what enables the 7.5-month-old, but evidently not the 6-month-old, to segment words in fluent speech. Of course, one possible answer to this question is that what happens between 6 and 7.5 months is simply a maturational process, one that does not depend directly on experience with native language input. That is, all infants at this age might begin segmenting fluent speech in the same way (e.g. perhaps according to a rhythmic pattern like a trochaic template) regardless of the nature of the sound organization of the particular language that they are learning. Alternatively, it is possible that 7.5-month-olds have acquired more information about the sound organization of the native language that is useful in locating the boundaries of words. Although maturational factors may play a role in the ability of 7.5-month-olds to segment words from fluent speech, it is clear that greater familiarity with
native language sound organization is also a critical factor. For example, adults exposed to speech in an unfamiliar foreign language often report great difficulty in determining where one word ends and another begins. This difficulty is attributable to the fact that different languages signal boundaries between words in fluent speech in different ways. Hence, the information that a native speaker of one language has learned to use does not always transfer appropriately to utterances in another language. Indeed, a period of experience with the sound patterns of the language also appears to be required for infants to begin segmenting words from fluent speech. Jane Tsay, Rochelle Newman and I investigated whether English-learning 7.5-month-olds could segment familiarized words from Mandarin Chinese. The infants were familiarized with a pair of monosyllabic target words from Mandarin Chinese, with 30 s of listening time to each word. Then during the test phase they heard four Mandarin Chinese passages, two of which contained the familiarized targets and two of which did not. In contrast to Jusczyk and Aslin’s earlier findings with English materials, these English-learning 7.5-month-olds gave no evidence of listening longer to the Mandarin Chinese passages that contained the familiarized target words. In a subsequent experiment, we investigated whether a relatively brief exposure to Mandarin Chinese might facilitate the ability of English-learning infants to segment words from Mandarin Chinese. Five times during the week preceding their visit to the laboratory, infants were shown a half-hour cartoon with a Mandarin Chinese soundtrack. Despite their pre-exposure to Mandarin Chinese, this new group of English-learners performed no better than did the earlier group. Thus, it appears that more than a relatively brief exposure to speech in a foreign language is required for infants to extract the information needed to locate the boundaries of words in fluent speech.
What sources of information might English-learners draw on when they begin to find word boundaries in English? A number of possibilities have been suggested, including (1) word stress (Cutler & Carter 1987; Cutler & Norris 1988; Jusczyk et al. 1993); (2) allophonic cues having to do with which variants of a phoneme are typically found at the beginnings and ends of words (Church 1987); (3) phonotactic constraints, i.e. the likelihood of a particular segment following another within or between words (Brent & Cartwright 1996; Myers et al. 1996); and (4) distributional regularities concerning the variety of contexts in which a particular syllable is likely to follow another (Brent & Cartwright 1996; Saffran, Aslin & Newport 1996; Saffran, Newport & Aslin 1996). In our laboratory, we have been investigating the possibility that English-learners first begin to segment words from fluent speech on the basis of the location of stressed syllables in utterances. Cutler and Carter (1987) noted that
a very high proportion of English content words in conversational speech begin with a stressed (or strong) syllable. Hence, Cutler and her colleagues (Cutler 1990, 1994; Cutler & Butterfield 1992; Cutler & Norris 1988) have suggested that listeners may adopt a Metrical Segmentation Strategy (MSS) whereby they identify onsets of new words with the occurrence of strong syllables in English utterances. We first began to take seriously the prospect that English-learners might also use something like MSS to segment words from fluent speech when we discovered that their sensitivity to the predominant stress pattern of English words develops between 6 and 9 months of age (Jusczyk et al. 1993). Thus, although an English-learning 6-month-old listens equally long to English words with (strong/weak) and without (weak/strong) the predominant stress pattern, 9-month-olds show a significant preference for strong/weak words. Where might such a bias for strong/weak patterns come from? One possibility is that many of the words that infants are apt to hear as isolated utterances conform to this pattern. For example, names and nicknames in English are typically strong/weak (e.g. David, Mary, Peter, Sammy, Betty, etc.). Even when a person has a name with a different stress pattern, they often have a nickname that begins with a strong syllable. So, Jerome becomes Jerry and Elizabeth becomes Betty or Lizzie, etc. In addition, many of the diminutive terms typically used in talking to young infants conform to a strong/weak pattern (e.g. “mommy”, “daddy”, “baby”, “kitty”, “bunny”, “doggie”, etc.). The frequent occurrence of such patterns may set up expectations on the part of English-learners about the typical stress patterns of words in their language. A similar situation may hold for the way that names and diminutives may reflect the predominant patterns of words in other languages.
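The MSS itself is easy to state procedurally: treat every strong syllable as the onset of a new word. The sketch below is an illustrative simplification, not a model from the literature; in particular, the representation of input as (syllable, is-strong) pairs is an assumption made purely for exposition:

```python
def mss_segment(syllables):
    """Posit a word boundary before every strong (stressed) syllable.
    `syllables` is a sequence of (text, is_strong) pairs; returns the
    chunks that the strategy treats as candidate words."""
    chunks, current = [], []
    for text, is_strong in syllables:
        if is_strong and current:  # a strong syllable starts a new word
            chunks.append(current)
            current = []
        current.append(text)
    if current:
        chunks.append(current)
    return chunks

# Strong/weak "kingdom" is kept whole, but weak/strong "guitar" is split
# at its strong syllable, mirroring the infant results discussed below:
print(mss_segment([("gui", False), ("TAR", True), ("is", False)]))
# → [['gui'], ['TAR', 'is']]
```

Note how the same single rule correctly groups strong/weak words while mis-segmenting weak/strong ones, which is exactly the asymmetry the experiments below test.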
Of course, displaying sensitivity to the predominant stress pattern of English words is one thing, but actually using this information in segmenting words from fluent speech is quite another. To explore whether English-learners use the characteristic stress pattern of English words in segmenting speech, we conducted a series of experiments using bisyllabic words (Newsome & Jusczyk 1995). We began by familiarizing 7.5-month-olds with pairs of words with the predominant stress pattern (e.g. “kingdom” and “hamlet” or “doctor” and “candle”). Then the infants were tested on four passages — two of which included the familiarized words and two of which did not. The infants listened significantly longer to the passages containing the familiarized target words, suggesting that they had recognized them in the passages. However, to be certain that the infants were responding to the whole words and not just the strong syllable of these words, we conducted additional experiments in which we either familiarized them with just the strong syllables of these words (e.g. “king” and “ham” or “dock” and
“can”) and tested them on the passages with the whole strong/weak words, or familiarized them with the strong/weak words and tested them on new passages in which the target words corresponded to the strong syllables of the familiarization words (i.e. “king”, “ham”, “dock” and “can”). In both cases, the results were the same: the infants showed no significant preference for the passages containing words that matched the strong syllables of the familiarized targets. Therefore, when familiarized with strong/weak words, English-learning 7.5-month-olds match the whole words and not just the strong syllables. Although our results for strong/weak words are certainly consistent with the predictions of MSS, it is weak/strong words which provide a stronger test of this hypothesis. This is because MSS predicts that English-learners may have difficulty with such patterns given that they identify word onsets with the occurrence of strong syllables. Consequently, when learners hear a word in fluent speech such as “guitar”, they may be inclined to posit a word boundary before the strong syllable “tar”, thereby mis-segmenting the word. To investigate this possibility we familiarized 7.5-month-olds with pairs of weak/strong words (i.e. “guitar” and “device” or “surprise” and “beret”) and tested them on four passages, two of which included the familiarized targets and two of which did not. Contrary to the earlier results for the strong/weak words, but consistent with the predictions of MSS, the infants did not detect the familiarized weak/strong words in the passages. That is, they showed no significant listening preferences for the passages containing the familiarized weak/strong targets. To further explore the possibility that infants were mis-segmenting the weak/strong words in fluent speech, we conducted an additional experiment in which we familiarized the infants with pairs of strong syllables from the weak/strong words (i.e.
“tar” and “vice” or “prize” and “ray”), and then tested them on the passages containing the corresponding weak/strong words. This time, the infants did display significant listening preferences for those passages that included the weak/strong words whose strong syllables matched the familiarized targets. Hence, consistent with the predictions of MSS, English-learning 7.5-month-olds appear to mis-segment weak/strong words at the strong syllable boundary. In considering the overall pattern of results with the strong/weak and weak/strong words, one question that arises concerns the infants’ performance in the conditions in which they were familiarized with just the strong syllables of the words. For the strong/weak words, familiarization with just the strong syllables did not lead to preference for the passages containing the corresponding strong/weak words. In contrast, for the weak/strong words, the familiarization with just the strong syllables did lead to significant preferences for the passages with the corresponding weak/strong words. How can we explain this pattern of
results? The answer appears to lie in the distributional properties of the information that follows the strong syllables in each case. For example, the strong syllable (i.e. [dak]) of “doctor” is always followed by the same weak syllable (i.e. [tɚ]). By comparison, the strong syllable (i.e. [tar]) of “guitar” is followed by a variety of different syllables when it occurs in the passages (e.g. “is” in one case, “can” in another case, etc.). Hence, in addition to using information about the strong syllables to locate the onsets of words in fluent speech, English-learners also appear to use distributional cues to determine where the word is likely to end. To verify whether the distributional properties of syllables following the strong syllable play a critical role in infants’ segmentation of words, we conducted some additional experiments with 7.5-month-olds. We re-wrote the weak/strong passages so as to always follow the weak/strong target word with the same weak syllable. For example, the new passage for “beret” was the following:

Susie is buying her beret on credit. That red beret on the shelf might do. She asked the clerk to put the pink beret on. It was next to the plain beret on the counter. The old beret on the model is my favorite. Your beret on her is very chic.
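The distributional asymmetry just described, where the syllable after “doctor”'s strong syllable is always the same while the syllable after “tar” varies, can be quantified as a transitional probability between adjacent syllables: high values suggest a word-internal transition, low values a likely word boundary. The sketch below is purely illustrative (the syllable spellings are invented, and nothing is implied about the infants' actual computation):

```python
from collections import Counter

def transitional_probs(syllables):
    """P(next | current) for each adjacent pair in a syllable stream."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    first_counts = Counter(syllables[:-1])
    return {(a, b): n / first_counts[a] for (a, b), n in pair_counts.items()}

# "dok" is always followed by "tor", while "tar" has varied successors:
stream = ["dok", "tor", "dok", "tor", "gi", "tar", "is", "gi", "tar", "on"]
probs = transitional_probs(stream)
# probs[("dok", "tor")] == 1.0, but probs[("tar", "is")] == 0.5
```

Rewriting the passages so that “beret” is always followed by “on” raises the “ray”-to-“on” transitional probability to 1.0, which is exactly why the sequence starts to look like a single word, “rayon”.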
Once again, the infants were familiarized with pairs of strong syllables (e.g. “tar” and “vice” or “prize” and “ray”) from the weak/strong words and then tested on the new passages. This time the infants showed no significant preference for the passages containing the weak/strong words which corresponded to the familiarized target syllables. Hence, the change in the distributional properties of the weak syllable following the weak/strong targets in the passages affected the infants’ performance. The fact that matching a familiarized target like “ray” in the new “beret on” passage was no better than matching “king” to “kingdom” in the earlier experiment with strong/weak words suggests that infants may have inferred a word such as “rayon” in the new passage. To check this possibility, we familiarized a new group of infants with pairs of strong/weak items (e.g. “rayon” and “prizin” or “tariz” and “viceto”) and then tested them on the new passages. Now the infants displayed a preference for those passages that contained the sequences which corresponded to the familiarized targets (i.e. if they had heard “rayon” and “prizin” during familiarization, they listened significantly longer to the passages with “beret on” and “surprise in” than to ones with “device to” and “guitar is”). Taken together, the findings for these studies with bisyllabic words suggest that English-learners use something like MSS to begin segmenting words from fluent speech. That is, they appear to identify the occurrence of strong syllables with the onsets of new words in fluent speech. They also appear to attend to the
PETER W. JUSCZYK
distributional properties of the syllables that follow in determining the likely offset of a given word. It is worth noting that although this strategy may be effective in segmenting words that begin with strong syllables, it will lead English-learners to mis-segment words beginning with weak syllables. Consequently, although a strategy such as MSS might help infants get started in segmenting words, it will clearly have to be supplemented or superseded by some procedure that allows them to detect the onsets of words without the predominant stress pattern. Still, even though their initial strategy may lead to mis-segmentations, breaking the input into smaller chunks based on the occurrence of strong syllables could provide the opportunity to learn about other possible clues to word boundaries in the input. For example, the learner may discover that certain kinds of allophones occur at the beginnings of such chunks, but not at the ends, or that certain phonotactic sequences are found in some positions, but not others. Learning about these regularities may then provide the learner with additional information that overrides a segmentation strategy based purely on stress cues. Thus, one description of what English-learners gain from a first-pass strategy like MSS is that it allows them to “divide and conquer” the input. There is evidence from other studies in our laboratory that sensitivity to other potential sources of information about word boundaries tends to develop between 7.5- and 10.5-months of age. For example, 2-month-olds can discriminate the kinds of allophonic differences that could help in correctly segmenting “nitrate” and “night rate” from fluent speech (Hohne & Jusczyk 1994). However, English-learning 9-month-olds who are familiarized with “nitrates” are just as apt to listen to a passage containing “night rates” as they are to a passage containing “nitrates” (Jusczyk, Hohne & Bauman 1999).
In contrast, 10.5-month-olds appear to use the relevant allophonic differences to distinguish these items in fluent speech. Hence, 10.5-month-olds familiarized with “night rates” listen significantly longer to a passage containing “night rates” than they do to one containing “nitrates” (and vice versa, after familiarization with “nitrates”). Furthermore, in contrast to 7.5-month-olds, when 10.5-month-olds are familiarized with pairs of weak/strong words (e.g. “guitar” and “device”), they listen significantly longer to the passages containing these targets than they do to ones containing other weak/strong targets (Houston, Jusczyk & Newsome 1995). Moreover, unlike their younger counterparts, 10.5-month-olds who are familiarized with the strong syllables from weak/strong words (i.e. “tar” and “vice”) do not show significant listening preferences for passages with the corresponding weak/strong words (i.e. “guitar” and “device”). Consequently, English-learning 10.5-month-olds display a pattern of responding that is more in line with that of mature speakers of the language.
In summary, English-learning 7.5-month-olds appear to identify the onsets of words in fluent speech with the occurrence of strong syllables. They also appear to use distributional cues in inferring the ends of words (see also Saffran, Aslin & Newport 1996). Although imperfect, this strategy appears to facilitate the discovery of other potential cues to word boundaries in fluent speech. Carving the input up into word-sized chunks provides the learner with opportunities for observing regularities in allophonic and phonotactic properties which occur at the onsets and offsets of these units. By 10.5-months, English-learners seem to have word segmentation abilities that are similar to those displayed by English-speaking adults.
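The statistical-learning idea cited here (Saffran, Aslin & Newport 1996) can be illustrated with a small transitional-probability sketch. This Python example is an editorial addition: the three nonsense words, the syllable stream, and the 0.75 threshold are illustrative assumptions, not the published stimuli. Syllable pairs belonging to the same word recur together and so have high transitional probability, while pairs spanning a word boundary do not.

```python
from collections import Counter

def transitional_probabilities(syllables):
    """P(next | current) for each adjacent syllable pair."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    first_counts = Counter(syllables[:-1])
    return {pair: n / first_counts[pair[0]] for pair, n in pair_counts.items()}

def segment(syllables, threshold=0.75):
    """Posit a word boundary wherever the transitional probability
    between adjacent syllables falls below the threshold."""
    tps = transitional_probabilities(syllables)
    words, current = [], [syllables[0]]
    for a, b in zip(syllables, syllables[1:]):
        if tps[(a, b)] < threshold:
            words.append("".join(current))
            current = []
        current.append(b)
    words.append("".join(current))
    return words

# Three invented trisyllabic "words" concatenated in varying order.
A, B, C = ["go", "la", "bu"], ["pa", "bi", "ku"], ["ti", "mo", "de"]
stream = [s for w in [A, B, C, B, A, C, A, C, B] for s in w]
print(segment(stream))
# → ['golabu', 'pabiku', 'timode', 'pabiku', 'golabu', 'timode',
#    'golabu', 'timode', 'pabiku']
```

Within-word transitions here all have probability 1.0, while boundary-spanning transitions fall below the threshold, so the stream is recovered word by word.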
How word segmentation may further knowledge of syntactic organization

One problem raised about prosodic bootstrapping accounts of the acquisition of syntax is the fact that phonological phrase boundaries do not map perfectly onto syntactic phrase boundaries (Fisher & Tokura 1996; Jusczyk & Kemler Nelson 1996). For example, consider the following sentences:

(1) Thomas / ate the cake.
(2) He ate / the cake.
In (1) the prosodic phrase boundary (indicated by “/”) coincides with the syntactic boundary between the Subject phrase and the Predicate phrase. However, in (2), the prosodic phrase boundary occurs within the Predicate phrase between the verb and its Direct Object. Moreover, given a conflict between prosodic phrase marking and syntactic phrase marking such as in (2), English-learning 9-month-olds display listening preferences that accord with the prosodic organization of utterances (Gerken, Jusczyk & Mandel 1994). Still, it is worth noting that the prosody does mark syntactic boundaries in both cases, just not the same type of syntactic boundary. In the face of such mismatches between prosodic and syntactic organization, one might be tempted to abandon the notion that prosody is used in bootstrapping the acquisition of syntax. However, even given the imperfect correlation between prosodic and syntactic organization, it is possible that attention to the prosodic organization of utterances still helps in the discovery of their syntactic organization. Recall that a similar situation holds with respect to the role that attention to prosodic features, such as word stress, plays in segmenting words. There, too, the correlation between stressed syllables and word onsets is not perfect. However, we argued that dividing the input into smaller chunks provides
the learner with greater opportunities for learning about the distribution of other potential cues within these chunks. A similar case can be made with respect to grouping information in utterances into prosodic phrasal units. Access to smaller prosodic phrase packets may allow learners to pick up certain kinds of syntactic regularities within such units. For example, one possible source of information within prosodic phrases is the occurrence of grammatical morphemes (Morgan 1986; Morgan, Meier & Newport 1987). In English, certain function words are typically found only at certain locations inside phrasal units. Thus, “the” marks the beginning of a noun phrase, and would be extremely unlikely to occur as the last word of a phrasal unit. Hence, grouping the input into prosodic phrases and attending to regularities in how certain morphemes are distributed within such phrases affords the learner another opportunity to divide and conquer. Critically, grammatical morphemes are the kinds of elements that learners need to track within phrasal units. Yet, the fact that such elements are often left out of early word combinations produced by children has sometimes been taken as an indication that young learners may not attend to these elements because they are unstressed (Echols & Newport 1992; Gleitman, Gleitman, Landau & Wanner 1988). However, studies by Gerken and her colleagues (Gerken 1991, 1994; Gerken, Landau & Remez 1990) have suggested that these kinds of omissions are attributable to constraints on production rather than on perception. Moreover, recent evidence suggests that during their first year, infants are sensitive to the occurrence of typical function words in native language utterances. Using an electrophysiological measure, Shafer, Gerken, Shucard and Shucard (1992) found that infants distinguished normal English passages from ones in which nonsense syllables replaced function words.
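The positional regularity described above for function words can be illustrated with a small tally. This Python sketch is an editorial addition, and the phrase chunks are invented for exposition: within phrase-sized chunks, a word like “the” occurs initially or internally, but never finally.

```python
from collections import Counter

def position_profile(phrases, word):
    """Tally where a word occurs inside prosodic-phrase chunks:
    first position, last position, or phrase-internal."""
    profile = Counter()
    for phrase in phrases:
        for i, w in enumerate(phrase):
            if w != word:
                continue
            if i == 0:
                profile["initial"] += 1
            elif i == len(phrase) - 1:
                profile["final"] += 1
            else:
                profile["internal"] += 1
    return profile

# Illustrative phrase chunks (not the chapter's stimuli).
phrases = [
    ["the", "dog"],
    ["chased", "the", "ball"],
    ["the", "big", "cat"],
    ["under", "the", "table"],
]

print(position_profile(phrases, "the"))
# "the" begins phrases or follows a verb or preposition, but
# never ends a phrasal unit.
```

A learner tracking such tallies over prosodically delimited chunks would quickly find that “the” predicts an upcoming noun phrase rather than a phrase edge behind it.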
Similarly, Höhle and Weissenborn (1998) have found that German-learning infants below the age of 9.5-months show some capacity to recognize the occurrence of function words in fluent speech. In her dissertation research with LouAnn Gerken and me, Michele Shady (1996) replicated and extended the Shafer et al. findings. First, Shady found that English-learning 10.5-month-olds listen significantly longer to passages with real function words than to ones in which nonsense words were substituted for the real function words. Furthermore, this result held even when the nonsense items had phonological characteristics that were very similar to those of real English function words. To determine whether infants were simply responding to the presence of any nonsense words in the passages, as opposed to the nonsense function words, Shady tried another manipulation. This time, nonsense words were substituted for the content words of the same passages, but the function words were not altered. No significant listening preferences were observed for
the passages with the real as opposed to the nonsense content words — not surprising since there are many real content words that infants do not already know. Shady’s findings suggest that by 10.5-months, English-learners have developed some expectations about the kinds of function words that are likely to appear in utterances. However, what is not clear from these results is whether infants at this age have any expectations about where such function words are likely to appear within utterances. To investigate this issue, Shady created a new set of stimuli. She constructed pairs of passages which were identical except for the placement of certain function words. In the natural passages, each function word occurred in its proper sentential position. In the unnatural passages, the function words were misplaced by interchanging them with function words from another sentential position. In the following example of an unnatural passage, the interchanged function words are italicized. Is bike with three wheels a coming down the street. Johnny that seen had bike yesterday. Was lady with him the his aunt. Was red bike this missing for a day. Had cover that fallen on it. We the found had bike next to her garage.
Shady began by testing 10.5-month-olds on the natural and unnatural passages. Infants at this age displayed no significant listening preference for the natural over the unnatural passages. Subsequently, she found that neither 12.5- nor 14-month-olds showed significant listening preferences for the natural passages. In contrast, 16-month-olds did listen significantly longer to natural than to the unnatural passages. Consequently, Shady concluded that although 10.5-month-olds may have some idea of which function words belong in English utterances, it is not until between 14- and 16-months of age that they learn the typical locations of these function words in sentences. The sensitivity which these older infants show to the placement of function words in sentences seems to be a step towards understanding the role of such words in marking syntactic distinctions. Thus, the presence or absence of an article prior to an unknown label has been shown to affect whether infants, a few months older, treat the label as a common or proper noun. The correct placement of function words within phrases is by no means the only type of regularity that learners may discover within prosodic groupings. For instance, breaking the input up in this way may facilitate the discovery of other kinds of relations between elements within these groupings. There are certain kinds of syntactic dependencies that occur among words in an utterance. In English, demonstratives such as “this” and “these” must agree in number with the nouns that they modify. Hence, we say “this dog” or “these dogs”, but not “this dogs” or “these dog”. Another kind of dependency has to do with the
relationships among certain auxiliaries and verb forms. We say “the dog is running” but not “the dog can running”. For the learner, one of the interesting, and potentially problematic, properties of these dependencies is the fact that the critical elements can often occur at some remove from each other in an utterance. So, although we might say “Everyone is baking bread”, in which the critical elements occur almost adjacent to each other, we can also say “Everyone is not very skillfully baking bread”, in which the critical elements are separated by a five-syllable adverbial phrase. Given the amount of intervening material in the latter case, one may wonder how a learner ever correctly relates the verb form back to the auxiliary. Lynn Santelmann and I have been investigating when, and under what circumstances, English-learners begin to discover dependencies involving auxiliaries and verbs (Santelmann & Jusczyk 1997, 1998). We began by testing infants on passages in which the critical auxiliary element occurred adjacent to a monosyllabic verb stem with an “ing” ending. For the natural passages, we used the auxiliary “is”, whereas for the unnatural passages, we substituted the auxiliary “can”. For example, a natural passage included a sentence such as “John is running” which became “John can running” in the unnatural passage. We found that 18-month-olds, but not 15-month-olds, listened significantly longer to the natural than to the unnatural passages. Thus, 18-month-olds have developed at least some sensitivity to this type of syntactic dependency. Our next objective was to determine the conditions under which the infants respond to the dependency. In a series of follow-up experiments, we systematically varied the distance between the auxiliary and the verb stem by inserting an adverbial phrase between them.
We found that when a two-syllable adverbial was present, as in the case of “John is always running”, 18-month-olds continued to listen significantly longer to the natural than to the unnatural passages. However, when longer adverbials of three or four syllables were used, the listening preferences for the natural versions disappeared. It was as if the infants no longer tracked the dependency between the auxiliary and the verb ending. In this context, it is interesting to note that the greater the separation of the critical elements, the less likely they are to appear within the same phrasal unit. Fortunately for language learners, long adverbial phrases between auxiliaries and verb endings are apt to be very rare in the input. In summary, we have considered some examples of cases in which sensitivity to the organization of elements within phrasal packages could be useful to the learner in discovering certain kinds of syntactic relations. Crucially, success in learning about the distributional properties of such elements requires some prior ability to detect the occurrence of these elements in fluent speech. The data we
reviewed on the development of word segmentation skills suggest that English-learning infants do have the capacities required to detect such elements by 10.5-months of age.
Conclusions

Much of the early research on bootstrapping from the signal focused on the possibility that the prosodic organization of utterances could provide a direct pathway to their underlying syntactic organization. Even in the simple kinds of utterances that characterize child-directed speech, the relationship between prosodic and syntactic organization has turned out to be more complex than first thought. Nevertheless, it is clear that infants are sensitive to prosodic organization in clauses and phrases (Gerken et al. 1994; Jusczyk et al. 1992) and that this organization plays some role in their processing of speech information (Mandel, Jusczyk & Kemler Nelson 1994; Mandel, Kemler Nelson & Jusczyk 1996). The challenge is to illuminate how sensitivity to prosodic organization might be used with sensitivity to other information in the signal to facilitate the discovery of the syntactic organization of utterances. I have suggested that infants’ developing word segmentation abilities may play a role in this process by enabling the learner to track the distribution of grammatical morphemes within the boundaries of prosodic phrases. Infants divide the input into linguistically relevant chunks and look for regularities in organization within such chunks. Finally, although the focus of this paper has been on how infants can use information in the signal in working out the syntactic organization of utterances in their native language, I am not claiming that the information in the signal is sufficient for this purpose. Infants clearly draw on a variety of sources of information — conceptual and linguistic — to learn about the syntactic organization of the language. The aim here has been to discuss the role that sensitivity to information in the signal may play in the overall process of acquiring a native language.
Acknowledgments Preparation of the present manuscript was facilitated by an NICHD Research Grant (#15795) and an NIMH Senior Scientist Award (#01490). The author wishes to thank Ann Marie Jusczyk, Roberta Golinkoff, Jürgen Weissenborn, and Barbara Höhle for comments made on an earlier version of this manuscript.
References

Best, C. T., McRoberts, G. W. and Sithole, N. M. 1988. “Examination of the perceptual reorganization for speech contrasts: Zulu click discrimination by English-speaking adults and infants.” Journal of Experimental Psychology: Human Perception and Performance 14: 345–360.
Brent, M. R. and Cartwright, T. A. 1996. “Distributional regularity and phonotactic constraints are useful for segmentation.” Cognition 61: 93–125.
Church, K. W. 1987. Phonological parsing in speech recognition. Dordrecht: Kluwer Academic Publishers.
Cutler, A. 1990. “Exploiting prosodic probabilities in speech segmentation.” In Cognitive Models of Speech Processing: Psycholinguistic and Computational Perspectives, G. T. M. Altmann (ed.). Cambridge, MA: MIT Press.
Cutler, A. 1994. “Segmentation problems, rhythmic solutions.” Lingua 92: 81–104.
Cutler, A. and Butterfield, S. 1992. “Rhythmic cues to speech segmentation: Evidence from juncture misperception.” Journal of Memory and Language 31: 218–236.
Cutler, A. and Carter, D. M. 1987. “The predominance of strong initial syllables in the English vocabulary.” Computer Speech and Language 2: 133–142.
Cutler, A. and Norris, D. G. 1988. “The role of strong syllables in segmentation for lexical access.” Journal of Experimental Psychology: Human Perception and Performance 14: 113–121.
Echols, C. H. and Newport, E. L. 1992. “The role of stress and position in determining first words.” Language Acquisition 2: 189–220.
Eimas, P. D. 1974. “Auditory and linguistic processing of cues for place of articulation by infants.” Perception & Psychophysics 16: 513–521.
Eimas, P. D. 1975. “Auditory and phonetic coding of the cues for speech: Discrimination of the [r-l] distinction by young infants.” Perception & Psychophysics 18: 341–347.
Eimas, P. D. and Miller, J. L. 1980a. “Contextual effects in infant speech perception.” Science 209: 1140–1141.
Eimas, P. D. and Miller, J. L. 1980b. “Discrimination of the information for manner of articulation.” Infant Behavior & Development 3: 367–375.
Eimas, P. D., Siqueland, E. R., Jusczyk, P. W. and Vigorito, J. 1971. “Speech perception in infants.” Science 171: 303–306.
Fernald, A. 1984. “The perceptual and affective salience of mothers’ speech to infants.” In The origins and growth of communication, L. Feagans, C. Garvey and R. Golinkoff (eds.). Norwood, NJ: Ablex.
Fernald, A., McRoberts, G. and Herrera, C. In press. “Effects of prosody and word position on lexical comprehension in infants.” Journal of Experimental Psychology: Learning, Memory, and Cognition.
Fisher, C. and Tokura, H. 1996. “Prosody in speech to infants: Direct and indirect acoustic cues to syntactic structure.” In Signal to Syntax: Bootstrapping from Speech to Grammar in Early Acquisition, J. L. Morgan and K. Demuth (eds.). Mahwah, NJ: Erlbaum.
Gelman, S. A. and Taylor, M. 1984. “How two-year-old children interpret proper and common nouns for unfamiliar objects.” Child Development 55: 1535–1540.
Gerken, L. A. 1991. “The metrical basis for children’s subjectless sentences.” Journal of Memory and Language 30: 431–451.
Gerken, L. A. 1994. “Young children’s representation of prosodic phonology: Evidence from English-speakers’ weak syllable omissions.” Journal of Memory and Language 33: 19–38.
Gerken, L. A. 1996. “Phonological and distributional information in syntax acquisition.” In Signal to Syntax: Bootstrapping from Speech to Grammar in Early Acquisition, J. L. Morgan and K. Demuth (eds.). Mahwah, NJ: Erlbaum.
Gerken, L. A., Jusczyk, P. W. and Mandel, D. R. 1994. “When prosody fails to cue syntactic structure: Nine-month-olds’ sensitivity to phonological vs syntactic phrases.” Cognition 51: 237–265.
Gerken, L. A., Landau, B. and Remez, R. E. 1990. “Function morphemes in young children’s speech perception and production.” Developmental Psychology 25: 204–216.
Gleitman, L., Gleitman, H., Landau, B. and Wanner, E. 1988. “Where the learning begins: Initial representations for language learning.” In The Cambridge Linguistic Survey, Vol. 3, F. Newmeyer (ed.). Cambridge, MA: Harvard University Press.
Gottlieb, G. and Krasnegor, N. A. (eds.) 1985. Measurement of audition and vision during the first year of postnatal life: A methodological overview. Norwood, NJ: Ablex.
Hirsh-Pasek, K., Kemler Nelson, D. G., Jusczyk, P. W., Wright Cassidy, K., Druss, B. and Kennedy, L. 1987. “Clauses are perceptual units for young infants.” Cognition 26: 269–286.
Höhle, B. and Weissenborn, J. 1998. “Sensitivity to closed-class elements in preverbal children.” In Proceedings of the 22nd Annual Boston University Conference on Language Development, A. Greenhill, M. Hughes, H. Littlefield and H. Walsh (eds.). Somerville, MA: Cascadilla Press.
Hohne, E. A. and Jusczyk, P. W. 1994. “Two-month-old infants’ sensitivity to allophonic differences.” Perception & Psychophysics 56: 613–623.
Houston, D., Jusczyk, P. W. and Newsome, M. 1995. “Infants’ strategies of speech segmentation: Clues from weak/strong words.” Paper presented at the 20th Annual Boston University Conference on Language Acquisition, Boston, MA.
Houston, D., Jusczyk, P. W. and Tager, J. 1998. “Talker-specificity and the persistence of infants’ word representations.” In Proceedings of the 22nd Annual Boston University Conference on Language Development, A. Greenhill, M. Hughes, H. Littlefield and H. Walsh (eds.). Somerville, MA: Cascadilla Press.
Jusczyk, P. W. and Aslin, R. N. 1995. “Infants’ detection of sound patterns of words in fluent speech.” Cognitive Psychology 29: 1–23.
Jusczyk, P. W., Cutler, A. and Redanz, N. 1993. “Preference for the predominant stress patterns of English words.” Child Development 64: 675–687.
Jusczyk, P. W., Friederici, A. D., Wessels, J., Svenkerud, V. Y. and Jusczyk, A. M. 1993. “Infants’ sensitivity to the sound patterns of native language words.” Journal of Memory and Language 32: 402–420.
Jusczyk, P. W., Hirsh-Pasek, K., Kemler Nelson, D. G., Kennedy, L., Woodward, A. and Piwoz, J. 1992. “Perception of acoustic correlates of major phrasal units by young infants.” Cognitive Psychology 24: 252–293.
Jusczyk, P. W. and Hohne, E. A. 1997. “Infants’ memory for spoken words.” Science 277: 1984–1986.
Jusczyk, P. W., Hohne, E. A. and Bauman, A. 1999. “Infants’ sensitivity to allophonic cues for word segmentation.” Perception & Psychophysics 61: 1465–1476.
Jusczyk, P. W. and Kemler Nelson, D. G. 1996. “Syntactic units, prosody, and psychological reality during infancy.” In Signal to Syntax: Bootstrapping from Speech to Grammar in Early Acquisition, J. L. Morgan and K. Demuth (eds.). Mahwah, NJ: Erlbaum.
Katz, N., Baker, E. and Macnamara, J. 1974. “What’s in a name? A study of how children learn common and proper nouns.” Child Development 45: 469–473.
Kemler Nelson, D. G., Hirsh-Pasek, K., Jusczyk, P. W. and Wright-Cassidy, K. 1989. “How prosodic cues in motherese might assist language learning.” Journal of Child Language 16: 55–68.
Kemler Nelson, D. G., Jusczyk, P. W., Mandel, D. R., Myers, J., Turk, A. and Gerken, L. A. 1995. “The Headturn Preference Procedure for testing auditory perception.” Infant Behavior & Development 18: 111–116.
Kuhl, P. K. 1980. “Perceptual constancy for speech-sound categories in early infancy.” In Child phonology: Perception, Vol. 2, G. H. Yeni-Komshian, J. F. Kavanagh and C. A. Ferguson (eds.). New York: Academic Press.
Kuhl, P. K. 1983. “Perception of auditory equivalence classes for speech in early infancy.” Infant Behavior and Development 6: 263–285.
Lasky, R. E., Syrdal-Lasky, A. and Klein, R. E. 1975. “VOT discrimination by four to six and a half month old infants from Spanish environments.” Journal of Experimental Child Psychology 20: 215–225.
Mandel, D. R., Jusczyk, P. W. and Kemler Nelson, D. G. 1994. “Does sentential prosody help infants to organize and remember speech information?” Cognition 53: 155–180.
Mandel, D. R., Kemler Nelson, D. G. and Jusczyk, P. W. 1996. “Infants remember the order of words in a spoken sentence.” Cognitive Development 11: 181–196.
Morgan, J. L. 1986. From simple input to complex grammar. Cambridge, MA: MIT Press.
Morgan, J. L., Allopenna, P. and Shi, R. 1996. “Perceptual bases of rudimentary grammatical categories: Toward a broader conception of bootstrapping.” In Signal to Syntax: Bootstrapping from Speech to Grammar in Early Acquisition, J. L. Morgan and K. Demuth (eds.). Mahwah, NJ: Erlbaum.
Morgan, J. L. and Demuth, K. (eds.) 1996. Signal to Syntax: Bootstrapping from Speech to Grammar in Early Acquisition. Mahwah, NJ: Erlbaum.
Morgan, J. L., Meier, R. P. and Newport, E. L. 1987. “Structural packaging in the input to language learning: Contributions of prosodic and morphological marking of phrases to the acquisition of language?” Cognitive Psychology 19: 498–550.
Myers, J., Jusczyk, P. W., Kemler Nelson, D. G., Charles-Luce, J., Woodward, A. and Hirsh-Pasek, K. 1996. “Infants’ sensitivity to word boundaries in fluent speech.” Journal of Child Language 23: 1–30.
Newsome, M. and Jusczyk, P. W. 1995. “Do infants use stress as a cue for segmenting fluent speech?” In Proceedings of the 19th Annual Boston University Conference on Language Development, Vol. 2, D. MacLaughlin and S. McEwen (eds.). Somerville, MA: Cascadilla Press.
Saffran, J. R., Aslin, R. N. and Newport, E. L. 1996. “Statistical learning by 8-month-old infants.” Science 274: 1926–1928.
Saffran, J. R., Newport, E. L. and Aslin, R. N. 1996. “Word segmentation: The role of distributional cues.” Journal of Memory and Language 35: 606–621.
Santelmann, L. and Jusczyk, P. W. 1998. “18-month-olds’ sensitivity to relationships between morphemes.” In Proceedings of the 22nd Annual Boston University Conference on Language Development, A. Greenhill, M. Hughes, H. Littlefield and H. Walsh (eds.). Somerville, MA: Cascadilla Press.
Santelmann, L. M. and Jusczyk, P. W. 1997. “What discontinuous dependencies reveal about the size of the learner’s processing window.” In Proceedings of the 21st Annual Boston University Conference on Language Development, E. Hughes, M. Hughes and A. Greenhill (eds.). Somerville, MA: Cascadilla Press.
Shady, M. E. 1996. Infants’ sensitivity to function morphemes. Unpublished Doctoral Dissertation, State University of New York at Buffalo.
Shafer, V., Gerken, L. A., Shucard, J. and Shucard, D. 1992. “‘The’ and the brain: An electrophysiological study of infants’ sensitivity to English function morphemes.” Paper presented at the Boston University Conference on Language Development, Boston, MA.
Stager, C. L. and Werker, J. F. 1997. “Infants listen for more phonetic detail in speech perception than in word-learning tasks.” Nature 388: 381–382.
Streeter, L. A. 1976. “Language perception of 2-month old infants shows effects of both innate mechanisms and experience.” Nature 259: 39–41.
Trehub, S. E. 1973. “Infants’ sensitivity to vowel and tonal contrasts.” Developmental Psychology 9: 91–96.
Trehub, S. E. 1976. “The discrimination of foreign speech contrasts by infants and adults.” Child Development 47: 466–472.
Werker, J. F. and Lalonde, C. E. 1988. “Cross-language speech perception: Initial capabilities and developmental change.” Developmental Psychology 24: 672–683.
Werker, J. F. and Tees, R. C. 1984. “Cross-language speech perception: Evidence for perceptual reorganization during the first year of life.” Infant Behavior and Development 7: 49–63.
Contributions of Prosody to Infants’ Segmentation and Representation of Speech

Catharine H. Echols
The University of Texas at Austin
The word is a fundamental building block of the sentence. Consequently, infants will have to identify words in the speech stream before they can begin to identify the syntactic structure of sentences. Just as young language learners may use a variety of different cues to identify syntactic structure, there are likely to be a number of different cues that indicate words in speech. Prosody may provide one source of cues. Two types of potential prosodic cues that may contribute importantly to the segmentation of words from speech are perceptually salient syllables and rhythm. To determine whether hypothesized cues actually contribute to word-level segmentation, we need to answer three questions: (1) Are the proposed cues available in the input? (2) Are infants aware of the proposed cues? (3) Do infants actually use the proposed cues in segmenting speech? Over the last few years, we have obtained an increasing amount of knowledge about all three of these questions. Research from several labs, including my own, has provided evidence that infants are sensitive to prosodic cues (e.g., Echols, Crowhurst & Childers 1997; Jusczyk, Cutler & Redanz 1993). Recent research also indicates that infants can make use of these cues to segment words from speech (e.g., Echols et al. 1997; Newsome & Jusczyk 1995; Morgan 1996). Furthermore, evidence is accumulating regarding the prosodic cues that are available in speech (e.g., Kelly & Martin 1994) and, in particular, in the speech directed to infants (e.g., Albin & Echols 1996; Bernstein Ratner 1986; Fernald & Mazzie 1991; Fernald & Simon 1984; Morgan 1986).
Prosodic cues to word-level segmentation

In this chapter, I will discuss research from our lab showing that prosodic cues are present in the input, are perceived by infants, and are used to segment words from the speech stream. I also will propose an account that interrelates the two types of cues, salient syllables and rhythmic cues. I will begin with research pertaining to the second type of cue, that is, rhythmic cues, after providing some background on what motivates this line of research. For some time, I have been arguing that one route into word-level segmentation is the following: Certain syllables, specifically stressed and final syllables, may be especially salient. Because these syllables are salient, they are readily extracted from the speech stream and stored as part of the initial representation for a word. Thus, on this account, children avoid the difficult problem of identifying boundaries between each of the words in a sentence by extracting the salient syllables and leaving behind the remaining speech input as unanalyzed sound (Echols & Newport 1992; see also Gleitman & Wanner 1982 and Peters 1983). In earlier work, I provided a fair amount of data that appeared to support this position. For example, children are far more likely to omit from their early productions syllables that are both unstressed and nonfinal than syllables that are either stressed or final (Echols & Newport 1992). However, most of the data were from children’s productions. A number of researchers, many of whom also are represented in this volume, have presented alternate accounts for those data (e.g., Allen & Hawkins 1980; Demuth 1996; Fikkert 1994; Gerken 1994a, 1994b; Pater & Paradis 1996; Wijnen, Krikhaar & den Os 1994). Specifically, these researchers have argued that production factors, such as tendencies to adhere to a trochaic (strong-weak) metrical template, can account for children’s weak syllable omissions.
It now appears that there is good evidence that metrical factors do contribute to weak syllable omissions. However, even if metrical factors contribute to weak syllable omissions at some point in development, that does not exclude the possibility that perceptual or attentional factors could contribute to the initial segmentation of words from speech. In fact, the metrical template account suggests one way in which perceptual/attentional factors also may contribute to segmentation: There could be a perceptual/attentional component to the trochaic preferences proposed for production. As Jusczyk (in this volume) has noted, it could be very useful to infants to notice the regularity in English wherein disyllabic words tend to adhere to a trochaic stress pattern: Infants would then expect strong-weak sequences to cohere as a unit, and so may tend to extract such units from the speech stream. (Such a bias might be particularly strong for infants acquiring a Texas dialect, in which words that typically have other stress patterns become trochaic, as in GUItar.) Jusczyk (this volume) presents some work supporting the idea that infants can take advantage of English rhythmic regularities for segmentation, though with a caveat. We also have obtained some such evidence.
Rhythm as a segmentation cue: Experiment 1

A first study explored the question of whether trochaic units cohere for young infants (see also Echols, Crowhurst & Childers 1997). In this study, English-hearing 7- and 9-month old infants (32 of each age) were familiarized to three-syllable nonsense sequences containing stress on the medial syllable; the sequences were composed of CV syllables (e.g., mobúti, dabíga). Following familiarization, infants were tested on sequences containing changes in stressed or unstressed syllables, and their reactions to those changes were assessed. The general set-up is similar to that for a head-turn preference procedure (Fernald 1985; Kemler Nelson et al. 1995). I will describe the general procedure, then turn to specific hypotheses and results for individual studies.

Set-up and procedure

The experimental set-up is shown in Figure 1. The infant is seated on a parent's lap facing a green light and, behind that, a one-way mirror. On either side of the infant are pegboard partitions. The partitions conceal audio speakers that are used for the presentation of speech stimuli. In front of each concealed speaker is a red light. The parent wears headphones, over which masking speech is played. Two researchers are concealed behind the one-way mirror, one to play the speech sounds and the other to control the lights and time the infants' looking behavior. The green center light is flashed first to get the infant's attention to the center, then the red light to one side of the infant begins to flash. A trial begins with the presentation of a speech stimulus once the infant turns to the appropriate side. In the first experiment, the stimuli were played repeatedly during each familiarization and test trial until the infant looked away from the speaker for 2 s or until 30 s of looking time had accumulated. (In subsequent experiments, stimuli were played a fixed number of times during familiarization trials.)
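The trial-contingent timing just described (a trial ends once the infant looks away for a continuous 2 s, or once 30 s of looking has accumulated) amounts to a simple polling loop. The sketch below is an illustrative reconstruction, not the lab's actual control software; the gaze-sampling callback and the 100 ms polling interval are assumptions.

```python
def run_trial(get_gaze_sample, poll_interval=0.1,
              lookaway_limit=2.0, max_looking=30.0):
    """Accumulate looking time until the infant looks away for a
    continuous `lookaway_limit` seconds, or `max_looking` seconds
    of looking accrue.

    `get_gaze_sample()` is a hypothetical callback returning True
    while the infant is judged to be looking toward the speaker
    (in practice, an observer's button press behind the mirror).
    """
    look_samples = 0
    away_samples = 0
    max_samples = round(max_looking / poll_interval)
    away_max = round(lookaway_limit / poll_interval)
    while look_samples < max_samples:
        if get_gaze_sample():
            look_samples += 1
            away_samples = 0   # a re-look resets the continuous look-away
        else:
            away_samples += 1
            if away_samples >= away_max:
                break          # 2 s of continuous look-away ends the trial
    return look_samples * poll_interval
```

The returned accumulated looking time per trial is the dependent measure analyzed in the studies below.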
After four training (or familiarization) trials, four test trials were presented, with equal numbers of training and test trials being presented over the left and right speakers. During the training period, participants were familiarized to two trisyllabic nonsense-word speech sequences (e.g., bəgúdi and dəbígə), with one
Figure 1. Experimental set-up: the infant sits on the parent's lap facing the green center light and the one-way mirror; pegboard partitions on either side conceal a speaker and a red light; behind the mirror are the experimenters, a camera, and computers for timing and stimulus presentation.
being presented consistently on the left of the infant and one presented from the speaker on the right. Infants were then tested with variants of the familiarization sequence containing a 250 ms pause either prior to or subsequent to the stressed syllable. For example, for the familiarization set described above, an infant might hear during the test trials bə_gúdi over the speaker to the left and dəbí_gə over the speaker to the right. (Side of presentation and pause location were counterbalanced across participants.) The prediction was that, if infants expected trochaic units to cohere, then they should perceive those sequences that maintain the coherence of the trochaic units as more natural than those that do not. Prior research in which pauses were inserted into speech has shown that infants prefer speech that maintains the coherence of linguistic units, such as clauses or phrases, over that in which these units are disrupted (e.g., Hirsh-Pasek et al. 1987; Kemler Nelson et al. 1989; Jusczyk et al. 1992). Consequently, we predicted that infants should prefer the
sequences containing pre-stress pauses, which preserve the trochee, over those containing trochee-disrupting post-stress pauses. Because 9-month olds are closer to the beginnings of language than 7-month olds, and have more experience with their native language, we expected that these older infants would be more likely to respond to the disruption than the younger infants.

Results and discussion

Results supported the predictions: As can be seen in Figure 2, the 9-month old infants listened longer to the sequences containing the pre-stress pauses than to those containing the post-stress pauses, F(1,28) = 4.64, p < .05. In contrast, the 7-month olds listened equally long to the two sequences.
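The 250 ms pause manipulation used to construct these test stimuli can be sketched as a simple waveform edit: silence spliced in at a marked syllable boundary either before or after the stressed syllable. This is an illustrative reconstruction, not the original stimulus-preparation procedure; the sample rate and the idea of passing hand-marked boundary indices are assumptions.

```python
import numpy as np

def insert_pause(waveform, boundary_sample, pause_ms=250, sr=22050):
    """Splice `pause_ms` of silence into `waveform` at `boundary_sample`.

    For a sequence like bəgúdi, a boundary index just before the
    stressed syllable yields the pre-stress (trochee-preserving)
    variant; a boundary just after it yields the trochee-disrupting
    post-stress variant.
    """
    silence = np.zeros(int(sr * pause_ms / 1000), dtype=waveform.dtype)
    return np.concatenate([waveform[:boundary_sample],
                           silence,
                           waveform[boundary_sample:]])
```

The same function covers both conditions; only the boundary index changes, so the two variants are acoustically identical apart from pause location.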
Figure 2. Mean looking times for sequences containing pre-stress versus post-stress pauses (9-month olds vs. 7-month olds)
These findings are consistent with the prediction that 9-month old infants expect trochaic sequences to cohere as a unit: These infants preferred sequences with pre-stress pauses, which thus retained the coherence of the trochaic sequence, over those with post-stress pauses, in which the coherence of the trochee was violated. In contrast, the 7-month olds showed no evidence of a preference, an
observation consistent with the possibility that they are not yet aware of this property of the native language. Interestingly, the timing of this change is in keeping with the developmental timetables for other sensitivities to properties of the native language, such as consonant contrasts (Werker & Lalonde 1988; Werker & Tees 1984); it also is consistent with evidence of developing preferences for English-typical rhythm patterns (Jusczyk et al. 1993). Furthermore, these findings corroborate data from Morgan (1996), using different methodologies, showing that 9-month olds but not 6-month olds are biased to treat trochaic sequences as coherent. However, evidence from children acquiring languages with typical rhythm patterns different from those found in English would be necessary to confirm that the observed developmental change is indeed a response to properties of the ambient language.

Rhythmic cues: Experiment 2

The results of Experiment 1 suggest that infants expect the trochaic unit to cohere; however, they do not show that this expectation is actually used in word-level segmentation; that is, these results do not address the third of the three questions posed at the beginning of this chapter. Consequently, another experiment was conducted to assess whether infants could extract trochaic sequences from a longer string of speech more successfully than rhythmic units that are less typical of their native language (i.e., weak-strong, or iambic, sequences). (Additional details of the procedure and results are discussed in Echols et al. 1997, and Childers & Echols 1996.) The participants in this experiment were 32 English-hearing 9-month old infants; 7-month olds did not participate in this experiment because Experiment 1 left no reason to expect that the younger infants had discovered that trochaic rhythm is typical of English, so it would be highly improbable that they could use such knowledge for word-level segmentation.
Set-up and procedure

The set-up and procedure of this second study were similar to those of the first. However, there is an important difference in the logic of the design. Experiment 2 makes use of the habituation logic, wherein the goal is to bore infants so that they will exhibit a novelty response to any noticeable change. (In this sense, the logic of Experiment 2 also differs from the logic of similar experiments reported by Jusczyk and colleagues.) Specifically, in Experiment 2, infants were presented, during an initial familiarization period, with two different four-syllable nonsense sequences, one from each side; one sequence contained an embedded
trochaic disyllable and one an embedded iambic disyllable. Each sequence was repeated a sufficient number of times that infants tended to become disinterested in them. During the test portion of the experiment, then, infants heard the trochaic and iambic disyllables now extracted from the longer string of speech, as well as a trochaic and an iambic sequence that had not previously been heard (i.e., a distracter). Examples of familiarization and test stimuli are shown in Table 1. Stimuli were counterbalanced such that "words" serving as targets for half of the infants were distracters for the other half of the infants; side of presentation and ordering of test trials also were counterbalanced.

Table 1. Speech stimuli for Experiment 2

                          Trochaic          Iambic
Familiarization           pə méIdər són     mús tərpót nəd
Target test item          méIdər            tərpót
Distracter test item      wótbən            fəlú
The prediction was that, during familiarization, infants would extract the trochaic sequence from the longer string. At test, then, they should recognize the trochaic sequence as familiar, so should find it uninteresting. Consequently, they should prefer the novel trochaic distracter. In contrast, infants should fail to extract the iambic sequence from the longer string: They should fail to recognize this "word" as familiar at test, so should be equally interested in the iambic target and the iambic distracter; in essence, both would be perceived as equally "new."

Results and discussion

The prediction that infants would extract trochaic sequences more readily than iambic sequences was supported: As can be seen in Figure 3, infants looked significantly longer for the trochaic distracter than for the trochaic target, t(31) = 2.88, p < .008; for the iambic stimuli, in contrast, looking time to the target did not differ from that to the distracter, t(31) = .46, ns. This finding is consistent with the view that infants are extracting, and thus becoming bored with, the trochaic targets, with the result that they prefer the novel stimulus; they apparently fail to extract the iambic target, so treat the extracted sequence as if it were new at test, and therefore are as interested in it as they are in the truly novel distracter. Though suggesting that trochaic disyllables are extracted more readily than iambic disyllables from the stream of speech, this finding should not be taken to
Figure 3. Mean looking times for extracted target and distracter sequences (trochaic vs. iambic conditions)
suggest that infants are incapable of extracting iambic sequences. In another version of the experiment (Childers & Echols 1996), we increased the amount of familiarization time that infants experienced. In this version of the experiment, infants showed an overall novelty preference, preferring not only the trochaic distracter over the target, t(31) = 2.35, p < .05, but also the iambic distracter over the target, t(31) = 3.35, p < .01 (see Figure 4). With enough exposure, infants apparently can extract "words" with rhythmic patterns that are both typical and atypical of English. Rhythm appears to be a valuable cue for extracting words from the speech stream, increasing the efficiency of segmentation, but it cannot be the only cue that these young infants use to identify words in the speech stream. Nonetheless, this set of findings does indicate not only that English-hearing infants are sensitive to typical rhythm patterns of the native language, but also that they can use that knowledge to help them to identify words in speech. Given these indications that, by 9 months of age, infants have identified typical rhythm patterns in their native language and can use that knowledge to extract words from the speech stream, we might then ask where that knowledge originates. Jusczyk (in this volume) has proposed one possible explanation, that because names (and especially nicknames) in English are very likely to conform
Figure 4. Mean looking times for extracted target and distracter sequences following expanded familiarization period
to a trochaic disyllable, and because infants appear to learn their names very early, names may provide an initial indication for infants of the typical English word form. The account that I will propose differs from this one. Before turning to an alternate account for the origin of a rhythmic bias, I should return to the question of whether stressed and final syllables are salient. In particular, I will explore the question of why we might even expect final syllables to be salient. It is easy to see why stressed syllables should be salient: Stressed syllables in English tend to be louder, longer and higher pitched than unstressed syllables, and some subset of these cues tends to be associated with stress in other languages (Lehiste 1970). Why, however, might final syllables be salient? Indeed, it may seem a bit circular to propose final salience as a cue to word-level segmentation; it would seem that the child would need to know which syllables were final (thus requiring knowledge of word boundaries) in order for final salience to serve as a cue. However, word-final syllables also have the potential for being sentence-final; indeed, as Fernald and Mazzie (1991) have shown, this is especially true when parents are introducing a new word to a child, presumably a common situation in early word learning. Although explicit cues to word boundaries frequently are absent (Cole & Jakimik 1980; Hayes &
Clark 1970), sentence boundaries tend to be rather well marked, for example with pauses or clear lengthening of final syllables. Sentence-final syllables may be relatively easy to break off and retain in memory, possibly as a result of the "recency effect" that is well documented in memory research. I will suggest another reason why final syllables are salient. As noted above, final syllables tend to be lengthened even in adult-directed speech (Klatt 1976). Drema Albin, then an undergraduate in my lab, noted that this final lengthening may be especially exaggerated in infant-directed speech. We decided to test this possibility by analyzing productions of mothers that were directed toward their 6- and 9-month old infants, and comparing the final lengthening in those productions to that observed in adult-directed utterances (see also Albin & Echols 1996).
The salience of final position

Design and procedure

We recorded the speech of 16 mothers; half were mothers of 6-month olds and half were mothers of 9-month olds, and all were native speakers of English. The recordings were collected in semi-naturalistic contexts in the mothers' homes. To increase the ease of making comparisons, researchers brought a consistent set of toys to the homes; toys were selected that had names that varied in length and stress pattern. During the researchers' visit, mothers were asked to identify the objects to the researcher and then to introduce them to the child. The seven target words were extracted from the mothers' utterances and transferred into a soundfile on a Macintosh computer. The duration, pitch and amplitude were analyzed from the soundfiles using Signalyze, a speech analysis package. Duration was measured (in ms) for the whole word and for each syllable; syllable boundaries were identified both visually and auditorily. Measures of pitch peaks and peak amplitude also were taken for each word and for each syllable within a word.

Results and discussion

As predicted, final syllables were of greater duration in infant-directed than in adult-directed speech, F(1,12) = 39.95, p < .001; they also were higher in peak pitch and in peak amplitude, F(1,12) = 5.93, p < .05 and F(1,12) = 8.23, p < .02, respectively. When analyzed on a word-by-word basis, final syllable duration
was significantly greater in infant-directed speech for all but one of the target words (see Figure 5). However, this finding could indicate little more than what is already known: Infant-directed speech is slower and higher pitched than adult-directed speech. The crucial question is whether final syllables are selectively lengthened in infant-directed speech. To answer this question, we compared the ratio of final syllable to whole word in infant-directed versus adult-directed speech. The ratio was greater in infant-directed than in adult-directed speech; that is, the final syllables took up a larger part of the word in infant-directed speech than in adult-directed speech, F(1,12) = 14.38, p < .005. Thus, even considering the lengthening that typically occurs in infant-directed speech, the lengthening of final syllables is exaggerated. The comparable measure for peak pitch did not reach significance, suggesting that pitch was higher across the whole word in infant-directed than in adult-directed speech, not only on the final syllable.

Figure 5. Final syllable mean duration in infant- and adult-directed speech, for the target words hippo, alligator, bracelet, paintbrush, elephant and kangaroo (the difference was significant, at least p < .05, for all words except hippo)
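The crucial ratio measure is straightforward to compute once per-syllable durations are in hand. The sketch below is a minimal illustration of that computation, assuming syllable durations have already been measured from hand-marked boundaries as described above; the function names and example values are hypothetical, not the study's data.

```python
def final_syllable_ratio(syllable_durations_ms):
    """Proportion of a word's total duration occupied by its final syllable."""
    return syllable_durations_ms[-1] / sum(syllable_durations_ms)

def mean_final_ratio(words):
    """Mean final-syllable ratio over a list of words, where each word
    is a list of per-syllable durations in ms."""
    ratios = [final_syllable_ratio(w) for w in words]
    return sum(ratios) / len(ratios)
```

Comparing the mean ratio for infant-directed versus adult-directed tokens of the same words asks whether final lengthening is selective, rather than a by-product of the overall slowing of infant-directed speech.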
It should be noted that most of the words contributing to the analyses both of infant-directed and adult-directed speech were in utterance-final position. This does not render these data meaningless: As noted above, Fernald and Mazzie (1991) have shown that words being introduced to young children are very likely
to be placed in final position. Furthermore, the situation in which a new word is being introduced to a child will tend to be a highly important word-learning situation. However, the segmentation task is simplified when a novel word occurs in final position, if only because merely a single boundary must be identified. If final lengthening is to be truly useful for segmentation, then it should be present in utterance-internal position as well as utterance-final position. Indeed, we have some findings to support this possibility. Although there were insufficient adult-directed instances of the target words in utterance-medial position to permit a comparison between adult- and infant-directed speech, it was possible to compare the length of syllables in final position in utterance-internal words against those in nonfinal positions in utterance-internal words. In utterance-internal locations, final unstressed syllables were significantly longer than nonfinal unstressed syllables, t(9) = 3.20, p < .02. Comparable analyses for pitch and amplitude did not reach significance. These findings are useful to a theoretical account that depends on the salience of both stressed and final syllables because they suggest that, instead of requiring two different mechanisms, the salience of both stressed and final syllables may derive from acoustic features. In other words, "stressed and final" may be reducible to "prosodically highlighted". Having simplified the account in this way, we can return to the questions of the salience of stressed syllables and of their role in the identification of rhythmic properties of the native language.
The perceptual salience of stressed syllables

In prior research, I have provided evidence from children's early productions consistent with the proposal that stressed and final syllables are particularly salient to young language learners: Stressed and final syllables are more likely than unstressed nonfinal syllables to be included in the productions of one-word speakers (Echols & Newport 1992; see also Klein 1981). Furthermore, even where unstressed nonfinal syllables are incorporated into early productions, they tend to be less accurate than stressed or final syllables, suggesting that the representation for those syllables may be less precise (Echols 1993; Echols & Newport 1992). In imitation studies using nonsense word stimuli, designed so that stress level is manipulated, two-year olds more frequently omit unstressed than stressed syllables (Hura & Echols 1996; see also Blasdell & Jensen 1970). However, all of this prior research is based on children's productions; it can be argued (and has been argued) that children may represent those unstressed
nonfinal syllables, but may fail to produce them due to various production constraints (e.g., Allen & Hawkins 1980; Gerken 1994a, 1994b). What is needed to document the perceptual salience of stressed and final syllables are studies showing that children perceive these syllables more accurately than unstressed, nonfinal syllables. Some prior research does speak to this issue. In a study with two-month old infants, Jusczyk and Thompson (1978) found that stress was not necessary for infants to distinguish a contrast between [b] and [g] in two-syllable sequences. In contrast, Karzon (1985) found not only that stress was necessary for 1- to 4-month old infants to distinguish [r] from [l] in trisyllabic sequences, but that only the exaggerated stress found in infant-directed speech was sufficient to permit discrimination of the change. The differing results could derive from several features of the studies, including the length of the stimuli and the type of contrast. In any event, the participants in both studies were very young, far from the beginnings of language learning; young infants may focus on different attributes of the signal than infants who are approaching language. Consequently, we conducted a study to investigate the perception of stressed and final syllables by 9-month old infants.

Design and procedure

The participants in this study were 32 English-hearing 9-month old infants. The set-up was similar to that employed in Experiment 1. Half of the infants participated in a medial stress condition and half in a final stress condition. Infants in the medial stress condition heard a trisyllabic nonsense word sequence with stress in medial position (e.g., mobúti) whereas those in the final stress condition heard similar sequences with final stress (e.g., mobutí).
Following familiarization, infants in both conditions heard two types of test trials, one with a change in the medial syllable and one with a final syllable change (i.e., a change in a stressed and in an unstressed syllable). For example, if familiarized with mobúti, the infant would hear two types of test stimuli, modúti and mobúpi (with changes to the medial and final syllables, respectively). Stimuli were constructed so that the syllable that was in medial position for half of the infants was in final position for the other half (i.e., another group of infants would be familiarized with motíbu and tested with mopíbu and motídu). An example sequencing of trials is shown in Table 2. Side of presentation was counterbalanced. It was expected that infants should become disinterested in the familiarization sequence; at test, then, infants should show increased interest (as evidenced by longer looking times) in those stimuli containing noticeable changes, and they
Table 2. Example sequencing of speech stimuli for stress experiment

                              Left              Right
Familiarization trials 1-4    mobúti, mobúti    mobúti, mobúti
Test trials 1-4               modúti, modúti    mobúpi, mobúpi

Note. Stimuli were presented multiple times on each trial.
should show less interest in any stimuli for which they failed to recognize changes. They should be most likely to notice changes in salient syllables, that is, in stressed and final syllables.

Results and discussion

Predictions were supported, at least in part. Infants attended significantly longer to stimuli containing changes in final syllables, t(31) = 2.2, p < .05, and marginally longer to stimuli containing changes in stressed syllables, t(31) = 1.9, p ≈ .067. As can be seen in Figure 6, the effects of stress and position are additive; that is, infants attended least to changes in an unstressed nonfinal syllable, about equally to changes in stressed and in final syllables, and most to changes in syllables that were both stressed and final. These results tend to support the view that stressed or final syllables are attended to and represented more precisely by 9-month old infants than syllables that are unstressed and nonfinal.
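The within-subject comparisons reported throughout this chapter (e.g., t(31) = 2.2) are paired t-tests on each infant's looking times in two conditions. A minimal sketch of the statistic, with hypothetical looking-time values (scipy.stats.ttest_rel would give the same result, plus the p-value):

```python
import math

def paired_t(x, y):
    """Paired t statistic and degrees of freedom for two within-subject
    conditions (e.g., looking times to change vs. no-change trials)."""
    diffs = [a - b for a, b in zip(x, y)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    return mean / math.sqrt(var / n), n - 1

# Hypothetical looking times (s) for four infants, change vs. control trials:
t, df = paired_t([10.0, 8.0, 9.0, 7.0], [6.0, 5.0, 7.0, 4.0])
```

The pairing matters: each infant serves as his or her own control, so individual differences in overall looking time do not inflate the error term.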
Distinguishing final salience from attention to trochees

In the previous experiment, unstressed final syllables were always part of a trochaic sequence. Consequently, it is not possible to determine whether those syllables were salient because they were final, or whether those syllables were represented precisely by the infants because they were extracted and stored as
Figure 6. Mean looking times for consonant changes in syllables of different stress levels and positions (stressed vs. unstressed, nonfinal vs. final)
part of a trochaic sequence. An approach similar to that used to identify attention to stressed and final syllables can be used to distinguish segmentation based on attention to stress and final position from segmentation based on trochaic rhythm.

Design and procedure

In this version of the experiment, all familiarization stimuli were stressed on the initial syllable (e.g., móbuti). Changes during the test trials were either in the medial unstressed syllable or in the final unstressed syllable (e.g., móduti and móbupi, respectively). In addition, because infants might attend both to the final syllable and to the trochaic sequence, thus resulting in no differences in looking behavior across the two types of test trials, a no-change control (e.g., móbuti) was added. The sequencing of trials was therefore modified to permit two no-change control test trials to be added, one presented on each side. (Because these additional trials increased the length of the experiment, the familiarization period was reduced in this version.) If the infants are focusing on stressed and final syllables, and preferentially storing those salient syllables, then they should show
longer looking for the final change than for the medial change or the no-change control. If infants are adhering to a strategy of extracting trochees, then they should attend longer to the medial change stimuli than to the two other types. Finally, it is possible that infants are attending both to trochees and to final syllables, in which case looking to both types of stimuli should exceed that to the control.

Results and discussion

In a first set of analyses, it did not appear that any differences were present. However, we had used two different sets of target sequences, each containing a different set of consonant contrasts. In addition to the stop consonants with place contrasts present in the móbuti series, we had a series based on lífosa, in which fricatives contrasted on the basis of voicing (e.g., test stimuli lívosa and lífoza). We noticed that performance was poorer with the fricatives/voicing contrast. When we examined only those stimuli that used the place contrast, clear results emerged: As can be seen in Figure 7, infants looked significantly longer for the change in the syllable that contributed to a trochaic rhythm pattern than for the final change stimuli or the control, t(23) = 2.2, p < .05 and t(23) = 3.1, p < .05, respectively. These results are consistent with the prediction that 9-month old infants will tend to extract and store trochaic sequences; they do not support the view that 9-month olds focus on and extract final syllables. Given the prior findings from children's early productions suggesting that attention to final syllables may be important well into the one-word stage, it may seem surprising that the 9-month old infants in this study did not appear to attend to final syllables. Given that words in English tend to adhere to a trochaic rhythm pattern, many of the final syllables retained in early productions would be part of a trochaic sequence in the adult target.
Perhaps there is nothing more to the apparent salience of final syllables than a tendency to attend to trochaic sequences. However, although many of the final syllables retained in the utterances described in Echols and Newport (1992) were part of a trochee, children frequently retained unstressed final syllables that were not part of a trochaic sequence (see also Pater & Paradis 1996, though they propose a different explanation for what they describe as "the elephant problem"). It also is possible that final syllables are retained in productions but not preferentially attended to in these perception studies because the processes determining the form of productions are not the same as those underlying segmentation. However, there also is another explanation. In the account proposed earlier in this chapter, final syllables are salient because they are prosodically highlighted (i.e.,
Figure 7. Mean looking times for consonant changes in unstressed syllables in different positions (medial change, final change, and no-change control)
lengthened). Unfortunately, because of the simple syllable structures and due to efforts in recording the stimuli to maintain as much consistency as possible across syllables, very little final lengthening was present in these stimuli. Consequently, the final syllables in these stimuli may have been at a disadvantage compared to final syllables in typical infant-directed speech. We presently are conducting additional research to explore this possibility.
A developmental account of word-level segmentation

Building on the data reported here, coupled with a fair amount of speculation, I will propose an account of the origins of word-level segmentation. In the beginning, stressed and final syllables will, by virtue of their prosodic highlighting, be salient to young children. Children will attend preferentially to these syllables and will tend to extract and store them in their initial representations for words. Because the trochaic disyllable is a highly common word form in English, and because stress tends to fall on the penultimate syllable even in longer English words, the extraction of stressed and final syllables will, with a high degree of frequency, result in the extraction of a trochaic sequence. Over time, children will come to recognize this common stress pattern, will come to expect
that it characterizes words in English, and then will use that knowledge (along with other cues) to help them to identify words in the speech stream. When they begin to produce words, they may use this knowledge to construct a metrical template upon which they form words (e.g., Gerken 1994a). This proposed account would predict that children should start out attending to and extracting stressed and final syllables, then move to the more language-specific strategy of extracting trochaic sequences. Based on our prior research suggesting that 7-month olds do not yet expect trochaic sequences to cohere, we might expect that 7-month olds also would not attend to post-stress medial syllables. We have some very preliminary support for this prediction. We have begun to conduct, with 7-month old infants, a study like the trochaic-versus-final study reported above. The data so far suggest that these younger infants are not as attentive to the unstressed syllable that forms the second part of the trochaic sequence as are the 9-month olds; instead, the 7-month olds show some tendency to attend to the final syllable. However, with only 12 infants, none of the differences are significant, and thus these results must be treated with substantial caution.
Conclusion

I have provided some evidence regarding each of the questions that I posed at the beginning of this chapter. With respect to the first question, we have shown that final syllables in speech directed to infants are prosodically highlighted, indicating that final salience is indeed present in the speech signal as a possible segmentation cue. We also have contributed evidence pertaining to the second question, that of whether infants are aware of the proposed cues: We have shown that infants attend to stressed and final syllables and that, by 9 months of age, they have come to expect trochaic sequences to cohere. These findings contribute to a burgeoning body of research indicating that, by the latter half of the first year, infants have a substantial amount of knowledge about properties of their native languages, including many cues that will be valuable in word identification and other aspects of language learning (e.g., Echols et al. 1997; Friederici & Wessels 1993; Jusczyk et al. 1993; Jusczyk, Luce & Charles-Luce 1994; Morgan 1996). Finally, the trochaic/iambic segmentation study provides evidence that English-hearing infants not only are aware that trochaic rhythm is typical of their native language, but also can use that knowledge to assist in segmenting novel words from the speech stream. In summary, then, cues useful for word-level segmentation are available in the speech stream, infants are sensitive to various prosodic cues, and they can
CONTRIBUTIONS OF PROSODY TO SEGMENTATION
43
make use of such cues to succeed at one of the first and most fundamental linguistic tasks they face, that of identifying words in speech. Two important prosodic cues are (1) perceptually salient syllables, such as stressed and final syllables, and (2) rhythm. Final position, like stress, can be described as a prosodic cue because final syllables are prosodically highlighted by virtue of exaggerated lengthening in infant-directed speech. Both salient syllables and rhythm may be valuable for word level segmentation; indeed, sensitivity to the second and more language-specific cue, rhythm, may develop from the relatively language-general cues provided by prosodically highlighted syllables. Thus, prosodic cues may be important for word identification across early language development, though the relative roles of particular cues may change as the child acquires a greater amount of experience with the native language. It will be exciting to identify the complex interactions of attention to different cues as they unfold over early language development.
Acknowledgments

I thank my students and other collaborators, Drema Albin, Jane Childers, Marlena Creusere, Megan Crowhurst, Susan Hura, C. Nathan Marti, Lorin Mueller and Elissa Newport, for their valuable contributions to the research reported herein. I also am grateful to the many undergraduate students who assisted with this research and to the infant participants and their parents. Steve Piché and Mike Harmon deserve thanks, respectively, for developing the timing program and for assistance with the experimental set-up; Jerry Manheimer provided valuable statistical advice. Special thanks go to Barbara Höhle and Jürgen Weissenborn for the huge investment in time and effort required in organizing a conference on this topic and bringing about this volume. This research was supported by grant 003658–368 from the Advanced Research Program of the Texas Higher Education Coordinating Board, by a research grant from the University Research Institute at the University of Texas, and by NICHD grant HD30820.
References

Albin, D. and Echols, C. H. 1996. “Stressed and Word-Final Syllables in Infant-Directed Speech.” Infant Behavior and Development 19(4):401–418.
Allen, G. D. and Hawkins, S. 1980. “Phonological Rhythm: Definition and Development.” In Child Phonology: Vol. 1: Production, G. Yeni-Komshian, J. F. Kavanagh and C. A. Ferguson (eds.). New York: Academic Press.
Bernstein Ratner, N. 1986. “Durational Cues which Mark Clause Boundaries in Mother-Child Speech.” Journal of Phonetics 14:303–309.
Blasdell, R. and Jensen, P. 1970. “Stress and Word Position as Determinants of Imitation in First Language Learners.” Journal of Speech and Hearing Research 13(1):193–202.
Childers, J. B. and Echols, C. H. 1996. “Infants’ Use of Rhythmic Cues in Word-Level Segmentation.” Proceedings of the 20th Boston University Conference on Language Development. Boston, MA: Cascadilla Press.
Cole, R. A. and Jakimik, J. 1980. “A Model of Speech Perception.” In Perception and Production of Fluent Speech, R. A. Cole (ed.). Hillsdale, NJ: Erlbaum.
Demuth, K. 1996. “The Prosodic Structure of Early Words.” In Signal to Syntax: Bootstrapping from Speech to Grammar in Early Acquisition, J. L. Morgan and K. Demuth (eds.). Mahwah, NJ: Erlbaum.
Echols, C. H. 1993. “A Perceptually-Based Model of Children’s Earliest Productions.” Cognition 46(3):245–296.
Echols, C. H., Crowhurst, M. J. and Childers, J. B. 1997. “The Perception of Rhythmic Units in Speech by Infants and Adults.” Journal of Memory and Language 36(2):202–225.
Echols, C. H. and Newport, E. L. 1992. “The Role of Stress and Position in Determining First Words.” Language Acquisition 2(3):189–220.
Fernald, A. 1985. “Four-Month-Old Infants Prefer to Listen to Motherese.” Infant Behavior and Development 8(2):181–195.
Fernald, A. and Mazzie, C. 1991. “Prosody and Focus in Speech to Infants and Adults.” Developmental Psychology 27(2):209–221.
Fernald, A. and Simon, Th. 1984. “Expanded Intonation Contours in Mother’s Speech to Newborns.” Developmental Psychology 20(1):104–113.
Fikkert, P. 1994. On the Acquisition of Prosodic Structure. Dordrecht: ICG Printing.
Friederici, A. D. and Wessels, J. M. 1993. “Phonotactic Knowledge of Word Boundaries and its Use in Infant Speech Perception.” Perception and Psychophysics 54(3):287–295.
Gerken, L. 1994a. “A Metrical Template Account of Children’s Weak Syllable Omissions.” Journal of Child Language 21(3):565–584.
Gerken, L. 1994b.
“Young Children’s Representation of Prosodic Structure: Evidence from English-Speakers’ Weak Syllable Omissions.” Journal of Memory and Language 33(1):19–38.
Gleitman, L. R. and Wanner, E. 1982. “Language Acquisition: The State of the State of the Art.” In Language Acquisition: The State of the Art, L. R. Gleitman and E. Wanner (eds.). Cambridge, England: Cambridge University Press.
Hayes, J. R. and Clark, H. H. 1970. “Experiments on the Segmentation of an Artificial Speech Analog.” In Cognition and the Development of Language, J. R. Hayes (ed.). New York: Wiley.
Hirsh-Pasek, K., Kemler Nelson, D. G., Jusczyk, P. W., Wright Cassidy, K., Druss, B. and Kennedy, L. 1987. “Clauses are Perceptual Units for Young Infants.” Cognition 26(3):269–286.
Hura, S. L. and Echols, C. H. 1996. “The Role of Stress and Articulatory Difficulty in Children’s Early Productions.” Developmental Psychology 32(1):165–176.
Jusczyk, P. W., Cutler, A. and Redanz, N. J. 1993. “Infants’ Preference for the Predominant Stress Patterns of English Words.” Child Development 64(3):675–687.
Jusczyk, P. W., Hirsh-Pasek, K., Kemler Nelson, D. G., Kennedy, L. J., Woodward, A. and Piwoz, J. 1992. “Perception of Acoustic Correlates of Major Phrasal Units by Young Infants.” Cognitive Psychology 24(2):252–293.
Jusczyk, P. W., Luce, P. A. and Charles-Luce, J. 1994. “Infants’ Sensitivity to Phonotactic Patterns in the Native Language.” Journal of Memory and Language 33(5):630–645.
Jusczyk, P. W. and Thompson, E. 1978. “Perception of a Phonetic Contrast in Multisyllabic Utterances by Two Month Olds.” Perception and Psychophysics 23(2):105–109.
Karzon, R. G. 1985. “Discrimination of Polysyllabic Sequences by One- to Four-Month-Old Infants.” Journal of Experimental Child Psychology 39(2):326–342.
Kelly, M. H. and Martin, S. 1994. “Domain-General Abilities Applied to Domain-Specific Tasks: Sensitivity to Probabilities in Perception, Cognition and Language.” Lingua 92:105–140.
Kemler Nelson, D. G., Hirsh-Pasek, K., Jusczyk, P. W. and Wright Cassidy, K. 1989. “How the Prosodic Cues in Motherese Might Assist Language Learning.” Journal of Child Language 16(1):55–68.
Kemler Nelson, D. G., Jusczyk, P. W., Mandel, D. R., Myers, J., Turk, A. and Gerken, L. 1995. “The Headturn Preference Procedure for Testing Auditory Perception.” Infant Behavior and Development 18(1):111–116.
Klatt, D. H. 1976. “Linguistic Uses of Segmental Duration in English: Acoustic and Perceptual Evidence.” Journal of the Acoustical Society of America 59:1208–1221.
Klein, H. B. 1981. “Early Perceptual Strategies for the Replication of Consonants from Polysyllabic Lexical Models.” Journal of Speech and Hearing Research 24(4):535–551.
Lehiste, I. 1970. Suprasegmentals. Cambridge, MA: MIT Press.
Morgan, J. L. 1986. From Simple Input to Complex Grammar. Cambridge, MA: MIT Press.
Morgan, J. L. 1996. “A Rhythmic Bias in Preverbal Speech Segmentation.” Journal of Memory and Language 35(5):666–688.
Newsome, M. and Jusczyk, P. W. 1995. “Do Infants Use Stress as a Cue in Segmenting Fluent Speech?” In Proceedings of the 19th Boston University Conference on Language Development, D. MacLaughlin and S. McEwen (eds.). Boston, MA: Cascadilla Press.
Pater, J. and Paradis, J. 1996. “Truncation Without Templates in Child Phonology.” Proceedings of the 20th Boston University Conference on Language Development. Boston, MA: Cascadilla Press.
Peters, A. M. 1983. The Units of Language Acquisition. Cambridge, England: Cambridge University Press.
Werker, J. F. and Lalonde, Ch. E. 1988. “Cross-Language Speech Perception: Initial Capabilities and Developmental Change.” Developmental Psychology 24(5):672–683.
Werker, J. F. and Tees, R. C. 1984. “Cross-Language Speech Perception: Evidence for Perceptual Reorganization within the First Year of Life.” Infant Behavior and Development 7(1):49–63.
Wijnen, F., Krikhaar, E. and den Os, E. 1994. “The (Non)Realization of Unstressed Elements in Children’s Utterances: Evidence for a Rhythmic Constraint.” Journal of Child Language 21(1):59–83.
Implicit Memory Support for Language Acquisition Cynthia Fisher
University of Illinois
Barbara A. Church S. U. N. Y. at Buffalo
How do children acquire the representations necessary to identify words in their language? What representations are necessary, and what can these tell us about language processing during the course of language acquisition? These are fundamental questions about the acquisition of language, but they are also questions about perceptual learning and memory in young children. To identify words and build phrases in their native language, children must create long-term representations of the sound patterns of words, and use them to identify new instances of the same words in ongoing speech. To understand a sentence, children must accomplish this feat for multiple words in a single utterance, and compose them in memory, preserving relations between words such as order and adjacency. Many perceptual problems must be solved along the way, including the segmentation of words from continuous speech, and compensation for many sources of variability in the speech signal (e.g., Jusczyk 1997; Klatt 1980; Nusbaum & Goodman 1994). Theories of syntax acquisition have typically abstracted away from the analysis of speech, modeling how a learner might achieve a grammar based on input described as strings of words rather than sounds. However, this separation of speech perception from the rest of language acquisition has been eroding in recent years. For example, the view broadly known as “prosodic bootstrapping” suggests that acoustic cues associated with large-scale prosodic groupings in speech might provide a partial bracketing of speech input into relevant units (e.g., Fisher & Tokura 1996; Gleitman, Gleitman, Landau & Wanner 1988; Morgan 1986). If so, the language input children take in might in some respects be more informative than simple strings of words. On the other hand, an accurate representation of a string of words is almost certainly difficult for very young children to attain. Even when words are
familiar, children are less skilled at word identification than adults are (e.g., Nittrouer & Boothroyd 1990; Walley 1993). Moreover, systematic biases influence which parts of utterances young children succeed in identifying. Fernald (1994) found that 15-month-olds were more likely to comprehend familiar words in utterance-final than in medial position, and did better with medial words if they were greatly lengthened. Thus stress and position in utterances partly determine which items children identify in speech (e.g., Echols 1993). This is not mere noise in the input, but systematic variation in the perceptibility of parts of utterances. Some have suggested that such variations provide vital aid for language acquisition by allowing the learner to begin with a biased subset of speech data (e.g., Gleitman et al. 1988; Newport 1990). The apparently simplifying assumption that children begin syntax acquisition with accurate strings of words may systematically misrepresent the lexical information in the input. In this chapter we review some new findings on young preschoolers’ representations of speech in a word identification context. We document long-term perceptual priming for the sounds of words in 2- and 3-year-olds, and argue that the simple and powerful learning mechanism underlying auditory repetition priming could play a central role in both (a) building up perceptual representations of words in long-term memory, and (b) permitting children’s identification of words to benefit flexibly from a repetitive discourse context. By studying the properties of this learning mechanism in word and sentence processing, future research can explore fundamental questions about how children represent and learn from speech input during the process of language acquisition.
What’s so hard about word recognition?

As adult native listeners we identify words rapidly and with apparent ease. However, as mentioned above, skilled word recognition takes a long time to develop. Even young school-aged children perform less well than adults in tests of spoken word recognition (see Nittrouer & Boothroyd 1990, and Gerken, Murphy & Aslin 1995, for reviews). Preschoolers can make some startling errors, such as often failing to differentiate a nonsense word from a known word (e.g., Gerken et al. 1995). Even in optimal circumstances, in which children’s identification of a small set of highly familiar words is assessed through visual fixation of the (repeatedly) named target, 24-month-olds take longer to identify words than young adults do (Swingley 1998), and 15-month-olds take longer still (Fernald, Pinto, Swingley, Weinberg & McRoberts 1998). Why might children be slower and more error-prone than adults in tests of
word recognition? Though young infants discriminate nearly every phonetic contrast presented to them (e.g., Goodman & Nusbaum 1994), some have suggested that toddlers’ relatively poor word recognition performance reveals a fundamental lack of phonetic detail in infants’ long-term representations of the sounds of words (e.g., Hallé & de Boysson-Bardies 1996). However, given the complexity of the task of identifying a spoken word, there are many potential sources of error. Models of word recognition commonly depend on processing at many levels. For example, in one model of word recognition based on developmental evidence (WRAPSA; Jusczyk 1997), processing includes (a) acoustic/phonetic analysis, assumed to be a built-in part of the auditory system; (b) the creation of word-like phonological representations including information about stress pattern, syllable structure, and segmental content; and (c) the detection of the best match between the phonological representations of these candidate words and a long-term store of words’ sound patterns. Errors could occur at any or all of these levels; thus overall performance improvements with development cannot be taken as straightforward evidence of qualitative changes in long-term representations of the sounds of words. First, attentional factors can affect listeners’ ability to succeed in phonetic- and phonological-level analyses of input. Jusczyk et al. (1990) found that the range of sounds presented in a habituation task affected how fine a phonetic distinction infants could make. This suggests that infants’ attention to detail in speech can be influenced by context. Similarly, Merriman and Marazita (1995) found that 2-year-olds were better able to infer that nonsense words were new, and must refer to unnamed objects, when each nonsense word was preceded by a story stocked with phonologically similar words. Apparently the context helped children to accurately represent the sounds of new words.
Manipulations relevant to attention also influence speech perception in adults: Gordon, Eberhardt and Rueckl (1993) found that the relative weight of two acoustic cues to consonant voicing changed when listeners performed a concurrent task. Differences between young children and adults in the allocation of attention to phonetic or phonological levels of analysis could affect word identification. If candidate words produced by these analyses were under- or mis-specified, for example, then they would yield rough or errorful matches to word representations in a long-term memory store. Second, lexical selection involves not only detecting a close match between a word candidate and a representation in long-term memory, but also the inhibition of similar words (e.g., Marslen-Wilson 1987; Goldinger, Luce & Pisoni 1989). This process is likely to be subject to developmental change. Elderly adults can do less well than young adults in spoken word recognition,
particularly in identifying words with many high-frequency phonologically similar neighbors (Sommers 1996). Elderly listeners may have difficulty selecting a lexical item while suppressing highly frequent and similar competitors. These findings can be interpreted as due to cognitive changes associated with aging, including general slowing and less efficient inhibitory processes. Such changes have their analogs in child development, which in turn have been similarly linked to various differences between the capabilities of children and young adults (e.g., Diamond 1985). Given the lexical competition assumed in models of word recognition, developmental improvements in the efficiency of inhibition would improve word recognition performance. Third, of course, it might be true that the long-term representations themselves are systematically inaccurate in young children’s auditory lexicons. Very young children may rely on “holistic” long-term representations of words, without the phonemic sub-structure assumed to underlie adults’ word representations (e.g., Jusczyk 1997; Charles-Luce & Luce 1995; Walley, Smith & Jusczyk 1986). It has also been suggested that holistic representations could be less specific, lacking the phonetic detail required to differentiate minimal pairs in the adult lexicon (e.g., Charles-Luce & Luce 1995; Hallé & de Boysson-Bardies 1996; Stager & Werker 1997). Qualitative differences between children’s and adults’ word representations could come about in a number of ways: One possibility is that systematic inaccuracies in pre-lexical analyses could cause words to be encoded in long-term memory in a partially represented form. Another possibility is that when infants start to learn words, they move into a new, lexical mode of perception, in which they must learn what level of detail is needed to differentiate words (e.g., Hallé & de Boysson-Bardies 1996).
Charles-Luce and Luce (1995) argued that under-specified long-term word representations would suffice to differentiate words from one another in the auditory lexicons of young children, presumed to be much more sparsely populated than adults’. In principle, holistic representations might persist throughout the preschool years, providing an account of older preschoolers’ relatively poor performance in word recognition as well as toddlers’ (see Walley et al. 1986, and Gerken et al. 1995, for reviews). However, though speed and accuracy improve with age, there are many qualitative similarities between preschoolers’ and adults’ word recognition, suggesting that some absolute performance differences stem from more general cognitive changes during development. For example, Gerken et al. (1995) and Graham and House (1971) found that 3- and 4-year-olds’ perceptual confusions among words were more like adults’ than the holistic representation hypothesis would suggest. In addition, some characteristics of word recognition in adults, such as greater attention to initial segments (e.g., Walley et al. 1986),
and use of semantic and syntactic constraints (Cole & Perfetti 1980; Nittrouer & Boothroyd 1990) are found in preschoolers. Recent studies using eye-movement latencies to assess word recognition also reveal very similar patterns in on-line use of phonetic information in toddlers and adults (Swingley 1998). Finally, the just-mentioned influence of higher-level constraints suggests yet another source of error in young children’s word recognition. While older preschoolers do use semantic, syntactic, and discourse information in word recognition, they do so more efficiently as they get older (Cole & Perfetti 1980; Nittrouer & Boothroyd 1990). Toddlers and young preschoolers, who may identify and understand little of the surrounding linguistic context, will be correspondingly less able to use that context to constrain their identification of words. In sum, there are many possible explanations for developmental changes in word recognition performance. One point of agreement among researchers in this area is that there are not enough data to decide the case. For our purposes, the moral is simply that word recognition is a demanding task, with success far from guaranteed for children throughout the preschool years. But this poses something of a conundrum. Children are acquiring their native languages as preschoolers, learning words hand over fist, and showing some knowledge of grammatical regularities at least by two years (e.g., Bloom 1970; Gerken & McIntosh 1993; Valian 1986). These achievements depend on identifying words in fluent speech, and keeping track of the linguistic and extra-linguistic contexts in which words occur. How do children learn so much from speech, even though their absolute performance in word recognition can be very poor? One way to approach this question is to investigate particular learning mechanisms which could play a role in the development of representations for word recognition. 
The approach and initial results described below provide one way to separate questions of representation and learning from differences in absolute performance between children and adults. As we shall see, young preschoolers can under some conditions learn as much as adults from exposure to a familiar word, though they do not perform as well in identifying words based on their sounds.
Requirements for representations used in word recognition

Before turning to the research on auditory priming, we briefly set out some basic problems which must be solved in the identification of spoken words. There are many sources of ambiguity in ordinary speech, which can be summarized under two headings — the problems of word segmentation and the problems of
contextual variability. These yield some basic constraints on the kinds of learning mechanisms which could support the development of an auditory lexicon.

Segmentation

First, speech often lacks clear markers of the location of word boundaries (e.g., Klatt 1980). Words are not separated by pauses, or routinely signaled by other features occurring only at word boundaries. To some degree, adults can infer word boundaries from acoustic cues such as the duration of segments in different word positions (e.g., Gow & Gordon 1995; Nakatani & Dukes 1977), or the metrical patterns typical of words in their language (e.g., Cutler & Norris 1988; Nakatani & Schaffer 1978). However, such cues leave considerable residual ambiguity. Thus theories of word identification typically model word segmentation at least in part as an outcome of rather than a prerequisite to lexical access (e.g., Klatt 1980; Gow & Gordon 1995; Marslen-Wilson 1987; McClelland & Elman 1986). Word units in fluent speech are only partially detectable based on pre-lexical cues; therefore to some extent we segment speech by detecting the sound patterns of familiar words. What about children, who must learn those sound patterns to begin with? Recent research suggests that infants use the distribution of sound patterns across multiple utterances to create word-like sound units (e.g., Goodsitt, Morgan & Kuhl 1993; Saffran, Newport & Aslin 1996). In order for this kind of distributional analysis to occur, information about multiple instances of a sound pattern in different contexts, and about the contexts in which it appears, must be represented. This information must be encoded and used flexibly: a previously heard sound pattern must be identified as the same in a new context, while retaining sufficient information about the surrounding context to establish the word’s distributional properties.
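The kind of distributional analysis attributed to Saffran, Newport and Aslin (1996) can be made concrete with a toy computation: track the transitional probability between adjacent syllables across utterances, and posit word boundaries where that probability dips. The sketch below is purely illustrative; the corpus, function names, and local-minimum boundary rule are assumptions of this example, not a model proposed in this chapter.

```python
from collections import Counter

def transitional_probs(utterances):
    """TP(a -> b) = count(a followed by b) / count(a in non-final position)."""
    pair_counts, first_counts = Counter(), Counter()
    for utt in utterances:
        for a, b in zip(utt, utt[1:]):
            pair_counts[(a, b)] += 1
            first_counts[a] += 1
    return {pair: n / first_counts[pair[0]] for pair, n in pair_counts.items()}

def segment(utt, tps):
    """Posit a word boundary wherever the transitional probability is a
    local minimum, i.e. lower than both neighboring transitions."""
    probs = [tps.get((a, b), 0.0) for a, b in zip(utt, utt[1:])]
    words, current = [], [utt[0]]
    for i, syll in enumerate(utt[1:]):
        prev_p = probs[i - 1] if i > 0 else 1.0
        next_p = probs[i + 1] if i + 1 < len(probs) else 1.0
        if probs[i] < prev_p and probs[i] < next_p:
            words.append(current)
            current = []
        current.append(syll)
    words.append(current)
    return words

# A tiny made-up corpus in which the syllables of "bi-da-ku" and "pa-do-ti"
# always cohere, while cross-word transitions vary.
corpus = [
    ["bi", "da", "ku", "pa", "do", "ti"],
    ["pa", "do", "ti", "bi", "da", "ku"],
    ["bi", "da", "ku", "bi", "da", "ku"],
]
tps = transitional_probs(corpus)
print(segment(["bi", "da", "ku", "pa", "do", "ti"], tps))
# word-internal transitions have TP 1.0, but TP(ku -> pa) is 0.5,
# so a boundary is placed between "ku" and "pa"
```

Infants presumably compute nothing this explicit; the point is only that co-occurrence statistics of the sort discussed here contain enough information, in principle, to recover word-like units, provided the learner can recognize a sound pattern as the same across different surrounding contexts.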
This type of learning requires a memory mechanism that allows partial matches of information across contexts without losing context information.

Variation due to context

Second, in casual speech, speakers systematically miss various phonetic targets and blend words at their edges (e.g., Klatt 1980). Thus to identify a word in context, the hearer must take the context into account. Words produced in fluent speech are difficult to identify when excised from their supporting context, perhaps especially in child-directed speech. Bard and Anderson (1982) found that content words in spontaneous speech to young children were actually less
intelligible out of context than those taken from speech to an adult, and attributed the unintelligibility of words in child-directed speech to their greater predictability in context. Additional distortions of the sounds of words are associated with the intonational and rhythmic structure of utterances. The same word sounds very different if it occurs at the end as opposed to in the middle of an utterance (e.g., Fisher & Tokura 1996), in an utterance given an approving as opposed to a prohibiting tone (e.g., Fernald et al. 1989), in the middle rather than at the end of a conversational turn (e.g., Bolinger 1978), or as the first rather than the second mention of the same word in a discourse (e.g., Fisher & Tokura 1995; Fowler & Housum 1987). Problems of variability are still relevant when a word appears in isolation. A word must be identified as the same across different voices and tones of voice, varying considerably in pitch and other acoustic properties. This catalog of sources of variation in natural speech has consequences for the nature of the perceptual learning mechanism we seek: Long-term representations of the sounds of words must in some sense be abstract enough to encompass variability due to voice, intonation, and linguistic context. Yet these representations must also include enough phonetically relevant detail to discriminate near lexical neighbors, and to permit the child to learn about the various systematic sources of variability in the sounds of words. The learning mechanisms responsible for establishing representations of the sounds of words during development must be able both to abstract over and to incorporate phonetic details and information about words’ surrounding context.
Memory as part of perceptual representation systems

Recent work in the adult memory literature suggests a candidate learning system that has just the properties we argued above would be needed to create appropriately flexible long-term representations of the sounds of words. Studies in a long-term auditory priming paradigm reveal a powerful learning mechanism which continually updates adults’ representations of the sounds of words to reflect auditory experience (e.g., Church & Schacter 1994). Every time a word is heard, lasting changes are made to representations of its sound which facilitate the identification of the same word when heard again. This research fits into the Perceptual Representations Systems (PRS) framework, which hypothesizes that whenever a stimulus is attended to, perceptual representations relevant to its identification are encoded into the memory stores that normally serve the
identification process (e.g., Schacter 1994). On this view, long-term learning is an integral part of all perceptual identification processes, and the one-trial facilitation known as long-term priming can reveal the operations of basic mechanisms for learning about the perceptual world (see also Cohen and Eichenbaum 1993; Rajaram, Srinivas & Roediger 1998). Long-term auditory priming has several properties which support its relevance for the initial acquisition of an auditory lexicon. First, long-term auditory priming is a phenomenon of implicit rather than explicit memory (e.g., Schacter, Church & Treadwell 1994). To benefit from a prior exposure to a word, the listener need not recognize that the word was heard before. While much research makes it clear that explicit retrieval of past events is more difficult for children than adults, research in the visual domain suggests that implicit memory mechanisms are in place early in childhood (e.g., Greenbaum & Graf 1989; Naito 1990; Parkin & Streete 1988). Second, several sources of evidence suggest that perceptual representations of words distinct from their meanings support long-term auditory priming. For example, Schacter, McGlynn, Milberg and Church (1993) found normal auditory word priming in a patient with word-meaning deafness, even for words whose meanings he was unable to retrieve from their sounds. This perceptual learning mechanism could be used to create auditory representations for words before their meanings are acquired. Also, of course, the priming effect is long-term (e.g., Schacter & Church 1992); if the underlying learning process operates similarly in young children, it could support the acquisition of long-term memory representations of the sounds of words.1 Several lines of evidence suggest that the learning mechanism supporting long-term auditory priming has properties which make it plausible as a key mechanism for creating and adapting representations for the recognition of words.
We pointed out above that representational systems used to identify words must be flexible: They must both encode phonetic detail and reflect phonetic context, even while permitting abstraction over variability in sound and context. Long-term auditory word priming reveals representations which can include very detailed acoustic information, while also permitting abstract or partial matches to new stimuli (Church & Schacter 1994; Goldinger 1996; Schacter & Church 1992). In auditory word identification tasks, the greatest long-term facilitation due to repetition is found when the prime and target words match in every respect, including the speaker’s voice (e.g., Church & Schacter 1994; Goldinger 1996; Sheffert 1998), details of pronunciation (e.g., full vs. reduced vowel in an unstressed syllable; Church, Dell & Kania 1996), and fundamental frequency (Church & Schacter 1994). Changes in any of these features, including small shifts in fundamental frequency or tiny changes in formant frequency that
listeners cannot discriminate across the delays involved (Church & Schacter 1994), can reduce priming. Auditory priming can also be reduced by changing a semantically unrelated coarticulated context word (Church & Poldrack, in preparation). However, significant priming still occurs across all of these changes. Thus the representations underlying auditory priming in adults have components abstract enough to support the recognition of words across speakers, instances, and contexts, even while retaining many details. These properties of long-term auditory priming suggest that the learning mechanism mediating it could contribute to the creation of a long-term store of word-sounds in children, as well as to the life-long adaptability of speech processing to context. Thus the PRS framework and other views which make learning an integral part of speech perception (e.g., Cohen & Eichenbaum 1993; Nusbaum & Goodman 1994) permit a strong prediction about the continuity of speech perception mechanisms throughout development. This does not imply that speech perception performance should be unchanging throughout development: As we saw above, it is not. Rather, the basic mechanisms which update speech representations to reflect auditory experience in adults could be the same ones which create those representations in childhood.
Does the same learning mechanism operate in young children?

To test the relevance for early language development of the learning mechanisms mediating long-term auditory priming in adults, the first step is simply to discover whether the phenomenon of long-term auditory priming can be found in young preschoolers at all, and whether it shares some of the relevant properties of auditory word priming in adults. If a child hears a word just once, does this provide any facilitation for the later recognition of the same word? Or does the rapid and long-lasting facilitation found in adults depend on fully adult-like representations of lexical items? If there is facilitation, can we find evidence that, as for adults, long-term auditory priming is primarily mediated by perceptual rather than meaning-based representations? Church and Fisher (1998) adapted an auditory word priming paradigm for young preschoolers. An initial study compared long-term auditory repetition priming across three age groups: 2.5- and 3-year-olds, and college-aged adults. A second study compared 2-year-olds and adults. As reviewed above, much research has shown that preschoolers are slower and less accurate at word identification than adults; our question was whether these groups learn in the same ways from attending to a word.
CYNTHIA FISHER & BARBARA A. CHURCH
In an initial study phase, children and adults listened to a set of taped words, recorded in isolation. These words were attributed to a toy robot, and the children’s attention was attracted to the words by asking them to judge whether the robot said the words “really well”, and to reward him with robot cookies. This was followed by a brief distractor task, lasting about 2 minutes, during which the child and the experimenter put away the materials from the study phase. In the test phase, the children listened to more taped words, including both studied and new items, and attempted to repeat them after the robot. The test words were mildly low-pass filtered, producing a slightly muffled sound. The purpose of the filtering was to vary the difficulty of the identification task. The severity of the filter was adjusted to equate baseline identification performance in child and adult versions of the task.2 The measure of interest was whether children, like adults, accurately identify and repeat more studied than non-studied words. Thus the filter identification task permits us to disentangle issues of representation and learning from differences in absolute performance between children and adults. Do children gain any facilitation for word recognition as a result of a single repetition? The studied test words in this procedure were filtered versions of the same tokens presented in the study phase; the listener’s task was to identify them at test from degraded perceptual information. As shown in Figure 1, all age groups accurately repeated more filtered test words that they had heard in the study phase of the experiment than words that were not presented in the study phase. Moreover, this priming effect did not differ significantly in magnitude across age groups. 
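The chapter does not specify the low-pass filters used to create the muffled test items, but the basic idea, passing the low-frequency energy that carries much of a word's identity while attenuating the high frequencies that make it crisp, can be sketched with a minimal one-pole filter. This is a toy illustration only; the sample rate, smoothing constant, and test tones below are invented.

```python
import math

def lowpass(samples, alpha):
    """One-pole low-pass filter: y[n] = y[n-1] + alpha * (x[n] - y[n-1]).
    Smaller alpha means a lower cutoff, i.e., a more muffled signal."""
    y, out = 0.0, []
    for x in samples:
        y += alpha * (x - y)
        out.append(y)
    return out

def amplitude(samples):
    """Peak amplitude over the second half of the signal (skips the onset transient)."""
    return max(abs(s) for s in samples[len(samples) // 2:])

rate = 16000  # samples per second (illustrative)
low = [math.sin(2 * math.pi * 200 * n / rate) for n in range(rate)]    # 200 Hz tone
high = [math.sin(2 * math.pi * 4000 * n / rate) for n in range(rate)]  # 4 kHz tone

alpha = 0.2  # arbitrary smoothing constant for illustration
print(amplitude(lowpass(low, alpha)))   # near 1.0: the low band passes
print(amplitude(lowpass(high, alpha)))  # strongly attenuated: the signal is "muffled"
```

A real implementation would use a proper filter design (e.g., a Butterworth filter applied to recorded speech); the point here is only that low-frequency content survives while high-frequency content is damped.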
Thus 2-, 2.5- and 3-year-old children, like adults, get a significant perceptual boost in word identification from having heard the word once a few minutes before, and the magnitude of this facilitation does not differ significantly from preschool to college age. The same memory mechanism that has been shown to influence word identification in adults seems to be operational during the preschool years. This is true despite the enormous changes in vocabulary knowledge between age 2 and adulthood. Apparently the rapid perceptual learning known as long-term auditory priming does not depend on adult-like lexical knowledge. This provides the first evidence for our hypothesis, that auditory word priming is mediated by a basic and early-developing learning mechanism that could play a central role in the establishment of a long-term memory store for word identification. What makes us think this is a perceptual priming effect? Words in a mental lexicon are complex objects: A word has a phonological representation, but also semantic and syntactic representations. In principle, any or all of these types of representations could mediate the facilitation we see in long-term auditory
Figure 1. Mean proportion Studied and New words correctly repeated (by children) or transcribed (by adults). [Bar graph; y-axis: proportion correctly repeated, 0.0–0.8; age groups 2, 2.5, 3, and Adult; bars compare Studied and New words.] Children’s and adults’ baseline rates of correct identification were matched by increasing the severity of the low-pass filtering in the test phase for the adults. Each age group showed significant long-term priming, and the magnitude of the priming effect did not differ across age groups.
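The priming measure itself is simple arithmetic: the proportion of Studied items correctly repeated minus the proportion of New items, computed per age group. The sketch below runs that computation on invented response data (these numbers are illustrative, not the experiment's):

```python
# Hypothetical per-item responses: 1 = correctly repeated, 0 = not.
# All values are invented for illustration.
responses = {
    "2":     {"studied": [1, 1, 0, 1, 1, 0, 1, 1], "new": [1, 0, 0, 1, 0, 1, 0, 0]},
    "2.5":   {"studied": [1, 1, 1, 0, 1, 1, 0, 1], "new": [0, 1, 0, 1, 1, 0, 0, 1]},
    "3":     {"studied": [1, 1, 1, 1, 0, 1, 1, 0], "new": [1, 0, 1, 0, 1, 0, 1, 0]},
    "adult": {"studied": [1, 1, 1, 0, 1, 1, 1, 1], "new": [1, 1, 0, 1, 0, 1, 0, 1]},
}

def proportion(items):
    """Proportion of correct responses in a condition."""
    return sum(items) / len(items)

def priming_effect(group):
    """Facilitation from one prior exposure: studied accuracy minus new accuracy."""
    return proportion(group["studied"]) - proportion(group["new"])

# A positive difference in every group is the pattern reported in the text.
for age, group in responses.items():
    print(age, round(priming_effect(group), 3))
```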
priming. However, previous research with adults has shown that long-term auditory word priming is not increased by encoding tasks which direct the listener’s attention to the meaning rather than the sound of each word. For example, whether the listeners study words by judging clarity of pronunciation, as in our task, or by judging what semantic category the word falls into, they get the same long-term facilitation in later identifying the studied words. A semantic study task does enhance explicit memory performance, however. Adults more accurately judged whether or not a particular item has been presented in a study phase if the study task directed their attention to the meaning rather than the sound of the words (Schacter & Church 1992). This dissociation suggests that representations of sounds play the primary role in long-term auditory word priming. If the long-term facilitation were based on semantic representations, we would expect increased priming when the study task encourages a focus on meaning. The same pattern appears for 3-year-olds (Church & Fisher 1998), suggesting
that representations of the sounds of words rather than their meanings mediate long-term priming in preschoolers as well as in adults. In a semantic study task, children listened to a robot saying a set of words, and for each word were asked to choose the thing the robot named from two objects placed in front of them. The non-semantic study task was described above. All children then completed a distractor task as before, and participated in either an implicit or an explicit test phase. The implicit test was the filter identification and repetition task described previously. Children in the explicit memory test heard the studied and new words clearly presented (not filtered), and judged whether they had heard the robot say each word before. As found for adults, the amount of priming was not significantly affected by the type of encoding judgment. Choosing the object named by each word in the study task, which presumably encouraged attention to its meaning, generated no more facilitation in identifying that word again than a judgment about how each studied word was pronounced. The encoding judgment did, however, significantly affect recognition memory. Children who made object choices in the study phase were much more accurate in discriminating old from new words than children who did not. These data are consistent with findings in the adult literature (e.g., Schacter & Church 1992), and they suggest that auditory priming in preschoolers, like adults, reflects the encoding of information about the sound patterns of words. Taken together, these findings support the hypothesis that the simple phenomenon of long-term auditory priming reflects the operation of a memory mechanism which permits long-term learning about the sounds of words during language development. First, this learning phenomenon appears in young preschoolers, and seems not to develop in its essential operation between preschool and college age, despite large changes in lexical knowledge. 
Second, we have reviewed evidence that long-term auditory repetition priming, both in children and adults, is mediated largely by perceptual rather than by meaning-based representations of words. This begins to suggest an answer to the puzzle laid out in the introduction: How do children learn so much about language as preschoolers if their word recognition is so defective? The repetition priming data suggest that preschoolers can learn just as much from hearing a familiar word as adults do, even though their performance in recognizing familiar words can be much poorer. Like adults, young preschoolers modify their system for spoken word identification whenever they attend to speech. This incremental, experience-based tuning of perceptual representations and processes for word identification could itself constitute a fundamental mechanism for the development of a long-term store of the sound patterns of words in the native language.
How flexible is auditory word priming in young children?

We began with the problem of how young children represent and learn from the speech they hear, and suggested that the familiar phenomenon of auditory priming is produced by a learning mechanism that has the right properties to play a central role in this process. In particular, we argued above that the inherent flexibility of long-term auditory word priming in adults suggests that the associated learning mechanisms may contribute to the creation of a long-term store of word-sounds in children, as well as to the life-long adaptability of speech perception. Does word priming in young children show the same flexibility? If not, then the perceptual facilitation documented above could play little role in word identification under ordinary circumstances. In this section we will review some preliminary investigations of word priming, both short-term and long-term, across different linguistic contexts. Changing linguistic context constitutes a strong test of the flexibility of auditory repetition priming in children. When sentence context changes, the acoustic variability is two-fold: The repeated word itself changes due to coarticulation and other factors, and the surrounding context changes. If word priming in young children can encompass both of these kinds of variability, then it could help to build representations for ordinary word recognition. Findings of word priming across sentence contexts would also suggest a possible solution to a likely representational problem for language learners. In order to understand a sentence, or learn about the distributional regularities of a language, the child must identify more than one word per sentence. There are reasons to believe that this is something of a perceptual bottleneck for young children. Long sequences of sounds reduce infants’ ability to make phonetic discriminations (e.g., Karzon 1985).
Perceptual biases mentioned above influence what parts of long utterances infants and toddlers more readily identify: Stressed and utterance-final syllables are more salient than unstressed and non-final syllables (e.g., Echols 1993; Fernald 1994; Karzon 1985). However, these biases are unlikely to help children identify multiple words per utterance, since non-final content words are likely to be unstressed in ordinary conversation. Speakers tend to position new words at the ends of utterances and previously-mentioned words nearer the beginnings of utterances (e.g., Prince 1992), and to shorten and de-stress repeated words (Fowler & Housum 1987). Fisher and Tokura (1995) found both of these tendencies in speech to infants as well as to adults. Thus stress and final position typically confer their advantages on the same word — the new word in the sentence. Neither of these documented perceptual biases will help children to identify more than one word per sentence.
In the current context the solution to this problem should be clear: In conversations among adults, the reduction of given words in speech is practicable in part because listeners who have heard a word once need less acoustic information to identify it again (Fowler & Housum 1987). Fisher and Tokura (1995) pointed out that if the same is true of child listeners, then the ordinary presentation of open-class words as new and then given could increase the likelihood that young children will recognize multiple words per utterance. Thus if auditory word priming in preschoolers is flexible enough to encompass the variability due to differing sentence contexts, then the long-term learning underlying repetition priming could play a role in making children’s representations of linguistic input sensitive to the linguistic context. Conversations with children are characterized by the repetition of words and phrases (e.g., Hoff-Ginsberg 1985; Snow 1972). Given this, priming should strongly affect what children accurately represent about sentences they hear, by compensating for the perceptual obstacles to identifying repeated words. A similar suggestion has been made for young children’s production of speech: Bloom, Miller and Hood (1975) found that telegraphic speakers produced their longest utterances when repeating parts of previous sentences. Apparently, using a word already retrieved for a prior utterance freed resources for retrieving and uttering more words per utterance. This is the production analog of our suggestion, that repetition priming will lighten the processing load for a sentence, permitting the child to identify more of its words. In order for priming to influence sentence representation in conversation, it must be possible for items encountered in one sentence context to prime repetitions in another. Well-known studies have suggested that words presented in a sentence context do not produce long-term priming (Oliphant 1983).
However, more recent work suggests that though sentence contexts may reduce priming effects in isolated word tests, robust priming can be found when both the study and test phases involve sentences (for a review see Roediger & McDermott 1993). To address this question for preschoolers, we adapted the priming paradigm described above, embedding words in sentences at both study and test (Fisher & Church, in preparation). The study and test sentence contexts were designed to honor the ordinary presentation of words in conversation as new and then given (e.g., Fisher & Tokura 1995). At study, the target words were in final position in short sentences, and given primary stress (e.g., “I saw glasses!”). In test sentences the target words appeared in medial position in long sentences, and were relatively unstressed (e.g., “Frank dropped the glasses in the bedroom.”). The stressed and final position typically given to first-mentioned words should permit
their easy identification by preschoolers, and thus this initial exposure was predicted to prime the identification of the same words in more challenging sentence contexts. An initial study explored the short-term priming of words from one sentence context to another. Phrase repetition frequently occurs in adjacent sentences in child-directed speech (e.g., Hoff-Ginsberg 1985; Snow 1972). Thus a relatively short-term task should provide a good estimate of the potential power of repetition priming in children’s representations of sentences. In the short-term priming version of the sentence repetition task, each study sentence (containing the target word or a non-target word) was heard just before the related sentence rather than during an initial study phase. Study and test sentences were separated by only a few seconds. In the short term, it is very likely that representations of properties of the words other than their sounds are primed. However, these influences will be present in ordinary conversations as well, and will therefore contribute to the sensitivity of word recognition processes to linguistic context. As shown in Figure 2, children accurately identified more content words in sentences containing a primed target. In pairwise comparisons, significant facilitation was found for the target word itself, and for no other content words. Thus, at least in the short term, hearing a word in one sentence makes it easier to identify the same word in a different sentence. This is true despite the very large acoustic differences found between words in these different positions (Fisher & Tokura 1995), and the change in surrounding contexts. Crucially, the other content words were clearly at least as likely to be identified when the target was primed. 
Figure 2. Mean proportion content words correctly repeated in sentences with Studied or New targets. [Bar graph; y-axis: proportion correctly repeated, 0.0–1.0; x-axis: sentence position (Subject, Verb, Target, Last) in the test sentence “Frank dropped the glasses in the bedroom.”; bars compare Studied and New targets.]

This suggests that the facilitation found for the primed target does not come at the expense of the other words in the sentence, and thus that repetition priming increases the likelihood that the child can identify multiple words per sentence. Preliminary data have also been collected in a long-term version of this procedure: Children participate in the single-word study task described above, followed by a distractor task, and then attempt to repeat long sentences containing studied or new medial words. These data closely resemble those presented for the short-term task in Figure 2, showing a tendency toward facilitation in identifying all content words except the final word in the sentence. These findings provide striking evidence for the abstraction of auditory word priming in preschoolers. A word in one context primes its identification when it reappears in quite a different sentence. This is true despite the large changes in the word’s sound and context, from sentence-final to sentence-medial, from stressed to unstressed. Given this, we have evidence for two conclusions: First, auditory word priming in preschoolers is abstract enough to encompass the sort of variability among tokens of the same word ordinarily found in connected speech. Second, straightforward repetition priming can play an important role in sentence representation during the process of acquisition. Repetition influences preschoolers’ identification of words in sentences, permitting them to identify words in perceptually disadvantaged positions in sentences. This facilitation does not detract from the identification of other words in the sentence; therefore, as predicted, the acoustic treatment of given and new words will help the child to identify more words per sentence. Learning from auditory experience, as an integral part of speech recognition systems, permits word identification in children to benefit from context. These experiments document a simple, almost mechanical way in which the discourse context can affect what children take in of the speech they hear. We often think of discourse cues as a source of information for acquisition, but are likely to consider these cues as consisting primarily of delicate inferences the child might make based on prior utterances and knowledge of the world and of speakers’ likely intentions. However, a discourse also consists of a collection of
words and structures used, and concepts successfully evoked, in the preceding utterances. Simple priming effects of a variety of kinds — for the sounds and the meanings of words, and perhaps even for the structures in which the words occur (e.g., Bock, Loebell & Morey 1992) — should influence children’s representations of input speech, and therefore their comprehension of it. In principle, some discourse effects could be modeled as based on natural consequences of processing speech, rather than separate inferences about the speaker’s intent. A similar argument has been made for some of the speaker’s contributions to discourse coherence, including the shortening of repeated words in context (Bolinger 1978; Fisher & Tokura 1995).
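The position-by-position analysis behind the sentence repetition results can be sketched as scoring each content word of the test sentence separately and comparing Studied and New conditions at each position. The trial data below are invented for illustration; the positions follow the example sentence “Frank dropped the glasses in the bedroom.”

```python
# Each trial records whether the child correctly repeated the content word at
# each position (subject, verb, target, last). 1 = correct, 0 = incorrect.
# All values are invented for illustration.
POSITIONS = ("subject", "verb", "target", "last")

studied_trials = [
    (1, 1, 1, 1), (1, 0, 1, 1), (0, 1, 1, 0), (1, 1, 1, 1), (1, 1, 0, 1),
]
new_trials = [
    (1, 1, 0, 1), (0, 1, 0, 1), (1, 0, 0, 0), (1, 1, 1, 1), (0, 1, 0, 1),
]

def accuracy_by_position(trials):
    """Per-position proportion of correct repetitions across trials."""
    n = len(trials)
    return {pos: sum(t[i] for t in trials) / n for i, pos in enumerate(POSITIONS)}

studied = accuracy_by_position(studied_trials)
new = accuracy_by_position(new_trials)
facilitation = {pos: studied[pos] - new[pos] for pos in POSITIONS}

# With these invented numbers, the advantage is concentrated on the primed
# target, and no other position is worse in the Studied condition: the pattern
# reported in the text.
print(facilitation)
```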
Concluding remarks and future directions

In this paper we have suggested what we think is a promising framework for studying young children’s lexical representations of input speech. By focusing on learning from speech rather than on absolute performance in identifying words, we have uncovered a very basic way in which young preschoolers’ word identification processes resemble those of adults. For both young learners and adults, these processes incorporate learning mechanisms which update long-term representations of the sounds of words to reflect ongoing linguistic experience. We have argued that the learning processes mediating long-term auditory word priming in adults and in preschoolers could be the same ones which create those representations and permit their flexible use in childhood. Current views of memory which treat learning as an integral part of perceptual processing systems (e.g., Schacter 1994; Cohen & Eichenbaum 1993) make a strong prediction about the continuity of adaptive processes mediating speech recognition during acquisition (Nusbaum & Goodman 1994). Our findings provide prima facie evidence for this prediction, and suggest the beginning of a solution to the learning problem with which we began: If children perform relatively poorly in word recognition, and do so throughout the preschool years, how is it that they are such effective language learners? Our findings suggest that learning about words can to some extent be disentangled from word recognition performance. A single repetition of a word facilitates its subsequent identification to about the same degree whether the listener is preschool- or college-aged. This long-term facilitation is based primarily on perceptual representations, and the benefit from word repetition is abstract enough to encompass the variability in acoustic form and context found when words appear in different sentences.
Given these initial findings, we can consider priming as a result of the normal process of creating and using long-term memory representations for word recognition. This basic learning phenomenon could play two related roles in language acquisition. The learning mechanisms which support auditory priming could (1) help to create representations for word recognition, and (2) provide a simple way for the recognition of words in sentence contexts to profit from a repetitive conversational context.
Acknowledgments

This research was partially supported by NSF grant SBR 98073450, NIH grant HD/OD34715–01, by the University of Illinois Research Board, and by a postdoctoral fellowship to the second author from the Beckman Institute at the University of Illinois. This chapter was prepared while the first author was a visiting researcher at the Institute for Research on Cognitive Science at the University of Pennsylvania, and at Haskins Laboratories, New Haven, CT; we thank colleagues there for their many helpful comments.
Notes

1. Ratcliff and McKoon (1997) suggest that priming reflects a bias rather than a representational modification. On their bias model, priming causes the perceptual system to interpret ambiguous information as evidence for the recently encountered word. This model is based on their findings that (i) repetition aids identification of the repeated word only to the extent that it causes misidentification of similar words, and (ii) repetition has its effect only when choice options are highly similar. However, recent evidence contradicts both of these empirical claims: Several studies have found facilitation for identifying words similar to the studied word (e.g., Neaderhiser & Church 1998; Rueckl 1990), and have found that repetition facilitates word identification even when competitors are not similar to the target (e.g., Bowers 1999; Neaderhiser & Church 1998). Priming can also occur without bias under some circumstances (e.g., in amnesia; Keane et al. 1998). Finally, the bias view yields no natural account of priming for nonwords (e.g., Rueckl 1990), or reductions in auditory word priming with a change in voice or other aspect of a word’s acoustic realization (reviewed below).

2. The task was modified for adult subjects: (a) there was no robot, and the adults merely judged clarity of pronunciation in the study phase; (b) adults did arithmetic problems for 2 minutes during the distractor phase; (c) at test, adults heard much more severely filtered words. Adults also wrote down their responses rather than repeating the words aloud during the test phase. Children’s repetitions were IPA-transcribed from audiotapes, and coded for accuracy, following simple rules designed to permit some childhood mispronunciations, while maintaining a strict criterion of accuracy.
References

Bard, E. and Anderson, A. 1982. “The unintelligibility of speech to children.” Journal of Child Language 10: 265–292.
Bloom, L. 1970. Language development: Form and function in emerging grammars. Cambridge, MA: MIT Press.
Bloom, L., Miller, P. and Hood, L. 1975. “Variation and reduction as aspects of competence in language development.” In Minnesota Symposia on Child Psychology, Volume 1, A. Pick (ed.). Minneapolis: University of Minnesota Press.
Bock, K., Loebell, H. and Morey, R. 1992. “From conceptual roles to structural relations: Bridging the syntactic cleft.” Psychological Review 99(1): 150–171.
Bolinger, D. 1978. “Intonation across languages.” In Universals of Human Language, Volume 2: Phonology, J. H. Greenberg (ed.). Stanford, CA: Stanford University Press.
Bowers, J. S. 1999. “Priming is not all bias: Commentary on Ratcliff and McKoon (1997).” Psychological Review 106: 582–596.
Charles-Luce, J. and Luce, P. A. 1995. “An examination of similarity neighborhoods in young children’s receptive vocabularies.” Journal of Child Language 22: 727–735.
Church, B. A., Dell, G. S. and Kania, E. 1996. “Representing phonological information in memory: Evidence from auditory priming.” Paper presented at the meeting of the Psychonomics Society, Chicago.
Church, B. A. and Fisher, C. 1998. “Long-term auditory word priming in preschoolers: Implicit memory support for language acquisition.” Journal of Memory & Language 39: 523–542.
Church, B. A. and Poldrack, R. (in preparation). “Auditory associative priming: The role of coarticulation.” [ms, S.U.N.Y. Buffalo]
Church, B. A. and Schacter, D. L. 1994. “Perceptual specificity of auditory priming: Implicit memory for voice intonation and fundamental frequency.” Journal of Experimental Psychology: Learning, Memory, & Cognition 20: 521–533.
Cohen, N. J. and Eichenbaum, H. E. 1993. Memory, amnesia, and the hippocampal system. Cambridge, MA: MIT Press.
Cole, R. A. and Perfetti, C. A. 1980. “Listening for mispronunciations in a children’s story: The use of context by children and adults.” Journal of Verbal Learning and Verbal Behavior 19: 297–315.
Cutler, A. and Norris, D. 1988. “The role of strong syllables in segmentation for lexical access.” Journal of Experimental Psychology: Human Perception and Performance 14: 113–121.
Diamond, A. 1985. “Development of the ability to use recall to guide action, as indicated by infants’ performance on AB.” Child Development 56: 868–883.
Echols, C. 1993. “A perceptually-based model of children’s earliest productions.” Cognition 46: 245–296.
Fernald, A. 1994. “Infants’ sensitivity to word order.” Paper presented at the Linguistic Society of America, Boston, MA.
Fernald, A., Pinto, J. P., Swingley, D., Weinberg, A. and McRoberts, G. W. 1998. “Rapid gains in speed of verbal processing by infants in the 2nd year.” Psychological Science 9: 228–231.
Fernald, A., Taeschner, T., Dunn, J., Papousek, M., de Boysson-Bardies, B. and Fukui, I. 1989. “A cross-language study of prosodic modifications in mothers’ and fathers’ speech to preverbal infants.” Journal of Child Language 16: 477–501.
Fisher, C. and Tokura, H. 1995. “The given-new contract in speech to infants.” Journal of Memory and Language 34: 287–310.
Fisher, C. and Tokura, H. 1996. “Acoustic cues to grammatical structure: Cross-linguistic differences.” Child Development 67: 3192–3218.
Fowler, C. A. and Housum, J. 1987. “Talkers’ signaling of ‘new’ and ‘old’ words in speech and listeners’ perception and use of the distinction.” Journal of Memory and Language 26: 489–504.
Gerken, L. and McIntosh, B. J. 1993. “Interplay of function morphemes and prosody in early language.” Developmental Psychology 29: 448–457.
Gerken, L., Murphy, W. D. and Aslin, R. N. 1995. “Three- and four-year-olds’ perceptual confusions of spoken words.” Perception & Psychophysics 57: 475–486.
Gleitman, L. R., Gleitman, H., Landau, B. and Wanner, E. 1988. “Where learning begins: Initial representations for language learning.” In Linguistics: The Cambridge survey, Vol. III. Language: Psychological and biological aspects, F. J. Newmeyer (ed.). New York: Cambridge University Press.
Goldinger, S. D. 1996. “Words and voices: Episodic traces in spoken word identification and recognition memory.” Journal of Experimental Psychology: Learning, Memory, & Cognition 22: 1166–1183.
Goldinger, S. D., Luce, P. and Pisoni, D. B. 1989. “Priming lexical neighbors of spoken words: Effects of competition and inhibition.” Journal of Memory & Language 28: 501–518.
Goodman, J. and Nusbaum, H. 1994. The development of speech perception. Cambridge, MA: MIT Press.
Goodsitt, J. V., Morgan, J. L. and Kuhl, P. K. 1993. “Perceptual strategies in prelingual speech segmentation.” Journal of Child Language 20: 229–252.
Gordon, P. C., Eberhardt, J. L. and Rueckl, J. G. 1993. “Attentional modulation of the phonetic significance of acoustic cues.” Cognitive Psychology 25: 1–42.
Gow, D. W., Jr. and Gordon, P. C. 1995. “Lexical and prelexical influences on word segmentation: Evidence from priming.” Journal of Experimental Psychology: Human Perception and Performance 21: 344–359.
Graham, L. W. and House, A. S. 1971. “Phonological oppositions in children: A perceptual study.” Journal of the Acoustical Society of America 49: 559–566.
Greenbaum, J. and Graf, P. 1989. “Preschool period development of implicit and explicit remembering.” Bulletin of the Psychonomic Society 27: 417–420.
Hallé, P. A. and de Boysson-Bardies, B. 1996. “The format of representation of recognized words in infants’ early receptive lexicons.” Infant Behavior and Development 19: 463–481.
Hoff-Ginsberg, E. 1985. “Some contributions of mothers’ speech to their children’s syntactic growth.” Journal of Child Language 12: 167–185.
Jusczyk, P. W. 1997. The discovery of spoken language. Cambridge, MA: The MIT Press.
Jusczyk, P. W., Bertoncini, J., Bijeljac-Babic, R., Kennedy, L. J. and Mehler, J. 1990. “The role of attention in speech perception by young infants.” Cognitive Development 5: 265–286.
Karzon, R. G. 1985. “Discrimination of polysyllabic sequences by one- to four-month-old infants.” Journal of Experimental Child Psychology 39: 326–342.
Keane, M. M., Verfaellie, M., Gabrieli, J. D. E. and Wong, B. M. 1998. “Amnesic patients do not show normal bias effects in a forced-choice perceptual identification task.” Society for Neuroscience Abstracts 24: 1524.
Klatt, D. H. 1980. “Speech perception: A model of acoustic-phonetic analysis and lexical access.” In Perception and production of fluent speech, R. A. Cole (ed.). Hillsdale, NJ: Erlbaum.
Marslen-Wilson, W. D. 1987. “Functional parallelism in spoken word recognition.” Cognition 25: 71–102.
McClelland, J. and Elman, J. 1986. “The TRACE model of speech perception.” Cognitive Psychology 18: 1–86.
Merriman, W. E. and Marazita, J. M. 1995. “The effect of hearing similar-sounding words on young 2-year-olds’ disambiguation of novel noun reference.” Developmental Psychology 31: 973–984.
Morgan, J. 1986. From simple input to complex grammar. Cambridge, MA: MIT Press.
Naito, M. 1990. “Repetition priming in children and adults: Age-related dissociation between implicit and explicit memory.” Journal of Experimental Child Psychology 50: 462–484.
Nakatani, L. H. and Dukes, K. D. 1977. “Locus of segmental cues for word juncture.” Journal of the Acoustical Society of America 62: 714–719.
Nakatani, L. H. and Schaffer, J. A. 1978. “Hearing ‘words’ without words: Prosodic cues for word perception.” Journal of the Acoustical Society of America 63: 234–245.
Neaderhiser, B. J. and Church, B. A. 1998. “Can perceptual bias account for priming in forced-choice tasks?” Paper presented at the 39th Annual Meeting of the Psychonomics Society, Dallas, TX.
Newport, E. 1990. “Maturational constraints on language learning.” Cognitive Science 14: 11–28.
Nittrouer, S. and Boothroyd, A. 1990. “Context effects in phoneme and word recognition by young children and older adults.” Journal of the Acoustical Society of America 87: 2705–2715.
Nusbaum, H. and Goodman, J. 1994. “Learning to hear speech as spoken language.” In The development of speech perception, J. Goodman and H. Nusbaum (eds.). Cambridge, MA: MIT Press.
Oliphant, G. W. 1983. “Repetition and recency effects in word recognition.” Australian Journal of Psychology 35(3): 393–403.
68
CYNTHIA FISHER & BARBARA A. CHURCH
Parkin, A. J. and Streete, S. 1988. “Implicit and explicit memory in young children and adults.” British Journal of Psychology 79:361–369. Prince, E. F. 1992. “The ZPG letter:Subjects, definiteness, and information-status.” In Discourse description: Diverse linguistic analyses of a fund-raising text, W. C. Mann and S. A. Thompson (eds.), Philadelphia:John Benjamins Publishing Co. Rajaram, S., Srinivas, K. and Roediger, H. L. 1998. “A transfer-appropriate processing account of context effects in word-fragment completion.” Journal of Experimental Psychology: Learning, Memory, & Cognition 24:993–1004. Ratcliff, R. and McKoon, G. 1997. “A counter model for implicit priming in perceptual word identification.” Psychological Review 104:319–343. Roediger, H. L., III, and McDermott, K. B. 1993. “Implicit memory in normal human subjects.” In Handbook of Neuropsychology, Vol. 8, H. Spinnler and F. Boller (eds.). Amsterdam:Elsevier . Rueckl, J. G. 1990. “Similarity effects in word and pseudoword repetition priming.” Journal of Experimental Psychology: Learning, Memory, & Cognition 16(3):374–391. Saffran, J. R., Newport, E. L. and Aslin, R. N. 1996. “Word segmentation:The role of distributional cues.” Journal of Memory and Language 35:606–621. Schacter, D. L. 1994. “Priming and multiple memory systems:Perceptual mechanisms of implicit memory.” In Memory systems 1994, D. L. Schacter and E.Tulving (eds.). Cambridge, MA:MIT Press. Schacter, D. L. and Church, B. A. 1992. “Auditory priming:Implicit and explicit memory for words and voices.” Journal of Experimental Psychology: Learning, Memory, and Cognition 18:915–930. Schacter, D. L., Church, B. A. and Treadwell, J. 1994. “Implicit memory in amnesic patients:Evidence for spared auditory priming.” Psychological Science 5:20–25. Schacter, D. L., McGlynn, S. M., Milberg, W. P. and Church, B. A. 1993. “Spared priming despite impaired comprehension:Implicit memory in a case of word-meaning deafness.” Neuropsychology 7(5):107–1 18. 
Sheffert, S. M. 1998. “Voice-specificity effects on auditory word priming.” Memory & Cognition 26:591–598. Snow, C. 1972. “Mothers’ speech to children learning language.” Child Development 43:549–565. Sommers, M. S. 1996. “The structural organization of the mental lexicon and its contribution to age-related declines in spoken-word recognition.” Psychology and Aging 11:333–341. Stager, C. L. and Werker, J. F. 1997. “Infants listen for more phonetic detail in speech perception than in word-learning tasks.” Nature 388:381–382. Swingley, D. 1998. “Speech processing in very young children.” Paper presented at the Eleventh International Conference on Infant Studies, Atlanta, GA. Valian, V. 1986. “Syntactic categories in the speech of young children.” Developmental Psychology 22:562–579. Walley, A. C. 1993. “More developmental research is needed.” Journal of Phonetics 21:171–176.
IMPLICIT MEMORY SUPPORT FOR LANGUAGE ACQUISITION
69
Walley, A. C., Smith, L. B. and Jusczyk, P. W. 1986. “The role of phonemes and syllables in the perceived similarity of speech sounds for children.” Memory & Cognition 14:220–229.
How Accessible is the Lexicon in Motherese?

Nan Bernstein Ratner & Becky Rooney
The University of Maryland, College Park
A longstanding problem for language acquisition researchers is understanding how the naive language learner accomplishes the task of segmenting the ongoing speech signal into its component words. Without accurate identification of word or morpheme boundaries, grammatical acquisition could not proceed. Because conversational speech consists of concatenated units, and adult models of speech perception presume that lexical knowledge aids in the identification of word onsets and endings, the language learning infant is presumably faced with a Catch-22 situation: to learn language one must parse the signal, but parsing requires knowledge of the language.

While much is known about the characteristics of child-directed speech (CDS), the question of segmentation has not been very extensively addressed. Much of the work examining cues to segmentation in CDS has concentrated on possible phonetic characteristics of the register that might aid children in identifying word and clause boundaries. Bernstein Ratner (1993) summarizes the substantial differences in realization of phonetic targets and usage of boundary-blurring phonological rules that distinguish the CDS register. However, such phonetic adjustments are relative, rather than absolute, and cannot be considered to provide infallible segmentation cues to the infant language learner.

In this paper, we will argue that speech to children at the earliest stages of language production facilitates segmentation through certain structural properties of CDS. These include a heavy reliance on short utterances, redundant formulaic frames, and high lexical redundancy. In particular, we will argue that children are most likely to encounter the items which then appear in their initial core lexicon in single-word utterances addressed to them. Lexical items appearing in single-word utterances in CDS are further likely to be contrastively embedded in other extremely short utterances.
Method

Subjects

Data for this investigation come from the Bernstein corpus of the Child Language Data Exchange System (CHILDES; MacWhinney 1995). The corpus consists of almost 10,000 utterances addressed to nine young, normally developing female infants (13–20 months) learning language, studied longitudinally. Each was taped on three occasions over a six month period of their early language development. Of the twenty-seven mother–child samples, 16 samples, representing 7 mother–child dyads, provided examples of interactions between mothers and children who were either in a very late preverbal stage or who had single word expressive output. This particular subsample was of special interest to us because it reflects maternal input to children who face the preliminary burden of isolating initial candidates for the expressive lexicon. For the Bernstein corpus, this subsample consisted of over 6,000 maternal utterances.

Method of analysis

Maternal utterances were analyzed using CLAN software (MacWhinney 1995). The WDLEN program generated a histogram of maternal utterance lengths. This procedure highlighted the very high frequency of very short utterances (VSUs; defined here as utterances 1–3 words long), which should present language learning children with either no or very few segmentation decisions in their efforts to locate words in the input. As shown in Figure 1, fully 24% of maternal utterances consisted of a single word. An additional 16% were two words long, and a further 19% were three words long. Thus, 59% of child-directed utterances in this sample consisted of maternal turns which, on an abstract level, should not pose extraordinary problems in performing the task of segmenting the input signal into words. Of course, short utterances would not be easily parsable if they consisted of a large number of lexical entries. A type–token ratio (TTR) analysis of the VSUs demonstrated a low degree of lexical variety.
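To make the two analyses concrete, here is a rough sketch in Python. The original study used the CLAN WDLEN and FREQ tools on CHAT transcripts; the toy utterance list below is invented for illustration and is not drawn from the Bernstein corpus.

```python
# Sketch of (a) a WDLEN-style histogram of utterance lengths and
# (b) a type-token ratio (TTR) for each VSU length. Toy data only.
from collections import Counter

utterances = [
    "ball", "doggie", "look", "the ball", "wanna ball",
    "see the doggie", "there's a ball", "what's that",
]

# (a) Distribution of utterance lengths (in words).
lengths = Counter(len(u.split()) for u in utterances)

# (b) TTR per utterance length: unique word types / total word tokens.
def ttr_for_length(utts, n):
    tokens = [w for u in utts if len(u.split()) == n for w in u.split()]
    return len(set(tokens)) / len(tokens) if tokens else None

for n in (1, 2, 3):
    print(n, lengths[n], ttr_for_length(utterances, n))
```

On a real corpus, low TTR values at short utterance lengths indicate the lexical redundancy discussed in the text.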
As Figure 2 indicates, TTRs for the VSUs were quite low, ranging from .273 in single word utterances, to .203 in two word utterances, and an extremely low value of .153 for three-word maternal utterances. Concretely put, the unique types in one word utterances formed a lexicon of only 230 words; only 353 types were found in two word utterances; and the total number of words participating in maternal three word utterances
Figure 1. Distribution of maternal utterance lengths (one word: 24.0%; two words: 16.0%; three words: 19.0%; more than three words: 41.0%)

Figure 2. Type–token ratio (TTR) characteristics of VSUs in CDS (TTR values, from redundant [0] to totally unique [1], for 1 word, 2 word, and 3 word utterances and for child language at 3–8 yrs)
was a corpus of 3,261 words, representing only 498 unique forms. These values can be compared to TTR ranges of .3 to .5 seen in prior analyses of CDS (see Rondal 1985), and with the much higher TTR values to be expected in older children's (Miller 1981) and adult-adult conversation.

Next, we investigated the degree to which single word utterances could bootstrap lexical segmentation of two and three word utterances. In a single word utterance, particularly given the monosyllabic characteristics of the lexicon in motherese, one might presume that the child has no segmentation decisions to make, if a default segmentation strategy is the syllable. Once identified as units, items in one word utterances allow de facto segmentation of longer utterances in which they participate, essentially allowing a given-new distinction.

We performed a lexical overlap analysis of the VSUs, first reducing all tokens (words) in the corpus to uninflected roots. Although inflected forms are of course found in speech to young children exposed to English, in almost all cases the uninflected form was more frequently seen. As typical examples, "bunny" was seen 60 times, but "bunnies" and "bunny's" were seen only twice and four times, respectively. Only inherently multiple noun concepts ("eye", "block") were seen more often as plurals than as singular forms. "Go" was almost ten times more frequent than any of its inflected forms. We then performed an analysis to determine which forms were common to single, two and three word maternal utterances, using the CLAN FREQ utility, which computes the frequency of word forms in CHAT transcripts. Results are shown in Figure 3.

Figure 3. Common lexicon across VSUs of differing lengths (1 & 2 words: 32% common, 68% unique; 2 & 3 words: 43% common, 57% unique; 1 & 3 words: 31% common, 69% unique)
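The root-reduction and overlap computation can be sketched as follows. This is a hypothetical illustration of the logic, not the CLAN FREQ procedure itself; the utterances and the root table are invented.

```python
# Reduce tokens to uninflected roots, then compute the word types shared
# between the lexicons of one-, two-, and three-word utterances.
roots = {"bunnies": "bunny", "bunny's": "bunny", "going": "go", "goes": "go"}

def lexicon(utts, n):
    """Set of root types occurring in utterances exactly n words long."""
    return {roots.get(w, w) for u in utts if len(u.split()) == n
            for w in u.split()}

utterances = ["bunny", "go", "see bunnies", "bunny's going", "there's a bunny"]
lex1, lex2, lex3 = (lexicon(utterances, n) for n in (1, 2, 3))

# Proportion of the combined 1- and 2-word lexicon common to both lengths.
common_12 = len(lex1 & lex2) / len(lex1 | lex2)
print(sorted(lex1), sorted(lex2), common_12)
```

Applied to the full corpus, this kind of set intersection yields the common/unique percentages reported in Figure 3.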
Figure 3 displays lexical types as opposed to the frequency (tokens) with which the word was observed in maternal speech. As can be seen, there is a high degree of overlap in the lexicon of VSUs. Almost one third of lexical items used were common entries in single and two word utterances. Thus, in one third of two word utterances, infants could be expected to encounter a piece of given
information which could reliably be parsed from a minimally longer string. Forty percent of the total lexicon which comprised two and three word maternal utterances was redundant. Finally, almost a third of lexical types seen in single and three-word utterances were common to both utterance lengths. Thus, there is a strong tendency for short utterances to draw on a common lexical pool, which should enable relatively easier parsing of items in the input.

What are the characteristics of the lexical items in VSUs? A token analysis shows that such utterances are most likely to be composed of nouns, verbs, modifiers and deictics. The token frequency with which certain types of words appear in VSUs changes markedly depending upon the length of the utterance. In single word utterances addressed to children, nominal forms predominate; the remainder consisted of an extremely small class of vocatives (look, see, hi, bye-bye, deictics), routines (i.e., peek-a-boo, upsy-daisy), and acknowledgments. Two word utterances were more variable in form, although they utilized a large number of items seen in single word utterances. They contained a high proportion of nouns (25% of tokens), which participated in noun–verb, verb–noun, and modifier–noun combinations, as well as wh-contractions and deictics. Three word utterances contain only two new form classes: articles and auxiliaries, both of which have very low type–token ratios.

The goal of studying tendencies in adult input to children is to determine whether they may reasonably play a role in the actual pattern of acquisition demonstrated by children. Figure 4 plots the relative distribution of grammatical forms in VSUs against attested patterns of lexical acquisition taken from Nelson (1973) and Benedict (1979).
There are striking parallels between the form classes of words used redundantly and embedded in short utterances to language learners, and children's own early output. While children's early lexicons contain a slightly higher proportion of nominal forms than does maternal speech, there is relatively good agreement between the frequency with which certain form classes appear in maternal speech and the frequency with which they are observed in early expressive lexicons.

Beyond the restricted lexicon and form class characteristics of VSUs, utterance frames were apparent in this subset of maternal utterances. A high proportion of two and three word utterances were redundantly framed, such as "There's a/an Noun", "Wanna verb/noun?", "What's deictic?", etc. The fact that the frame members are often unstressed and heavily concatenated should logically aid the child in determining the boundaries of content words in the input, which are then used to compile the initial expressive lexicon. The frames may then additionally bootstrap an early repertoire of multi-word utterances in
Figure 4. Lexicon common to input and acquisition (percent of total lexicon by form class: nouns, verbs, modifiers, concatenatives, deictics, functors, other; plotted for 1-, 2- & 3-word CDS, 1- & 2-word CDS, and the first 50 words of the child lexicon)
the child which could be used in the absence of true syntactic knowledge (a speculation raised by Locke 1993).
Discussion

Contrary to some hypotheses about the troublesome nature of input to children and its utility in helping the child to solve linguistic problems, it appears from these data that, as children begin their initial stages of language production, adult speech to them provides many cues to word segmentation. Many utterances (more than half of those directed to them) contain few items to be parsed from the signal, and the repetitive nature of both vocabulary and syntactic frames should allow the child to readily identify an initial lexicon to be used in bootstrapping further linguistic acquisition. Although this is a rather simplistic analysis of group data, we believe it contains useful information for tracking relationships between input and acquisition in children, particularly if we can show relationships between the items in VSUs of individual parents and their child's subsequent initial lexicon. In the debate over the contributions of particular features of input language
to the language acquisition process, there is often dispute over how far child-directed speech can carry the child toward a full and adult-like grammatical competence that includes the ability to say, and avoid saying, things for which there has been no overt model or instruction. This paper does not wish to suggest that the features of CDS that we have just discussed enable the child to gain a full appreciation of the complexities of English or any other language to be acquired. Rather, we would like to propose that in order for the infant to either set parameters, weight probabilities, or do whatever it is that various models of language acquisition propose, the infant must first determine the boundaries, referents and function of a basic core of discrete units in the input. Contrary to some assumptions, this would not appear to be a totally hopeless task. Recently, Brent and Cartwright (1996) have shown that the Bernstein corpus can be parsed to a high degree of accuracy using a computer algorithm which makes few assumptions about the possible nature of word boundaries. Using minimal assumptions about English syllable shapes (words must contain at least one vowel, and legal word boundary segments and clusters are limited to those that begin and end input utterances), Brent and Cartwright's distributional regularity algorithm was able to parse approximately 75% of the words in the 10,000 utterance corpus accurately, by using what amounts to the given-new strategy discussed in this paper. This is arguably a higher level of accuracy than the children in this sample achieved, but then, the children in the sample had much more data to work from than three 45-minute samples apiece.
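The given-new strategy can be sketched as a toy algorithm. This is a hypothetical illustration of the idea, not Brent and Cartwright's actual model: words already isolated (e.g., from single-word utterances) are "given" and can be peeled off a longer string, leaving the residue as a candidate new word. The input string and lexicon below are invented.

```python
# Greedy given-new segmentation sketch: strip known words from either
# edge of an unsegmented utterance; whatever remains is a "new" candidate.
def segment(utterance, known):
    """Return a parse of `utterance` as known words plus any residue."""
    parse_front, parse_back = [], []
    changed = True
    while changed:
        changed = False
        for w in sorted(known, key=len, reverse=True):  # longest first
            if utterance.startswith(w):
                parse_front.append(w)
                utterance = utterance[len(w):]
                changed = True
            elif utterance.endswith(w):
                parse_back.insert(0, w)
                utterance = utterance[:-len(w)]
                changed = True
    return parse_front + ([utterance] if utterance else []) + parse_back

known = {"there's", "a", "bunny"}
print(segment("there'sabunnyhere", known))  # residue "here" is a new candidate
```

A greedy edge-stripping sketch like this will of course missegment in pathological cases (e.g., a known word embedded inside a longer new word); the distributional-regularity machinery of the real algorithm addresses such cases.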
Brent and Cartwright’s analysis relied upon an orthographic representation of the input and one might be concerned that either variability in the realization of articulatory targets, or limitations in infant perceptual memory might mitigate against the infant’s ability to utilize a given-new distinction for this set of input. Although not a focus of this paper, recent analysis of the Bernstein Ratner corpus by Charles-Luce and Luce (1995) suggests that the lexicon consists of words with very few phonetic neighbors; few of the words that our study children heard had many perceptually confusible options. It is also evident that the child’s early lexicon reflects certain tendencies in the model to which they are exposed. Such tendencies and regularities may enable the earliest stages of language production (early single word and combinatorial language) to proceed. Recent accounts such as that of Locke (1993) suggest that there may well be an expressive stage of child language acquisition which is essentially pre-grammatical, characterized by little or no evidence of syntactic rule usage in utterance formation, and is enabled by different innate perceptual and cognitive capacities than is later language development. Our data suggest ways in which the young infant can exploit tendencies in the language he
or she hears to compile an early communicative repertoire prior to the onset of grammatical analysis and to identify important properties of the language to be acquired. Redundant frames probably enable the acquisition of content words, in tandem with their presentation in single and two word utterances. In turn, the identification of such content words allows subsequent parsing of new surrounding elements that belong to different grammatical classes. Each successful segmentation effort carries the child ever forward in avoiding missegmentations of given elements from new, thus bootstrapping the child into a more accurate and adult-like mapping of the input signal.
References

Benedict, H. 1979. "Early lexical development: comprehension and production." Journal of Child Language 6: 183–200.
Bernstein Ratner, N. 1993. "From signal to syntax: but what is the nature of the signal?" In From Signal to Syntax: Bootstrapping from speech to grammar in early acquisition, J. Morgan and K. Demuth (eds.). Mahwah, NJ: Erlbaum.
Brent, M. and Cartwright, T. 1996. "Distributional regularity and phonotactic constraints are useful for segmentation." Cognition 61: 93–125.
Charles-Luce, J. and Luce, P. 1995. "An examination of similarity neighborhoods in young children's receptive vocabularies." Journal of Child Language 22: 727–735.
Locke, J. 1993. The child's path to spoken language. Cambridge, MA: Harvard University Press.
MacWhinney, B. 1995. The CHILDES project (second edition). Mahwah, NJ: Erlbaum.
Miller, J. 1981. Assessing language production in children: experimental procedures. Baltimore: University Park Press.
Nelson, K. 1973. Structure and strategy in learning to talk. Monographs of the Society for Research in Child Development 38.
Rondal, J. 1985. Adult-child interaction and the process of language acquisition. New York, NY: Praeger.
Bootstrapping a First Vocabulary

Lila Gleitman & Henry Gleitman
University of Pennsylvania, Institute for Research in Cognitive Science
… the knowledge which we acquired before birth was lost by us at birth … afterwards by the use of the senses we recovered what we previously knew.
(Plato, Phaedo)
The term "bootstrapping" has a nice incoherence that has made it useful in discussions of how children acquire a language. The problem isn't the bootstraps. The problem is that you can't use them if you're standing in the boots. How then do infants ground the first steps in the acquisition process? What do they stand on? Granted that language is prefigured in human biology, the child must still recover its peculiar and unique instantiation in English, French, or Malukakan. The only hope for doing so is to begin by use of the senses.
First steps

Recognition of the grounding problem led to an immediate understanding of the advance represented in the work of Saffran, Newport and Aslin (1997). They showed that infants will spontaneously and efficiently solve word-segmentation problems by the use of probabilistic, relatively low-level pattern analyses of speech at roughly the syllabic level. Marcus (1999) demonstrated further that six-month-olds are inclined also to consider rudimentary algebraic relations among items in the observed speech signal; i.e., whether patterns such as xYx or xxY characterize the input. Grounded, then, in the evidence of the senses, and equipped with memorial and computational procedures of relevant sorts, infants can attain knowledge of the elementary linguistic formatives that they are
seeking. They can construct discrete units and appreciate sequential relations among these, all bootstrapped without begging the grounding question from the continuously varying sound wave.

We and our colleagues have for many years been worrying about the grounding problem at a level that presupposes such findings as Saffran et al. and Marcus: After learners have parsed the sound wave distributionally and prosodically so as to recover its linguistically functioning formatives, how do they assign interpretations to its elements and elementary structures, so discovered? How do they learn what words mean, and how these words function in the semantics of the clause? Learners must derive the semantics of words by considering the contingencies for their use, as revealed by the evidence of the senses. That is, the primitive grounding for word meanings lies in the child's natural capacity to interpret scenes, properties, and events in the ambient extralinguistic world. In this chapter, we'll review some studies that suggest the power but also some of the limits of learning that can be supported by this primitive grounding procedure; that is, by matching the occurrence of heard words with the scenes and events that accompany them in adult-to-child interactions. Then we will suggest how the children lever themselves upward — "bootstrap" — from the information acquired at this first stage into more sophisticated representations of input speech. Crucially, the creation of these higher-order linguistic descriptions enables further learning. The outcome of this sequential process is a probabilistic multiple-cue system (a constraint-satisfaction machinery; see Gillette, Gleitman, Gleitman & Lederer 1999; Seidenberg & MacDonald in press) for the interpretation of words. We have used the label "bootstrapping" both to describe the learning procedure that builds this cue system and for its efficient use, once constructed (probably in the child learner by age three years or so).
As we will try to show, restrictions and deficits in early child vocabulary learning are, on this view, attributable to incompletenesses in the cueing system, not — or not so much — to conceptual limitations in the learners.
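The syllable-level distributional analyses credited above to Saffran and colleagues can be illustrated with a toy transitional-probability computation: TP(x → y) = freq(xy) / freq(x), with word boundaries tending to fall where TP dips. The syllable stream below is an invented example, not their actual stimuli.

```python
# Transitional probabilities over an invented syllable stream containing a
# recurring "word" pabiku; TP is high within words, lower across boundaries.
from collections import Counter

syllables = "pa bi ku ti bu do pa bi ku go la tu ti bu do pa bi ku".split()

pair_freq = Counter(zip(syllables, syllables[1:]))
syll_freq = Counter(syllables[:-1])  # count only positions with a successor

def tp(x, y):
    """Transitional probability of hearing y immediately after x."""
    return pair_freq[(x, y)] / syll_freq[x]

print(tp("pa", "bi"))  # high: within the recurring "word" pabiku
print(tp("ku", "ti"))  # lower: spans a "word" boundary
```

A learner tracking these statistics could posit a boundary after "ku", where the forward probability drops.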
Early vocabulary growth

Wherever studied, caretaker speech to young children is remarkably grammatical (e.g., Newport, Gleitman & Gleitman 1977). As this implies, adults from the beginning use nouns, verbs, adjectives, prepositions, complementizers, and so forth, in talking to their infants. On the simplest but most ridiculous learning story, the child's output speech should line up with the frequency facts about this
input, e.g., the first English word should be the. A more natural supposition is that at least members of content categories (noun, verb, adjective) would appear in child speech in an order and frequency distribution similar to that of the input.

The first few words: Unconventional categorization

What really seems to happen, in closely studied cases, is that the earliest words uttered by the infant (usually in the period circa 10–14 months), even if they at first render some regular referential intent, soon go mad (the words, not the infants). We ourselves proudly noted that our grandson, aged about 11 months, said "light", or something that was not too far off "light", when he looked at a light. We were somewhat deflated, however, upon noting that his vocabulary failed to grow beyond this one item for the next two or three months. More interestingly, the apparent reference of "light" ballooned out of control. Soon it was used for any thing or person or animal, soon thereafter for any pleasant event, and finally if you just gave him a sharp little squeeze. A universal word which meant everything and therefore nothing, so far as we could tell. The first ten or so words uttered are widely reported to contain such "unconventional" items: maybe some animal sounds, what looks like (but can't responsibly be said to be, or for that matter not to be) a few nouns, a locative preposition or two; in short, not much that's easy to categorize in terms of the adult vocabulary (see Dromi 1987 for these effects and an informative discussion).

The first 50–100 words: Predominance of nouns

After this slow and rather zany beginning, there is rapid movement toward a vocabulary whose proportion of nouns is strikingly higher than would be predicted from the input, and which is entirely or nearly devoid of verbs as well as function words (Goldin-Meadow, Seligman & Gelman 1976; Bates, Dale & Thal 1995).
Though the advantage of nouns over verbs is not as close to categorical in some counts as in others, once corrected for cross-linguistic input frequency differences and differences in data-collection methods, the noun advantage in early speech and comprehension is robust across individuals and across languages, and of great magnitude. All parties are agreed that a major part of the explanation for this phenomenon will allude to the typical object-reference function of a great many nouns (for a useful description in terms of "individuability", see Gentner & Boroditsky in press). Beyond this, explanation splits into three broad approaches: The first is that the object concepts are developmentally primary, and so are represented
adequately by even the youngest learners; the verbs, because they express relationships among these objects, are conceptually available only somewhat later (Gentner 1978, 1981; Huttenlocher, Smiley & Ratner 1983). The second approach makes reference to the variable encoding of predicates across languages (e.g., Talmy 1978), compared to the (near) identity of object-term reference across languages. Because the learner has first to discover the variable encoding properties for verbs in his own language, their acquisition is to that extent delayed (Gentner & Boroditsky in press). A final idea is that it is the informational state of the learning procedure, rather than language variability or some lack of conceptual sophistication, which limits the kinds of word that can be acquired early on (Fisher, Hall, Rakowitz & Gleitman 1994; Gleitman 1990; Landau & Gleitman 1985). All of these approaches have something to recommend them, and they are not logically in opposition. It wouldn't be surprising if a conspiracy among them turned out to be the best explanatory theory. But here we will examine the third position, which is our own: Verbs are acquired later than nouns because their efficient learning demands sophisticated linguistic representations of the input, representations which must be built by using lower-level representations as the scaffolding.

Efficient word learning

More or less coincident with the first rudimentary signs that children appreciate something of linguistic structure, somewhere around the second birthday, word learning becomes faster (increasing from about .3 words a week to 3 words a week), more precise (no more "Light!"; in fact rarely any misreferences at all), and more categorially catholic (nouns, verbs, adjectives all make their appearance). The temporal contiguity between structural appreciation and efficient, categorially broad word learning has often been noted (Bates et al.
1995; Gleitman & Wanner 1982; Lenneberg 1967) and conjectured to reflect a cause-and-effect relationship. Our work seems to support this position and is suggestive of how knowledge of syntactic structure can inform vocabulary learning.
The Human Simulation paradigm

The findings we review here are from experimentation in which adult subjects try to identify words from partial information (Gillette et al. 1999; Snedeker, Brent & Gleitman 1999; Gleitman, Snedeker & Rosman in press). Conceptually, these experiments are analogous to computer simulations in which a device,
endowed with whatever ("innate") ideas and learning procedures its makers deem desirable to program into it, is exposed to data of the kind naturally received by the target learner it is simulating. The measure of success of the simulation is how faithfully it reproduces the learning function for that target using these natural data. Our test device is a population of undergraduates (hence Human Simulations). Their preprogramming includes, inter alia, knowledge of English. The data received are contextualized mother-to-child speech events. The form in which the adult subjects receive information about these speech events is manipulated across conditions of the experiment. The subjects' task is to identify the mother's word meanings under these varying presentation conditions; that is, to "acquire" a first vocabulary. These experiments serve two purposes. The first is to provide an estimate of the psychological potency of various cues to word meaning that are implicit in the real learning situation. For example, what kinds of words can — in principle — be acquired by inspection of the contingencies for their use in the absence of all other cues? The second is to estimate by reference to these outcomes something about the learning procedure used by children. Restating, we attempt to reproduce in adults the learning function of the one- and two-year-old child by appropriate changes in the information structure of the input. If successful, this exercise makes it plausible that the order of events in child vocabulary acquisition (here, the developmental move from unconventional categories to nominal categories to predicate categories in speech and comprehension) is assignable to information-structure developments rather than to cognitive developments in the learner, for we have removed such possible cognitive inadequacies from the equation.
When our college-age subjects fail to learn, it is not because they have some deficit that disbars them from recognizing words like ball or get. In fact, we know that these words are already in our subjects' vocabularies. All they have to do is to recover what they previously knew. Our first experimental probe asked how well they can do so by using the evidence of the senses.

Simulating word to world pairing

The stimuli for these experiments were generated by videotaping mothers interacting with their 18- to 24-month-old children in an unstructured situation. The maternal speech was transcribed to find the 24 most frequent nouns and the 24 most frequent verbs (listed in frequency order in Table 1). To simulate a condition under which learners were presumed able only to identify recurrences of the same word in the speech stream, à la Saffran et al., and to match these
84
LILA GLEITMAN & HENRY GLEITMAN
with their extralinguistic contexts of use, we selected more or less at random 6 video-clips during which the mother was uttering each of these words. Each video-clip started about 30 seconds before the mother uttered the word and ended about 10 seconds afterwards. That there were 6 clips for each “mystery word” our subjects were to identify was to simulate the fact that real learners aren’t forced to acquire meanings from a single encounter with a word and its context; rather, by examining the use of the word in a variety of contexts, the observer can attempt to parse out that property of the world common to all these encounters. The 6 video-clips for each word were then spliced together with a brief color-bar between them, and the subjects made a conjecture as to the word meaning after viewing all six samples; this procedure was repeated for all 48 items. The subjects were told for each word whether it would be a noun or a verb, but they did not hear what the mother was saying, for we turned off the audio. Subjects only heard a beep, which indicated the instant during the event when the mother actually had uttered the mystery word. So this manipulation should be thought of as simple word-to-world (rather: beep-to-world) pairing, where the only information available to the learner is a sample of the extralinguistic contingencies for the utterance of words.

Table 1. Noun and verb stimuli by frequency

nouns: piggy, ball, mommy, hat, elephant, plane, bag, kiss, toy, drum, people, nose, hole, daddy, music, hand, tail, hammer, thing, camera, peg, pilot, shoes, swing.

verbs: go, do, put, come, want, see, look, get, turn, play, hammer, have, push, say, throw, pop, like, stand, think, know, make, wait, fall, love.
Note. The word hammer appeared among the 24 most frequent maternal nouns and among the 24 most frequent verbs as well, owing to properties of the situations in which the videotapes were made.
Of course, even if this stripped-down situation fairly models an early stage in acquisition, namely, one in which the learner can’t take advantage of the other words in the sentence and their syntactic arrangement (because she as yet knows neither), in the real case learners may receive 7 or 50 or 500 such word-to-world opportunities as the basis for inferring a word meaning. Our subjects received only 6. The only realistic question to ask of the outcomes, therefore, is not about the absolute level of learning, but only whether nouns and verbs are equally easy to identify under these presentation conditions.
BOOTSTRAPPING A FIRST VOCABULARY
85
Nouns before verbs

The findings from this manipulation, based on 84 subjects observing the 48 items in 6 contexts each, are that about 45% of the nouns but only 15% of the verbs were correctly identified. Moreover, each of the 24 nouns was identified by at least some of the subjects, whereas eight of the verbs — fully a third (know, like, love, say, think, have, make, pop) — were never correctly identified by any subject. It is easy to see why: several of these impossible verbs describe invisible mental acts and states, while others are so general that they can be used in almost any context. If the only information available to the youngest learners is inspection of these contexts of use, how could such verbs be learned? As the results show further, even the 16 remaining verbs were considerably harder to identify from observation of their contexts (23% correct) than the nouns, which were identified correctly 45% of the time. So a noun bias in word identification can be demonstrated for any learner — even an adult — so long as real-world observation of that word’s contingencies of use is the sole cue. If this is the early machinery for mapping between conceptual structure and word identity, we can understand the noun bias.

Imageability

There is considerable variability in identifiability within the noun class (e.g., ball is correctly identified by just about every subject, but kiss by very few) and within the verb class (e.g., push vs. want). This suggests that sheer concreteness or imageability, rather than lexical category, is the underlying predictor of the identifiability of words from their observed contexts. And indeed a direct test shows this to be the case. We had a new group of subjects rate all 48 items on a scale of imageability. Because the 48 test items were all highly frequent in speech to children under two years, not surprisingly the scale was heavily skewed to the imageable end.
Nonetheless, almost every one of the 24 commonly used verbs in our set was rated as less imageable than almost any of the 24 nouns. As is obvious from this, an analysis of the predictive value of imageability and lexical class for identifiability showed that, once the imageability variable was removed, there was no effect of lexical class at all. That is, observation serves the learner well only for identifying the words that encode observables! The correlation, in maternal speech, of this kind of observability with the noun-verb categorization difference is so strong as to explain the predominance of nouns in early child speech, without having to consider whether — in addition — the children find the object terms less conceptually taxing.
Why not false verb meanings?

The identification behavior of adult subjects seems to reflect that of young children at the stage where their vocabulary consists almost solely of nouns. But several questions remain unresolved by this account. What causes the cognitive “unconventionality” of words that appear among the child’s first 10 or so? And why don’t the slightly older children utter verbs mapped onto the wrong meanings? That is, the phenomenon of early speech is not just that children are more accurate in their noun use than in their verb use. Rather, they seem not to be flexing their verb muscles at all. Why? Some progress toward an answer comes from recent work by Jesse Snedeker (Snedeker et al. 1999; Gleitman et al. 1999). These studies revised the original experimental procedure by not informing the subjects of the lexical class of the mystery word. Subjects knew it would be a noun or a verb, but not which. Under these conditions, the subjects’ correctness scores were depressed, but the general structure of the findings hardly changed. Nouns were still more than twice as easy as verbs to identify. What is more interesting is the result of a manipulation that required subjects to estimate (on a 5-point scale), after each of their conjectures, their confidence that they had correctly identified the real target word. The finding is that subjects were significantly more sure that their responses were correct when they had conjectured a noun than when they had conjectured a verb, even though they received no explicit feedback.
Snedeker’s hypothesis is that young child learners, like these experimental subjects, are biased toward noun learning (more correctly, biased to learn imageable words) because they receive implicit feedback that noun conjectures are much more likely to be correct than verb conjectures.1 This would make sense of the fact that the most primitive learners (those acquiring their first dozen or so words) do not show the noun bias and so learn slowly and errorfully; and that, after these first halting steps, they move into the period of heavy noun dominance in vocabulary acquisition. The noun bias grows in the learner, based on an implicit realization that object-reference items are about the only ones that submit to learning by a word-to-world pairing procedure.

Building representations that will support verb learning: The role of selection

An important outcome of the earliest stages of vocabulary learning is a well-learned stock of concrete common nouns, and an unsystematic smattering of other items. But why doesn’t the learner go on like this forever? The noun bias seems to be dysfunctional for acquiring the verbs, the adjectives, and so forth. As
we just discussed, the lack of success has apparently dissuaded learners from even trying very hard to acquire items outside the concrete noun category. At least part of the answer is that the known nouns form the scaffold for building improved representations of the linguistic input; this more sophisticated input representation, in turn, supports the acquisition of more abstract vocabulary items. How could this progression work? We know that, cross-linguistically, the melody and rhythm of maternal speech quite regularly deliver a unit of approximately clause size to the ear of the infant listener (Fisher & Tokura 1996). This unit, grounded in prosody, provides a domain within which learners can consider unknown verb meanings in the contexts provided by the nouns learned by the original word-to-world pairing procedure. Thus the acquired noun knowledge can ground a bootstrapping operation that carries the learner to a new representation of input speech. One kind of information latent in such cross-word, within-clause comparisons is selectional: certain kinds of noun tend to cluster with certain kinds of verb. Notoriously, for example, eat is likely to occur with food words. A noun like telephone can be a give-away to a small class of common verbs, such as talk, listen, and call. The number of nouns in the maternal sentence (that is, within the prosodic contour of the clause) provides an additional clue to the verb meaning. This is because, in the very short sentences used to infants, the number of nouns is a soft indicator of the number of arguments (Fisher, Hall, Rakowitz & Gleitman 1994; Fisher 1996; Gleitman 1990; Naigles 1990; Naigles, Gleitman & Gleitman 1993).
Thus gorp in a prosodically bounded sequence such as “… John … gorp …” is more likely to mean ‘sneeze’ than ‘kick.’ And ‘kick’ is a better guess than ‘sneeze’ for either “… John … gorp … ball …” or “… ball … gorp … John …”, even if, because of insufficient language-specific syntactic knowledge, one cannot tell one of these last two from the other. We can simulate a learner at this hypothetical stage of language acquisition: one who, by prior acquisition of nouns and sensitivity to prosodic bounding information, can register the number and the meanings of the nouns that characteristically occur in construction with the unknown verbs. To do so, we showed a new group of adult subjects the actual nouns that co-occurred with “the mystery verb,” within the same six sentences for which the prior subjects had seen the (videotaped) scenes. Within each sentence, the nouns were alphabetized. The alphabetical order was chosen (as subjects were informed) because we were here modelling a device which has access to the semantics of the noun phrases that occur in construction with a verb, but has not yet acquired the language-specific phrase structure. Showing the actual serial orders would, for English adults, be tantamount
to revealing the phrase structure. For example, the presentation form for the six maternal sentences containing the verb call was:

1. Grandma, you.
2. Daddy.
3. I, Markie.
4. Markie, phone, you.
5. Mark.
6. Mark.

Rather surprisingly, in light of the fact that these subjects saw no videotape, i.e., had access to no extralinguistic context at all, identifiability scores for verbs were about 9% higher in this condition than in the videotape condition. There is information in the noun choices for a verb, even when the observer has no basis for identifying the structural positions in which these nouns occurred.2 We hasten to acknowledge that there is no stage in language acquisition during which children hear nouns-in-sentences in alphabetical order and out of context. The manipulation was designed to extract — artificially, to be sure — a single information source (noun selectional information) and examine its potency for verb identification, absent other cues. Supposing that children make use of this information source, available as soon as they have acquired the stock of nouns, they can begin building a verb vocabulary.

Scene interpretation in the presence of noun knowledge

In the next manipulation, we modeled a learner who can coordinate the two sources of evidence so far discussed, inspecting the world to extract salient conjectures about relevant events, and using the known nouns to narrow the choice among them. This hypothetical procedure is a version of the “semantic bootstrapping” schemes of Grimshaw (1981) and Pinker (1984), with one crucial difference. It is sequential, i.e., the search for the verb interpretation takes place in the context of antecedently acquired nouns whose occurrence the learner can register. To model such a stage, new subjects were shown the 6 videotaped extralinguistic contexts along with the nouns the mother uttered in each sentence (in alphabetical order, as before), and asked to identify the mystery verbs.
Armed with these dual information sources, subjects for the first time identified a respectable proportion (about 30%) of the verbs, a significant improvement over performance when either of these two information sources was made separately
available. Noun knowledge evidently can serve as anchoring information to choose between plausible interpretations of the observed events.

Phrase structure representations support verb learning

The child as so far modeled has a principled basis for distinguishing between verbs that differ in argument number (e.g., sleep, hit, and give) and argument selection (e.g., eat occurs with food nouns, call occurs with telephone). This information, together with inspection of scenes, provides a principled basis for the construction of (certain aspects of) clause-level syntax. This is because a learner who

– understands such nouns as ball and boy, and
– hears an adult say the boy hit the ball, and
– observes some boy hitting some ball

can begin to locate the canonical subject of sentences in the language, that is, to label the phrases in a way that is relevant to thematic roles. A correlated language-internal cue to subjecthood is that different categories of nouns probabilistically perform different thematic roles, a factor whose influence can be observed even in rapid on-line parsing performance among adults (Trueswell, Tanenhaus & Garnsey 1994). Specifically, animate nouns are vastly more likely than inanimates to appear in subject position, just because they are likely to be the causal agents in events (Dowty 1991). Once the position of the sentence subject is derived by matching up the observed agent with its known noun label, the young learner has a pretty good handle on the clause-level phrase structure of the exposure language. The next experimental conditions tested the efficacy for verb identification of this phrase-structural information. First we showed a new group of subjects nonsense frames for our test verbs. These frames were again constructed from the sentences the mothers were uttering in the test conditions described earlier, by preserving the morphology but converting both the nouns and the verbs of the six maternal sentences to nonsense words.
For example, two of the six stimuli for call were “Why don’t ver gorp telfa?” and “Gorp wastorn, gorp wastorn!” As Lewis Carroll might have predicted, there was a dramatic improvement in verb identifiability under these conditions, with subjects now identifying just over half of the 24 verbs. It may seem surprising that the syntactic environments of verbs should all by themselves be so informative about the verb meanings. Recall that in the manipulation we are describing, subjects saw no video — they knew nothing of the contexts in which these nonsense verbs had been uttered. And they knew none of the co-occurring nouns, for all these had been converted to nonsense too.
Yet the subjects identified proportionally more verbs (51%) than they did in the prior experiment, in which both video contexts and accompanying nouns were presented (29%). On closer consideration, it makes sense that syntactic information can provide major cues to verb meaning. Verbs differ in the structures in which they appear. Generally speaking, the closer any two verbs are in meaning, the more their structural privileges overlap (Fisher, Gleitman & Gleitman 1991). This is because the structural privileges of a verb (the number, type, and positioning of its associated phrases) derive, quirks and provisos aside, from an important aspect of its semantics; namely, its argument-taking properties. The number of argument positions lines up with the number of participants implied by the logic of the verb. Thus a verb that describes a self-caused act of the musculature (e.g., Joe snoring) is liable to surface intransitively, a physical effect of one entity on another (Joe throwing a ball) is likely to be labelled by a transitive verb, and an act of transfer of an entity between two places or persons is likely to be ditransitive (Joe giving a ball to Bill). The type of complement is also derivative of aspects of the verb’s meaning. Thus a verb describing a relation between an actor and a proposition is likely to take clause-like complements (Joe believing that Bill is sad). And the hierarchical arrangement of the noun phrases cues the thematic roles of the participant entities. Because verb meanings are compositional at least at the level of these argument-taking properties (Grimshaw 1990), the matrix of verb-to-structure privileges has the effect of providing a coarse semantic partitioning of the verb set. For example, because one can forget things, this verb licenses a noun-phrase complement; and because one can also forget events, the same verb also licenses clausal complements.
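The semantic partitioning that subcategorization frames provide can be caricatured in a few lines of code. The frames and class labels below are our own simplified illustration (real verb-to-structure matrices are far richer); the point is only that collecting the frames a verb is heard in narrows its possible meaning, as with forget.

```python
# Hypothetical frame-to-semantics pairings (our illustration, not the
# authors' stimuli): each syntactic frame licenses a coarse semantic class.
FRAME_SEMANTICS = {
    ("NP", "V"):             "self-caused act (snore, sleep)",
    ("NP", "V", "NP"):       "effect of one entity on another (throw, push)",
    ("NP", "V", "NP", "PP"): "transfer between places/persons (give, put)",
    ("NP", "V", "S"):        "relation of an actor to a proposition (believe, think)",
}

def semantic_profile(observed_frames):
    """Collect the coarse semantic classes licensed by a verb's frames."""
    return [FRAME_SEMANTICS[f] for f in observed_frames if f in FRAME_SEMANTICS]

# A verb heard with both a noun-phrase object and a clausal complement,
# like 'forget', is located at the intersection of two semantic classes.
profile = semantic_profile([("NP", "V", "NP"), ("NP", "V", "S")])
for entry in profile:
    print(entry)
```

Each additional attested frame shrinks the candidate meaning space, which is why the nonsense-frame condition alone supported identification of half the verbs.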
A vast linguistic literature documents these syntax-semantics relations (see, e.g., Gruber 1967 and Fillmore 1968 for seminal discussions; Croft 1991, Goldberg 1995, and Levin 1993 for recent treatments; for experimental documentation, Fisher et al. 1991; for cross-linguistic evidence concerning caretaker speech, Lederer, Gleitman & Gleitman 1995; Geyer 1998; Li 1994; and for learning effects in young children, Bloom 1994; Brown 1957; Fisher et al. 1994; Mintz & Gleitman 1998; Naigles 1990; Naigles et al. 1993; Waxman 1995). In the simulation, subjects were able to use syntax and associated morphology to make inferences about the verb meanings even though they were artificially disbarred from observing the contexts of use and the co-occurring nouns.

Coordination of cues in efficient verb learning

Two further conditions fill out the set of simulations. Our next pool of adult
subjects saw the full sentences that the mothers had been uttering, i.e., we popped the real nouns back into the frames of the previous condition, leaving only the verb as nonsense. From this information, and without video context, subjects correctly identified three quarters of the verbs. Finally, we provided a last group of subjects with the information that we believe is available to the two- and three-year-old learners: full linguistic representations of the sentences along with their (videotaped) extralinguistic contexts. Now the mystery verbs were no mystery at all, and the subjects succeeded in identifying over 90% of them.

The distribution of semantic information in language design

Taken together, the experiments just presented model a learning device that is seeking word meanings by convergence across a mosaic of probabilistic linguistic and situational evidentiary sources, a constraint-satisfaction device. However, the learning device is not supplied by nature with the ability to use all these sources of information from the beginning. Rather, it has to construct the requisite representations on the fly during the process of learning, for languages differ not only in how they pronounce the words but in how they instantiate predicate-argument structure in the syntax of sentences. One source of information about word meaning is in place from the beginning of the word learning process and constitutes the ground on which the learner stands to build the full lexicon: this is the ability to interpret the world of scenes, objects, properties, and events, and to suppose that these will map regularly, even if complexly, onto linguistic categories and structures. As we have tried to show, this initial information source serves two interlocking purposes. First, it allows the learner to acquire a very special kind of lexical-semantic information, that which can be gleaned by pragmatically informed perception: names for object concepts.
Second, the information so acquired enters into an incremental process for building the clause-level phrase structure of the language. As we have suggested, the structure building processes are based on the supposition that form-to-meaning relations will be as transparent as possible; for example that the relations between the number of participants in the scene and the number of noun phrases in the clause will be one-to-one, in the best case. These ideas are maximally stripped down renditions of principles that in linguistics go under such names as the projection principle and the theta criterion. Moreover, form-to-meaning transparency also requires, in the best case, that conceptual dominance relations among actors in an event (the agent, the theme or patient, the recipient) map onto structural dominance in the sentence (the subject, the direct object, the indirect object); a thematic hierarchy.
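A toy rendering of such a constraint-satisfaction device can make the idea concrete. This is our own sketch with invented likelihoods, not a model the authors implemented: each evidentiary source assigns a likelihood to candidate meanings for an unknown verb, and multiplying the sources, as if independent, lets individually weak cues converge on a single answer.

```python
# Toy constraint-satisfaction sketch (invented numbers, for illustration
# only). Each evidentiary source scores candidate meanings of a mystery verb.
CANDIDATES = ["go", "push", "want", "know"]

cues = {
    "scene observation":       {"go": 0.30, "push": 0.30, "want": 0.05, "know": 0.05},
    "co-occurring nouns":      {"go": 0.20, "push": 0.40, "want": 0.20, "know": 0.10},
    "frame with two nominals": {"go": 0.05, "push": 0.60, "want": 0.25, "know": 0.10},
}

def combine(cue_likelihoods, candidates):
    """Multiply the cues' likelihoods (treated as independent) and renormalize."""
    scores = {c: 1.0 for c in candidates}
    for likelihoods in cue_likelihoods.values():
        for c in candidates:
            scores[c] *= likelihoods[c]
    total = sum(scores.values())
    return {c: s / total for c, s in scores.items()}

posterior = combine(cues, CANDIDATES)
print(max(posterior, key=posterior.get))  # "push": no single cue decides it,
                                          # but jointly the cues converge
```

No single cue here is decisive on its own, yet the product sharply favors one candidate, which is the sense in which convergence across errorful sources can still be reliable.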
Each of the data sources from which this knowledge is built is errorful, of course. People sometimes talk of things, even concrete things, when these are absent. The number of nouns in a sentence, even a short sentence, isn’t always a straightforward reflection of argument number. The syntactic mapping of verb-argument structure onto surface structure is quite quirky, e.g., butter has one too few arguments and rain has one too many. Yet, as shown for adults in the simulations just described, coordinated use of these several cues rapidly converges on verb identifiability. We suspect that the same is true for child learners. More generally, the burden of the experiments was to show that the lexicon-building task is naturally partitioned into a relatively concrete subpart, which requires little linguistic knowledge and thus is acquired first, and a more abstract part, which is acquired later because its discovery requires support from linguistically sophisticated representations. In these experiments, words like want, know, and see were not merely “facilitated” compared to words like push and go when phrase-structural information was made available. The abstract verbs (e.g., think or want) that were literally impossible to learn from observation (zero percent correct in the video-beep condition) were significantly the easiest to learn (identified close to 100% of the time), compared to the concrete verbs (e.g., go or push), in all three conditions in which the subjects had syntactic information.
Summing up

We had two reasons for quoting Plato at the beginning of this chapter. The first is descriptive of the resemblance between our subjects’ state in the observational learning experimental condition and that of the Platonic child upon its rebirth: both have prior (conceptual) knowledge, but lack the (linguistic) means to get at it. They are forced to bootstrap their way into the lexicon by beginning with the evidence of the senses and erecting more and more elaborate linguistic scaffolding to support discovery of the abstract terms. The second reason is potentially more serious. We believe that the adults’ behavior in these manipulations models the lexical acquisition feat in realistic ways. Young children at first are at the mercy of perception in acquiring words and so are unable to pick up abstract terminology, notably the mental state verbs. Very often, observers of young children have concluded from the concrete nature of young children’s productive and receptive vocabulary that they lack the ability to contemplate their own and others’ states of mind, knowledge, and desires.3 But as we showed, if adults are deprived of structural evidence as to the meanings of words, and are thrown back on the evidence of their senses, they
too fail to identify these same words despite their undoubted conceptual sophistication. In the third year of life children throw up a great linguistic edifice, the clause-level syntax of their native tongue. The learning advantage this structural knowledge confers allows them to decipher the predicate items whose reference is to invisible internal acts and states. More than likely the children were in possession of such abstract notions from much earlier in life. The change is that now they have been enabled, by learning syntax, to discover that the English word for ‘know’ is “know”.
Notes

1. In further detail, after the subject sees the first couple of video-clips and makes a preliminary conjecture about the meaning, in the case of nouns the further video-clips seem (to the subject) to support the preliminary conjecture, and so he gains confidence. In the case of verbs, the very frequent finding is that subjects can glean little or no obvious relationship among the video-clips; i.e., the first clip may show mother and child walking toward a picnic blanket (a representative subject now guesses come); in the next video, the two are sitting on the blanket blowing bubbles (the subject revises the guess to blow); in the next clip they move toward the house, etc. The subject feels quite at sea in offering a final guess, with few settling on the correct answer (which in this case was go). A prime difficulty is that, unlike the case for many noun utterances, the time-lock between event and verb utterance is generally poor.

2. We should emphasize how rough — unstructured — the evidence from this source really is. Consider for example the set of nouns that is heard with eat. To be sure, this set contains edibles such as cookie, but also animate-noun words. The listener would certainly be better off if she knew that cookie, etc., not only occurred in construction with eat, but occurred as its direct object. But this assignment of argument role to the nouns is available only when the learner has built the language-specific clause-level phrase structure representation. As we will argue next, such a representation can be built from the formal resources available at this interim stage, but the present manipulation looks at the learner in a more primitive state.

3. Of course not all students of concept development have taken the view that concepts of belief and desire are late attainments. Several authors, working from such findings as the child’s early understanding of pretense, maintain that such conceptions are early-acquired or innate (e.g., Leslie 1995).
References

Bates, E., Dale, P. S. and Thal, D. 1995. “Individual differences and their implications for theories of language development.” In Handbook of child language, P. Fletcher and B. MacWhinney (eds.). Oxford: Basil Blackwell.
Bloom, P. 1994. “Possible names: The role of syntax-semantics mappings in the acquisition of nominals.” Lingua 92: 297–329.
Brown, R. 1957. “Linguistic determinism and the part of speech.” Journal of Abnormal and Social Psychology 55: 1–5.
Croft, W. A. 1991. Syntactic categories and grammatical relations. Chicago, IL: Univ. of Chicago Press.
Dowty, D. R. 1991. “Thematic proto-roles and argument selection.” Language 67: 547–619.
Dromi, E. 1987. Early lexical development. Cambridge: Cambridge Univ. Press.
Fillmore, C. 1968. “The case for case.” In Universals of linguistic theory, E. Bach and R. T. Harms (eds.). NY: Holt, Rinehart, and Winston.
Fisher, C. 1996. “Structural limits on verb mapping: The role of analogy in children’s interpretation of sentences.” Cognitive Psychology 31: 41–81.
Fisher, C., Hall, D. G., Rakowitz, S. and Gleitman, L. R. 1994. “When it is better to receive than to give: Structural and conceptual cues to verb meaning.” Lingua 92: 333–375.
Fisher, C., Gleitman, L. R. and Gleitman, H. 1991. “On the semantic content of subcategorization frames.” Cognitive Psychology 23: 331–392.
Fisher, C. and Tokura, H. 1996. “Prosody in speech to infants: Direct and indirect cues to syntactic structure.” In Signal to syntax: Bootstrapping from speech to grammar in early acquisition, J. Morgan and K. Demuth (eds.). Hillsdale, NJ: Erlbaum.
Gentner, D. 1978. “On relational meaning: The acquisition of verb meaning.” Child Development 49: 988–998.
Gentner, D. 1982. “Why nouns are learned before verbs: Linguistic relativity versus natural partitioning.” In Language development, Volume 2: Language, thought, and culture, S. Kuczaj (ed.). Hillsdale, NJ: Erlbaum.
Gentner, D. and Boroditsky, L. In press. “Individuation, relativity, and early word learning.” In Language acquisition and conceptual development, M. Bowerman and S. Levinson (eds.). London: Cambridge Univ. Press.
Geyer, H. 1998. Subcategorization as a predictor of verb meaning: Evidence from Hebrew. Unpublished MS., Philadelphia: Univ. of Pa.
Gillette, J., Gleitman, L. R., Gleitman, H. and Lederer, A. 1999. “Human simulation of vocabulary learning.” Cognition 73: 135–176.
Gleitman, L. R. 1990. “The structural sources of word meaning.” Language Acquisition 1(1): 3–55.
Gleitman, L. R. and Wanner, E. 1982. “Language acquisition: The state of the state of the art.” In Language acquisition: The state of the art, E. Wanner and L. R. Gleitman (eds.). Cambridge: Cambridge Univ. Press.
Goldberg, A. 1995. Constructions: A construction grammar approach to argument structure. Chicago: Univ. of Chicago Press.
Goldin-Meadow, S., Seligman, M. and Gelman, R. 1976. “Language in the two-year old.” Cognition 4: 189–202.
Grimshaw, J. 1981. “Form, function, and the language acquisition device.” In The logical problem of language acquisition, C. L. Baker and J. McCarthy (eds.). Cambridge, MA: MIT Press.
Grimshaw, J. 1990. Argument structure. Linguistic Inquiry Monograph 18. Cambridge, MA: MIT Press.
Gruber, J. S. 1967. “Look and see.” Language 43: 937–947.
Huttenlocher, J., Smiley, P. and Ratner, H. 1983. “What do word meanings reveal about conceptual development?” In The development of word meanings and concepts, T. R. Wannenmacher and W. Seiler (eds.). Berlin: Springer-Verlag.
Landau, B. and Gleitman, L. R. 1985. Language and experience: Evidence from the blind child. Cambridge, MA: Harvard Univ. Press.
Lederer, A., Gleitman, L. R. and Gleitman, H. 1995. “Verbs of a feather flock together: Semantic information in the structure of maternal speech.” In Beyond names for things, M. Tomasello and W. Merriman (eds.). Hillsdale, NJ: Erlbaum.
Lenneberg, E. H. 1967. Biological foundations of language. New York: Wiley.
Leslie, A. 1995. “Pretending and believing: Issues in the theory of ToMM.” In Cognition on cognition, J. Mehler and S. Frank (eds.). Cambridge, MA: MIT Press.
Levin, B. 1993. English verb classes and alternations: A preliminary investigation. Chicago, IL: Univ. of Chicago Press.
Li, P. 1994. Maternal verb usage in Mandarin Chinese. Unpublished MS, Philadelphia: Univ. of Pa.
Marcus, G. F., Vijayan, S., Rao, S. B. and Vishton, P. M. 1999. “Rule learning by seven-month old infants.” Science 283: 77–80.
Mintz, T. and Gleitman, L. R. 1998. “Incremental language learning: Two and three year olds’ acquisition of adjectives.” In Proceedings of the 20th Annual Conference of the Cognitive Science Society. Hillsdale, NJ: Erlbaum.
Naigles, L. R. 1990. “Children use syntax to learn verb meanings.” Journal of Child Language 17(2): 357–374.
Naigles, L. R., Gleitman, H. and Gleitman, L. R. 1993. “Children acquire word meaning components from syntactic evidence.” In Language and cognition: A developmental perspective, E. Dromi (ed.). Norwood, NJ: Ablex.
Newport, E., Gleitman, H. and Gleitman, L. 1977. “Mother, I’d rather do it myself: Some effects and noneffects of maternal speech style.” In Talking to children: Language input and acquisition, C. E. Snow and C. A. Ferguson (eds.). Cambridge: Cambridge Univ. Press.
Pinker, S. 1984. Language learnability and language development. Cambridge, MA: Harvard University Press.
Saffran, J. R., Newport, E. L. and Aslin, R. N. 1996. “Word segmentation: The role of distributional cues.” Journal of Memory and Language 35: 606–621.
Seidenberg, M. and MacDonald, M. C. In press. “A probabilistic constraints-based approach to language acquisition and processing.” Cognitive Science special issue, Connectionism, M. Christiansen, N. Chater and M. Seidenberg (eds.).
Snedeker, J., Brent, M. and Gleitman, L. R. 1999. “The successes and failures of word-to-world mapping.” In Proceedings of the 23rd Boston University Conference, A. Greenhill, H. Littlefield and C. Tano (eds.). Somerville, MA: Cascadilla Press.
LILA GLEITMAN & HENRY GLEITMAN
Snedeker, J., Rosman, T. and Gleitman, L. R. In press. “Why it is hard to label our concepts.” In Weaving Meanings, G. Hall and S. Waxman (eds.). Cambridge, MA: MIT Press.
Talmy, L. 1978. “Lexicalization patterns: Semantic structure in lexical forms.” In Language typology and syntactic description, Vol. 3: Grammatical categories and the lexicon, T. Shopen (ed.). NY: Cambridge Univ. Press.
Trueswell, J. C., Tanenhaus, M. K. and Garnsey, S. M. 1994. “Semantic influences on parsing: Use of thematic role information in syntactic ambiguity resolution.” Journal of Memory and Language 33: 285–318.
Waxman, S. 1995. “The development of an appreciation of specific linkages between linguistic and conceptual organization.” In The acquisition of the lexicon, L. Gleitman and B. Landau (eds.). Cambridge, MA: MIT Press.
Infants’ Developing Competence in Recognizing and Understanding Words in Fluent Speech

Anne Fernald
Stanford University

Gerald W. McRoberts
Lehigh University

Daniel Swingley
Max-Planck-Institut für Psycholinguistik
Again and again in research on early cognitive development, infants turn out to be smarter than we thought they were. The refinement of experimental techniques for reading infants’ minds has been extremely productive, enabling us to study developing capabilities which are not yet observable in spontaneous behavior. When the task demands are made simple enough, infants demonstrate implicit knowledge across diverse domains ranging from understanding of the physical world and numerical concepts to social cognition (see Wellman & Gelman 1998). In the domain of language understanding as well, such techniques have been used to reveal the early emergence of linguistic competence before it is evident in overt behavior, and many ingenious experiments have demonstrated the considerable speech processing savvy of infants in the first year. These studies show that certain perceptual skills essential for spoken language understanding emerge gradually over the first year, often months before infants are able to display their linguistic knowledge through speech production (see Aslin, Jusczyk & Pisoni 1998). Our own recent research on word recognition by older infants provides further evidence for gradual growth over the second year in aspects of speech comprehension which were previously inaccessible to observation (Fernald, Pinto, Swingley, Weinberg & McRoberts 1998). This emphasis on the “competent infant” in the field of cognitive development has its potential downside, as Haith (1998) makes clear in a thoughtful essay on the recent baby boom in developmental research. By searching for cognitive sophistication at ever-younger ages, and by interpreting data suggesting
precursors of knowledge in a particular domain as evidence for mature capacities, we run the risk of overlooking the really interesting developmental stories. Haith makes a plea for developing models for the gradual acquisition of “partial knowledge”, instead of thinking in dichotomous terms that infants at a certain age either have or do not have a mature ability. We need to develop graded concepts to document progress toward cognitive competence, models with many steps along the way. Maratsos (1998) makes a related point in discussing methodological difficulties in tracking the acquisition of grammar. Much of language acquisition goes on “underground”, he argues, since intermediate stages of developing knowledge systems rarely show up directly in the child’s behavior. To understand how the system progresses to mature competence, it is necessary to gain access somehow to these underground stages of partial knowledge. The good news is that in research on early language understanding, the goal of revealing partial accomplishments on the way to mastery may be more attainable than in other domains of cognitive development where criteria for competence are harder to define. Neuropsychological research on infants’ responses to known and unknown words reveals a major shift in cerebral organization between the ages of 13 and 21 months (Mills, Coffey-Corina & Neville 1993). Over this period the patterns of brain activity associated with infants’ responses to familiar words become more lateralized and similar in other ways to those of adults in speech comprehension tasks. However, it is not known whether such developmental changes in neural responses reflect maturation of brain systems dedicated specifically to speech processing, or maturation of brain systems related more broadly to memory and other cognitive capabilities supporting language understanding. 
While imaging methods provide increasingly powerful tools for investigating graded changes in brain development, establishing functional connections between neuropsychological findings and the emergence of language competence requires the kinds of detailed behavioral experiments that explore infants’ imperfect knowledge as they gradually develop competence. For example, careful parametric studies on the phonetic factors influencing young infants’ success in segmenting sequences of speech sounds are documenting consistent developmental patterns (see Jusczyk 1997). In our own research, the use of on-line measures of spoken word recognition also allows us to analyze fine-grained developmental changes in infants’ speech processing efficiency that are meaningfully related to changes in speech production. Moreover, it is possible to show how variations in features of the input speech can influence infants’ success in recognizing familiar words. The outcome of these and many other research efforts in this area will be an increasingly
differentiated picture of the emergence of partial knowledge in receptive language competence.
Overview

Our focus in this chapter is on the graded nature of developmental changes in spoken word comprehension by infants in the second year. We also consider how particular features of the speech directed to infants may interact with these changes, enabling infants to recognize familiar words more easily in continuous speech. The next seven sections cover the following points:
– Across the first year infants develop certain perceptual and cognitive skills necessary for language understanding. By the age of 8 months they are adept at recognizing recurrent sound patterns in fluent speech, although not yet able to associate sound patterns with meanings. The rudimentary segmentation skills evident at this age are necessary but not sufficient capabilities for understanding speech.
– By the beginning of the second year infants gain proficiency in associating sound sequences with referents and in using these sound patterns as they occur in varying contexts to access meanings. Between 15 and 24 months of age, infants improve dramatically in the speed and accuracy of spoken word recognition.
– During this period infants also become more efficient in word recognition. By 18 months of age they are capable of processing speech incrementally, recognizing spoken words very rapidly on the basis of the initial acoustic-phonetic information sufficient to distinguish the word from other alternatives.
– One explanation for these developmental gains in the speed, accuracy, and efficiency of spoken word recognition over the second year is that they reflect changes in infants’ lexical representations.
– These developmental gains may also reflect changes in cognitive capabilities such as attention, memory, auditory/visual integration, and categorization, capacities which are not specific to language processing but which are essential for spoken word recognition.
– When interacting with infants, adults tend to modify their speech along several dimensions. One common characteristic of infant-directed speech in English, the placement of focused words in utterance-final position, facilitates access to familiar words in fluent speech, thus compensating to some extent for cognitive limitations which influence infants’ speech processing capabilities.
– An account of the gradual emergence of receptive language skills in infancy should be concerned not only with the nature of developing lexical representations, but also with the contribution of other factors influencing the accessibility of spoken words to the novice listener. These include intrinsic factors such as non-linguistic perceptual and cognitive capabilities involved in understanding speech, as well as extrinsic factors such as characteristics of language input which enhance or hinder comprehension.
“Word recognition” without comprehension

In the language acquisition literature it was not so long ago that infants under 10 months of age were referred to as “prelinguistic” (e.g., Ziajka 1981), a term that now sounds downright disrespectful. Given the observational data available at the time, it was reasonable to assume that infants who can’t speak and who show few signs of understanding what they hear are not yet engaged in linguistic activity. Only at the onset of speech production was there unambiguous observable evidence of a transition from a “prelinguistic” to a “linguistic” mode of processing language. We know better now. Just as in research on other domains of cognitive development, a reliance on overt behavior as the criterion for knowledge led to an underestimation of infants’ language competence. Using habituation and auditory preference procedures, many studies have shown that even very young infants are attentive to some levels of linguistic organization in speech. By the age of 8 months infants are attuned to aspects of the phonological system of the language they are hearing (e.g., Kuhl, Williams, Lacerda, Stevens & Lindblom 1991; Polka & Werker 1994), and are able to recognize recurrent patterns in sequences of speech sounds (Jusczyk & Aslin 1995). Moreover, when identifying word-size units embedded in fluent speech, infants use parsing strategies that reflect awareness of typical sound patterns in the ambient language. For example, English-learning infants more readily extract embedded words that have the strong-weak stress pattern prevalent in English than words with a weak-strong stress pattern (Jusczyk 1997). Even when familiarized only briefly with syllable strings that do not represent actual language samples, 9-month-old infants appear to extract word-like units by noticing which syllables regularly co-occur (Saffran, Aslin & Newport 1996).
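The syllable co-occurrence statistics at issue in the Saffran et al. result can be made concrete with a small sketch. It computes transitional probabilities between adjacent syllables, P(B|A) = frequency of AB / frequency of A; within-word transitions are high, while dips in probability mark likely word boundaries. The syllable stream below is invented for illustration and is not drawn from the original stimuli:

```python
from collections import Counter

def transitional_probs(syllables):
    """Estimate P(next | current) for each adjacent syllable pair."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    first_counts = Counter(syllables[:-1])
    return {(a, b): n / first_counts[a] for (a, b), n in pair_counts.items()}

# A toy "speech stream" built from two made-up words, "bida" and "kupo":
stream = ["bi", "da", "ku", "po", "bi", "da", "bi", "da",
          "ku", "po", "ku", "po", "bi", "da"]
tps = transitional_probs(stream)

# Within-word transitions are deterministic in this toy stream ...
print(tps[("bi", "da")])  # 1.0
print(tps[("ku", "po")])  # 1.0
# ... while the across-word transition is lower, marking a likely boundary.
print(tps[("da", "ku")])
```

On this toy stream the within-word pairs come out at 1.0 and the across-word pair ("da", "ku") at 2/3, the kind of contrast a distributional learner could exploit to posit word boundaries.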
What such studies show is that over the first year infants are becoming skilled listeners, capable of making detailed distributional analyses of acoustic-phonetic features of spoken language. Although such accomplishments are often cited as evidence for early “word recognition”, they are more appropriately viewed as evidence of pattern detection abilities which are prerequisite for recognizing words in continuous speech. Identifying sequences of sounds as coherent acoustic patterns is obviously an essential step in word recognition, but this can occur without any association between particular sound patterns and meanings. Halle and de Boysson-Bardies (1994) found that French-learning infants showed a listening preference for words likely to be familiar to them in the speech they were hearing, as compared to words less common in infants’ experience. These results indicate that by the age of 10 months, infants have some kind of acoustic-phonetic representation for certain frequently heard sound patterns. However, this selective response to familiar words constitutes word recognition in only a limited sense since it can occur with no evidence of comprehension. Between the ages of 10 and 14 months, infants typically do begin to associate words with meanings. Parents report on vocabulary inventories that infants in this age range speak several words and appear to understand many more. Such checklist data are often interpreted as indicating how many words a child “has acquired” in his or her lexicon, a conclusion which may be reasonable but which masks the graded nature of the accomplishment. A parent filling out monthly checklists might report at 11 months that the infant does not understand the word shoe, and then report at 12 months that the infant does understand the word shoe, just the kind of apparently all-or-none (or none-to-all) transition from ignorance to mastery that Haith (1998) and Maratsos (1998) regard with skepticism. What has happened between 11 and 12 months?
The check in the box means that on one or more recent occasions in the intervening month the parent saw the infant look toward, act on, or in some other way behave selectively toward a shoe just after the word shoe was spoken. Although it’s possible that this observable act of understanding reflected a sudden epiphany, at this early stage of word learning it is much more likely that the child’s spontaneous demonstration of competence was preceded by gradual developments in several domains that are not as easily observed. New experimental approaches are helping to illuminate some of the underground changes occurring during this period. For example, three recent studies suggest that infants begin to listen to sounds differently around the time when they start making links between spoken words and meanings. Halle and de Boysson-Bardies (1996) used an auditory preference method to show that 11-month-old infants appear to ignore phonetic detail in familiar words. Since infants at much younger ages are able to make fine-grained phonetic distinctions,
they conclude that by 11 months infants are attending to words in a new “lexical mode” in which reduced attention to phonetic detail can be regarded as an advantage rather than as a deficiency. In a related study, Höhle and Weissenborn (1998) found that infants 7–9 months old responded differently than infants 10–15 months old when listening to fluent speech containing familiar words, a difference which they see as reflecting a transition from a purely acoustic-phonetic form of representation in the younger infants to a more mature form of lexical representation incorporating semantic information. The research of Stager and Werker (1998) with 8- and 14-month-old infants also suggests that when infants move from syllable perception to word learning, they are less attentive to the phonetic detail in a new word heard in association with an object, perhaps because of the additional cognitive demands involved in pairing sound patterns with meanings. The findings of these studies have been interpreted as evidence that as infants begin to learn that words have meanings, they begin to tolerate greater variability in the fine acoustic detail. A shift in attentional priorities during this period is a good example of a potentially important underground process which could not be detected without the use of experimental methods revealing gradual changes in infants’ listening biases.
Recognizing and understanding words in the second year

At one level it’s obvious that the development of infants’ ability to understand words in continuous speech is a gradual process: at 12 months, infants typically understand only a few words, and by 24 months they may speak hundreds of words and understand many more. The rate of learning new words may accelerate, and sometimes particular words in a child’s vocabulary appear and then briefly disappear, but in general there is steady increase and stability in the words learned. If the parent filling out a vocabulary checklist indicates that ball is “understood and said” at 15 months, then ball will most likely be reported as “understood and said” in subsequent months. Such measures of receptive language competence track age-related changes in the estimated size of the child’s lexicon; however, they reveal nothing about the much less obvious changes in the efficiency with which known words are comprehended. Although ball is produced and understood at 15 months, the child’s ability to recognize that familiar word in continuous speech continues to improve in ways that are not apparent in spontaneous behavior. The use of new high resolution measures of speech comprehension has enabled us to show that between the ages of 15
and 24 months, infants become much faster, more reliable, and more flexible in recognizing familiar words (Fernald et al. 1998). The procedure for assessing infant word recognition developed in our laboratory derives from the auditory-visual matching technique introduced by Golinkoff, Hirsh-Pasek, Cauley and Gordon (1987). Seated on the parent’s lap in a test booth, the infant is shown pairs of pictures while listening to speech. Two colorful images depicting objects familiar to the infant are presented on adjacent computer monitors at the beginning of each trial. After a brief period of looking at the pictures in silence, the infant hears a sentence naming one of the objects. Infants’ patterns of fixations to the two pictures on each trial are recorded by a video camera located between the two monitors and then coded off-line (see Swingley, Pinto & Fernald 1998). Patterns of visual fixation have proved to be a valuable measure of spoken language understanding by adults, enabling researchers to monitor the rapid mental processes involved in sentence comprehension (e.g., Tanenhaus, Spivey-Knowlton, Eberhard & Sedivy 1995). However, in previous research with infants, fine-grained analyses of eye movements have been used in only a few studies of infants’ response to visual stimuli in non-linguistic contexts (e.g., Haith, Hazan & Goodman 1988; Hood & Atkinson 1993). In our research, the close observation of infants’ scanning patterns in response to speech enables us to track the time course of word recognition. We use frame-by-frame analysis to identify the point at which the infant initiates a shift in gaze from one picture to the other, measuring eye movements from the beginning of the spoken target word on each trial. 
In a cross-sectional study of infants at 15, 18, and 24 months of age, we explored developmental changes in the speed of word recognition, calculating infants’ latency to shift to the correct picture in response to a familiar spoken target word (Fernald et al. 1998). The subjects were 72 infants from monolingual English-speaking families, 24 in each age group, and all were reported by their parents to understand the four target words: doggie, baby, ball, and shoe. The target words were comparable in duration and were always presented in the same two carrier phrases: Where’s the ____ ? See the ____ ? Visual stimuli were presented in pairs (ball/shoe or dog/baby) with side of presentation of the objects in each pair counterbalanced across trials, and with each object serving as target on two trials and as distractor on two trials. Since infants could not anticipate the side of presentation of the target object on a given trial, they were equally likely to be looking at the target or distractor object at the onset of the target word. Using only those trials on which the infant was initially looking at the distractor object, we calculated the reaction time to the target word by measuring the infant’s latency to initiate a shift in gaze from the distractor to the target object
when the target word was spoken. As shown in Figure 1, the mean response latency decreased reliably with age. The mean RT at 15 months (995 ms) was significantly slower than at 18 months (827 ms) and at 24 months (679 ms).
Figure 1. Mean latencies to initiate a shift in gaze from the distractor picture to the target picture, measured from the beginning of the spoken target word, for 15-, 18-, and 24-month-old infants. This analysis included only those trials on which the infant was initially looking at the incorrect picture and then shifted to the correct picture when the target word was spoken. The graph is aligned with an amplitude waveform of one of the stimulus sentences (“Where’s the baby?”) (Fernald et al. 1998, with permission from APS).
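The latency measure behind these results can be sketched in code. This is a schematic reconstruction, with invented field names and frame numbers, of how a reaction time is derived from frame-by-frame gaze coding at the video frame rate (the actual coding procedure is described in Swingley, Pinto & Fernald 1998):

```python
FRAME_MS = 33.3  # approximate duration of one video frame at 30 fps

def mean_rt_ms(trials):
    """Mean latency to shift from distractor to target picture, in ms.

    Each trial is a dict with hypothetical fields:
      'initial_fixation': 'target' or 'distractor' at target-word onset
      'onset_frame':      frame index of target-word onset
      'shift_frame':      frame index of the gaze shift (None if no shift)
    Only distractor-initial trials with a shift enter the mean, as in the study.
    """
    latencies = [
        (t["shift_frame"] - t["onset_frame"]) * FRAME_MS
        for t in trials
        if t["initial_fixation"] == "distractor" and t["shift_frame"] is not None
    ]
    return sum(latencies) / len(latencies)

# Invented trials: a 30-frame and a 24-frame latency, plus an excluded trial.
trials = [
    {"initial_fixation": "distractor", "onset_frame": 100, "shift_frame": 130},
    {"initial_fixation": "distractor", "onset_frame": 100, "shift_frame": 124},
    {"initial_fixation": "target", "onset_frame": 100, "shift_frame": None},
]
print(round(mean_rt_ms(trials)))  # 899 (ms) for these invented frame counts
```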
Another way of looking at the data in this study is to compare shifts in gaze on distractor-initial trials, when the correct response is to shift to the other picture, with shifts on target-initial trials, when the correct response is to continue fixating the target object after it has been named. The graphs in Figure 2 show shifts in gaze on distractor-initial and target-initial trials in response to the target word for infants at all three ages. Solid lines show correct responses, while dashed lines show incorrect responses. Note that the increasing distance between the two lines in these graphs reflects increasing accuracy in infants’ ability to recognize target words across the three age groups. Given perfect performance in this procedure, the solid line would asymptote at 1.0 (i.e., on distractor-initial trials subjects would always shift to the other picture), and the dashed line would remain at 0 (i.e., on target-initial trials subjects would never shift to the other picture). Although the 24-month-old infants did not perform perfectly, their tendency to shift accurately from distractor to target picture in response to the target word was much greater than for the younger infants. While response accuracy is reflected in the distance between the two lines in each graph, response speed is reflected in the point of divergence. Consistent with the analysis of mean reaction times, the graphs in Figure 2 show that on trials when infants started out on the distractor picture, the older infants also shifted to the target picture more quickly than did the younger infants as the target word was spoken.

Figure 2. Shifts in gaze from one picture to the other in response to the target word, for infants at (A) 15 months, (B) 18 months, and (C) 24 months. Solid lines represent correct responses, shown as the proportion of trials on which infants were initially fixating the distractor object at the onset of the target word and then shifted to the target picture (distractor-to-target shifts); dashed lines represent incorrect responses, shown as the proportion of trials on which infants were initially fixating the target object and then shifted to the distractor picture (target-to-distractor shifts) (Fernald et al. 1998, with permission from APS).

These findings show that during their second year infants make impressive improvement in the speed and accuracy with which they are able to recognize familiar words in fluent speech. During the period known as the vocabulary burst, measures of productive vocabulary show that around the age of 18 months many infants begin to acquire new words more rapidly (e.g., Bloom 1973). Spanning this period, our research using on-line measures of infants’ word recognition skills from 15 to 24 months of age revealed parallel developments in spoken language understanding that would be impossible to observe with precision in natural behavior. Progress is evident not just in the acquisition of new vocabulary words, but also in the increased efficiency with which long-familiar words are recognized and understood.
Incremental processing in infant word recognition

If a child can speak the word baby by the age of 15 months and demonstrates understanding by choosing the appropriate picture when hearing the word, what accounts for further improvement over the following months in the ability to recognize that word when it is spoken? According to one prevalent view in the literature on early word recognition, the lexical representations that enable infants and young children to understand spoken language are relatively impoverished in comparison with those of adults. From this perspective, performance differences in the speed and reliability of word recognition at 15 and 24 months most likely reflect differences in the nature of lexical representations available at these ages. In addition to representational differences, it is possible that other cognitive abilities developing rapidly at this time are critical in the emergence of word recognition skills. Here we review recent studies from our laboratory which
explore the ability of infants to recognize words using a minimum of acoustic-phonetic detail. The findings of these studies may help us to understand how these different factors contribute to the increase in speech processing efficiency over the second year.

The idea that young children’s lexical representations are poorly specified was originally based on observations of speech production and perception errors. For example, researchers using tasks which tested word understanding found that infants around 18 months often failed to discriminate phonetic contrasts which much younger infants can easily differentiate (e.g., Shvachkin 1973/1948; Garnica 1973). Psycholinguistic experiments with preschoolers showed that older children also perform much worse than adults when asked to detect mispronunciations and do other tasks requiring rapid identification of familiar words (e.g., Cole 1981; Walley 1987). For example, Walley (1988) found that children needed to hear relatively more acoustic-phonetic information than did adults before they were able to identify known words correctly, a result which was interpreted as evidence for immature holistic representations. These and other related findings have led some researchers to conclude that lexical representations are still relatively underspecified and lack segmental detail even at 3 to 4 years of age (e.g., Charles-Luce & Luce 1990; Walley 1993). If young listeners need to hear an entire word before lexical access can occur, then we would not expect infants to show the same efficiency as adults in identifying known words in continuous speech. A robust characteristic of spoken language understanding by adults is the ability to process speech incrementally. Because adults make continuous use of acoustic-phonetic information, they are able to identify familiar words as soon as sufficient detail is available to enable them to disambiguate the word from other alternatives (e.g., Marslen-Wilson 1987).
For example, a word onset such as /el/ activates several English words including elf, else, elbow, elevator, elegant, elephant, and others consistent with the initial segments. When the adult listener hears /ele/, most of these candidates can be rejected; a few milliseconds later when the /f/ is heard, the word elephant can be uniquely identified before the end of the word is spoken. According to Walley (1993), young children are not yet able to process speech continuously in this fashion. Because immature lexical representations are “holistic” in nature, she argues, children only slowly develop the ability to process speech incrementally. The Fernald et al. (1998) findings suggested that we should not underestimate infants’ abilities in this regard. As shown in Figure 1, the mean latency to identify familiar words decreased substantially between 15 and 24 months of age. Although the 15-month-olds demonstrated word recognition by looking reliably
at the correct picture in response to naming, they typically did not initiate a shift in gaze from the distractor to the target picture until after the target word had been spoken. In contrast, the 24-month-olds typically began to shift to the correct picture before the end of the target word. These findings gave preliminary and indirect support to the idea that the older infants were identifying the spoken word based on word-initial information, just like adults. However, it was not clear from these results how much acoustic-phonetic information infants needed to hear in order to recognize the target word. To address this question more directly, Swingley, Pinto and Fernald (1999) presented 24-month-old infants with paired pictures of familiar objects whose names overlapped substantially (doggie/doll) or did not overlap (e.g., doggie/tree) at word onset. While doggie could potentially be distinguished from tree right from the beginning of the word, doggie and doll were not distinguishable until the occurrence of the second consonant 300 ms into the word. If two-year-olds can process words incrementally, their reaction times should be slower in the case of phonetic overlap. Thus the question of interest was whether infants hearing the target word doggie would be slower to shift to the correct picture when the dog was paired with the doll than when the dog was paired with the tree. The results showed that this was indeed the case: infants rapidly distinguished doggie from tree, but their response was delayed by about 300 ms when distinguishing doggie from doll. The Swingley et al. (1999) study showed that by 24 months of age, children are able to monitor speech continuously, taking advantage of word-initial information to identify a familiar word.
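The incremental pruning described in this section — the /el/ cohort narrowing toward elephant, or doggie and doll diverging only at their second consonant — can be sketched as prefix filtering over a lexicon. Letters stand in for phonetic segments here, and the toy lexicon is ours, not the study’s stimulus set:

```python
def cohort(lexicon, heard_so_far):
    """Words still consistent with the input heard so far (prefix match)."""
    return [w for w in lexicon if w.startswith(heard_so_far)]

def uniqueness_point(lexicon, word):
    """Number of segments after which `word` is the only surviving candidate."""
    for i in range(1, len(word) + 1):
        if cohort(lexicon, word[:i]) == [word]:
            return i
    return len(word)

# Toy lexicon: 'doggie' and 'doll' share their first two segments, so
# 'doggie' cannot be told from 'doll' until its third segment arrives.
lexicon = ["doggie", "doll", "tree", "baby"]
print(cohort(lexicon, "do"))                # ['doggie', 'doll']
print(uniqueness_point(lexicon, "doggie"))  # 3
print(uniqueness_point(lexicon, "tree"))    # 1
```

The 300 ms delay for doggie/doll corresponds to the later uniqueness point: more of the word must be heard before the cohort shrinks to a single candidate.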
The fact that the 300 ms lag in reaction time corresponded to the amount of phonetic overlap between doggie and doll suggested that infants’ responses were roughly time-locked to the occurrence of the phonetic information relevant to distinguishing the two words. However, the mean response latency in this case was around 1000 ms, more than half a second after the point when the target word diverged from the name of the distractor object. Thus it could not be concluded from these results that hearing the first portion of the word, just up to the point of the disambiguating consonant, provided infants with enough information for word recognition, as is often the case for adults. Of course, reaction times in this procedure include not only the time it takes to identify the target word, but also the time required to initiate a shift in gaze. Based on research on infants’ eye movements (e.g., Haith, Hazan & Goodman 1988) we can estimate that oculomotor programming requires around 200–300 ms, but this is a very rough estimate. In any case, given the mean response latencies in this study, it was possible that infants continued to listen for half a second beyond the point where the word became uniquely
identifiable, rather than responding as soon as disambiguating information became available. In the next study we explored this question more directly, by presenting infants with truncated target words in which only the first part of the word was available (Fernald, Swingley & Pinto, under review). If infants can identify familiar words when only the word-initial information is presented, this would provide convincing evidence for the early development of incremental processing. The subjects in this study were 64 infants at 18 and 21 months of age, the period associated with the onset of the vocabulary burst. Since our on-line procedure yields continuous measures of speed and accuracy in word recognition, we hoped to be able to document gradual changes in receptive language skills across this period of rapid development in speech production. With this goal in mind, we also explored the relation of our speech processing measures to infants’ productive vocabulary size, as assessed by parental report on the MacArthur Communicative Development Inventory (CDI). The question of interest was whether infants who are faster and more accurate in word recognition are also more advanced in vocabulary development. The auditory stimuli used in this experiment were sentences containing one of six Whole target words known to children of this age (baby, doggie, kitty, birdie, car, ball), or one of six Partial target words, constructed by deleting the final segments of the Whole words (/bei/, /daw/, /kI/, /ber/, /ka/, /baw/). Through careful editing of the waveforms, it was possible to make all the Partial target words comparable in duration (range: 291–339 ms). As described earlier, infants’ responses were analyzed using high resolution coding of infants’ fixation patterns to the target and distractor pictures as the Whole or Partial target word was spoken. Three main findings emerged from this study.
Figure 3. Mean latencies to initiate a shift in gaze from the distractor picture to the target picture on Whole and Partial word trials for 18- and 21-month-old infants

First, we found that 18- and 21-month-old infants were able to recognize spoken words rapidly and reliably on the basis of partial segmental information. The mean reaction times to both Whole and Partial target words were around 800 ms, as shown in Figure 3. Although the 21-month-old infants were faster overall than the 18-month-old infants, there was considerable overlap in mean response latencies. Moreover, there were no age-related differences in accuracy, although infants performed better overall on Whole word trials (77% correct) than on Partial word trials (70%). Extending the findings of Swingley et al. (1999), these results showed unambiguously that word-initial acoustic-phonetic information was sufficient to enable infants to identify known words. Second, we found that speed of word recognition was associated with accuracy in this task. For this analysis, we divided infants into a Fast group and a Slow group based on their average response latency on Whole word trials. Infants in the Fast group responded more reliably as well as more rapidly to Whole and Partial target words, as compared to infants in the Slow group. Finally, we found that speed of word recognition was also related to lexical development. Those infants who could identify Whole words more rapidly also tended to have larger spoken vocabularies. We know from the Fernald et al. (1998) study that both speed and accuracy tend to increase with age between 15 and 24 months. Although the Slow/Fast grouping did not map exactly onto the 18- and 21-month age groups in this experiment, the majority of younger infants were in the Slow group and the majority of older infants were in the Fast group. Infants in the Fast group were also more advanced in their vocabulary development. For these reasons it seems appropriate to characterize the performance of the Fast group infants as a more mature or advanced pattern of responses than that of the Slow group infants. In the next two sections we consider what factors might account for the increases in speed, accuracy and efficiency associated with the development of these more mature abilities in spoken word comprehension.
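The Fast/Slow grouping described above was based on average response latency on Whole word trials; one natural way to implement such a grouping is a median split. A hedged sketch, in which the subject IDs, data, and tie-breaking rule are invented for illustration:

```python
from statistics import median

def split_fast_slow(mean_rts):
    """mean_rts maps infant ID -> mean reaction time (ms) on Whole-word
    trials. Infants at or below the median are 'fast', the rest 'slow'."""
    cut = median(mean_rts.values())
    fast = [i for i, rt in mean_rts.items() if rt <= cut]
    slow = [i for i, rt in mean_rts.items() if rt > cut]
    return fast, slow

# Four hypothetical infants; the median of their mean RTs is 765 ms.
rts = {'s1': 650, 's2': 720, 's3': 810, 's4': 930}
fast, slow = split_fast_slow(rts)
print(fast, slow)  # prints ['s1', 's2'] ['s3', 's4']
```

Any such split is relative to the sample tested, which is why, as noted above, the Fast and Slow groups need not align exactly with the two age groups.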
What develops as word recognition abilities improve?

One explanation for developmental changes in word recognition skills is that older and more advanced infants have lexical representations that are more detailed and complete than those of younger infants. If so, then younger infants with incompletely specified lexical representations should require relatively more acoustic-phonetic information in order to identify a familiar word. But this was not what we found. Infants did perform better overall on Whole word trials than on Partial word trials, perhaps because hearing the truncation in the Partial word stimuli sometimes inhibited the tendency to shift to the target. However, there was no interaction between trial type and either age or the Fast/Slow grouping. That is, Fast and Slow group infants were not differentially affected by the Partial word manipulation, indicating that the Slow group infants were not at a greater disadvantage than Fast group infants on Partial word trials. These findings do not support the idea that lexical representations at this age are phonetically underspecified. Other recent results also suggest that infants around this age have well-specified lexical representations, at least for words with which they are very familiar. Swingley et al. (1999) showed that 24-month-olds reliably distinguished ball and doll when tested in this procedure. And working with even younger infants, Swingley and Aslin (1999) found that 18- to 21-month-olds could distinguish correct and incorrect pronunciations of familiar words.
The results of these studies appear to contradict two bodies of research cited earlier which both suggest that young children's lexical representations are somehow qualitatively different from those of adults. First, there are the claims that infants at the end of the first year shift from purely acoustic-phonetic representations to lexical representations which incorporate semantic information but with loss of phonetic detail (e.g., Halle & de Boysson-Bardies 1996; Stager & Werker 1997; Höhle & Weissenborn 1998). The proposal that infants from 10 to 14 months attend less to phonetic detail when processing speech in a "lexical mode" focuses on the period when infants are just beginning to associate words with meanings. Our findings showing efficient use of phonetic detail by infants in the second year are not inconsistent with this hypothesis, since the infants we studied were older and linguistically more advanced, and we tested them only on highly familiar words. Although our data on incremental speech processing by 18-month-olds do not support the idea that infants at this age have lexical representations for known words that are different in degree of phonetic specification from those of adults, it is possible that such differences would become apparent under more challenging conditions. For example, it could be that when hearing unfamiliar rather than well-known words, 18-month-olds would have more difficulty establishing representations for new words that are phonetically similar than for words that are phonetically distinct, a task that would not be challenging for an adult. This is an area where graded models will be helpful, to document intermediate stages in infants' progression toward building a lexicon in which new words are increasingly likely to overlap phonetically.
The use of on-line comprehension measures should enable us to assess whether representations for unfamiliar words are initially less well specified phonetically than those for familiar words already established in the child’s spoken vocabulary. The second set of studies potentially inconsistent with our word recognition findings are those showing that older children recognize spoken words less reliably and less efficiently than do adults, and appear to need relatively more acoustic-phonetic information to identify a familiar word (e.g., Walley 1987, 1993). Of course, it is obvious from their spontaneous behavior that most children by two years of age can distinguish and understand hundreds of words, although such observations don’t give any indication of speed or efficiency of processing. Our findings that infants are able to process speech quickly and continuously do pose a challenge to Walley’s more negative conclusions. It seems clear that children’s competence in spoken word recognition was underestimated to some extent in her experiments, in light of the evidence that 18-month-old infants can use incremental word recognition strategies comparable to those of adults. However, Walley’s conclusion that children continue to
develop in this domain is quite plausible. For example, she found that 4-year-old children were able to make use of word-initial information in word recognition in situations where the context was highly constraining, but were less successful when the context was not constraining. The word recognition task used in our research provided a very restricted and supportive context, since only two pictures were presented and one was labeled. In less narrowly constrained circumstances, infants’ speed and accuracy might be less impressive. This too is an area where graded models will be valuable, as we explore the range of contextual factors that influence lexical access at different points in development. Once again, the appropriate question is not “When can the child identify this particular word in an adult-like fashion?” but rather “As the child becomes more competent in word recognition, what are the factors influencing the accessibility of well known and newly learned words?”
The contribution of other cognitive functions to competence in word recognition

Whether or not improvement in the ability to recognize spoken words reflects developmental differences in the nature and robustness of lexical representations, other factors are undoubtedly involved. More advanced performance in word recognition also reflects the maturation of cognitive processes which are not specifically associated with speech processing, but which are required for successful performance in this task. For example, in our study of infants' ability to recognize partial words (Fernald et al., under review), we found patterns of differential accuracy on distractor-initial and target-initial trials which could be related to such factors. When the correct response on Whole word trials was to shift from the distractor to the target picture, the less advanced Slow group infants failed to shift on 23% of the trials, while the more advanced Fast group infants failed to shift on only 9% of the trials. On target-initial trials, however, when the correct response was to remain on the target picture, there were no differences between groups. Given that both kinds of response are correct, does shifting appropriately from the distractor to the target picture reveal more about an infant's word recognition abilities than staying appropriately on the target picture as the word is spoken? Consider what the infant must know and do in order to respond correctly on target-initial and distractor-initial trials. The processing demands are initially the same regardless of what picture the infant is looking at when the trial begins. The infant must encode the visual image, listen to the stimulus
sentence with attention to the target word, and decide whether the spoken word matches the fixated picture. If the target word matches the name of the picture in view, the appropriate response is to remain on that picture; however, if there is a mismatch, it is appropriate to reject the picture in view and shift to the alternative picture. The process of recognizing that a spoken target word matches the picture in view and then staying put on the target picture is cognitively less demanding than rejecting a mismatch between the target word and the distractor picture and then initiating an eye movement to the other picture. In either case a correct response requires recognition of the picture in view and would not occur reliably if the name of the object were not represented in some way in the infant's mental lexicon. However, a correct response in the case of a mismatch also imposes additional processing demands beyond word recognition, which may involve any or all of the following capabilities: attentive engagement in the task, rapid encoding of the visual images, ability to integrate the visual and auditory input, association of the pictures with the appropriate words, ability to disengage from one picture in order to attend to the other, and mobilization of a shift in gaze. In addition, developmental differences in infants' ability to form categories, as well as in their experience with particular objects, could also influence the speed and accuracy of their word recognition performance. For example, the more advanced infants may have been more adept and thus quicker in identifying the pictures of babies used in this procedure as exemplars of the category "baby". These are only some of the perceptual, cognitive, and motor skills which could contribute to the greater speed, accuracy, and efficiency of the more advanced infants in studies using this word recognition procedure, and all involve processes which are not specifically tied to language understanding.
This is not to say that these non-linguistic capabilities are unrelated to language development more broadly, since all may be involved in spoken language understanding in the child’s daily life as well as in the experimental procedure used in our research. For example, research on the relation of early cognitive skills to later intellectual functioning has shown that individual differences in visual information processing in infancy are related to receptive language abilities in childhood (Rose, Feldman, Wallace & Cohen 1991). Another study implicating basic cognitive skills in language acquisition found that developmental changes in the ability to form object categories are related to the rate of vocabulary acquisition in the second year (Gopnik & Meltzoff 1987). The point to be made here is that the speed, accuracy, and efficiency with which infants recognize familiar words in fluent speech improve with age and experience not only because of increasing detail and stability in emerging lexical representations, but also because many
other cognitive processes supporting spoken word recognition are developing at the same time.
Does infant-directed speech facilitate recognition of familiar words in fluent speech?

The view that there are multiple factors influencing the emergence of word recognition skill implies that perceptual and cognitive limitations may account in part for the poorer performance of younger infants in identifying familiar spoken words. Is it possible that speech to infants can be presented in ways that mitigate to some extent the effects of these processing limitations? An obvious place to begin looking for an answer is in the naturally occurring linguistic and prosodic modifications characteristic of adult speech to infants. Across many cultures, infant-directed (ID) speech shares common features when compared to adult-directed (AD) speech. For example, ID utterances tend to be shorter and more repetitive, and are often spoken more slowly and with exaggerated intonation (e.g., Fernald, Taeschner, Dunn, Papousek, Boysson-Bardies & Fukui 1989). Noting the prevalence of such modifications in speech to children across diverse languages, Ferguson (1977) speculated that this special speech register might serve both an "affective" and a "clarification" function. More recent research on the nature of ID speech and its effects on infant listeners provides support for both of these hypothesized functions. Evidence for the affective influence of ID speech comes from perceptual studies showing that the melodic qualities of ID prosody engage attention and elicit emotion in young infants (e.g., Fernald 1985; Werker & McLeod 1990). Research on the "clarification" function has focused primarily on descriptive analyses of acoustic-phonetic properties of mothers' speech. Several studies have found that adults enunciate words more clearly when speaking to an infant than when speaking to another adult (e.g., Bernstein-Ratner 1984; Kuhl et al. 1997).
These results suggest that words in ID speech might be perceptually more accessible to infants, consistent with Ferguson's hypothesis that one function of the modifications in speech to children is to enhance understanding. A prominent characteristic of speech addressed to infants in English is that object words especially relevant to the conversation are frequently placed at the end of the utterance rather than in the middle. In a study of speech to 14-month-old infants and adults, Fernald and Mazzie (1991) asked English-speaking mothers to make up a story using a picture book in which six target objects were
the focus of attention. When speaking to an infant, mothers used more exaggerated intonation and consistently positioned focused words in utterance-final position, whereas in speech to an adult mothers used more variable strategies to mark focused words. In the ID speech sample, 75% of the focused words were introduced as the last word in the utterance, while only 53% of the focused words occurred in this position in AD speech. An even higher proportion of focused words occurring in final position was found in a study by Aslin, Woodward, LaMendola and Bever (1996). When English-speaking mothers were asked to teach three new words to their 12-month-old infants, 89% of the target words came at the end of the utterance. Fernald and Mazzie also conducted a control experiment to see whether adults used strategies similar to those in ID speech when asked to teach another adult how to do an unfamiliar assembly task. Indeed, when teaching new technical terms to another adult, subjects very frequently placed the unfamiliar words in utterance-final position, just as they did when highlighting new words in speech to infants. By positioning focused noun labels at the ends of utterances, speakers seem to be intuitively exploiting listening biases that make final elements in an auditory sequence easier to detect and remember, a phenomenon that is well established in research on auditory processing and memory (e.g., Watson 1976). If even adults can benefit from having new auditory information presented in this format, it seems likely that infants facing the challenges of word segmentation in continuous speech would benefit even more. Although several studies such as those described above have documented features of mothers’ speech which could plausibly help the infant in the task of word recognition, there has been very little research which tests this hypothesis directly. 
Here we review an experiment in which we investigated the influence of word position on infants’ ability to identify familiar spoken words in fluent speech (Fernald, McRoberts & Herrera 1992). The findings of this study provide further evidence for the gradual emergence of word recognition capabilities over the second year. They also demonstrate how variations in the structure of the input speech can influence the infant’s success in understanding spoken words at different points in development. The hypothesis tested in this study was that infants would recognize words in utterance-final position more reliably than words in utterance-medial position. Subjects were eighty 15- and 19-month-old English-learning infants, tested in the word recognition procedure described earlier. In a between-subjects design, six target words familiar to the infants at both ages were presented either in utterance-medial or in utterance-final position. The carrier phrases for the Medial
condition (There’s a ____ over there) and the Final condition (Over there there’s a ____) contained exactly the same words, differing only in order. As predicted, we found that infants at both 15 and 19 months were more successful in recognizing Final target words than Medial target words. Figure 4 shows the proportions of looking time to the target picture averaged over two 1000 ms intervals following the onset of the target word. This way of presenting the data allows comparison of accuracy in relation to speed in the Final and Medial conditions at the two ages. The results in the Final condition are consistent with those of the Fernald et al. (1998) study, in which target words were also presented in utterance-final position. For the 19-month-old infants, the mean looking time to the target picture in response to Final target words was already significantly above chance in the first 1000 ms following target word onset, and rose to 77% during the second 1000 ms interval. The 15-month-old infants responded less rapidly and less accurately, although their mean looking time in response to Final words eventually reached 66% in the second interval, significantly above chance. In the Medial condition, however, the younger infants were at chance for both the first and second intervals; in contrast, the older infants were above chance in both intervals, indicating that they responded relatively quickly as well as accurately to target words in utterance-medial position. To summarize, infants recognized Final target words more reliably than Medial target words, and the older infants were faster and more accurate overall than the younger infants. Although 15-month-olds performed well when familiar target words were presented at the end of the utterance, they were unable to recognize the same familiar words embedded in the middle of the utterance. 
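The accuracy measure reported here, mean percent looking time to the target within successive 1000 ms windows after target-word onset, can be sketched in the same frame-coding terms; again, the coding scheme and the 33 ms frame duration are our assumptions, not the published procedure:

```python
FRAME_MS = 33  # approximate duration of one video frame at 30 fps (assumption)

def percent_to_target(gaze_frames, onset_frame, start_ms, end_ms):
    """Percent of picture-fixation frames spent on the target ('T') in the
    window [start_ms, end_ms) after target-word onset; 'away' frames
    are excluded from the denominator."""
    lo = onset_frame + start_ms // FRAME_MS
    hi = onset_frame + end_ms // FRAME_MS
    looks = [f for f in gaze_frames[lo:hi] if f in ('T', 'D')]
    return 100 * sum(f == 'T' for f in looks) / len(looks)

# A hypothetical trial: the infant fixates the distractor for the first
# second after word onset, then the target for the next second.
codes = ['D'] * 30 + ['T'] * 30
print(percent_to_target(codes, 0, 0, 1000),
      percent_to_target(codes, 0, 1000, 2000))  # prints 0.0 100.0
```

With two pictures on the screen, chance on this measure is 50%, which is the baseline against which the group means discussed here are evaluated.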
By the age of 19 months, infants could identify medial target words quickly and correctly, but this was still a harder task than identifying words in final position. Several kinds of perceptual factors could contribute to the increased difficulty of recognizing words embedded in the middle of an utterance. Medial words may be masked by subsequent words in the utterance, and it could be that younger infants are more vulnerable to such masking effects. Relative duration may also play a role, since words in medial position are typically shorter than words in final position (e.g., Delattre 1966). Indeed, the utterance-final target words used in this experiment were around 60% longer than the equivalent words in utterance-medial position. Thus it is not clear from these results whether the poorer performance on medial target words was an effect of word position per se, or whether it occurred because the medial target words were shorter than the final target words. It is interesting to note that English-speaking mothers regularly increase the duration of vowels in content words when talking to children, and this lengthening occurs with words in utterance-medial as well as utterance-final position (e.g., Albin & Echols 1996; Swanson, Leonard & Gandour 1992). While we need further experimental evidence to determine whether vowel lengthening enhances recognition of embedded target words, our data clearly show that the strategy of positioning focused words in final position in ID speech has perceptual advantages for the infant. This common pattern of word order in ID speech could serve as a kind of support system for the inexperienced listener, compensating for processing limitations that make it difficult for infants to succeed in segmenting, recognizing, and understanding a familiar word when it is followed by other words in continuous speech.

Figure 4. Mean percent looking time to target picture in response to Medial and Final target words
Summary and conclusions

Learning to understand spoken language is a long, slow process. Documenting children's progress toward more mature competence in speech comprehension has been challenging for researchers because progress often consists of gradual changes in processing efficiency which are difficult to observe in spontaneous behavior. Using fine-grained temporal measures of infants' response to spoken words, our research has shown that the speed and accuracy of understanding increase steadily over the second year (Fernald et al. 1998). We have also found that infants are able to process the speech signal incrementally in some contexts, identifying familiar words on the basis of word-initial information (Fernald et al., under review; Swingley et al. 1999). These results reveal that by the end of the second year, children's speech processing capabilities are surprisingly mature in some respects. However, other results make it clear that these capabilities are far from mature in other respects. We found that infants' success at word recognition was strongly influenced by the position of the word in the utterance, even when familiar words were repeatedly presented in the same short carrier phrase (Fernald et al. 1992). Target words occurring in the middle of the phrase were recognized much less reliably than target words at the end of the phrase, a variation in word order that would pose no problem for an adult listener. The difference in infants' performance on medial and final target words provides an interesting example of what Haith (1998) refers to as "partial knowledge". At 15 months, infants were unsuccessful in understanding a familiar word embedded in the middle of the sentence, although they could recognize the same word reliably when they heard it at the end of the utterance.
Since these infants clearly had a lexical representation for the word which could be accessed successfully under more favorable conditions, other perceptual and cognitive factors must account for the difficulty of the task. Were the 15-month-olds unable to identify the boundaries of the target word when it was followed by other sounds rather than by silence? Since Jusczyk and Aslin (1995) have shown that much younger infants can recognize a word embedded in continuous speech as familiar, it seems unlikely that segmentation was the problem for the 15-month-old infants, especially since all the target words were highly familiar. However, given the shorter duration of words in medial position, as well as the potential for masking or other forms of interference resulting from the words that followed, medial target words undoubtedly posed a greater perceptual challenge than final target words. And under perceptually more difficult conditions, the additional demands of accessing the meaning of the spoken word and deciding which object it matched were too much for the 15-month-old infants.
According to this view, an account of the gradual emergence of competence in spoken language understanding in young children should be concerned not only with the development of lexical representations, but also with the contribution of other perceptual and cognitive factors which influence the accessibility of spoken words to the listener. When adults listen to speech under unfavorable signal-to-noise conditions, word recognition becomes effortful and fewer resources are available for other concurrent and subsequent processes necessary for discourse comprehension (Pichora-Fuller, Schneider & Daneman 1995). Conversely, as the infant gradually develops the ability to identify words in continuous speech a little more reliably and a little more quickly, more cognitive resources become available for attending to other words in the utterance and the relations among them. Understanding speech is a complex process that involves much more than detecting and identifying individual words, since the listener must also integrate successively heard words into phrases and sentences to arrive at a coherent and correct representation of the speaker’s meaning. Small, gradual gains in the speed and efficiency of speech processing can have large benefits for the language learner.
References

Albin, D. D. and Echols, C. H. 1996. "Stressed and word-final syllables in infant-directed speech." Infant Behavior and Development 19: 401–418.
Aslin, R. N., Jusczyk, P. W. and Pisoni, D. B. 1998. "Speech and auditory processing during infancy: Constraints on and precursors to language." In Cognition, perception, and language (Vol. II), D. Kuhn and R. Siegler (eds.). Handbook of Child Psychology, W. Damon (ed.). New York: John Wiley & Sons.
Aslin, R. N., Woodward, J. Z., LaMendola, N. P. and Bever, T. G. 1996. "Models of word segmentation in fluent maternal speech to infants." In Signal to syntax: Bootstrapping from speech to grammar in early acquisition, J. L. Morgan and K. Demuth (eds.). Mahwah, NJ: Erlbaum.
Bernstein-Ratner, N. 1984. "Phonological rule usage in mother-child speech." Journal of Phonetics 12: 245–254.
Bloom, L. 1973. One word at a time. The Hague, Netherlands: Mouton.
Charles-Luce, J. and Luce, P. A. 1990. "Similarity neighbourhoods of words in young children's lexicons." Journal of Child Language 17: 205–215.
Cole, R. A. 1981. "Perception of fluent speech by children and adults." Annals of the New York Academy of Sciences 379: 92–109.
Delattre, P. 1966. "A comparison of syllable length conditioning among languages." International Journal of Applied Linguistics 4: 183–198.
Ferguson, C. A. 1977. "Baby talk as a simplified register." In Talking to children: Language input and acquisition, C. E. Snow and C. A. Ferguson (eds.). Cambridge: Cambridge University Press.
Fernald, A. 1985. "Four-month-old infants prefer to listen to motherese." Infant Behavior and Development 8: 181–195.
Fernald, A. and Mazzie, C. 1991. "Prosody and focus in speech to infants and adults." Developmental Psychology 27: 209–221.
Fernald, A., McRoberts, G. W. and Herrera, C. 1992. Prosodic features and early word recognition. Paper presented at the 8th International Conference on Infant Studies, Miami, FL.
Fernald, A., Pinto, J. P., Swingley, D., Weinberg, A. and McRoberts, G. W. 1998. "Rapid gains in speed of verbal processing by infants in the 2nd year." Psychological Science 9: 228–231.
Fernald, A., Swingley, D. and Pinto, J. P. (under review). "When half a word is enough: Infants can recognize spoken words using partial phonetic information."
Fernald, A., Taeschner, T., Dunn, J., Papousek, M. et al. 1989. "A cross-language study of prosodic modifications in mothers' and fathers' speech to preverbal infants." Journal of Child Language 16: 477–501.
Garnica, O. K. 1973. "The development of phonemic speech perception." In Cognitive development and the acquisition of language, T. E. Moore (ed.). New York: Academic Press.
Golinkoff, R. M., Hirsh-Pasek, K., Cauley, K. M. and Gordon, L. 1987. "The eyes have it: Lexical and syntactic comprehension in a new paradigm." Journal of Child Language 14: 23–45.
Gopnik, A. and Meltzoff, A. 1987. "The development of categorization in the second year and its relation to other cognitive and linguistic developments." Child Development 58: 1523–1531.
Haith, M. M. 1998. "Who put the cog in infant cognition? Is rich interpretation too costly?" Infant Behavior and Development 21: 167–179.
Haith, M. M., Hazan, C. and Goodman, G. S. 1988. "Expectation and anticipation of dynamic visual events by 3.5-month-old babies." Child Development 59: 467–479.
Halle, P. A. and de Boysson-Bardies, B. 1994. "Emergence of an early receptive lexicon: Infants' recognition of words." Infant Behavior and Development 17: 119–129.
Halle, P. A. and de Boysson-Bardies, B. 1996. "The format of representation of recognized words in infants' early receptive lexicon." Infant Behavior and Development 19: 463–481.
Höhle, B. and Weissenborn, J. 1998. "Sensitivity to closed-class elements in preverbal children." In Proceedings of the 22nd Boston University Conference on Language Development, A. Greenhill, M. Hughes, H. Littlefield and H. Walsh (eds.). Somerville: Cascadilla Press.
Hood, B. M. and Atkinson, J. 1993. "Disengaging visual attention in the infant and adult." Infant Behavior and Development 16: 405–422.
Jusczyk, P. W. 1997. The discovery of spoken language. Cambridge, MA: MIT Press.
Jusczyk, P. W. and Aslin, R. N. 1995. "Infants' detection of the sound patterns of words in fluent speech." Cognitive Psychology 29: 1–23.
Kuhl, P. K., Andruski, J., Chistovich, L., Kozhevnikova, E., Ryskina, V., Stolyarova, E., Sundberg, U. and Lacerda, F. 1997. "Cross-language analysis of phonetic units in language addressed to infants." Science 277: 684–686.
Kuhl, P. K., Williams, K. A., Lacerda, F., Stevens, K. N. and Lindblom, B. 1992. "Linguistic experience alters phonetic perception in infants by 6 months of age." Science 255: 606–608.
Maratsos, M. 1998. "The acquisition of grammar." In Cognition, perception, and language (Vol. II), D. Kuhn and R. Siegler (eds.). Handbook of Child Psychology, W. Damon (ed.). New York: John Wiley & Sons.
Marslen-Wilson, W. D. 1987. "Functional parallelism in spoken word-recognition." Cognition 25: 71–102.
Mills, D. L., Coffey-Corina, S. A. and Neville, H. J. 1993. "Language acquisition and cerebral specialization in 20-month-old infants." Journal of Cognitive Neuroscience 5: 317–334.
Pichora-Fuller, M. K., Schneider, B. A. and Daneman, M. 1995. "How young and old adults listen to and remember speech in noise." Journal of the Acoustical Society of America 97: 593–608.
Polka, L. and Werker, J. F. 1994. "Developmental changes in perception of nonnative vowel contrasts." Journal of Experimental Psychology: Human Perception & Performance 20: 421–435.
Rose, S. A., Feldman, J. F., Wallace, I. F. and Cohen, P. 1991. "Language: A partial link between infant attention and later intelligence." Developmental Psychology 27: 798–805.
Saffran, J. R., Aslin, R. N. and Newport, E. L. 1996. "Statistical learning by 8-month-old infants." Science 274: 1926–1928.
Shvachkin, N. K. 1948/1973. "The development of phonemic speech perception in early childhood." In Studies of child language development, C. A. Ferguson and D. I. Slobin (eds.). New York: Holt, Rinehart, and Winston.
Stager, C. L. and Werker, J. F. 1997. "Infants listen for more phonetic detail in speech perception than in word-learning tasks." Nature 388: 381–382.
Swanson, L. A., Leonard, L. B. and Gandour, J. 1992. "Vowel duration in mothers' speech to young children." Journal of Speech and Hearing Research 35: 617–625.
Swingley, D. and Aslin, R. N. 1999. Mispronunciation detection in 18- to 21-month-old infants. Paper presented at the Biennial Meeting of the Society for Research in Child Development, Albuquerque, New Mexico.
Swingley, D., Pinto, J. P. and Fernald, A. 1998. "Assessing the speed and accuracy of word recognition in infants." In Advances in Infancy Research (Vol. 12), C. Rovee-Collier, L. Lipsitt and H. Hayne (eds.). Stamford, CT: Ablex.
Swingley, D., Pinto, J. P. and Fernald, A. 1999. "Continuous processing in word recognition at 24 months." Cognition 71: 73–108.
Tanenhaus, M. K., Spivey-Knowlton, M. J., Eberhard, K. M. and Sedivy, J. C. 1995. "Integration of visual and linguistic information in spoken language comprehension." Science 268: 1632–1634.
Walley, A. C. 1987. "Young children's detections of word-initial and -final mispronunciations in constrained and unconstrained contexts." Cognitive Development 2: 145–167.
Walley, A. C. 1993. "The role of vocabulary development in children's spoken word recognition and segmentation ability. Special Issue: Phonological processes and learning disability." Developmental Review 13: 286–350.
Watson, C. S. 1976. "Factors in the discrimination of word-length auditory patterns." In Hearing and Davis: Essays honoring Hallowell Davis, S. K. Hirsh, D. H. Eldredge, I. J. Hirsh and S. R. Silverman (eds.). St. Louis, MO: Washington University Press.
Wellman, H. M. and Gelman, S. A. 1998. "Knowledge acquisition in foundational domains." In Cognition, perception, and language (Vol. II), D. Kuhn and R. Siegler (eds.). Handbook of Child Psychology, W. Damon (ed.). New York: John Wiley & Sons.
Werker, J. F. and McLeod, P. J. 1990. "Infant preference for both male and female infant-directed talk: A developmental study of attentional and affective responsiveness." Canadian Journal of Psychology 43: 230–246.
Ziajka, A. 1981. Prelinguistic communication in infancy. New York: Praeger.
Lemma Structure in Language Learning
Comments on representation and realization

Cecile McKee & Noriko Iwasaki
University of Arizona
1. Introduction
In the study of syntactic development, it is often assumed that performance directly indicates competence. And where children’s non-target sentences are analyzed with little attention to the processes that manipulate language knowledge in real time,1 it is easy to take errors as reflecting incomplete or incorrect linguistic knowledge. But we find a considerable discrepancy between learners’ competence and performance; in particular, we think that learners’ speech patterns underdetermine their syntactic knowledge. (Research reaching similar conclusions includes Demuth 1994; McKee 1994; and Weissenborn 1994.) In this paper, we investigate this position further by applying a model of language production to analyses of some closed class elements in learners’ utterances. We thus illustrate the case for seriously attending to the development of dynamic production processes while studying the development of static language knowledge.

Misused and missing elements in children’s utterances challenge investigations of grammatical development. We will argue that these phenomena are better understood in the context of a production model. Misused elements often suggest limitations in learners’ grammatical knowledge base, but such errors can also reflect limitations in the processing of grammatical knowledge. Missing elements, both licit and illicit, are difficult to analyze when we are unsure of the state of the learner’s underlying knowledge. But treating the developing system like the target or fully developed system can reveal linguistic sophistication, even where significant omissions are evident. Our primary theoretical goal is to show how the multiple processes and levels of representation in a model of production provide new hypotheses for acquisitionists to use in identifying areas of potential
difficulty for learners. This brings us back to competence and performance, where we need to distinguish the representation of knowledge from its realization and to recognize the complexity of the relation between underlying representations and overt realizations of language knowledge (Fodor & Garrett 1966). Research on adult language production has clarified these complexities through models of these relations (e.g., Fromkin 1971; Garrett 1975; Levelt 1989). We focus on Levelt’s lemma-driven production model below, with emphasis on the notion of lemma and its interaction with some grammatical elements.2 It is important to appreciate that this model entails more than claims about structure; it offers specific hypotheses about how different types of structural information are processed in real time and about how the dynamic systems used in producing language are constrained.

Briefly, Levelt’s model posits that two classes of lexical elements are processed differently. First, open class elements (corresponding to lexical categories) are distinguished from closed class elements (roughly corresponding to functional categories). The former includes elements like nouns, verbs, and adjectives; the latter includes elements like verb inflections, pronominal clitics, and case particles. Second, not only do open and closed class elements encode different types of information; they are also retrieved at different times during sentence generation. The abstract representations of open class elements are retrieved before closed class elements are established in syntactic frames (see below); next, open class elements are realized; and finally, closed class elements are realized.

Critical to these hypotheses regarding the type and timing of lexical units is the distinction between lemmas and lexemes. A lemma is a lexical unit containing both semantic and syntactic information. Each lemma has a pointer to its phonological form; this phonological component of a lexical entry is the lexeme.
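The two-part lexical entry just described can be made concrete with a minimal sketch. This is our own illustrative formalization, not part of Levelt’s model or the authors’ proposal; all class and feature names here are invented for exposition.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Lexeme:
    """Phonological form; on this model, retrieved late in production."""
    phonology: str

@dataclass(frozen=True)
class Lemma:
    """Semantic and syntactic information, plus a pointer to its lexeme."""
    meaning: frozenset   # semantic components matched against the message
    category: str        # syntactic category, e.g. "N", "V"
    lexeme: Lexeme       # the pointer to the phonological form

# Hypothetical entries; the semantic features are stand-ins for exposition.
elbow = Lemma(frozenset({"body-part", "arm", "joint"}), "N", Lexeme("elbow"))
gargle = Lemma(frozenset({"rinse", "throat"}), "V", Lexeme("gargle"))

print(elbow.category, elbow.lexeme.phonology)
```

On this picture, a meaning-related word substitution goes wrong while selecting among `Lemma` records, whereas a sound-related substitution goes wrong while retrieving the `Lexeme` a lemma points to.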
Current evidence indicates that lexemes are retrieved after lemmas.3 The distinction between lemmas and lexemes was originally motivated by speech errors spontaneously produced by English-speakers. But there are other motivations for this distinction, both empirical and conceptual. Comprehensive reviews of these can be found in Garrett (1988) and Levelt (1989). Particularly persuasive empirical motivations come from word substitution errors: Some are related in meaning but not in sound; these can be seen as lemma-based substitutions. Others are related in sound but not in meaning; these can be seen as lexeme-based substitutions. An example related in meaning is shoulders substituting for the target elbows; an example related in sound is garlic substituting for the target gargle (Garrett 1976).

It is also important to appreciate how lemmas relate to phrase structure, and how that in turn relates to closed class elements. Particular lemmas are selected
on the basis of the match between their semantic components and conceptual subparts of the message being communicated in a sentence.4 According to Levelt’s model, the selected lemma’s syntactic information then guides two kinds of structure-building procedures: A lemma such as give first calls up its categorial procedure (e.g., a VP-building process), and then that calls up certain functional procedures (e.g., an IP-building process). Thus, the model hypothesizes that both types of structure-building procedures occur in response to the lemma: The categorial procedure is directly initiated by the syntactic category of the lemma, while the functional procedure is indirectly initiated by the lemma (via its phrase). Many closed class elements are represented as abstract features in the resulting syntactic frames, and their phonological realization occurs still later in the production process. Thus, their retrieval depends on a phrase structure. This helps explain why the realization of closed class elements is vulnerable in different ways than that of open class elements. To exemplify, Garrett (1980) reported that, with the exception of prepositions and pronouns, closed class elements are not involved in sound exchange errors (like the error larietal pobe for the target parietal lobe).

Our last point here concerns time constraints in sentence production. It is not enough for speakers to know lemmas and structure-building procedures. Critically, they also have to coordinate these different types of information extremely quickly. In other words, the timely retrieval of syntactic information is essential to building sentence frames. The integration of lemmas and procedures with the subsequent retrieval and use of phonological information presents an additional layer of planning. These various interactions of course pose challenges to learners.
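Before turning to acquisition, the procedure cascade described above can be schematized as follows. This is a sketch under our own assumptions; the procedure names and the table pairings are invented for illustration and are not the authors’ or Levelt’s notation.

```python
# Categorial procedures are triggered directly by the lemma's category;
# each categorial procedure may then call up a functional procedure.
CATEGORIAL_PROCEDURE = {"V": "build-VP", "N": "build-NP"}
FUNCTIONAL_PROCEDURE = {"build-VP": "build-IP"}  # e.g. a VP calls up an IP

def procedures_for(lemma_category: str) -> list:
    """Return the structure-building procedures a lemma sets in motion."""
    categorial = CATEGORIAL_PROCEDURE[lemma_category]   # direct
    functional = FUNCTIONAL_PROCEDURE.get(categorial)   # indirect, via the phrase
    return [categorial] + ([functional] if functional else [])

# A verb lemma like give first builds its VP, which then builds an IP:
print(procedures_for("V"))
```

The two-step lookup mirrors the model’s claim that the functional procedure is only indirectly initiated by the lemma: it is keyed to the phrase the categorial procedure builds, not to the lemma itself.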
2. The developing production system
Directly considering language acquisition now, we ask why some elements seem selectively compromised during development. Children learning their first language face at least two tasks: They must add lemma and lexeme representations to their mental lexicons. They must also acquire structure-building procedures, learning how these relate to lemmas as well as to other procedures. Thus, their grammatical development includes the acquisition of both the declarative knowledge in each lemma and the associated procedural knowledge (including both categorial and functional procedures). Performance characterized by misused or missing elements could easily reflect multiple sources. Incomplete or otherwise incorrect information in these lexical or syntactic stores generates grammatical
errors. Poor coordination of lemmas and procedures also generates grammatical errors. Stages when information is lacking surely exist, as when children add lexical entries without (sufficient) syntactic information or when they have not fully mastered certain procedures. But stages when learners’ procedural efficiency is affected by temporal pressures in producing sentences must also exist, as when their processors are not fully automatic. It is possible for a learner to represent certain abstract syntactic information but to lack overt realization of that representation. To more directly link the development of the production system to classic performance-competence issues, (1) summarizes four possible relations between the representation of abstract sentence structure and the overt realization of functional elements.

(1)  a. [+representation, +realization]
     b. [−representation, −realization]
     c. [−representation, +realization]
     d. [+representation, −realization]
We take (1a) to describe, uncontroversially, a target utterance in which the overt realization of a closed class element signals underlying representation of some syntactic relation(s); (1b) might describe an utterance lacking closed class elements because of no representation of some syntactic relation; and (1c) might describe so-called “frozen” expressions. We are most interested in (1d), and in comparing it with (1b). In cases where we find little or no overt evidence of underlying syntactic information, what factors bear on our analyses of learners’ utterances?

In considering the production processes that affect the realization of syntactic knowledge, we first describe four scenarios in which a learner’s processing limitations result in misused or missing elements. In each case, we will suggest that attention to consistency (i.e., across words, structures, or occasions) might distinguish between a lack of declarative knowledge and inefficient processes (of lexical retrieval or of retrieval of structure-building procedures).

First, let us consider lemma-learning as a locus of difficulty. For each new lemma, a learner risks a temporarily inappropriate lexical entry with insufficient or incorrect information.5 A child relying on such faulty lemmas will probably misuse some words. We illustrate this scenario with a Japanese-learner and the no, na, ∅ (no overt marker) paradigm for modified NPs shown in (2)–(5).6 This paradigm presents an instance in which markers are determined by both the syntactic category and the function of a modifier. In production model terms, the categorial status of the modifier — i.e., the noun in (2), the adjective in (3), the
adjectival noun in (4), and the sentence in (5) — determines the functional procedure that will ultimately be responsible for eliciting closed class elements. (Iwasaki et al. 1997 have shown that such categorial information is part of the Japanese lemma. See note 3.)

(2)  [NP [NP N] [N′ N]]       e.g., haiiro no neko
                                    gray       cat
(3)  [NP [AP A] [N′ N]]       e.g., kuro-i neko
                                    black--  cat
(4)  [NP [ANP AdjN] [N′ N]]   e.g., kiree na neko
                                    pretty     cat
(5)  [NP [IP ] [N′ N]]        e.g., morat-ta neko
                                    receive-  cat
                                    ‘a/the cat (someone) was given’
In (2), a noun modifies neko ‘cat’; in this environment, the genitive marker no is inserted. In (3), an adjective modifies neko; in this environment, no case particle is inserted. In (4), an adjectival noun (or keiyoodoosi) modifies neko; in this environment, the copula na is used. In (5), a sentence modifies neko; in this environment, no case particle is inserted.

Imagine now a learner who produces (6). She might be described either as lacking general grammatical knowledge about adjectival nouns or as lacking the specific lemma information that kiree ‘pretty’ is an adjectival noun. Can these possibilities be distinguished? If our hypothetical learner correctly uses other adjectival nouns, one could infer a lexical rather than a grammatical limitation. (Such reasoning assumes that, all else being equal, grammatical limitations elicit the same pattern across all elements with an identical syntactic status.) This example of a learner’s misused particle thus suggests incomplete lemma-learning.

(6)  *kiree no neko
      pretty    cat
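The dependency of marker choice on the modifier’s category in (2)–(5) can be schematized like this. The sketch is our own illustrative formalization, not the authors’; the category labels follow the bracketed structures above, and the romanized forms are those of the examples.

```python
# Marker demanded by the prenominal modifier's syntactic category.
MARKER_BY_CATEGORY = {
    "N":    "no",   # noun modifier: haiiro no neko
    "A":    None,   # adjective: kuro-i neko
    "AdjN": "na",   # adjectival noun: kiree na neko
    "IP":   None,   # sentential modifier: morat-ta neko
}

def modified_np(modifier: str, category: str, head: str) -> str:
    """Assemble a modified NP with the marker the category calls for."""
    marker = MARKER_BY_CATEGORY[category]
    return " ".join(part for part in (modifier, marker, head) if part)

# With correct lemma information, the target form results; a faulty lemma
# that lists kiree as a plain noun yields exactly the error in (6).
print(modified_np("kiree", "AdjN", "neko"))   # target form (4)
print(modified_np("kiree", "N", "neko"))      # the error form in (6)
```

The point of the sketch is that the marker is never stored with the word itself: it falls out of the category recorded in the modifier’s lemma, so one wrong category entry misderives every NP built on that entry.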
Second, let us consider lemma retrieval as a source for errors like (6). A learner might generate an inaccurate structural representation because she fails to access part of a lemma’s information even though she has fully mastered it. As above, we would expect inconsistent performance. Here though, one might observe correct and incorrect use of the same words during the same stage of development. Referring again to the kiree example, our hypothetical learner might sometimes say the correct (4) and sometimes the incorrect (6). This pattern suggests the successful retrieval of the lemma information indicating that this
word is an adjectival noun on some occasions and failure to retrieve that same information on other occasions.

Third, let us consider procedure retrieval as the locus of a processing problem. This more complex situation brings up (at least) two distinctions: the kind of procedure that might fail (i.e., categorial or functional) and different results of that failure (e.g., the absence of a target element or the presence of a non-target element). Let us begin with the nonretrieval of a lemma’s categorial procedure (e.g., a VP-building process). This should have a fatal effect on sentence generation, namely no phrase for the lemma to plug into. But what if a successfully launched categorial procedure fails to call up a related functional procedure (e.g., an IP-building process for a VP)? This is a more interesting source for the [+representation, −realization] problem. Our hypothetical learner might either omit elements required by that functional procedure, or apply a default procedure whose effects include misused elements.

To show how the nonretrieval of a functional procedure might result in missing elements, recall that closed class elements begin as abstract features in syntactic frames; they are phonologically realized late in the production process. Here too then, if the relevant functional procedure has actually been mastered, performance patterns due to its sporadic nonretrieval should be inconsistent across words and structures. We illustrate this scenario with the Italian verb-object agreement shown in (7) and (8). In considering the following two sentences with the transitive verb mangiare ‘to eat’, note that verb-object agreement only occurs when the object is a preverbal pronominal clitic. (Gender and number marking, in this case masculine and plural, are indicated in the gloss by ; ∅ indicates no gender and number agreement.)

(7)  ha mangiato i panini
     has eaten-∅ the- sandwich-
     ‘(She/he) ate the sandwiches.’
(8)  li ha mangiati
     them- has eaten-
     ‘(She/he) ate them.’
Imagine first a learner who says (9a) in a situation where (8) should be used. His morphology shows verb-object agreement, suggesting a representation of the relevant syntactic relation early in the process of producing this utterance. It would seem that the functional procedure whose eventual phonological realization should include the preverbal clitic li has applied, and the (very late) process of phonologically realizing that marker has not occurred. It is the realization of part of the results of the functional procedure that suggests it has applied.
Contrast this case with that of a learner who says (9b), again in a situation where (8) should be used. His morphology shows no verb-object agreement. The nonretrieval of the functional procedure whose phonological realization should include the preverbal clitic li might explain the form of this utterance. If our hypothetical learner says the correct (8) sometimes and the incorrect (9b) sometimes, we would have further support for a process-based account of the latter.7

(9)  a. mangiati
        eaten-
     b. mangiato
        eaten-∅
To show how a functional procedure’s nonretrieval might result in misused elements, consider structure-building that occurs despite underspecified syntactic information (underspecified because of unavailable declarative knowledge as in our lemma-learning and lemma retrieval scenarios or because of unavailable procedural knowledge as in our description of (9b); see note 5). After all, when essential information is not available by a certain time, the production system must crash or go on with omissions or go on with a guess. A default procedure could instantiate a guess that gives the system a regular and speedy alternative to crashing. Moreover, a default procedure that mimics what is required in the least restricted linguistic environment would produce the fewest overt errors. (As an example, the marker no occurs in many more syntactic environments than na does; this functional diversity makes it harder to identify a no error.) Thus, the overuse of a closed class element might result from a default procedure triggered by the nonretrieval of a target process. Learners of the paradigm in (2)–(5), for example, might apply a functional procedure that realizes no across too many situations. The error in (6), *kiree no neko, thus has another possible source. The plausibility of this scenario is supported by reports that no is overused among various learners of Japanese (child L1: Clancy 1985; Murasugi 1991; Yokoyama 1990; child L2: Shirahata 1993; adult L2: Sekiguchi 1995).
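A default procedure of this kind is easy to state. Again, this is a sketch of ours, with an arbitrary timing flag standing in for real-time retrieval pressure; the choice of no as the default follows the text’s observation that it is the least restricted marker.

```python
DEFAULT_MARKER = "no"   # the marker occurring in the most environments

def realized_marker(target_marker: str, procedure_retrieved_in_time: bool) -> str:
    """Spell out the target marker, or fall back to the default when the
    category-specific functional procedure is not retrieved in time."""
    if procedure_retrieved_in_time:
        return target_marker
    return DEFAULT_MARKER

# The same adjectival noun can surface correctly on one occasion and with
# overgenerated no on another, with no change in stored knowledge:
print(realized_marker("na", True))    # target: kiree na neko
print(realized_marker("na", False))   # overgeneration: *kiree no neko
```

This captures why a timing-based account predicts inconsistency: the output varies with retrieval success on each occasion, not with what the learner knows.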
3. Illustrations from children’s utterances
In this section, we study representative examples of utterances that sometimes suggest a lack of target grammatical knowledge. Again, our goal is to challenge the typical claim that misused or missing elements in such utterances necessarily indicate incorrect or incomplete morphosyntactic knowledge; we do so by showing how these phenomena can occur during the processing of correct or complete syntactic information. Specifically, in examining some Japanese and Italian morphosyntax discussed in previous research, we will seek explanation for
misused and missing elements through the type and timing of lexical information in a lemma-driven production model. The specific cases we examine in this section relate to the [+representation, −realization] problem. Crucially, our approach to each case will assume that the child’s utterances are generated by whatever mechanisms produce similar patterns in adults. In other words, we assume continuity in the developing processing systems (as many acquisitionists do for the developing competence grammar; see Pinker 1984). Further, we think that the nature of a speaker’s utterances, along with other indications of his competence, should be considered before his age is taken into account. Putting it another way, we avoid the a priori assumption that performance errors necessarily indicate limitations in the learner’s lexical and/or grammatical competence.

3.1 Misused elements

We return here to the Japanese case particle no and consider in detail specific utterances in which it is misused.8 We will argue that these errors might arise from a default procedure. (The target paradigm was illustrated in (2)–(5). An important point of our earlier description is the hypothesis that the availability of the prenominal modifier’s syntactic category is critical in building modified NPs.)

Consider now (10)–(12) below (Murasugi 1991:221–223). These utterances were produced by Murasugi’s subject Emi during a period when she overused no. Murasugi studied Emi’s development longitudinally; complementing that research is experimental data from 62 children (aged 1;8 to 5;8). Most of Murasugi’s experimental subjects did not overgenerate no. But for Emi, at least between 2;11 and 4;2, these utterances were representative. (Unfortunately, Murasugi did not provide Emi’s age for each utterance. Further, glossing these instances of no is difficult because they are non-target and because no can be analyzed as a genitive marker, a nominalizer, a pronoun, a complementizer, and more.
We follow Murasugi then in glossing no with the intentionally vague *. We have also provided Murasugi’s translations, noting though that (10a), (10b), and (12a) are not consistent with her analysis of the modifiers as relative clauses.)

(10) Adjectives: Target form in (3)
     a. *suppa-i *no zyuusu
         sour    *   juice
         ‘sour juice’
     b. *kawai-i *no zoo-san
         cute    *   elephant
         ‘a cute elephant’
(11) Adjectival nouns: Target form in (4)
     a. *daizyoobu *no neko
         all right *    cat
         ‘the cat that is all right’
     b. *genki *no onnanoko
         cheerful *  girl
         ‘a girl who is cheerful’
(12) Sentence modifiers: Target form in (5)
     a. *tiga-u *no o-uti
         different * house
         ‘a different house’
     b. *odot-te-ru *no sinderera
         is dancing *   Cinderella
         ‘the Cinderella that is dancing’
Murasugi (1991) analyzed these utterances, and others like them, as reflecting an innately-specified parameter with the ‘wrong’ initial value. She hypothesized that children who overgenerate no do so because they assume that the relative clause in Japanese is a CP instead of the target IP. (See Murasugi 1991 for details on her analysis of Japanese relative clauses as IPs.) She thus concluded that Emi’s no is a complementizer and that she was producing relative clauses in (10)–(12). Murasugi found empirical support for this proposal in her observation that the errors (both from Emi and from her other subjects) are structure-dependent, and not item-dependent.

For several reasons, we question whether Emi’s overuse of no supports Murasugi’s competence account. First, a rule making relative clauses in prenominal modifiers like those in (10)–(12) should have uniform consequences (across items, and if it reflects innate knowledge, across children too). That is, a rule-governed pattern should show up across the board. But only “almost all” of the relevant NPs in Emi’s corpus showed this overgeneration (Murasugi 1991:224). As Murasugi’s account leaves no way for the learner to generate target utterances, the question of which of Emi’s utterances are target is critical. Should we hypothesize that Emi’s few target utterances are speech errors (from her grammar’s perspective)? A related question is why Japanese-learners should settle on no as the complementizer that reflects their erroneous parameter value. If indeed the child’s relative clause is a CP, Japanese includes other, more typical complementizers (such as to). (To pursue this criticism, one should consider the occurrence of complementizer-no in the input, and also Emi’s use of it in other structures.) Next, although Murasugi does not comment extensively
on consistency, there is some suggestion in her research that Emi’s overuse of no came and went. Murasugi observed a period of a few days at the age of 3;4 when Emi stopped overusing no in utterances like (12). After that period, Emi resumed the overuse of no until the age of 4;2.9 Finally, Yokoyama’s (1990) research on no errors in two children under 3 differs quite strikingly from Murasugi’s. While his subjects did produce [adjective + no + noun] errors like Emi’s (10), they also produced the [adjective + ∅ + noun] target utterances that cannot be explained by Murasugi’s relative clause account. And unlike Murasugi, Yokoyama found evidence of item-dependency. For example, one child used 31 adjectives correctly and overgenerated no with only eight of these.

We favor an alternative to Murasugi’s competence-based account, noting though that the data that might decide between these alternatives is unavailable. The alternative account attributes Emi’s overuse of no to an insufficiently developed processing mechanism that frequently fails to call up the relevant functional procedures when they are required. At least for the time between 3;4 and 4;2, when Emi’s overused no was observed to come and go, a processing account is plausible.

To make this exercise more concrete, let us consider the processing of elements like no more carefully. Recall first that the correct use of the no, na, ∅ paradigm depends both on the categorial status of the modifiers and on the functional procedure(s) called up by a modifier’s phrase. For an example, compare Emi’s error in (11a) to the target phrase with another adjectival noun in (4); both are repeated below. The lemma of Emi’s adjectival noun daizyoobu ‘all right’ should call up a procedure whose abstract representation is overtly realized as the marker na. So how did no get there instead?
As described in our discussion of the hypothetical error (6), no might become part of an NP with an adjectival noun modifier either because the adjectival noun’s lemma is flawed or because a default procedure applies when the functional procedures’ timing demands are not met. Murasugi’s observation that Emi’s errors were structure-dependent enhances the plausibility of an account in terms of a default procedure.

(11) Adjectival nouns: Target form in (4)
     a. *daizyoobu *no neko
         all right *    cat
         ‘the cat that is all right’
(4)  [NP [ANP AdjN] [N′ N]]   e.g., kiree na neko
                                    pretty     cat
Relevant to such an account of children’s overuse of particles are some speech errors produced by normally functioning, native, adult users of Japanese.
There is now sufficient evidence that adults often replace other particles with the nominative marker ga (Hashimoto Ishihara 1991; Iwasaki 1995; Terao 1995b). In Terao’s (1995a) corpus of 3200 speech errors, for example, 100 out of 373 particle errors are overuse of ga. Similarly, overuse of ga is observed among children (e.g., Clancy 1985). It is plausible that both adults and children insert ga as a default case particle when verb lemmas and VP-related procedures are not fully accessed. Interestingly, Terao’s corpus also includes no errors, as illustrated by the sentences in (13). Unlike ga though, these are relatively rare in adult speech errors. But among children, we should consider the possibility that an elevated overuse of no reflects a default NP-related procedure.

(13) a. *maa, zeitaku *no nayami to i-u n des-yoo ne
         well luxurious *  worry say - 
         ‘Well, (it) may be what you call a “luxurious worry”, right?’
     b. *nihon sin-kiroku o maaku si-te migoto *no yuusyoo des-ita
         Japan new-record mark do- splendid *    victory -
         ‘Marking the new Japanese record, (it) was a splendid victory.’
Thus, adults’ overuse of ga and no is describable as the result of processing factors. If we assume continuity in the developing production system, it is plausible to posit that learners have as much difficulty (or more) with the same processes. Again, a learner’s overt error should not necessarily be equated with a lack of grammatical knowledge. The child’s overuse of a particle might, alternatively, indicate slips of the tongue.

3.2 Missing elements

We examine now two very different cases of young children’s missing elements. We begin with the simpler case — one where the child’s utterances are missing elements and yet resemble adult utterances. Then we turn to the more complex case — one where utterances quite unlike adult utterances are missing elements. In both cases, we will argue that the learner’s underlying syntactic representation is not compromised; the unrealized elements have a different explanation.

The simpler case centers on a Japanese child’s omissions of certain case particles. We follow Miyata (1993) and Otsu (1994) in thinking that these omissions are accounted for by linguistic theory; that is, they are grammatical. Describing this in production model terms, the child’s acceptable case particle drop reflects appropriate information in his verb lemmas as well as the relevant
procedural knowledge. To illustrate with specifics, consider the utterances in (14)–(19). They come from Noji’s (1974–1977) diary study of his son Sumihare, who produced these utterances between 1;9 and 1;10.10 The first set in (14)–(16) are instances of his omission of a case particle (its canonical position indicated by ∅).

(14) razio ∅ tat-ta                (Volume 2, p. 261)
     radio   turn-off-
     ‘I turned off the radio.’
(15) ame ∅ oti-ta                  (Volume 2, p. 291)
     candy  fall-
     ‘The candy fell.’
(16) o-udon ∅ at-ta                (Volume 2, p. 292)
     noodles  exist-
     ‘I found noodles.’
Each instance of case particle drop in (14)–(16) is grammatical. Theoretical linguists (e.g., Saito 1982, 1983; Takezawa 1987) point out that the accusative marker o can be dropped in the environment that (14) demonstrates. That is, “when a NP is adjacent to and c-commanded by V, the Case marker attached to it (whether o or ga) can ‘drop’ ” (Takezawa 1987:126). The nominative marker ga can be dropped in the environment that (15) demonstrates, namely with an unaccusative verb, whose subject behaves like the object NP (Kageyama 1993:57). We maintain that the nominative marker ga can also be dropped in the environment that (16) demonstrates. Although less typical than (15)’s verb, ar-u ‘exist’ can also be analyzed as an unaccusative verb.

Although (14)–(16) resemble what adults say, some might think we are making much of an accidental overlap. Relevant to that concern is Clancy’s (1985) observation that overt case particles gradually increase during Japanese language development. Interestingly, Clancy also cites Miyazaki (1979) as claiming that Sumihare did not produce case particles till he was 2;1. But Sumihare produced utterances like (17)–(19) at the same age as he produced (14)–(16), two to three months earlier than Miyazaki observed. (Note that (17) includes the verb ar-u, suggesting that Sumihare’s omission of ga in (16) reflects the appropriate syntactic information in his lemma of that verb.)

(17) mikan ga at-ta                (Volume 2, p. 290)
     tangerine exist-
     ‘There is a tangerine.’
(18) tootyan ga                    (Volume 2, p. 264)
     father
     Noji’s interpretation: ‘The father brought these sands.’
(19) baatyan ga okut-ta            (Volume 2, p. 269)
     Grandma   send-
     ‘Grandma sent (it to me).’
The hypothesis that Sumihare’s utterances in (14)–(19) indicate adult competence comports with evidence from other sources that children appreciate the rules governing case particle drop. In experimental research, Otsu (1994) found that 3- to 4-year-olds correctly interpreted dropped accusative markers in questions like (20). His materials cleverly exploited the fact that a person who does not know that dare ‘who’ can be interpreted as accusative when followed by o or ∅ should treat (20) as ambiguous (meaning either “Who did (someone) knock down?” or “Who knocked (someone) down?”). (20)
Dare-∅ taosi-ta no?
who knock.down-PAST Q
‘Who did (someone) knock down?’
In Otsu’s comprehension task, children’s responses to questions like (20) indicated that they interpreted them as asking about the direct object. And in his production task, the same subjects produced utterances like Sumihare’s. That is, they demonstrated both appropriate overt case particles and appropriate case particle drop. (See also Miyata 1993. Further, in unpublished research following Otsu’s, Iwasaki found similar results with 2- to 3-year-olds: subjects who correctly interpreted the accusative marker o were also likely to correctly interpret its grammatical omission.) We conclude our discussion of Sumihare’s case, then, by reiterating that a speaker’s age should not determine our analysis of his utterances. Instead, it is the nature of his utterances that should guide our hypotheses about his competence. We turn now to the problem of establishing speakers’ competence when their missing elements are clearly not target, and discuss a question about Italian-learners’ mastery of verb–object agreement originally raised by Antinucci and Miller (1976). McKee and Emiliani (1992) challenged Antinucci and Miller’s interpretation of some utterances in their corpus, crucially asking whether clitic objects were represented but not realized in utterances showing overt verb–object agreement. To illustrate, (21) contrasts Antinucci and Miller’s interpretation of an utterance from their corpus with the alternative McKee and Emiliani defended. The utterance in (21a) was produced by a child of 1;8. (Feminine and plural
CECILE MCKEE & NORIKO IWASAKI
marking are indicated in the gloss by FP.) Antinucci and Miller suggested that by this, the child meant (21b).11 That would entail ungrammatical agreement between the verb’s past participle prese ‘taken’ and the full postverbal NP object le calze ‘the socks’, the latter from the utterance’s context. But McKee and Emiliani proposed that the child might have intended the grammatical target (21c) instead. The utterance in (21a) has the postverbal subject io ‘I’, which occurs most naturally with a transitive verb only when the direct object is a preverbal clitic. If McKee and Emiliani’s interpretation is right, the utterance demonstrates grammatical agreement between the main verb and a represented but not realized cliticized object le ‘them’. The order of uttered elements would be fine, and the only missing elements would be ones typically omitted in telegraphic speech. (21) a.
prese io
taken-FP I
*‘I took.’
b. ho preso le calze
have taken-∅ the-FP socks-FP
‘I took the socks.’
c. le ho prese io
them-FP have taken-FP I
‘I took them.’
McKee and Emiliani (1992) tested their interpretation in an experiment designed to elicit utterances like those in (7) and (8); both utterances are repeated below. Differential focus on the subject or object of a target utterance affected whether it was encoded as a pronoun or as a full NP. If an event focused on the agent of an action, utterances describing that event tended to refer to the patient with a full NP. As (7) shows, such sentences contain no verb–object agreement. But if an event focused on the patient of an action, utterances describing that event tended to refer to the patient with a preverbal clitic pronoun. As (8) shows, such sentences contain verb–object agreement. The 2-year-olds in the experiment produced utterances missing various parts (clitics and auxiliary verbs being the most interesting). Crucially, their main verbs showed verb–object agreement in the clitic-eliciting or patient-focus contexts (whether or not their utterances contained overt realizations of the clitics) and no verb–object agreement in the NP-eliciting or agent-focus contexts.12 (7)
ha mangiato i panini
has eaten-∅ the-MP sandwich-MP
‘(She/he) ate the sandwiches.’
(8)
li ha mangiati
them-MP has eaten-MP
‘(She/he) ate them.’
For (21a) too, we can analyze the child’s utterance (nontarget though it is in some aspects) as being generated by a grammar that encodes target competence. Missing elements can result from processes late in the generation of a sentence. And we can hypothesize that the child’s lemmas and structure-building procedures resemble the adult’s, even when her utterances only partly do so.
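The staged view of production invoked here can be made concrete with a toy sketch. To be clear, this code is our illustrative assumption, not the authors’ (or Levelt’s) actual model; every name in it is invented. It shows only the logical point at issue: a functional element (a case particle, a clitic) can be present in the syntactic frame built from lemmas yet dropped at a late realization stage, leaving the rest of the utterance intact.

```python
# Toy sketch (illustrative only, not the model proposed in this chapter):
# a lemma-driven pipeline where functional elements are represented in
# the syntactic frame but may go unrealized at a late stage.
from dataclasses import dataclass

@dataclass
class Lemma:
    """Syntactic word information, retrieved before phonological form."""
    form: str                 # word form eventually retrieved
    category: str             # e.g. "N", "V", "P"
    functional: bool = False  # closed-class element (particle, clitic)

def build_frame(lemmas):
    """Early stages: lemma retrieval and structure building.
    Functional elements ARE part of the frame."""
    return list(lemmas)

def realize(frame, realize_functional=True):
    """Late stage: phonological realization. A failure here drops
    functional elements without touching the syntactic frame."""
    out = []
    for lemma in frame:
        if lemma.functional and not realize_functional:
            continue  # represented but not realized
        out.append(lemma.form)
    return " ".join(out)

# Sumihare-style example modeled on (14), razio (o) tat-ta: the
# accusative particle sits in the frame whether or not it is pronounced.
frame = build_frame([
    Lemma("razio", "N"),
    Lemma("o", "P", functional=True),  # accusative case particle
    Lemma("tat-ta", "V"),
])
print(realize(frame))                            # "razio o tat-ta"
print(realize(frame, realize_functional=False))  # "razio tat-ta"
```

The point of the sketch is simply that the second, child-like output string is derived from the same frame as the first, adult-like one; the difference lies entirely in the late realization step.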
4.
Conclusion
In this paper, we analyzed children’s misused and missing elements in the context of a lemma-driven and multi-staged production model. The perspective on production that we have outlined pulls apart the components of the surface utterance. And it does so in a way that associates functional elements with late processing stages and major class elements with earlier stages. That kind of processing distinction can give us leverage on different developmental courses. And this can be done without necessarily assuming no phrase structure or no representation of functional elements. What this approach to certain so-called “performance errors” brings home is the message that links between closed class elements and lemmas should be examined before we conclude that a child lacks general grammatical support for the elements in question.
Acknowledgments
We are grateful to Merrill Garrett, Yasushi Terao, and Gabriella Vigliocco for their help with this paper.
Notes
1. This approach may reflect the tendency in theoretical linguistics to abstract away from errors, in effect dismissing noisy data as mere performance. While theoretical linguists have made considerable progress toward identifying language competence by doing this, such methods have not contributed as much to psychological models of the dynamic processing of that competence. Of course, we are also limited by the methodological challenge in designing child-friendly measures of real-time language processing. See McKee (1996) for some discussion.
2. Many types of data have contributed to the development of this and similar models of language production: initially, naturally occurring speech errors from normally functioning adults provided the empirical base of such models (Fromkin 1971; Garrett 1975). But see also Kempen (1989) for a review of computer modeling research, Dell (1986) for an example of the experimental elicitation of speech errors, Garrett (1982) for research in aphasia, Vigliocco, Antonini and Garrett (1997) for an example from tip-of-the-tongue research, and Myers-Scotton (1993) for examples from code-switching. Furthermore, much of this research has included crosslinguistic comparisons. As we will examine Japanese utterances later, research by Terao (1989, 1995b) and Iwasaki (1995) on Japanese speech errors should be noted.
3. The hypothesis that syntactic information precedes phonological information finds considerable empirical support in studies on tip-of-the-tongue or TOT states. For example, Vigliocco et al. (1997) found that the grammatical gender of Italian nouns (i.e., lemma information) is available during TOT states (i.e., before lexeme information is fully accessed). In another example, Iwasaki, Vigliocco and Silverberg (1997) found that Japanese speakers in TOT states distinguish the syntactic categories of adjective and adjectival noun. (See also Iwasaki, Vigliocco and Garrett 1998.)
4. For interested readers, we provide one of Levelt’s lemmas to show how rich and how specific that information is proposed to be. The list below summarizes a lemma for give (Levelt 1989: 191):
conceptual specification: CAUSE (X, (GOPOSS (Y, (FROM/TO (X, Z)))))
conceptual arguments: (X, Y, Z)
syntactic category: V
grammatical functions: (SUBJ, DO, IO)
relation to COMP: none
lexical pointer: 713 (an arbitrary number referring to a particular word form among give’s inflections)
diacritic parameters: tense, aspect, mood, person, number, pitch accent
5.
In connection with innateness and bootstrapping debates, it is reasonable to ask if certain kinds of information are necessary in a lemma. Some information, such as syntactic category, might be universally included in lemmas. But other information, like diacritic parameters (see note 4), is more language-specific and might therefore be optional initially. One also wonders what it would mean for a lemma to be learned piecemeal and whether learners start out with “protolemmas” (like the notion of parameters in Universal Grammar). See Deprez (1994) for some related discussion.
6. Regarding the question of whether case particles have their own projections, we refer interested readers to Miyagawa (1989), who states that case particles cliticize onto NPs.
7. This kind of variability probably reflects the production system per se. The language only allows (8) as a way to say (8). Contrast this with our later discussion of Sumihare’s (14)–(16), a case where the language allows variability. Our discussion of (9) also bears on the value of crosslinguistic comparisons in language acquisition research. Most one-word utterances in English cannot distinguish nonretrieval of a functional procedure from retrieval without phonological realization of a functional procedure.
8. In an extensive survey of research on children’s acquisition of Japanese, Clancy (1985) found that most case particles emerged between 1;8 and 2;6. Interestingly, the frequency and accuracy of their use remained non-target for almost a year. The course of development usually proceeded “from failure to use a particle where appropriate to a gradually increasing rate of production until the child’s frequency approximates adult usage” (Clancy 1985: 387). Clancy
recognized that acquisitionists seeking reasons for a child’s failure to produce a particle must appreciate the contexts in which its omission is acceptable in the target system. But in discussing Japanese-learners’ overuse of the nominative marker ga, she speculatively attributed errors like (i) to the multiple functions of this marker, proposing that “some children make a syntactic, or positional hypothesis, namely, that ga follows the first nominal argument in a sentence” (Clancy 1985: 390). To better understand this error, compare (i) to (ii). These come from a child who produced such utterances at the age of 2;1 (Clancy 1985: 389); in fact, both utterances come from the same conversation. Clancy’s competence-based explanation of errors like (i) contrasts with the account one would give for similar errors produced by adults, and also with what we argue for a case of a child’s overusing the particle no. (Note that glosses and translations from here on include some modifications from the originals. All such modifications are clarifications or updates to reflect more recent analyses.)
(i) *o-mizu ga ire-ta noni
water NOM put.in-PAST although
‘although she put water in it’
(ii) mama ga mizu ire-ta no ne
Mama NOM water put.in-PAST PRT PRT
‘Mama put water in it, right?’
9. A related concern with this developmental picture is that utterances like (10) and (11) disappeared when Emi was 4;0, while utterances like (12) remained until she was 4;2. In other words, NPs containing structures that are ambiguously analyzable as relative clauses lost the CP analysis and the complementizer no before the NPs containing structures that can only be analyzed as relative clauses (i.e., sentence modifiers containing verbs). While thought-provoking, such observations cannot bear much weight because it is possible that Emi’s competence grammar passed through stages of development that the longitudinal study missed. Murasugi collected data at 4- to 5-month intervals, each visit to Emi’s home being 3–6 days long.
10.
In Noji’s diary, Sumihare’s age was computed so that 1;9 is equivalent to what would probably be calculated as 1;8 now. Note also that Sumihare’s pronunciation at this time was non-target. For example, he said tat-ta for kit-ta ‘turned off’ in (14).
11. Borer and Wexler (1992) detailed an analysis of the syntax that would generate the ungrammatical verb–object agreement that Antinucci and Miller (1976) proposed. We refer readers to that research but do not discuss it because we question its empirical underpinnings.
12. Only one utterance contained verb–object agreement and a full referring object NP after the verb. See McKee and Emiliani (1992) for discussion of that exception.
References
Antinucci, F. and Miller, R. 1976. “How Children Talk about What Happened.” Journal of Child Language 3: 167–189.
Borer, H. and Wexler, K. 1992. “Biunique Relations and the Maturation of Grammatical Principles.” Natural Language and Linguistic Theory 10: 147–189.
Clancy, P. 1985. “The Acquisition of Japanese.” In The Crosslinguistic Study of Language Acquisition (Vol. 1), D. Slobin (ed.). Hillsdale, NJ: Lawrence Erlbaum.
Dell, G. 1986. “A Spreading Activation Theory of Retrieval in Sentence Production.” Psychological Review 93: 283–321.
Demuth, K. 1994. “On the Underspecification of Functional Categories in Early Grammars.” In Syntactic Theory and First Language Acquisition: Cross-Linguistic Perspectives, Vol. 1, Heads, Projections, and Learnability, B. Lust, M. Suñer and J. Whitman (eds.). Hillsdale, NJ: Lawrence Erlbaum.
Deprez, V. 1994. “Underspecification, Functional Projections, and Parameter Setting.” In Syntactic Theory and First Language Acquisition: Cross-Linguistic Perspectives, Vol. 1, Heads, Projections, and Learnability, B. Lust, M. Suñer and J. Whitman (eds.). Hillsdale, NJ: Lawrence Erlbaum.
Fodor, J. and Garrett, M. 1966. “Some Reflections on Competence and Performance.” In Psycholinguistics Papers, J. Lyons and R. Wales (eds.). Edinburgh: Edinburgh University Press.
Fromkin, V. 1971. “The Non-anomalous Nature of Anomalous Utterances.” Language 47: 27–52.
Garrett, M. 1975. “The Analysis of Sentence Production.” In The Psychology of Learning and Motivation, G. H. Bower (ed.). New York: Academic Press.
Garrett, M. 1976. “Syntactic Processes in Sentence Production.” In New Approaches to Language Mechanisms, E. Walker and R. Wales (eds.). Amsterdam: North Holland Publishers.
Garrett, M. 1980. “The Limits of Accommodation: Arguments for Independent Processing Levels in Sentence Production.” In Errors in Linguistic Performance: Slips of the Tongue, Ear, Pen, and Hand, V. A. Fromkin (ed.). New York: Academic Press.
Garrett, M. 1982. “Production of Speech: Observations from Normal and Pathological Language Use.” In Normality and Pathology in Cognitive Functions, A. W. Ellis (ed.). London: Academic Press.
Garrett, M. 1988. “Processes in Language Production.” In Linguistics: The Cambridge Survey (Vol. 3), F. J. Newmeyer (ed.). Cambridge, UK: Cambridge University Press.
Hashimoto Ishihara, T. 1991.
“Speech Errors and Japanese Case-marking.” Kinjo Gakuin Daigaku Ronsyu, Studies in English Language and Literature 33: 87–108.
Iwasaki, N. 1995. “Slips of the Tongue of Japanese Native Speakers and L2 Learners’ Sentence Processing Difficulties.” Proceedings of the 1995 ATJ Conference on Literature, Language and Pedagogy.
Iwasaki, N., Vigliocco, G. and Garrett, M. 1998. “Adjectives and Adjectival Nouns in Japanese: Psychological Processes in Sentence Production.” Japanese/Korean Linguistics 8. Stanford, CA: Center for the Study of Language and Information.
Iwasaki, N., Vigliocco, G. and Silverberg, N. 1997. “Evidence for a Two-stage Retrieval of Lexical Items: Japanese Adjectives and Adjectival Nouns.” Poster presented at the 10th CUNY Conference on Human Sentence Processing.
Kageyama, T. 1993. Bunpoo to Gokeisei. Tokyo: Hituzi Syoboo.
Kempen, G. 1989. “Language Generation Systems.” In Computational Linguistics: An International Handbook on Computer Oriented Language Research and Applications, I. Batori, W. Lenders and W. Putschke (eds.). Berlin: de Gruyter.
Levelt, W. 1989. Speaking: From Intention to Articulation. Cambridge, MA: MIT Press.
McKee, C. 1994. “What You See Isn’t Always What You Get.” In Syntactic Theory and First Language Acquisition: Cross-Linguistic Perspectives, Vol. 1, Heads, Projections, and Learnability, B. Lust, M. Suñer and J. Whitman (eds.). Hillsdale, NJ: Lawrence Erlbaum.
McKee, C. 1996. “On-line Methods.” In Methods for Assessing Children’s Syntax, D. McDaniel, C. McKee and H. S. Cairns (eds.). Cambridge, MA: MIT Press.
McKee, C. and Emiliani, M. 1992. “Il Clitico: C’è Ma Non Si Vede.” Natural Language and Linguistic Theory 10: 415–437.
Miyata, H. 1993. “The Performance of the Japanese Case Particles in Children’s Speech: With Special Reference to ga and o.” MITA Working Papers 3: 117–136.
Miyagawa, S. 1989. Structure and Case Marking in Japanese: Syntax and Semantics 22. New York, NY: Academic Press.
Miyazaki, M. 1979. The Acquisition of the Two Particles wa and ga in Japanese — A Comparative Study of L1 and L2 Acquisition. Unpublished master’s thesis, University of Southern California. Cited in Clancy (1985).
Murasugi, K. 1991. Noun Phrases in Japanese and English: A Study in Syntax, Learnability and Acquisition. Unpublished doctoral dissertation, University of Connecticut.
Myers-Scotton, C. 1993. Duelling Languages: Grammatical Structure in Codeswitching. Oxford: Clarendon Press.
Noji, J. 1974–1977. Yooziki no Gengo Seikatzu no Zittai (Vols. 1–4). Hiroshima: Bunka Hyooron Publishing Co.
Otsu, Y. 1994. “Case-marking Particles and Phrase Structure in Early Japanese Acquisition.” In Syntactic Theory and First Language Acquisition: Cross-Linguistic Perspectives (Vol. 1), B. Lust, M. Suñer and J. Whitman (eds.). Hillsdale, NJ: Lawrence Erlbaum.
Pinker, S. 1984. Language Learnability and Language Development. Cambridge, MA: Harvard University Press.
Saito, M. 1982. “Case Marking in Japanese: A Preliminary Study.” Unpublished manuscript, MIT.
Saito, M. 1983.
“Case and Government in Japanese.” West Coast Conference on Formal Linguistics 2: 247–259.
Sekiguchi, T. 1995. “The Role of the Head-Direction Parameter in the Acquisition of Japanese Noun Phrases by English and Chinese/Korean Speakers.” Proceedings of the 1995 ATJ Conference on Literature, Language and Pedagogy.
Shirahata, T. 1993. “The Acquisition of Japanese Prenominal Modification Structures and Overgeneralized “NO”: A Case of a Korean Child.” Nihongo Kyooiku, Journal of Japanese Language Teaching 81: 104–115.
Takezawa, K. 1987. A Configurational Approach to Case-Marking in Japanese. Unpublished doctoral dissertation, University of Washington.
Terao, Y. 1989. “Units of Processing in Sentence Production: Evidence from Speech Errors.” MITA Working Papers 2: 79–99.
Terao, Y. 1995a. Seizin no ii ayamari Deeta beesu, TGCORPUS. (Database of adults’ speech errors).
Terao, Y. 1995b. “Bunsansyutu-katei ni okeru Toogobumon Kenkyuu no Tenboo: Zyosi no Hatuwa Deeta o Siryoo to site.” Bulletin of Tokoha Gakuen Junior College 26: 245–255.
Vigliocco, G., Antonini, T. and Garrett, M. 1997. “Grammatical Gender Is on the Tip of Italian Tongues.” Psychological Science 8: 314–317.
Weissenborn, J. 1994. “Constraining the Child’s Grammar: Local Well-Formedness in the Development of Verb Movement in German and French.” In Syntactic Theory and First Language Acquisition: Cross-Linguistic Perspectives, Vol. 1, Heads, Projections, and Learnability, B. Lust, M. Suñer and J. Whitman (eds.). Hillsdale, NJ: Lawrence Erlbaum.
Yokoyama, M. 1990. “Errors of Particle no of Young Japanese Children in Adjective-noun Constructions.” The Japanese Journal of Developmental Psychology 1: 2–9.
Part II: From Input Cues to Syntactic Knowledge
Signal to Syntax: Building a Bridge
LouAnn Gerken
University of Arizona
1.
Introduction
Over the past 15 to 20 years, researchers studying language development have collected considerable data on what information is available in the linguistic signal presented to the language learner, and about what aspects of that signal the learner encodes at different stages of development. Many researchers working in this area have noted that the linguistic signal is influenced by several theoretically defined linguistic levels including phonology, morphology and syntax. This observation has led some to consider the possibility that learners might extract from the signal some information about linguistic regularities at each of these levels. The hypothesis under consideration in this chapter and others in this volume is that learners can extract information about morpho-syntax directly from the signal. This hypothesis is the basis of a general class of approaches to language development variously known as prosodic bootstrapping, phonological bootstrapping, or distributional bootstrapping and is at the heart of the “signal to syntax” discussion. Despite the extreme productivity of research on the linguistic signal and learners’ sensitivity to it, the enterprise has had surprisingly little impact on researchers studying syntax acquisition. Similarly, many of us who work on the signal side are relatively unsophisticated about the syntactic system the child might ultimately acquire. Clearly each researcher cannot know everything about both the signal and syntax. However, I suggest that the time has come to build a bridge, and I offer this chapter as building material. In Section 2, I review the types of research that have been done on the signal side, using a system of classification that I believe is relevant to syntax acquisition. In each of the
subsections in Section 2, I attempt to indicate areas where there is general agreement, areas that need more or different types of research, and areas where there is substantial disagreement. In Section 3, I lay out my understanding of the major issues on the syntax side and suggest ways that the signal to syntax enterprise might and might not bear on these issues.
2.
On the signal side with an eye toward syntax
What information might be in the signal? Basically, all of the cues that will be discussed in this section fall into one of two categories: boundary cues and distributional cues. Boundary cues are acoustically measurable events that tend to occur at the edges of linguistic units; they include pausing, pitch change, and vowel lengthening. Distributional cues are co-occurrences between two aspects of language. They include phonotactic regularities, co-occurrences of syllables comprising a word, correlations of phonetic information and lexical class, co-occurrences between grammatical morphemes and content words, co-occurrences between grammatical morphemes and boundary cues, correlations between syntactic phrase types and utterance positions, and temporal contiguity of sentences of different types. Most boundary and distributional cues have been proposed to play more than one role in syntax acquisition. The following subsections describe these potentially relevant cues and their roles.
2.1 Lexical segmentation
Much of the research that has been done on infants’ encoding of the signal can be characterized not as a direct attempt to get to syntax from the signal, but rather as an attempt to solve the lexical segmentation problem. That is, the research asks, how does the learner extract and store word-sized units from the continuous stream of speech? Research on lexical segmentation is often included under the signal to syntax umbrella, because most theories treat the ability to solve the segmentation problem as a prerequisite to further syntactic development (e.g., Maratsos 1982; Pinker 1984). At least six solutions to the problem of lexical segmentation have been proposed. These include the hypothesis that the learner is exposed to at least some single words uttered in isolation (e.g., Mandel, Jusczyk & Pisoni 1994). In such situations, the left and right utterance boundaries are also the left and right word boundaries, thereby eliminating the segmentation problem. Another hypothesis is that critical words are placed in utterance-final positions, thereby aligning
the right utterance boundary with the right word boundary (Fernald & Mazzie 1991; Fernald & McRoberts 1993; Slobin 1973; Woodward & Aslin 1990). A third hypothesis is that, in many languages, syllables in word-final positions are lengthened compared with non-final syllables; length might therefore cue learners that they are at a right word boundary (Saffran, Newport & Aslin 1996). A fourth hypothesis is that language-specific canonical stress patterns direct both the infant and adult listener to likely word boundaries (Cutler 1990; Cutler & Carter 1987; Cutler & Norris 1988; Jusczyk, Cutler & Redanz 1993; Morgan & Saffran 1995). For example, because most English words begin with a stressed syllable (Cutler & Carter 1987), a reasonable strategy for someone listening to English is to treat stressed syllables as the left boundary of a word. A fifth hypothesis is that infants use phonotactic information to infer the presence of a left or right lexical boundary (Christophe et al. 1994; Hohne & Jusczyk 1994). For example, [ŋ] occurs only at the ends of English syllables, and a listener who was aware of that regularity could insert a right syllable boundary (and therefore a potential word boundary) after any occurrence of that phone. Finally, listeners might use the probability with which one syllable occurs adjacent to another to infer which syllables cohere as a word and which abut only by accident (Morgan & Saffran 1995; Newsome & Jusczyk 1995; Saffran, Aslin & Newport 1996). For example, if [hæm] and [lət] co-occur with sufficient frequency, the listener will come to treat “hamlet” as a coherent unit. Studies on lexical segmentation are typically of two types. One type of study examines infants’ sensitivity to a particular cue by demonstrating that they discriminate two types of stimuli in which the cue is differentially exhibited.
For example, English-learning infants listen longer to words exhibiting the canonical strong-weak pattern of English words than to words exhibiting a weak-strong pattern (Jusczyk et al. 1993). Such studies are open to the criticism that, although infants may be sensitive to a cue, they may not use it in lexical segmentation. The other type of study is one in which infants are exposed to training stimuli in which words are embedded. Whether or not infants extracted those words is assessed in a subsequent test phase. For example, in a technique developed by Jusczyk and Aslin (1995), infants are presented with passages containing several repeated words (e.g., “hamlet”). They are then exposed to word lists in which the training words do or do not appear and listening time to the two lists is measured. Such procedures more convincingly demonstrate that the infant has extracted and stored the word or words in question (also see Echols, Crowhurst & Childers 1997; Höhle & Weissenborn 1998; Newsome & Jusczyk 1995). There appears to be little controversy over which of the solutions to the
lexical segmentation problem described above is the “best”. That is, most researchers working in the field appear to agree that the learner must use some combination of information to accurately locate and store word-sized units. In fact, solutions that depend on language-specific regularities, such as stress patterns and phonotactic cues, require the learner to have isolated a number of words by other means before the regularities can be induced and used in further segmentation (e.g., Saffran, Aslin & Newport 1996). In summary, it is probably safe to say that the majority of research that has been done under the signal to syntax umbrella has addressed the lexical segmentation problem. Researchers in this area have amassed an impressive collection of information on infants’ ability to extract and store words. In the next phase of research, which has already begun, the question becomes how learners begin to assign a semantic interpretation to the word-sized phonetic strings they have isolated (Molfese 1992; Plunkett 1993; Stager & Werker 1996). However, because there is clearly much more to the acquisition of syntax than learning words, we must also begin to focus more directly on other possible relations between the signal and syntax. Some of this research is reviewed in the next few subsections.
2.2 Phrase and clause segmentation
Most language scholars agree that the utterances we produce and comprehend are not strings of words with no internal structure. Rather, words are combined into phrases, which are combined into clauses and sentences. Therefore, just as learners must find words in the speech stream, they must also find these larger linguistic units. Given this description, it is not surprising that researchers often use the term “segmentation” to refer to the isolation of both words and larger units. However, there are some critical differences between lexical segmentation and phrase and clause segmentation.
Most notably, words are stored in the mental lexicon, and comprehending or producing a word requires selecting it over its neighbors. In contrast, most researchers agree that phrases, clauses and sentences are not stored but generated. Although there may be controversy over the basis on which these larger units are constructed, it seems clear that the learner’s task is ultimately to store and retrieve words and to do something else with phrases and clauses. Just what learners might do with the larger units, once they have found them, will be discussed in the next two subsections. However, let us first consider what information they might use to find them. The majority of research on phrase and clause segmentation has focused on prosodic boundary cues. With respect to clauses in particular, pausing appears to
be a universal or near universal cue to these units (e.g., Cruttenden 1986). Indeed, pausing is also used to mark boundaries between structurally important sections in music, suggesting that its significance as a boundary indicator relies on general perceptual properties not specific to language (Jusczyk 1997). In contrast, cues to phrases are not nearly as reliably marked in the linguistic signal (e.g., Fisher & Tokura 1996; Gerken 1996c; Jusczyk 1997). Furthermore, although many languages appear to use some configuration of pausing, pitch change and syllable lengthening at phrase boundaries, the particular configuration of cues is not universal. For example, English makes much more extensive use of final lengthening than does French (Delattre 1966). The patterning of clause and phrase markers across languages suggests that infants may be able to find clause-like units in the signal from very early in development. However, longer exposure to the target language may be required before infants can discern the particular configuration of cues used to mark phrase boundaries. Consistent with this characterization, researchers have found that infants are sensitive to disruptions of acoustic information at clause boundaries at age 6 months, but are not sensitive to similar disruptions at phrase boundaries until 9 months (Kemler Nelson et al. 1989). Much of the research on prosodic cues to phrase and clause segmentation is of the type just described, in which a learner is presented with two types of stimuli, one exhibiting normal prosody and one with a particular component of prosody missing or exhibited in an unusual way. Sensitivity to the prosodic component is inferred if the learner discriminates the two types of stimuli. As noted with respect to the same approach to lexical segmentation, this technique is open to the criticism that demonstrating sensitivity to a particular cue does not imply that the learner uses it for clause or phrase segmentation.
However, the alternative approach used to demonstrate lexical segmentation may not be applicable to studies of phrase and clause segmentation. That is, it may not be possible to expose infants to training stimuli containing particular phrases or clauses and then test for recognition of the same phrases or clauses (but see Mandel, Jusczyk & Kemler Nelson 1994). This is because, as noted earlier, phrases and clauses are probably not stored by the listener in the same way as words are. One method that has been used to determine whether learners use prosody in phrase and clause segmentation assesses the unity of prosodically marked units in perception. Morgan, Swingley and Miritai (1993) inserted extraneous noises either between or within prosodically marked units in sentences. They trained ten-month-olds to turn their heads when they detected these noises and found that the infants were more likely to demonstrate detection of noises between prosodic units than within a unit.
LOUANN GERKEN
Another, less direct, form of evidence that listeners can use prosody to segment the speech stream into linguistically relevant chunks comes from adult artificial language learning studies (Morgan, Meier & Newport 1987; Morgan & Newport 1981). In one study, adults were presented with correctly segmented, incorrectly segmented or unsegmented strings from an artificial grammar. They were subsequently tested on new strings that were either generated by the same grammar or were ungrammatical in some way. Adults initially presented with correctly segmented strings were better than the other two groups at discriminating grammatical vs. ungrammatical strings, suggesting that segmented input led to superior learning (Morgan, Meier & Newport 1987). Perhaps this technique can be applied to studies with infants. Recent research suggests that 11-month-olds can successfully learn artificial grammars with very little exposure (Gomez & Gerken 1999). Therefore, it is possible that a manipulation like the one used by Morgan and his colleagues will allow us to determine whether infants are able to use prosodic segmentation cues in the service of syntax learning.

Even if future studies demonstrate that infants use prosodic information to locate the boundaries of linguistic units larger than the word, it is important to note that prosodic boundary information is governed not by syntactic units like NPs, but by prosodic units like phonological phrases (Dresher 1996; Hayes 1989; Nespor & Vogel 1986; Selkirk 1996). Data from both infant speech perception and child production suggest that prosodic units are salient to young learners (Demuth 1996; Fee 1992; Fikkert 1994; Gerken 1996a; Gerken 1996b; Gerken, Jusczyk & Mandel 1994). Because nearly every prosodic boundary is also a syntactic boundary, the non-isomorphism between prosodic structure and syntactic structure is probably not an impediment to learners using prosody to locate syntactic boundaries, although it may lead to undersegmented input.
However, this non-isomorphism may indeed pose a problem for accounts in which learners use prosody to do more than chunk the linear stream of speech into syntactic units. This problem will be discussed in more detail in the next subsection.

In addition to prosody, the set of highly frequent grammatical morphemes, such as articles and auxiliary verbs, may also provide cues to phrase segmentation (Carter & Gerken 1996; Clark & Clark 1977; Gerken, Landau & Remez 1990; Golinkoff, Hirsh-Pasek & Schweisguth this volume; Kimball 1973; Shady & Gerken 1999; Valian & Coulson 1988). Grammatical morphemes in a particular language tend to share phonetic properties that might make them salient in the overall segmental and suprasegmental character of the language (Gerken 1996a; Gerken, Landau & Remez 1990; Jakobson & Waugh 1987; Morgan, Allopenna & Shi 1996). They also tend to occur at the beginnings and
ends of phrases and might serve to cue phrase boundaries. Consider the potential role of grammatical morphemes in phrase segmentation in the sentence “The dog chased the cat.” In child-directed speech, there is likely to be some prosodic marking of the phrase boundary between “dog” and “chased” but no prosodic marking of the boundary between the verb and object NP. Note, however, that if a learner is exposed to enough prosodically marked units beginning with “the”, she or he might infer that “the” occurs at the beginnings of some linguistic units (Juliano & Bever 1990). Consistent with the hypothesis that learners are sensitive to the presence of grammatical morphemes in the speech stream, infants as young as 10½ months of age discriminate normal English passages from those containing nonsense syllables in place of grammatical morphemes (Shady 1996; Shafer et al. 1998; also see Höhle & Weissenborn 1998). However, as noted with respect to prosody, demonstrating sensitivity to a cue by no means implies that learners use that cue in segmentation. Perhaps the artificial grammar learning technique described above will prove useful in more rigorous tests of grammatical morphemes as segmentation cues. That is, perhaps grammars containing morpheme-like elements at the periphery of phrases will be easier to learn than grammars without such elements.

In summary, infants and young children appear to be sensitive to aspects of prosody and grammatical morphemes in the signal. The limited work that has asked whether infants might in fact use prosody in phrase and clause segmentation suggests that they do. No similar data have yet been gathered with respect to grammatical morphemes. Clearly, phrase and clause segmentation is an important area for future studies on learners’ ability to extract syntactically relevant information from the signal.
2.3 Syntactic structure

Researchers on the signal side have hypothesized that prosody might play a role not only in locating linguistically relevant chunks in the linear stream of speech, but also in assigning a hierarchical structure to those chunks (i.e., syntactic bracketing). For example, one piece of information that the learner might extract from the signal is the basic generalization that sentences are composed of an NP and a VP. As noted in the previous subsection, prosodic boundary information appearing after the subject in a sentence like (1a), below, might serve to highlight such a structure. However, the observation that prosodic boundary information occurs at prosodic boundaries, and not necessarily at all syntactic boundaries, suggests that the story may not be so simple. For example, the same sentence
with a pronoun subject (1b) contains no prosodic boundary information separating the subject NP from the VP. The situation is still worse in (1c), where there is likely to be a prosodic boundary marker between the verb and object NP, but not between the subject and verb (Gerken 1996c; Gerken, Jusczyk & Mandel 1994).

(1) a. The dog / chased the cat.
    b. He chased the cat.
    c. He chased / the big old cat.
Perhaps prosodic boundary information could be combined with cross-sentence distributional information to inform the learner about basic phrase structure. For example, a child hearing (1a) immediately followed by (1b) might use the prosodic information in (1a) and the overall similarity between the two sentences to infer that the pronoun is the subject NP in (1b), even though it is not prosodically separated from the VP (e.g., Gerken et al. 1994).

The problem of assigning syntactic structure from prosodic cues becomes somewhat thornier when one considers prosodic cues to finer attachment distinctions (e.g., Lederer & Kelly 1991; Morgan 1986). In sentence (2a), the PP “on the table” is attached to the verb, but in (2b), it is attached to the object NP. Could a learner discover these different attachments from prosodic cues? Consider an ideal case in which a careful talker gave consistently different prosodic renderings of the two attachments. For example, the talker always and only pauses after “baby” when the NP-attachment meaning is intended. However, for a learner to use such information, she or he would not only have to discriminate the prosodic differences between the sentences, but also already have in place the requisite linking information that pausing indicates NP attachment while no pausing indicates verb attachment. Thus, even the best-case scenario does not allow syntactic structure to be “read” off the signal (for further discussion, see Gerken 1996c; Nicol 1996). Rather, it seems likely that only knowledge of verb subcategorization information would allow the learner to determine attachment (e.g., “put” requires a PP, but “get” does not; see Landau & Gleitman 1985; Gleitman this volume).

(2) a. Put the baby on the table.
    b. Get the baby on the table.
Mazuka (1996) noted different degrees of prosodic marking for sentences with different phrase or clause attachments in both Japanese and English. From these prosodic differences, she hypothesized that learners could set their “Principal Branching Direction” parameter (Lust & Chien 1984). As discussed in the previous paragraph, one might ask how the learner knows which pattern of
prosodic marking is associated with which syntactic structure. However, it is also possible to interpret Mazuka’s hypothesis as only concerning the markedness of certain syntactic structures in a language. Thus, a left-branching, subordinate-main clause order is more prosodically marked than a right-branching main-subordinate order in a right-branching language like English. Conversely, right dislocation is more prosodically marked than straight left branching in a left-branching language like Japanese. On such an interpretation, prosody might serve as an indirect cue to syntactic structure by informing the learner about which sentences are non-canonical.

In summary, it seems unlikely that prosody plays a direct role in cueing syntactic structure. However, the possibility remains that prosody cues syntactic structure indirectly, either in conjunction with cross-sentence comparisons or by informing the learner about which sentences are structurally unusual in the language. Both of these forms of indirect cueing deserve much more study.

2.4 Syntactic categories

Although the signal does not appear to provide direct information about syntactic structure, it may aid the learner in distinguishing syntactic categories, such as noun, NP, etc. At least three cues to syntactic categories can be identified. One is grammatical morphemes, including articles and auxiliary verbs. These elements are what make it possible for an adult to identify the nouns and verb in meaningless sentences like “The zigs are riffing the nug”. Nearly every theory of language acquisition asserts that children use grammatical morphemes to assign syntactic categories at some stage of development. This assertion has been supported in computer models of phrase segmentation and identification (e.g., Brent 1992; Juliano & Bever 1990).
However, because young children typically fail to produce grammatical morphemes in their early utterances, many researchers have hypothesized that they do not use these cues from the beginning of language acquisition (e.g., Pinker 1984; Schlesinger 1981). Although the production data continue to foster a debate about infants’ and young children’s sensitivity to grammatical morphemes (see next subsection), a growing body of data suggests that learners are tacitly aware of particular morphemes and the syntactic categories with which they are correlated by the age of 16 to 18 months (Santelmann & Jusczyk 1997; Shady 1996). Furthermore, sentence comprehension in 24-month-old single-word talkers is adversely affected when sentences contain incorrectly used grammatical morphemes (Carter & Gerken 1996; Gerken & McIntosh 1993; Golinkoff et al., this volume; Shady & Gerken 1999). However, further research is necessary to demonstrate that learners in this
age range are able to use grammatical morphemes in syntactic category assignment.

Another potential cue to syntactic categories is positional information. For example, nouns tend to occur in certain lexical contexts and verbs in others. To examine the role of such information, Mintz (1996) performed a cluster analysis on each word in a corpus of child-directed sentences based on the word that preceded and the word that followed it. He found that this kind of positional information yielded above-chance categorization of both nouns and verbs, even in utterances from which a subset of grammatical morphemes was removed. It is important to note that positional information might mislead the child into treating subject nouns and object nouns as separate classes, should a different distribution of lexical items occur in the two positions. Such a misclassification might arise, for example, if subjects in child-directed sentences tend to be animate nouns and objects inanimate ones, or if subjects tend to be pronouns and objects lexical nouns. Perhaps the learner might use a small number of words that occur in both subject and object positions to create a single noun category. Such potential differences in the distribution of subject and object nouns may require positional information to be augmented with other cues (see below).

The third potential cue to syntactic categories is phonetic differences among categories. Kelly (1988) noted that English nouns are more likely than verbs to be stressed on their initial syllable (also see Cassidy & Kelly 1991; Sereno & Jongman 1995). Perhaps learners, once they have identified a set of nouns and verbs by other means, could use such information to aid in the classification of new words.

In summary, grammatical morphemes, positional information and phonetic differences among syntactic classes have all been proposed to play a role in syntactic category acquisition.
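The kind of positional analysis described above can be illustrated with a toy sketch. The mini-corpus and the frame-based grouping below are invented for illustration and are not Mintz’s actual clustering procedure; they show only how shared immediate lexical contexts can group words into candidate categories:

```python
# Toy illustration of positional categorization: words that occur in the same
# lexical frame -- identical preceding and following words -- are grouped.
from collections import defaultdict

# Hypothetical mini-corpus of child-directed-style sentences.
corpus = [
    "the dog chased the cat",
    "the boy chased the ball",
    "a dog sees the cat",
    "a boy sees a ball",
]

frames = defaultdict(set)
for sentence in corpus:
    words = ["<s>"] + sentence.split() + ["</s>"]
    for prev, word, nxt in zip(words, words[1:], words[2:]):
        frames[(prev, nxt)].add(word)

# Frames containing two or more words suggest a shared category:
# determiners, nouns and verbs fall into separate groups in this toy corpus.
for frame, members in sorted(frames.items()):
    if len(members) > 1:
        print(frame, sorted(members))
```

Even this crude version shows the problem noted in the text: whether two frames’ word sets should be merged into one category depends on lexical overlap between positions, which a real corpus does not guarantee.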
One important issue that deserves more discussion than it has recently been given is whether the categories are innate and need only be filled in through the use of cues in the signal, or whether the categories themselves can be induced from these cues. There are several logical problems associated with the latter view. Take, for example, the above discussion of positional information. If children were exposed to sentences in which not all nouns that occurred as subjects also occurred as objects and vice versa, only a few nouns used in both positions might be sufficient to place both subjects and objects into an innate category “noun”. However, it is not clear how much lexical overlap between the two positions would be required for a child without innate syntactic categories to decide that subjects and objects belong to one category, not two. (For an enlightening discussion of related issues, see Gleitman 1982.) Thus, although there are several possible cues to syntactic categories in the signal, just how learners might use these cues needs further consideration.
2.5 Grammatical morphemes

In Subsections 2.2 and 2.4, above, grammatical morphemes were proposed as cues for phrase segmentation and syntactic category assignment. However, as noted, many researchers have taken the lack of grammatical morphemes in children’s production to indicate their lack of tacit knowledge of these elements. Some of these researchers have further explored the notion that the order in which particular grammatical morphemes appear in children’s utterances reflects the order in which they become sensitive to them in the signal (Hung 1996; Peters & Strömqvist 1996). In particular, these researchers hypothesize that the prosodic contexts in which different morphemes occur influence the speed at which children will acquire them. Numerous studies have indeed demonstrated that the prosodic context in which a morpheme occurs influences the likelihood that a child will produce it (Demuth 1992; Demuth 1994; Gerken 1991; Gerken 1996b; McGregor & Leonard 1994; Wijnen, Krikhaar & den Os 1994). For example, children are more likely to retain the object article in a sentence like (3a), which contains a non-syllabic verb inflection, than in (3b), which contains a syllabic inflection (Gerken 1996b). One possible reason for this difference is that the article forms a SW (strong-weak) foot with the verb in (3a) and is unfooted in (3b). The main question in this area of research is whether the prosodic effects seen in children’s omissions reflect their perception of the signal or constraints on utterance planning and production.

(3) a. He hugs the dog.
    b. He kisses the dog.
The notion that the prosodic context in which a grammatical morpheme appears influences whether it will be perceived and therefore acquired poses a paradox for the signal to syntax enterprise. On the one hand, grammatical morphemes provide a potentially critical cue to aspects of syntax, but only if children have a more complete knowledge of them than their early productions might suggest. On the other hand, if we take children’s early productions to reflect their knowledge of grammatical morphemes (rather than a type of output constraint), the usefulness of these morphemes as cues to syntax is greatly reduced. As noted in Subsection 2.4, above, there is a growing body of data supporting the notion that learners are sensitive to grammatical morphemes, despite their frequent failure to produce them. In particular, 16- to 18-month-olds discriminate sentences containing correctly vs. incorrectly used grammatical morphemes (Santelmann & Jusczyk 1997; Shady 1996). Similarly, single-word talkers exhibit differential comprehension for utterances with correctly vs. incorrectly used morphemes
(Carter & Gerken 1996; Gerken & McIntosh 1993; Shady & Gerken 1999). Such data support a production constraint account of grammatical morpheme omissions. However, the exact nature of this constraint requires further study (e.g., Carter 1999). An important issue for further discussion is how much infants and children tacitly know about function morphemes and how best to interpret the apparently conflicting perception vs. production data. Ignoring this issue allows a potentially unnecessary paradox for the signal to syntax enterprise to linger.
3. On the syntax side and a sketch of a bridge to the signal
Why should researchers on the syntax side care about the signal? To answer that question, it might be useful to consider the issues that have motivated research in syntax acquisition. During the late 1950s and early 1960s, Chomsky and his students and colleagues challenged previously held notions of grammar and its acquisition (e.g., Chomsky 1965). It is possible to think of these challenges as being made on three fronts. First, the feasibility of distributional analysis as a mechanism for language acquisition was questioned. The second challenge was that even if a distributional analysis could provide the correct characterization of surface phrase structure, surface structure alone does not adequately explain the patterns observed in a single language or, more critically, syntactic universals. An account of these data, it was argued, requires a transformational grammar. The third challenge is the poverty of the stimulus argument, in which a learner without negative evidence can never be guaranteed to induce the correct grammar based on the necessarily limited number of strings encountered. Based on these challenges to the feasibility of grammar induction, the next 40 years of research in syntax acquisition have constituted a search for evidence of specific innate syntactic principles/parameters/constraints.

Does the signal to syntax enterprise bear on any of these three challenges and therefore on the direction of future syntax acquisition research? Let us consider each challenge individually. The first challenge rests on two assumptions: that an unconstrained distributional analysis would be performed over all words in all of the utterances that the child had encountered, and that children have no way of discriminating elements that nearly always co-occur from those that co-occur only by chance. With respect to the first assumption, the notion that distributional analyses are unconstrained may be unwarranted.
The data on phrase and clause segmentation potentially limit the within-utterance domain over which distributional analyses are performed (Gerken 1996a; Mintz 1996; Morgan,
Allopenna & Shi 1996). Furthermore, if learners are sensitive to grammatical morphemes as a phonologically defined class, they could restrict their search for co-occurrences between members of open and closed class categories (Morgan, Allopenna & Shi 1996). These hypotheses, combined with the assumption that the learner only performs between-utterance analyses over a few temporally contiguous samples, appear to greatly increase the feasibility of distributional analysis as a route to at least basic surface phrase structure.

What of the statistical problem, in which the child must discriminate elements that frequently co-occur from those that co-occur only by chance? Such discrimination is particularly critical for the potential usefulness of distributional cues. Recent lexical segmentation research by Saffran, Aslin and Newport (1996) suggests that learners may in fact be sensitive to transitional probabilities across strings of nonsense syllables. In their study, infants were trained on four three-syllable nonsense words presented in random order with no prosodic word boundary cues. During test, infants were able to discriminate the nonsense words from other three-syllable sequences created from the last syllable of one nonsense word and the first two syllables of another, even though they had also heard these sequences during training. The basis of infants’ discrimination appears to be that the syllables within a nonsense word co-occurred 100% of the time, while syllables created from parts of two nonsense words co-occurred less frequently. Although we must remember that the statistical sensitivity needed to extract words from short strings is of a very different magnitude than the statistical sensitivity that would be required for syntax acquisition, this study and others re-open the possibility that learners could acquire surface phrase structure from cues in the signal.
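The statistic at issue can be made concrete with a small simulation. This is a sketch, not a reimplementation of Saffran et al.’s stimuli; the four nonsense words and the stream length are invented for illustration:

```python
# Sketch of transitional-probability tracking over a pause-free syllable stream.
import random
from collections import Counter

words = ["bidaku", "padoti", "golabu", "tupiro"]   # invented nonsense words

def syllabify(word):
    # Each "syllable" here is a two-letter chunk, e.g. bidaku -> bi, da, ku.
    return [word[i:i + 2] for i in range(0, len(word), 2)]

random.seed(1)
stream = []
for _ in range(300):                   # concatenate words with no boundary cues
    stream.extend(syllabify(random.choice(words)))

# Transitional probability P(next | current) = count(current, next) / count(current).
pair_counts = Counter(zip(stream, stream[1:]))
syll_counts = Counter(stream[:-1])
tp = {pair: n / syll_counts[pair[0]] for pair, n in pair_counts.items()}

print("within-word ('bi' -> 'da'):", tp[("bi", "da")])
print("across-boundary ('ku' -> 'pa'):", round(tp.get(("ku", "pa"), 0.0), 2))
```

Within-word transitions occur with probability 1.0, whereas a word-final syllable is followed by any of the four word-initial syllables, so its transitional probabilities are far lower; a learner tracking this statistic could posit word boundaries at the dips.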
But does re-opening this possibility in any way change what we must hypothesize to be innate? Turning to the second challenge to induction, if the mature target is a transformational grammar, the answer may be “no”. Although signal-derived knowledge of surface phrase structure may change a few assumptions about the way in which innate syntactic knowledge is triggered, it probably does nothing to change the logical problem of language acquisition outlined by Chomsky and others. However, other non-transformational grammars have also been proposed. Unfortunately, we know little about the logical implications of adopting such grammars in light of the signal to syntax enterprise. Perhaps as researchers on the signal side attempt to make more concrete proposals about what morpho-syntactic information can be extracted from the signal, they will need to consider more carefully the grammar that the learner ultimately acquires. Similarly, researchers on the syntax side should note that, if one type of grammatical representation of a particular utterance is extractable from the signal and
another representation for the same utterance is not, the extractable representation should be treated as a priori more plausible.

With respect to the poverty of the stimulus argument, this challenge to induction combines two separable problems facing the child. One is statistical: how does the learner avoid treating occasional adult performance errors as data for grammar induction? Perhaps infants’ newly demonstrated ability to discriminate very frequent from less frequent patterns can handle this problem. The other problem concerns how far to generalize any piece of information and what sorts of generalizations to draw. This problem is inherent to any induction system and cannot be eradicated by any information in the signal. At the very least, some bias to generalize and some constraint against doing so too far must be innate. However, any innate principles governing generalization might be more or less specific to the language acquisition task per se depending on how much information is available in the signal. That is, if a great deal of information is available in the signal, then only general-purpose innate constraints on induction may be needed to acquire language. However, if the structure of language presents a unique problem space for the learner because it cannot be acquired by induction, then it may require domain-specific innate constraints (e.g., parameters).

In summary, researchers on the signal side of the signal to syntax discussion have amassed a solid foundation of data on infants’ and young children’s sensitivity to the linguistic signal. It is not yet clear, at least to me, whether any of these data should change the way many researchers on the syntax side approach their work. However, it is clear that it is time to build a bridge between the signal and syntax sides and to reopen discussions of what the learner’s sensitivity to the signal might say about the logical problem of syntax acquisition.
References

Brent, M. 1992. “Automatic acquisition of subcategorization frames from unrestricted English.” Ph.D. dissertation, MIT, Cambridge, MA.
Carter, A. K. 1999. “An integrated acoustic and phonological investigation of weak syllable omissions.” Ph.D. dissertation, University of Arizona, Tucson.
Carter, A. and Gerken, L. 1996. “Children’s use of grammatical morphemes in on-line sentence comprehension.” In Proceedings of the twenty-eighth annual child language research forum, E. Clark (ed.). Palo Alto, CA: Stanford University Press.
Cassidy, K. and Kelly, M. 1991. “Phonological information for grammatical category assignments.” Journal of Memory and Language 30:348–369.
Chomsky, N. 1965. Aspects of the Theory of Syntax. Cambridge, MA: MIT Press.
Christophe, A., Dupoux, E., Bertoncini, J. and Mehler, J. 1994. “Do infants perceive word boundaries? An empirical approach to the bootstrapping problem for lexical acquisition.” Journal of the Acoustical Society of America 95:1570–1580.
Clark, H. H. and Clark, E. V. 1977. Psychology of language: An introduction to psycholinguistics. New York: Harcourt Brace Jovanovich.
Cruttenden, A. 1986. Intonation. Cambridge, England: Cambridge University Press.
Cutler, A. 1990. “Exploiting prosodic probabilities in speech segmentation.” In Computational and psychological approaches to language processes, G. Altmann (ed.). Cambridge, MA: MIT Press.
Cutler, A. and Carter, D. 1987. “The predominance of strong initial syllables in the English vocabulary.” Computer Speech and Language 2:133–142.
Cutler, A. and Norris, D. 1988. “The role of strong syllables in segmentation for lexical access.” Journal of Experimental Psychology: Human Perception and Performance 14:113–121.
Delattre, P. 1966. “A comparison of syllable length conditions across languages.” International Review of Applied Linguistics IV(3):183–198.
Demuth, K. 1992. “Competence or performance? What phonology shows about children’s emerging syntax.” Boston: Boston University Conference on Language Development.
Demuth, K. 1994. “On the underspecification of functional categories in early grammars.” In Syntactic theory and first language acquisition, B. Lust (ed.). Hillsdale, NJ: Lawrence Erlbaum Associates.
Demuth, K. 1996. “The prosodic structure of early words.” In Signal to syntax, J. Morgan and K. Demuth (eds.). Mahwah, NJ: Erlbaum.
Dresher, E. 1996. “Introduction to metrical and prosodic phonology.” In Signal to syntax, J. Morgan and K. Demuth (eds.). Mahwah, NJ: Erlbaum.
Echols, C., Crowhurst, M. and Childers, J. B. 1997. “The perception of rhythmic units in speech by infants and adults.” Journal of Memory and Language 36:202–225.
Fee, E. J. 1992.
“Exploring the minimal word in early phonological acquisition.” Proceedings of the 1992 Annual Conference of the Canadian Linguistics Association.
Fernald, A. and Mazzie, C. 1991. “Prosody and focus in speech to infants and adults.” Developmental Psychology 27:209–221.
Fernald, A. and McRoberts, G. 1993. “Effects of prosody and word position on infants’ lexical comprehension.” Paper read at Boston University Conference on Language Development, October, at Boston.
Fikkert, P. 1994. On the acquisition of prosodic structure. Dordrecht: Holland Institute of Generative Linguistics.
Fisher, C. and Tokura, H. 1996. “Prosody in speech to infants: Direct and indirect acoustic cues to syntactic structure.” In Signal to syntax, J. Morgan and K. Demuth (eds.). Mahwah, NJ: Erlbaum.
Gerken, L. 1991. “The metrical basis for children’s subjectless sentences.” Journal of Memory and Language 30:431–451.
Gerken, L. 1996a. “Phonological and distributional cues to syntax acquisition.” In Signal to syntax, J. Morgan and K. Demuth (eds.). Mahwah, NJ: Erlbaum.
Gerken, L. 1996b. “Prosodic structure in young children’s language production.” Language 72:683–712.
Gerken, L. 1996c. “Prosody’s role in language acquisition and adult parsing.” Journal of Psycholinguistic Research 25:341–352.
Gerken, L., Jusczyk, P. and Mandel, D. 1994. “When prosody fails to cue syntactic structure: 9-month-olds’ sensitivity to phonological versus syntactic phrases.” Cognition 51:237–265.
Gerken, L., Landau, B. and Remez, R. E. 1990. “Function morphemes in young children’s speech perception and production.” Developmental Psychology 27:204–216.
Gerken, L. and McIntosh, B. 1993. “The interplay of function morphemes and prosody in early language.” Developmental Psychology 29:448–457.
Gleitman, L. and Wanner, E. 1982. “The state of the state of the art.” In Language acquisition: The state of the art, E. Wanner and L. Gleitman (eds.). Cambridge, England: Cambridge University Press.
Gomez, R. L. and Gerken, L. A. 1999. “11-month-olds are sensitive to structure in an artificial grammar.” Cognition 70:109–135.
Hayes, B. 1989. “The prosodic hierarchy in meter.” In Phonetics and phonology: Rhythm and meter, P. Kiparsky and G. Youmans (eds.). San Diego, CA: Academic Press.
Höhle, B. and Weissenborn, J. 1998. “Sensitivity to closed-class elements in preverbal children.” In Proceedings of the 22nd Annual Boston University Conference on Language Development, A. Greenhill, M. Hughes, H. Littlefield and H. Walsh (eds.). Somerville, MA: Cascadilla Press.
Hohne, E. and Jusczyk, P. 1994. “Two-month-old infants’ sensitivity to allophonic differences.” Perception and Psychophysics 56:613–623.
Hung, F. 1996. “Prosody and the acquisition of grammatical morphemes in Chinese languages.” Ph.D. dissertation, University of Hawaii, Honolulu, HI.
Jakobson, R. and Waugh, L. 1987. The sound shape of language. Berlin: Mouton de Gruyter.
Juliano, C. and Bever, T. G. 1990.
“Clever moms: Regularities in motherese that prove useful in parsing.” Paper read at CUNY Sentence Processing Conference, March, at New York.
Jusczyk, P. 1997. The discovery of spoken language. Cambridge, MA: MIT Press.
Jusczyk, P. and Aslin, R. 1995. “Infants’ detection of the sound patterns of words in fluent speech.” Cognitive Psychology 29:1–23.
Jusczyk, P., Cutler, A. and Redanz, N. 1993. “Infants’ sensitivity to predominant word stress patterns in English.” Child Development 64:675–687.
Kelly, M. 1988. “Rhythmic alternation and lexical stress differences in English.” Cognition 30:107–137.
Kemler Nelson, D., Hirsh-Pasek, K., Jusczyk, P. and Wright Cassidy, K. 1989. “How prosodic cues in motherese might assist language learning.” Journal of Child Language 16:53–68.
Kimball, J. 1973. “Seven principles of surface structure parsing in natural languages.” Cognition 2:15–47.
Landau, B. and Gleitman, L. 1985. Language and experience. Cambridge, MA: Harvard University Press.
Lederer, A. and Kelly, M. 1991. “Prosodic correlations to the adjunct/complement distinction in motherese.” Papers and Reports on Child Language Development 30.
Lust, B. and Chien, Y. C. 1984. “The structure of coordination in first language acquisition of Mandarin Chinese.” Cognition 17:49–83.
Mandel, D., Jusczyk, P. and Kemler Nelson, D. 1994. “Does sentential prosody help infants organize and remember speech information?” Cognition 53:155–180.
Mandel, D., Jusczyk, P. and Pisoni, D. B. 1994. “Infants’ recognition of the sound patterns of their own names.” Psychological Science 6:314–317.
Maratsos, M. 1982. “The child’s construction of grammatical categories.” In Language acquisition: The state of the art, E. Wanner and L. Gleitman (eds.). Cambridge, England: Cambridge University Press.
Mazuka, R. 1996. “Can a parameter be set before the first word?” In Signal to syntax, J. L. Morgan and K. Demuth (eds.). Mahwah, NJ: Erlbaum.
McGregor, K. and Leonard, L. B. 1994. “Subject pronoun and article omissions in the speech of children with specific language impairment: A phonological interpretation.” Journal of Speech and Hearing Research 37:171–181.
Mintz, T. 1996. “The roles of linguistic input and innate mechanisms in children’s acquisition of grammatical categories.” Ph.D. dissertation, University of Rochester, Rochester, NY.
Molfese, D. 1992. “Short- and long-term auditory recognition memory in 14-month-old human infants: Electrophysiological correlates.” Developmental Neuropsychology 8:135–160.
Morgan, J. 1986. From simple input to complex grammar. Cambridge, MA: MIT Press.
Morgan, J., Allopenna, P. and Shi, R. 1996. “Perceptual bases of rudimentary grammatical categories: Toward a broader conception of bootstrapping.” In Signal to syntax, J. Morgan and K. Demuth (eds.). Mahwah, NJ: Erlbaum.
Morgan, J., Meier, R. and Newport, E. 1987.
“Structural packaging in the input to language learning.” Cognitive Psychology 19:498–550.
Morgan, J. and Newport, E. 1981. “The role of constituent structure in the induction of an artificial language.” Journal of Verbal Learning and Verbal Behavior 20:67–85.
Morgan, J. and Saffran, J. 1995. “Emerging integration of sequential and suprasegmental information in preverbal speech segmentation.” Child Development 66:911–936.
Morgan, J., Swingley, D. and Miritai, K. 1993. “Infants listen longer to extraneous noises inserted at clause boundaries.” Paper read at Society for Research in Child Development, April, at New Orleans, LA.
Nespor, M. and Vogel, I. 1986. Prosodic phonology. Dordrecht: Foris.
Newsome, M. and Jusczyk, P. 1995. “Do infants use stress as a cue for segmenting fluent speech?” In Proceedings of the 19th Annual Boston University Conference on Language Development, 2, D. MacLaughlin and S. McEwen (eds.). Somerville, MA: Cascadilla Press.
164
LOUANN GERKEN
Nicol, J. 1996. “What can prosody tell a parser?” Journal of Psycholinguistic Research 25:179–192.
Peters, A. and Strömqvist, S. 1996. “The role of prosody in the acquisition of grammatical morphemes.” In Signal to syntax, J. L. Morgan and K. Demuth (eds.). Mahwah, NJ: Erlbaum.
Pinker, S. 1984. Language learnability and language development. Cambridge, MA: Harvard University Press.
Plunkett, K. 1993. “Lexical segmentation and vocabulary growth in early language acquisition.” Journal of Child Language 20:43–60.
Saffran, J., Aslin, R. and Newport, E. 1996. “Statistical learning by 8-month-old infants.” Science 274:1926–1928.
Saffran, J., Newport, E. and Aslin, R. 1996. “Word segmentation: The role of distributional cues.” Journal of Memory & Language 35:606–621.
Santelmann, L. and Jusczyk, P. 1997. “What discontinuous dependencies reveal about the size of the learner’s processing window.” In Proceedings of the 21st Boston University Conference on Language Development, E. Hughes (ed.). Somerville, MA: Cascadilla Press.
Schlesinger, I. M. 1981. “Semantic assimilation in the development of relational categories.” In The child’s construction of language, W. Deutsch (ed.). London: Academic Press.
Selkirk, E. 1996. “The prosodic structure of function words.” In Signal to syntax, J. Morgan and K. Demuth (eds.). Mahwah, NJ: Erlbaum.
Sereno, J. and Jongman, A. 1995. “Acoustic correlates of grammatical class.” Language and Speech 38:57–76.
Shady, M. E. and Gerken, L. A. 1999. “Grammatical and caregiver cues in early sentence comprehension.” Journal of Child Language 26:1–13.
Shady, M., Gerken, L. and Jusczyk, P. 1995. “Some evidence of sensitivity to prosody and word order in ten-month-olds.” In Proceedings of the 19th Boston University Conference on Language Development, D. MacLaughlin and S. McEwan (eds.). Somerville, MA: Cascadilla Press.
Shady, M. E. 1996. “Infants’ sensitivity to function morphemes.” Ph.D. dissertation, State University of New York at Buffalo, Buffalo, NY.
Shafer, V., Shucard, D., Shucard, J. and Gerken, L. 1998. “‘The’ and the brain: An electrophysiological study of infants’ sensitivity to English function morphemes.” Journal of Speech, Language, and Hearing Research 41:1–11.
Slobin, D. I. 1973. “Cognitive prerequisites for the acquisition of grammar.” In Studies of child language development, C. A. Ferguson and D. I. Slobin (eds.). New York: Holt, Rinehart & Winston.
Stager, C. and Werker, J. 1996. “The acquisition of word-object associations: Does phonetic similarity make a difference?” Paper read at International Conference on Infant Studies, April, at Providence, RI.
Valian, V. and Coulson, S. 1988. “Anchor points in language learning: The role of marker frequency.” Journal of Memory and Language 27:71–86.
Wijnen, F., Krikhaar, E. and den Os, E. 1994. “The (non)realization of unstressed elements in children’s utterances: A rhythmic constraint?” Journal of Child Language 21:59–84.
Woodward, J. and Aslin, R. 1990. “Segmentation cues in maternal speech to infants.” Paper read at International Conference on Infant Studies, April, at Montreal, Quebec, Canada.
A Reappraisal of Young Children’s Knowledge of Grammatical Morphemes Roberta Michnick Golinkoff University of Delaware
Kathy Hirsh-Pasek Temple University
Melissa A. Schweisguth University of Delaware
In 1984, Pinker wrote:

In general, it appears to be very common for unstressed closed-class morphemes not to be present in the earliest stages in the acquisition of many languages. Thus, as much as it would suit my purposes to claim that Stage I children have latent control over the morphemes whose presence defines the categorization of certain constituents, it does not seem to be tenable given available evidence (p. 103).
This chapter will examine some of the “available evidence” for children’s sensitivity to grammatical morphemes some 13 years after Pinker’s pessimistic statement. At the original “Signal to Syntax” conference, we (Hirsh-Pasek, Tucker & Golinkoff 1996) discussed the role of prosodic bootstrapping in helping the child to segment the linguistic stream. Utilizing the approach of “dynamic systems theory” (Smith & Thelen 1994; Thelen & Smith 1994), we argued that children begin by using the prosodic information available in the stream of speech to help them segment speech into units of various sizes (e.g., phrases, clauses, words). While finding units in the speech stream is necessary, it is not a sufficient condition for language acquisition. Once children segment the speech stream into units, they must assign the units to grammatical categories or at least figure out what functions those units serve in the sentence.1 There are a number of input cues that could signal grammatical categories for the listener. By way of example, there are distributional regularities such that certain words often appear at the beginning of phrases (e.g., verbs) and certain
at the end (e.g., nouns). There are also phonological regularities such that nouns generally have longer durations and more syllables than verbs (Kelly 1992, 1996; Durieux & Gillis, this volume). Perhaps the most reliable cues for both segmentation into phrases and for the identification of units, however, are grammatical morphemes. Grammatical morphemes are closed-class, free-standing morphemes (such as “the”) and bound morphemes (such as /ing/). These morphemes may aid in utterance segmentation and form class assignment because they are typically found with particular form classes. For example, the grammatical morphemes “the” or “a” usually precede nouns (and sometimes adjectives, as in “the furry bear”), and the endings /ed/ and /ing/ are associated with verbs. In addition to their complementary distribution, grammatical morphemes often appear in characteristic positions in the sentence. For example, in intransitive sentence frames, “the” often begins noun phrases while /ing/ tends to end the verb phrase. Thus, even though grammatical morphemes are weakly stressed in the input, they can serve as reliable cues for both grammatical segmentation and identification. The question that remains is whether children can attend to these cues and use them in the service of language development. The study described here focuses on children’s sensitivity to grammatical morphemes by exploring infant and toddler sensitivity to the bound morpheme /ing/. The morpheme /ing/ is a particularly good candidate for exploration for several reasons. First, according to Brown (1973) and de Villiers and de Villiers (1973), /ing/ is the earliest grammatical morpheme to appear in obligatory contexts in children’s speech, typically appearing around 24 months of age. Second, there is evidence that verbs appear later in speech production than nouns (e.g., Gentner 1983, but see Tomasello & Merriman 1995).
Perhaps /ing/ appears early in children’s speech because children rely on it to find verbs in the input.2 Third and finally, despite their late appearance in production, there is some evidence that even prelinguistic infants are sensitive to verb information in the input. Hirsh-Pasek and Golinkoff (1996), for example, report that 13- to 15-month-olds know that when verbs and their objects appear in a sentence they form a “package” that specifies events in the world. Hirsh-Pasek and Golinkoff presented infants with two different video events in the “intermodal preferential looking paradigm” (Golinkoff, Hirsh-Pasek, Cauley & Gordon 1987). On one screen, a woman was seen kissing a set of keys and holding a ball in the foreground. On the other screen, the same woman was seen kissing the ball and holding the keys in the foreground. The linguistic stimulus which emanated from between the televisions was, “She’s kissing the keys!” Children (especially girls) watched the screen that matched the linguistic stimulus more than the screen that did not match what they heard. Thus, even before children produce verbs they
may expect that a verb somehow “goes with” the object which follows it in an utterance. This sensitivity to verbs suggests that toddlers may have early knowledge of the grammatical morphemes associated with verbs. The rest of this chapter will be divided into three sections. Section 1 asks the question, “What do English-reared, language learning children know about grammatical morphemes and when do they know it?”. Section 2 presents findings from a preliminary experiment on young children’s sensitivity to bound morphology — in particular to the morpheme /ing/. Finally, Section 3 considers the implications of our findings for children’s sensitivity to grammatical morphemes.
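The cue-based bootstrapping argument above (determiners such as “the” preceding nouns; /ing/ and /ed/ closing verbs) can be made concrete with a toy classifier. The sketch below is our own illustration, not a model from the chapter: the function name, the rule set, and the use of “fliff”-style items are all hypothetical.

```python
# Toy illustration (our own sketch, not the authors' model): using
# grammatical morphemes as cues to assign a novel word to a form class.
# The rule set and examples are hypothetical and deliberately minimal.

DETERMINERS = {"the", "a", "an", "some"}   # free-standing morphemes that precede nouns
VERB_ENDINGS = ("ing", "ed")               # bound morphemes associated with verbs

def guess_form_class(utterance, novel_word):
    """Guess 'noun', 'verb', or 'unknown' for a novel word from morpheme cues."""
    words = utterance.lower().rstrip("!?.").split()
    i = words.index(novel_word)
    if i > 0 and words[i - 1] in DETERMINERS:
        return "noun"                      # a determiner precedes the novel word
    if novel_word.endswith(VERB_ENDINGS):
        return "verb"                      # a verbal ending closes the novel word
    return "unknown"

print(guess_form_class("watch the fliff", "fliff"))       # -> noun
print(guess_form_class("watch me fliffing", "fliffing"))  # -> verb
```

Even this crude rule set illustrates why such cues are only probabilistic: a stem-final “ed” (as in “red”) or a determiner before an adjective would mislead it, which is one reason the chapter asks whether children actually exploit these cues rather than assuming they must.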
1. What do English-reared, language learning children know about grammatical morphemes and when do they know it?
Is there any evidence that grammatical morphemes are even perceived as separate, meaningful elements in the input? Furthermore, if they are perceived, is there any reason to believe that they are actually used by the child in the discovery or construction of grammatical categories, or more generally, in sentence processing? Comprehension studies bear on these issues.

1.1 Experimental studies on the comprehension of grammatical morphemes

As usual, Brown (1957) was the first researcher to probe experimentally children’s knowledge of the syntactic reflexes associated with form classes. In a now classic study, Brown showed 3-, 4-, and 5-year-olds pictures of a person performing a novel action on a novel substance. For example, in one picture, someone was seen performing a kneading action on a novel confetti-like substance. Children were asked to point to the correct part of the picture when the experimenter asked for “some sib” or “sibbing”. While Brown concluded that children were able to use the /ing/ (among other cues) to find the correct item in the picture, children actually heard multiple sentences for each request, containing multiple cues to form class assignment. Therefore, it is difficult to discern exactly which aspect of the stimulus elicited children’s responses. For example, in the sentences used to request the novel action, children were asked, “Do you know how to sib?” and then “Can you find sibbing?”. It is difficult to conclude that children found the action because of the presence of the /ing/ morpheme in the second sentence (for a critique of the Brown study see Dockrell & McShane 1990). Shipley, Gleitman, and Smith (1969) took a different tack in their exploration
of whether young children were aware of grammatical morphemes. They asked whether children who were themselves holophrastic or telegraphic speakers were also telegraphic listeners. Did children who did not include grammatical morphemes in their own speech expect to hear grammatical morphemes? Did they even notice whether those morphemes were present or absent in the input that they heard? Shipley et al. required children between the ages of 18 and 33 months to perform in an “act out” task in response to three types of commands:

(1) Appropriate — with the obligatory grammatical morphemes, e.g., “Throw the ball!”;
(2) Omissions — without the obligatory morphemes, e.g., “Throw ball!” or “Ball!”;
(3) Nonsense — with nonsense syllables in the position in the utterance where the grammatical morphemes belonged, e.g., “Gor ronta ball!”
Interestingly, the results differed depending on the language level of the children. Children in the holophrastic group carried out more commands when the commands omitted obligatory morphemes than when they included them. Children in the telegraphic group, however, carried out fewer commands when the commands omitted grammatical morphemes than when they included them. As Shipley et al. wrote, “What is surprising is that just those utterance types they themselves did not use were more effective as commands: the telegraphic children responded most readily to the well-formed sentences” (p. 331). Thus, these findings suggested something that most researchers had not considered in 1969 – the possibility that children were sensitive to grammatical morphemes even when they were not yet producing them. If this were true, then prior to the time when children produce grammatical morphemes in their own speech, they might be capable of using them for assigning novel words to grammatical categories. While this result is intriguing, there is an alternative explanation noted by the authors: Perhaps children had not noticed omissions or deformations of grammatical morphemes at all, but only the way in which the prosody of the utterance was affected as a byproduct of these changes. Further, these results raise the issue of why the effects were limited to the telegraphic speakers. Why were holophrastic speakers more willing to respond to telegraphic than to complete commands? There are two possible interpretations of these findings: Either children are not sensitive to grammatical morphemes until they are on the brink of producing them (the telegraphic speakers), or the act-out task was simply too demanding for the youngest children. If the latter is true, then even the holophrastic speakers should show sensitivity to grammatical morphemes under other, simpler experimental conditions.
More recent experimental studies seem to favor the second alternative, that even holophrastic speakers are sensitive to grammatical morphemes. Katz, Baker and MacNamara (1974) and Gelman and Taylor (1984) asked whether children not yet producing determiners reliably are sensitive to the grammatical morphemes associated with the noun class. In English, common count nouns take an article (as in “the block”) while proper nouns (as in “Mary”) do not. Gelman and Taylor and Katz et al. found that children as young as 17 months of age were sensitive to this distinction, treating a novel word as a proper name when the article was omitted and as a common noun when the article was included. These findings are impressive because they turn on the child’s detection of the presence or absence of an unstressed grammatical element (an article). Alternatively, it is also possible that children were responding in some way to the prosody of the test utterances. Work by Gerken and her colleagues also suggests that young children are sensitive to grammatical morphemes. Gerken, Landau and Remez (1990) gave children whose MLUs ranged from 1.30 to 5 an elicited imitation task in which they were asked to repeat strings such as “Pete pushes the dog”. Either the grammatical morphemes were replaced with nonsense (as in “Pete pusho na dog”), or the content words were replaced by nonsense (as in “Pete bazed the fod”), or both were replaced by nonsense (as in “Pete bazo na dep”). The logic of this manipulation was as follows: If children omit grammatical morphemes because of a constraint on the complexity of their early productions, then grammatical morphemes, which add grammatical complexity, would be good candidates for omission. If this is true, then grammatical morphemes should be omitted more often than nonsense syllables produced with the same weak stress and in the same position, as in “Pete pusho na dog”.
Gerken et al.’s results indicated that children with low MLUs omitted more function morphemes than weakly stressed nonsense syllables. This is an interesting finding because it is counterintuitive: One might think that failing to repeat a novel, weakly stressed syllable such as “na” would be more likely than failing to repeat a functor syllable (such as “the”) that has been heard many times. This finding suggests that children do not omit functors in their speech because they fail to perceive them, but rather because functors contribute to sentence complexity. In sum, the previous studies suggest that children are sensitive to grammatical morphemes before they produce them. There are also data suggesting that somewhat older children, already producing some grammatical morphemes, are able to use grammatical morphemes to assign novel words to grammatical categories. Golinkoff, Schweisguth and Hirsh-Pasek (1992) created an
ambiguous situation in which children (mean age 32 months) could assign novel words to either the noun or verb class only on the basis of morphological and phrase structural information. For example, the experimenter moved a novel object up and down her arm as she talked. If a child was in the Noun condition, the experimenter said, “Watch the fliff!”; for the Verb condition, she said, “Watch me fliffing!”. Immediately following the demonstration, three familiar objects and the novel object were arrayed on the floor. Children in the Noun condition were asked “Do you see a fliff?” and “Can you give me the fliff?”. Children in the Verb condition were asked “Can you show me how to fliff?” and “Can you show me fliffing?”. Thus, there was information — both at training and at test — for interpreting the newly offered word (“fliff”) as either a noun or a verb. The results indicated that in the Noun condition, children selected the novel object 81% of the time and did not act out the action. In the Verb condition, children acted out the novel action 69% of the time. These data suggest that children do indeed detect grammatical morphemes in the input and that they can use these morphemes (combined with minimal phrase structural information) to assign novel words to form classes after very few exposures. Research by Gerken and her colleagues, reviewed next, shows that even younger children are sensitive to the distributional properties of grammatical morphemes. Gerken and McIntosh (1993) and Gerken and Shady (1995) developed a picture pointing task to assess young children’s awareness and use of grammatical morphemes. The logic of their studies is that children should be sensitive to the distributional properties of grammatical morphemes if these morphemes are to help them in segmentation and form class assignment. They chose to test distributional sensitivity by creating violations of the contexts in which grammatical morphemes can occur.
The logic was as follows: If children detect these violations, the violations should disrupt sentence comprehension. Gerken and McIntosh created sentence stimuli such as the following:

(1) Find the dog for me.
(2) Find was dog for me.
(3) Find gub dog for me.
(4) Find * dog for me.
Children received all four types of sentences. The first sentence contained the correct grammatical morpheme (“the”) in the expected position in the sentence and therefore represented a control. The second sentence, while containing an actual grammatical morpheme of English, used that morpheme inappropriately. The third sentence contained a nonsense morpheme in the position where the
determiner is usually found, and the fourth sentence omitted the morpheme entirely. Gerken and McIntosh conducted two experiments. Children (mean age = 25 months) were asked to point to the picture requested in a story book with four pictures on each page. Since our primary interest is in whether children not yet producing grammatical morphemes are nonetheless sensitive to them, we will focus on the results from the children who had MLUs under 1.5. Clearly, children made the greatest number of correct choices (86%) in the condition where the expected morpheme was included. Children may have noticed the absence of the obligatory grammatical morpheme in the fourth condition (75% correct), although this condition and the control condition were not statistically different. There was a significant difference between the control condition (86%) and the ungrammatical condition (56%), suggesting that even children not yet producing grammatical morphemes are aware of these morphemes and expect them to be in certain places in sentences. When “was” — a verbal auxiliary — occupied the position in which “the” is ordinarily found, children’s sentence processing was disrupted. Finally, this finding must be coupled with the fact that response patterns were maximally disrupted by the presence of a nonsense word (39%), suggesting that toddlers know which items are permissible grammatical morphemes as well as their privileges of occurrence. The findings of Gerken and her colleagues suggest that toddlers may know some set of grammatical morphemes in English, as well as their distributional properties, prior to the time when they produce these morphemes. There are several reasons why the studies of Gerken and her colleagues reveal such precocity. First, Gerken and her colleagues focused on the grammatical morpheme “the”, which is a free-standing morpheme that precedes nouns.
“The” shares that position with only a few other possible morphemes, such as the indefinite determiner “a”, its allomorph “an”, and quantifiers like “some”. The function of the determiners is similar: They specify whether the modified noun is or is not information already given in the discourse, and they also allow generic knowledge to be expressed (as in “A tiger has stripes.”). Second, Katz et al. (1974) and Gelman and Taylor (1984) had shown that children as young as 17 months are sensitive to the presence or absence of this morpheme for signalling whether the modified noun should be interpreted as a proper name or a count noun. Thus, there was reason to hypothesize that children would be sensitive to free-standing morphemes like “the”. Would they, however, show a similar precocity with bound morphemes like /ing/, which are possibly more difficult to detect because they are bound to a stem? Would similar results obtain with a morpheme that shares its position not with just one other form (as in the case of the determiner) but with a wide range of possible forms serving
different functions (viz., the third person singular /s/, the null morpheme on the other persons, the past tense marker /ed/, and the adverbial ending /ly/)? To the best of our knowledge, there is no research on this question with children not yet producing such morphology. The current study was designed to fill that gap. Would children not yet producing the /ing/ morpheme show evidence that they are both sensitive to it and able to use it in sentence comprehension?
2. The experiment and the data
The experiments we conducted were modeled after those carried out by Gerken and McIntosh (1993) and Gerken and Shady (1995). We compared toddlers’ performance under three conditions: a correct morpheme condition (/ing/), an ungrammatical morpheme condition (/ly/), and a nonsense morpheme condition (/lu/). In the control condition, children heard familiar verb stems such as “dance” with the correct morphological ending, /ing/. In the ungrammatical condition, children heard that same verb accompanied by a possible morpheme of English (/ly/) which is not used on verbs. Unlike Gerken and her colleagues, who created their ungrammatical condition by placing a possible English morpheme (“was”) in the wrong position in a sentence (before a noun), our ungrammatical condition does not hinge on the placement of the morpheme in the sentence. The adverbial morpheme /ly/ is in the correct position — at the end of a word — but is not used on verbs. In some ways this provides an even more powerful test that young children are sensitive to bound morphology, because if a disruption occurs in sentence comprehension, it is because a mismatch has been detected between the type of stem and the type of morpheme. Finally, the nonsense condition, /lu/, provides us with an opportunity to assess whether it is just familiarity with the bound morphemes that drives correct responses. The study of a verbal morpheme, however, immediately presents a problem different from that faced by Gerken and her colleagues, who needed only to create pictures of objects. To test for the comprehension of verbs we needed a way to portray dynamic events. In addition, because the children we wished to test were younger than the youngest children Gerken tested, we needed a method that did not require children to carry out commands. The younger children are, the less responsive they are in experimental situations that require compliance (see Golinkoff et al. 1987).
To address these problems we used the “intermodal preferential looking paradigm” (Hirsh-Pasek & Golinkoff 1996a, b), a method created to study language comprehension in children not yet producing much speech. Figure 1 is a schematic drawing of the paradigm.
Figure 1. The intermodal preferential looking paradigm. (The schematic shows two televisions driven by hidden tape decks and a computer, a hidden centrally placed speaker, a hidden observer or camera lens, and the child seated on the mother’s lap.)
A child sits in the center of her blindfolded parent’s lap, between and equidistant from the two televisions. On the two screens children see visual stimuli that are different but equally salient. A linguistic stimulus, delivered through a centrally placed audio speaker, describes or “matches” the events portrayed on only one of the television screens. The rationale in all the studies we conduct in this paradigm is that if children comprehend the linguistic stimuli, they will watch the matching screen significantly more than the non-matching screen. Thus, the dependent variable is the duration of visual fixation to the screen that matches the linguistic stimulus versus the screen that does not.
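The dependent variable just described can be summarized in a few lines of code. This is a minimal sketch under our own assumptions: the function, field names, and all fixation times below are invented for illustration and are not the study’s data.

```python
# Sketch (not the study's data): summarizing the dependent variable of the
# intermodal preferential looking paradigm -- total fixation time to the
# matching screen vs. the non-matching screen. All numbers are invented.

def preference(trials):
    """Return total match time, total non-match time, and the match proportion."""
    match = sum(t["match"] for t in trials)
    nonmatch = sum(t["nonmatch"] for t in trials)
    return match, nonmatch, match / (match + nonmatch)

# Hypothetical per-trial fixation times (seconds) for one child.
child_trials = [
    {"match": 4.2, "nonmatch": 3.1},
    {"match": 3.9, "nonmatch": 3.5},
]
m, n, p = preference(child_trials)
print(round(m, 1), round(n, 1), round(p, 2))  # comprehension predicts p > 0.5
```

In practice such scores would be aggregated over children and compared statistically; the sketch only shows the shape of the measure, not the analysis.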
There are several key features of the design we employed. First, it was a between-subjects design: children were randomly assigned to one of the three conditions, in which they heard only one kind of verb ending (viz., /ing/, /ly/, or /lu/). Second, there were four pairs of verbs, all expected from prior research (e.g., Goldin-Meadow, Seligman & Gelman 1976; Golinkoff et al. 1987) to be among the earliest that children comprehend and produce (see Table 1). Third, all sentences were presented in naturally produced infant-directed speech, containing all the exaggerated prosodic characteristics of speech addressed to young children. While synthesized speech was not used, every attempt was made to pronounce the sentences identically across conditions, with equal stress on the final test syllable.

Table 1. A description of the verb pairs

Tape 1: “Drinking” (a seated woman drinking from a cup)
Tape 2: “Blowing” (a seated woman blowing a piece of paper)

Tape 1: “Waving” (a woman waving at the viewer)
Tape 2: “Eating” (a woman eating a cookie)

Tape 1: “Bouncing” (a seated woman bouncing a tennis ball on a table top)
Tape 2: “Pushing” (a seated woman pushing a plant on a table top)

Tape 1: “Dancing” (a woman dancing in place)
Tape 2: “Turning” (a woman turning around in place)

Note. All actions were performed by the same actress. The action that was requested from a pair was counterbalanced across subjects.
As Table 1 indicates, each of the four blocks of trials had the same structure. A pair of actions, one on each screen, was first shown simultaneously without test audio. This was followed by a pair of test trials during which the linguistic stimulus always requested that the child look at the same member of a pair. As in the example in Table 1, both test trials asked the child to find “dancing” (or “dancely” or “dancelu”). Furthermore, a number of variables were counterbalanced: the side of the match (by placing tape 1 into tape decks 1 and 2); the number of matches on each screen (the pattern was always Left-Right-Right-Left or its mirror image); and which member of the pair the linguistic stimulus requested (e.g., “Where’s dancing?” versus “Where’s turning?”).
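The counterbalancing scheme just described can be sketched in code. This is our own illustrative reconstruction under stated assumptions; the function and variable names, and the representation of cells, are ours, not the authors’.

```python
# Illustrative reconstruction (our assumptions) of the counterbalancing
# described above: side of match follows Left-Right-Right-Left or its mirror
# image, and which member of each pair is requested varies across subjects.
from itertools import product

# Verb pairs from Table 1: (tape 1 action, tape 2 action).
PAIRS = [("drinking", "blowing"), ("waving", "eating"),
         ("bouncing", "pushing"), ("dancing", "turning")]

def block_schedule(mirror=False, request_second=False):
    """One subject's four blocks: the requested verb plus the side of the match."""
    sides = ["R", "L", "L", "R"] if mirror else ["L", "R", "R", "L"]
    return [{"target": (second if request_second else first), "match_side": side}
            for (first, second), side in zip(PAIRS, sides)]

# Crossing the two factors yields four counterbalancing cells; within each
# cell, exactly two matches fall on each screen.
for mirror, request_second in product([False, True], [False, True]):
    schedule = block_schedule(mirror, request_second)
    assert [b["match_side"] for b in schedule].count("L") == 2
    print(schedule[0])
```

The point of the sketch is simply that each factor (mirror-image side order, requested member of the pair) is varied independently, so no verb or screen side is confounded with the match.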
Table 2. Tape layout and sample block of trials for the control (/ing/) condition

Simultaneous trials:
Left screen: 6 sec, black. Audio: “Hey boys and girls! What do you see on TV?” Right screen: 6 sec, black.
Left screen: 6 sec, woman drinking from a cup. Audio: “What’s going on on those TVs? What are they doing?” Right screen: 6 sec, woman blowing a piece of paper.
Left screen: 3 sec, black. Audio: “Hey! Look up here!” Right screen: 3 sec, black.
Left screen: 6 sec, woman drinking from a cup. Audio: “Wow! I see it again! Look at that!” Right screen: 6 sec, woman blowing a piece of paper.

Test trials:
Left screen: 6 sec, black. Audio: “Which one is drinking? Can you find drinking?” Right screen: 6 sec, black.
Left screen: 8 sec, woman drinking from a cup. Audio: “Where’s drinking? Do you see drinking?” Right screen: 8 sec, woman blowing a piece of paper.
Left screen: 3 sec, black. Audio: “Whoa! Find drinking!” Right screen: 3 sec, black.
Left screen: 8 sec, woman drinking from a cup. Audio: “Look up here again! Which one is drinking?” Right screen: 8 sec, woman blowing a piece of paper.

Note. This structure was duplicated for the three remaining verb pairs (see Table 1). Side of match was counterbalanced, with half the matches on the left screen and half on the right screen. Also counterbalanced was which tape appeared on which screen.
The 108 subjects, distributed approximately equally and randomly into the three conditions and balanced for sex, ranged in age from 18 to 21 months. The children were also screened for their understanding of at least 6 of the 8 verbs that would be used in the test condition. At the time of the visit, parents were asked if their children produced /ing/. Very few (about 6 children) occasionally produced /ing/.
2.1 Three possible patterns of results

First, it is possible that the stem alone supports comprehension for this age group. In other words, children look to the match if they know the verb stem (e.g., “dance”); the bound morpheme (/ing/) is ignored. This seemed a genuinely possible outcome given that (a) verbs (at least in English) can appear in uninflected forms, as in “Look at you dance!”; and (b) our subjects were not yet producing the /ing/ morpheme. On this account, children’s responses in the three conditions would be identical; they would show comprehension of the verb by watching the target action (e.g., dance) significantly more than the non-target action (e.g., turn) with which it was paired. Another reason why children might watch the match more than the non-match in all conditions is based on the hypothesis that children rely on the verb stem plus some non-specific syllable to support comprehension. Perhaps children at this age expect to hear a syllable attached to a verb stem in certain circumstances. However, if they have never analyzed the syllable phonologically, perhaps they do not care much about the syllable’s particular properties. Therefore, if the syllable chosen preserves the stress of /ing/, children will have no difficulty finding the match in any condition. Second, comprehension may be supported by the presence of the verb stem plus an English morpheme. Perhaps toddlers can comprehend the verb stem and expect to hear a familiar English morpheme attached to the stem, but are not too particular about which one it is. This account presupposes that toddlers do not realize that morphemes are restricted to particular form classes or are unsure of what functions they serve. This possibility predicts that comprehension should be evinced in both the grammatical /ing/ condition and the ungrammatical /ly/ condition.
This pattern of results would suggest that children are indeed sensitive to grammatical morphemes, although familiarity with the morpheme counts more than its correct use. The final possibility is that children will show comprehension only in the grammatical condition, when they hear a verb stem plus the correct morpheme, /ing/. The other syllables should disrupt comprehension, but for different reasons: /ly/ because it is appended to a verb rather than forming an adverb, and /lu/ because it is an unfamiliar syllable. This outcome would show that children are indeed sensitive to grammatical morphemes before they produce them. It would further suggest that children expect such morphemes to appear on particular parts of speech. However, we would still need to disentangle whether /ly/ disrupts comprehension because it appears on a verb, or whether /ly/ is not recognized as a known morpheme at all and hence is treated as equivalent to the nonsense syllable.
GRAMMATICAL MORPHEMES
179
2.2 What did we find?

Before reviewing the results, it is important to note that there were no stimulus salience problems during the simultaneous trials. That is, when the pairs of actions were presented with a neutral linguistic stimulus, neither action in a pair was intrinsically more interesting than the other member of the pair.

2.3 Results from the /ing/ condition

The first important outcome is that children showed comprehension in the /ing/ condition, watching the match (x̄ = 4.01 sec.) significantly more than the non-match (x̄ = 3.31 sec.). Before we could interpret the results of the other conditions, it was essential that children show their comprehension in this control condition. Note, however, that we cannot claim from these results that children are attending to the /ing/ morpheme. Perhaps the verb stem plus inflection are stored as a single unit, as would be predicted by Caramazza, Laudanna and Romani’s (1988) Augmented Addressed Morphology Model. As Caramazza et al. reported, familiar items are not ordinarily decomposed when they are encountered and therefore, with adults, they yield the fastest reaction times in lexical decision tasks (e.g., few of us decompose “runs” into /run/ and /s/). Another possibility is that children are ignoring the /ing/ and finding the match solely on the basis of the presence of the verb stem. Therefore, examination of the results from the other conditions is needed before we come to any conclusions about whether children are sensitive to /ing/. Recall that the only way in which the other conditions differ from the /ing/ condition is in alterations in morphology. Therefore, if differences occur between the /ing/ condition and the other conditions, they must be due to the morphology.

2.4 Results from the /ly/ condition

The results indicate that an unexpected variation of the third story emerged.
Recall that the second scenario predicted that comprehension would occur only in the /ing/ and /ly/ conditions if children use the verb stem and expect to hear English grammatical morphemes at the ends of verbs. Children did just this on the last three verb pairs, watching the match (x̄ = 3.83) significantly more than the non-match (x̄ = 3.02). However, on the first verb pair, children watched the non-match (x̄ = 4.21) significantly more than the match (x̄ = 3.07). One possible interpretation of this result is that children recognized /ly/ as a familiar English morpheme and were at first puzzled by its placement on a
verb. This result therefore suggests that children are sensitive to the ungrammatical use of a familiar morpheme and that this ungrammatical usage is capable of disrupting sentence comprehension. (This assertion will gain support after the findings of the /lu/ condition are presented.) This result parallels Gerken and McIntosh’s (1993) and Gerken and Shady’s (1995) “was” condition. “Was”, a familiar auxiliary, was used inappropriately after a verb and before a bare noun (viz., “Find was dog for me”). Thus, both “was” and /ly/ are anomalous because, although familiar, they appear with words from inappropriate categories. Children seem to detect this clash of morphemes and categories. If this admittedly speculative inference is correct, it suggests that by 18 months of age children possess more sophistication about grammatical morphemes than we imagined. They appear to be aware not only of which morphemes are found in English but also of the type of words on which those morphemes are typically found. These data, therefore, further indicate that children may indeed be segmenting a verb into a stem and a morpheme. How should we explain the fact that children watched the non-match significantly more than the match on the first verb pair? Hearing the /ly/ ending, children may have decided that “dancely” was a novel word. They therefore systematically watched the non-match, reasoning that the speaker must not be referring to “dance” or she would have called it that. If true, this is a very sophisticated strategy. Consider how many English words, for example, share a “stem” with no common meaning. There are words such as “corn” and “corner”, or “ham” and “hamster”, which bear no obvious relation to each other. By watching dancing’s mate (turning), it is as if children are segmenting the stem and ending and deciding that “dancely” must mean something other than ‘dance’. Why, then, do children begin to watch the matching screen after the first block of test trials?
There are several possibilities. First, children may decide that a familiar morpheme not ordinarily found on that word class is possible after all. The lack of negative evidence in the input implies that “input must rule!” Therefore, upon hearing /ly/ on the wrong word class, the child, who is producing neither /ing/ nor /ly/, perhaps decides that /ly/ is acceptable on verbs after all. Several implications follow from accepting this view. One implication is that children can be swayed by input more rapidly than we think at certain points in their acquisition. A companion implication is that older children, who already produce /ing/ and /ly/, should not be so easily swayed. Older children should continue to show disruption in their sentence processing through all four blocks of trials. This would be a clear indication that once a grammatical morpheme is learned and has been associated with a particular class,
its class affiliation cannot switch after a minimal number of exposures to it in a new grammatical environment. Yet another implication of this finding is that other grammatical morphemes should also be acceptable at this point when appended to verbs. For example, if we substituted the derivational morpheme /ness/ at the end of the verbs (as in “Find danceness!”), children should eventually find the match. The second reason why children may begin to watch the match in the last three blocks of /ly/ trials is that they may begin to interpret the novel words they hear as new adverbs. For example, perhaps they begin to interpret “dancely” as an adverb. This does not seem a likely alternative given the strange phrase structure frame which would result. That is, a sentence such as “Find dancely” would be parallel to a sentence such as “Find loudly”. It seems unlikely that children would move to this alternative. A third reason why they might watch the match on the latter trials is that they might recognize the ending /ly/ as familiar but not yet have decided on its function. That is, /ly/ may be stored as a familiar acoustic unit, perhaps in a general store of function morphemes. Faced with only two choices, children may decide that it must be equivalent to /ing/ and look toward the screen on which the actor is dancing. Finally, children may begin to watch the matching screen because the paradigm limits other possibilities. That is, since there are only two choices the child can make — either watch the event on the left screen or the one on the right screen — perhaps the child satisfices and watches the screen which presents the closest thing to the “correct” answer. This seems an unlikely possibility because of how children respond in the /lu/ condition, discussed next.

2.5 Results from the /lu/ condition

In the /lu/ condition, we used a nonsense syllable with phonological characteristics not ordinarily found at the end of two-syllable English words.
Here, comprehension was completely disrupted: neither the match nor the non-match was watched to a greater degree throughout the four blocks of trials. Indeed, the mean visual fixation times to the match and to the non-match across the four blocks of trials were identical at 3.56 seconds. Children were not sure which screen to watch in response to words like “dancelu” and “wavelu”. Again, this finding supports the notion that children can segment the verb stem from its ending. It also suggests that children recognize that /lu/ is not in their store of grammatical morphemes. It is interesting again to note the parallel with Gerken’s work (Gerken & McIntosh 1993; Gerken & Shady 1995).
Children responded correctly at the lowest rate in Gerken and colleagues’ nonsense condition (“Find gub dog for me”), finding the target only somewhat more than a third of the time. Here, children’s comprehension was completely disrupted in the /lu/ condition, with children unable to watch the matching screen more than the non-matching screen on any block of trials.

2.6 Summary of our results

We have found that children not yet producing grammatical morphemes in their own speech are indeed sensitive to them. They can discriminate grammatical morphemes that are used correctly from those used incorrectly and can apparently recognize that a nonsense syllable is not a grammatical morpheme. These results, then, augment and extend the claims made earlier by Shipley et al. (1969) as well as those of Gerken and her colleagues (Gerken & McIntosh 1993; Gerken & Shady 1995). On a methodological note, the intermodal preferential looking paradigm yielded a surprising range of responses that allowed us to distinguish between a number of alternative characterizations of children’s knowledge of grammatical morphemes. It could be argued that, in a rough way, our findings parallel Gerken’s data: correct grammatical morphemes prompted more correct responses than ungrammatical morphemes, which in turn prompted more correct responses than nonsense syllables. We, too, had more “correct” responses (i.e., watching of the match) for the /ing/ control condition than for the ungrammatical /ly/ condition, and more for the /ly/ condition than for the nonsense /lu/ condition.
3. Implications for children’s sensitivity to and use of grammatical morphemes
Having reviewed much of the literature on children’s sensitivity to various grammatical morphemes and having presented the results of a study designed to test for sensitivity to the grammatical morpheme /ing/, we are now in a position to examine the Pinker (1984) quotation with which we opened this chapter. Pinker correctly noted that unstressed closed class morphemes were not present in young children’s speech. Furthermore, there was no clear evidence that the grammatical morphemes affected children’s sentence processing. While the only study available (Shipley et al. 1969) reported that children using telegraphic speech were affected by the absence of grammatical morphemes in commands, the authors of that study thought it possible to attribute their results to the
peculiar prosodic effects created by omitting the grammatical morphemes. Therefore, in 1984, Pinker could rightly assume that young children had little awareness or comprehension of grammatical morphemes. The state of our knowledge has clearly changed. The research literature and the experiment reviewed here provide new evidence that children not yet producing grammatical morphemes are aware of them and sensitive to their privileges of occurrence. These findings, then, make it unlikely that the earlier Shipley et al. results could be attributed only to children’s detection of unusual prosody in the test sentences. Furthermore, the present results go beyond the Shipley et al. results in showing that even holophrastic speakers are sensitive to grammatical morphemes — and to bound morphemes at that. The latter finding is additional support for the use of methods such as the intermodal preferential looking paradigm (Golinkoff et al. 1987; Hirsh-Pasek & Golinkoff 1996a, b), which, unlike the Shipley et al. task, do not tax young language learners by requiring them to carry out commands. At the start of this chapter we raised two questions about children’s use of grammatical morphemes, which we will now revisit. We wondered whether young children could use these morphemes to segment the linguistic stream and whether they could use grammatical morphemes to assign new words to form classes. Although the present results do not bear directly on either of these questions, we are now in a better position to evaluate these issues. If young children are capable of using grammatical morphemes for either of these tasks, we should be able to show that altering these morphemes actually disrupts sentence processing. The work of Gerken’s group, as well as our own, does indeed indicate that sentence processing is disrupted in two very different tasks (picture pointing and the intermodal preferential looking paradigm).
This disruption occurs both when free (Gerken & McIntosh 1993; Gerken & Shady 1995) and bound grammatical morphemes (our /ly/ condition) appear with words of an incorrect form class. Based on these findings, it is at least logically possible that young children use grammatical morphemes to help in segmenting the linguistic stream. If young children have an expectation about what grammatical morphemes may appear where, these morphemes may already be serving the function of separating incoming words in the speech stream even when the meanings of the words are not known. There is also reason to believe that children are using these morphemes to categorize new words in the input. If children’s comprehension is disrupted when grammatical morphemes are placed on words from the wrong form class (as when the adverbial ending /ly/ was placed on a verb stem) then they are already distinguishing which morphemes appear with which form classes. This possibility needs to be assessed, perhaps in a downward extension of the Golinkoff,
Schweisguth and Hirsh-Pasek (1992) experiment described earlier on how children assigned novel words to the form classes of noun and verb. Whatever the subsequent answer to the questions raised above about whether young children are capable of using grammatical morphemes for segmentation of the speech stream and form class assignment, the present results indicate that children are not just processing words as wholes without analyzing verbs into a stem and affix. There is evidence that toddlers do indeed know something about morphemes and the type of words to which they are affixed. It is important to note that in this study we tested only /ing/. Given that it is a bound morpheme, however, the /ing/ result is all the more impressive. These results may license a further, highly tentative speculation: perhaps toddlers possess the standard syntactic category of ‘verb’. If true, it should be possible to confirm two subhypotheses. First, the task should work equally well with non-action verbs such as “read” or “sleep”. Second, the task should work with novel verbs toddlers have not heard before. Thus, after training children with novel verbs presented in the infinitive form (e.g., “Do you know how to glorp?”), children should generalize that such verbs can take certain endings like /ing/.

3.1 Where does sensitivity to grammatical morphemes come from?

If children are aware of grammatical morphemes as early as 18 months of age, how did this come about? Some theories suggest that children mine the input for semantic information that can link form to meaning (e.g., Pinker 1984).
In this research, we demonstrate, however, that even more abstract information does not escape the infant’s ear (see also Saffran, Aslin & Newport 1996). We have argued (Golinkoff & Hirsh-Pasek 1995; Hirsh-Pasek & Golinkoff 1996a) in our “coalition of cues” framework that toddlers are mining cues from various domains for language comprehension, cues that are statistically yoked and that interact to yield sentence meaning. A similar story of noting redundant cues in the input can be told for the discovery of grammatical morphemes. In the first year, for example, children probably note grammatical morphemes for their statistical frequency and for their phonological and prosodic properties. In fact, Shafer, Shucard, Shucard and Gerken (in press) have shown just this. At 11 months of age, using cortical evoked potentials as the dependent variable, children seem to notice when unstressed grammatical morphemes such as “the” are replaced with nonsense words such as “gu” in sentences. Morgan, Shi and Allopenna (1996) also show that the prosodic properties of these morphemes provide cues for their detection. At this early time it is likely that children have only an undifferentiated category of grammatical morphemes — a storehouse of
familiar acoustic cues — not yet distinguished by the types of words they are associated with. Yet even at this early phase, hearing one of these familiar morphemes could help the child find word boundaries in the input. In the beginning of the second year, the coalition of cues framework predicts that children begin to pay attention to semantic cues. Thus, in addition to having detected something of the morphemes’ phonological and prosodic properties, toddlers now begin to note the distributional characteristics of these words: certain morphemes tend to occur with specific types of words (e.g., “the” tends to precede words for concrete objects, while /ing/ tends to follow words describing actions). Finally, by the age at which our test was performed (18–21 months), children seem to have more than just an undifferentiated class of frequently heard grammatical morphemes. Children may have discovered what kinds of words these morphemes belong with. Thus, it seems that now an additional cue — that provided by syntax — comes on line. When added to the other cues in the coalition, this probably allows the child to assign even novel words to appropriate syntactic categories. While we have no direct evidence for this latter claim, it seems a strong, empirically verifiable possibility.

3.2 Conclusions

Our review of the literature, as well as the study reported herein, suggests that children have considerably more grammatical knowledge than is reflected in their early utterances. This is the first study of which we are aware that shows that children not yet producing a bound morpheme are nonetheless capable of using it in sentence comprehension. Granted, what is presented here are data from a single bound morpheme (/ing/).
Furthermore, progress is likely to be made on this front given that we now have subtle methods which make minimal demands on subjects (e.g., cortical evoked potentials and the intermodal preferential looking paradigm), enabling us to uncover these emerging linguistic competencies. Although this type of research is in its infancy, the field has already begun to examine the individual cues infants use to find grammatical morphemes in the input and subsequently to analyze their function. We must now move in the direction of analyzing how infants mine the correlations and weightings of the multiple cues available in the input as they begin to discover grammatical morphemes and use them both for the segmentation of linguistic units and for the subsequent identification of these units in their language.
Acknowledgments

The data reported herein were collected as part of the third author’s senior honors thesis, conducted under the supervision of the first author. We gratefully acknowledge the support of the University of Delaware’s Honors Program, an NSF grant (#SDBR9601306) awarded to the first two authors, and an NICHD grant (#HD25455–07) to the second author. We would also like to thank Rebecca Brand and He Len Chung for their able assistance in data collection, and the editors of this volume for their comments on the chapter.
Notes

1. Assigning words to grammatical categories begs the question of the source of these categories. That is, are they innate? Are they constructed from the input? Although there are theories on all sides (see Hirsh-Pasek & Golinkoff 1996a), we can be agnostic on this issue for the purposes of this paper.

2. Another controversy we do not need to engage for our purposes is when the child has the concept “verb” as opposed to the semantically based category of action word.
References

Brown, R. W. 1957. “Linguistic Determinism and the Part of Speech.” Journal of Abnormal and Social Psychology 55: 1–5.
Brown, R. 1973. A First Language. Cambridge, MA: Harvard University Press.
Caramazza, A., Laudanna, A. and Romani, C. 1988. “Lexical Access and Inflectional Morphology.” Cognition 28: 297–332.
De Villiers, J. and de Villiers, P. 1973. “A Cross-sectional Study of the Acquisition of Grammatical Morphemes in Child Speech.” Journal of Psycholinguistic Research 2: 267–278.
Dockrell, J. and McShane, J. 1990. “Young Children’s Use of Phrase Structural and Inflectional Information in Form-class Assignments of Novel Nouns and Verbs.” First Language 10: 127–140.
Gelman, S. A. and Taylor, M. 1984. “How Two-Year-Old Children Interpret Proper and Common Names for Unfamiliar Objects.” Child Development 55: 1535–1540.
Gentner, D. 1983. “Why Nouns are Learned Before Verbs: Linguistic Relativity Versus Natural Partitioning.” In Language Development, Vol. 2, Language, Cognition, and Culture, S. A. Kuczaj (ed.). Hillsdale, NJ: Lawrence Erlbaum Associates.
Gerken, L., Landau, B. and Remez, R. E. 1990. “Function Morphemes in Young Children’s Speech Perception and Production.” Developmental Psychology 27: 204–216.
Gerken, L. and McIntosh, B. J. 1993. “Interplay of Function Morphemes and Prosody in Early Language.” Developmental Psychology 29: 448–457.
Gerken, L. and Shady, M. 1995. “Grammatical and Caregiver Cues in Early Sentence Comprehension.” In The Proceedings of the Twenty-sixth Annual Child Language
Research Forum, E. V. Clark (ed.). Stanford, CA: Center for the Study of Language and Information.
Goldin-Meadow, S., Seligman, M. and Gelman, R. 1976. “Language in the Two-Year-Old.” Cognition 4: 189–202.
Golinkoff, R. M., Hirsh-Pasek, K., Cauley, K. M. and Gordon, L. 1987. “The Eyes Have It: Lexical and Syntactic Comprehension in a New Paradigm.” Journal of Child Language 14: 23–46.
Golinkoff, R. M., Hirsh-Pasek, K., Bailey, L. M. and Wenger, N. R. 1992. “Young Children and Adults Use Lexical Principles to Learn New Nouns.” Developmental Psychology 28: 99–108.
Golinkoff, R. M. and Hirsh-Pasek, K. 1995. “Reinterpreting Children’s Sentence Comprehension: Toward a New Framework.” In The Handbook of Child Language, P. Fletcher and B. MacWhinney (eds.). London: Blackwell.
Golinkoff, R. M., Schweisguth, M. A. and Hirsh-Pasek, K. 1992. “Young Children Can Use Linguistic Information to Assign Novel Words to Syntactic Categories.” Paper presented at the International Conference on Infant Studies, Miami, FL.
Hirsh-Pasek, K., Tucker, M. and Golinkoff, R. M. 1996. “Dynamical Systems: Reinterpreting Prosodic Bootstrapping.” In Signal to Syntax: Bootstrapping from Speech to Grammar in Early Acquisition, J. Morgan and K. Demuth (eds.). Hillsdale, NJ: Lawrence Erlbaum Associates.
Hirsh-Pasek, K. and Golinkoff, R. M. 1996a. The Origins of Grammar. Cambridge, MA: MIT Press.
Hirsh-Pasek, K. and Golinkoff, R. M. 1996b. “The Intermodal Preferential Looking Paradigm Reveals Emergent Language Comprehension.” In Methods for Assessing Children’s Syntax, D. McDaniel, C. McKee and H. Cairns (eds.). Cambridge, MA: MIT Press.
Katz, N., Baker, E. and MacNamara, J. 1974. “What’s in a Name? A Study of How Children Learn Common and Proper Names.” Child Development 45: 469–473.
Kelly, M. 1992. “Using Sound to Solve Syntactic Problems: The Role of Phonology in Grammatical Category Assignments.” Psychological Review 99: 349–364.
Kelly, M. 1996.
“The Role of Phonology in Grammatical Category Assignments.” In Signal to Syntax: Bootstrapping from Speech to Grammar in Early Acquisition, J. Morgan and K. Demuth (eds.). Hillsdale, NJ: Lawrence Erlbaum Associates.
Morgan, J., Shi, R. and Allopenna, P. 1996. “Perceptual Bases of Rudimentary Grammatical Categories: Towards a Broader Conceptualization of Bootstrapping.” In Signal to Syntax: Bootstrapping from Speech to Grammar in Early Acquisition, J. Morgan and K. Demuth (eds.). Hillsdale, NJ: Lawrence Erlbaum Associates.
Pinker, S. 1984. Language Learnability and Language Development. Cambridge, MA: Harvard University Press.
Saffran, J. R., Aslin, R. N. and Newport, E. L. 1996. “Statistical Learning by 8-Month-Old Infants.” Science 274: 1926–1928.
Shafer, V. L., Shucard, D. W., Shucard, J. L. and Gerken, L. A. In press. “An Electrophysiological Study of Infants’ Sensitivity to the Sound Patterns of English Speech.” Journal of Speech and Hearing Research.
Shipley, E., Smith, C. and Gleitman, L. 1969. “A Study in the Acquisition of Language: Free Responses to Commands.” Language 45: 322–342.
Smith, L. B. and Thelen, E. (eds.) 1993. A Dynamic Systems Approach to Development: Applications. Cambridge, MA: MIT Press.
Thelen, E. and Smith, L. B. (eds.) 1994. A Dynamic Systems Approach to the Development of Cognition and Action. Cambridge, MA: MIT Press.
Tomasello, M. and Merriman, W. (eds.) 1995. Beyond Names for Things: Young Children’s Acquisition of Verbs. Hillsdale, NJ: Lawrence Erlbaum Associates.
Predicting Grammatical Classes from Phonological Cues
An empirical test

Gert Durieux & Steven Gillis
University of Antwerp
Abstract

This paper investigates to what extent the grammatical class(es) of a word can be predicted on the basis of phonological and prosodic information alone. We report on several experiments with an artificial learning system which has to assign English word forms to their appropriate grammatical class, using various types of phonological and prosodic information. First, we examine several phonological cues claimed by Kelly (1996) to be particularly good at distinguishing English nouns from verbs. Our results indicate that these cues are indeed partially predictive for the problem at hand, and reveal that a combination of cues yields significantly better results than those obtained for each cue individually. We then show experimentally that ‘raw’ segmental information, augmented with word stress, allows the learning system to improve considerably upon those results. Second, we investigate several generalizations of the approach: basic segmental information also proves to be predictive when the task is extended to encompass all open class words in English, and these findings can be replicated for a different (though related) language such as Dutch.
1.
Introduction
How do children learn the (major) form classes of their language? How do they learn that “table” is a noun, “nice” an adjective, and “kiss” a noun as well as a verb? Form classes may be part of children’s innate linguistic knowledge,
which implies that a child “knows” that in the language (s)he is exposed to there are nouns and verbs, etc. However, just knowing that there are specific form classes is not sufficient, as this still leaves the problem of how the child determines which words in the speech stream belong to which class. One solution is to hypothesize that the child’s knowledge about form classes includes procedures for discovering the (major) form classes in the language. If those procedures are part of the child’s native endowment, they would have to consist of universal surface cues signaling grammatical category membership. At present it is unclear whether such universally valid cues exist, and, if so, how they are to be characterized. An alternative solution is that the child uses information that “correlates” with form class and finds a bootstrap into the system of formal categories. Several bootstrapping approaches have been proposed:

– Semantic bootstrapping: Under this approach, the meanings of words are used as a basis for inferring their form class (see, e.g., Pinker 1987; Bates & MacWhinney 1989). In this line of thinking, Gentner (1982) noted that, as object reference terms, nouns have a particularly transparent semantic mapping to the perceptual/conceptual world, and children may use this mapping to delineate the category of “nouns”.
– Syntactic (also correlational or distributional) bootstrapping: This approach holds that grammatical categories can be discovered on the basis of distributional evidence (see, e.g., Maratsos & Chalkley 1980; Finch & Chater 1992; Mintz, Newport & Bever 1995). For instance, Mintz et al. (1995) show that by monitoring the immediate lexical contexts of words, the similarity of those contexts can be used to cluster lexical items, and that the resulting clusters coincide with grammatical classes. More specifically, in an analysis of lexical co-occurrence patterns, Mintz et al.
show that a window of one word to either side of the target word is sufficient to identify nouns and verbs.
– Prosodic and phonological bootstrapping: This approach holds that there are phonological and prosodic cues that may point the child to specific linguistic structures, e.g. clauses and phrases, or to specific classes of words, such as “open” vs. “closed” class words, “lexical” vs. “functional” items, or specific grammatical form classes (see, e.g., Gleitman, Gleitman, Landau & Wanner 1988; Morgan, Shi & Allopenna 1996).

All of these bootstrapping approaches typically emphasize the use of information from one domain, the “source” domain, to break into another domain, the “target” domain, and may thus be labeled “inter-domain” bootstrapping approaches, in contrast to the recently introduced notion of “autonomous” bootstrapping, which applies within a single domain (Cartwright & Brent 1996). In addition, it
is typically argued that there is only a partial, i.e. imperfect, correlation between the source and the target domains.
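The distributional bootstrapping idea described above can be made concrete with a small simulation in the spirit of Mintz et al. (1995): represent each word by the words occurring one position to its left and right, and measure how similar those context profiles are. The corpus and all details below are invented for illustration only; they are not taken from Mintz et al.'s study.

```python
# Toy sketch of distributional clustering with a one-word window on each
# side of the target (cf. Mintz, Newport & Bever 1995). The corpus is
# invented; real studies use large child-directed speech corpora.
from collections import Counter
from math import sqrt

corpus = [
    "the dog eats the bone",
    "the cat eats the fish",
    "the dog sees the cat",
    "the cat sees the bone",
]

# Collect left/right context counts for every word.
contexts = {}
for sentence in corpus:
    words = sentence.split()
    for i, w in enumerate(words):
        left = words[i - 1] if i > 0 else "<s>"
        right = words[i + 1] if i < len(words) - 1 else "</s>"
        contexts.setdefault(w, Counter()).update([("L", left), ("R", right)])

def cosine(a, b):
    """Cosine similarity between two context-count vectors."""
    dot = sum(a[k] * b[k] for k in a)  # Counter returns 0 for missing keys
    norm = lambda v: sqrt(sum(x * x for x in v.values()))
    return dot / (norm(a) * norm(b))

# Words of the same class share contexts, so they come out more similar:
noun_noun = cosine(contexts["dog"], contexts["cat"])
noun_verb = cosine(contexts["dog"], contexts["eats"])
print(noun_noun > noun_verb)  # True
```

In this toy corpus the nouns share left-context "the" (and verb right-contexts), while the verbs share noun left-contexts and "the" right-contexts, so a clustering over these vectors separates the two classes.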
2.
Phonological cues to syntactic class
The usefulness of semantic and syntactic/distributional information is firmly established in the literature on grammatical category acquisition and assignment. The usefulness of phonological information is less straightforward. In a recent survey article, Kelly (1996) adduces several reasons why this may be the case: on the one hand, phonological cues are likely to be language-specific, and thus cannot be known in advance by the learner. By contrast, mappings between semantic and grammatical classes are assumed to be universal and may provide a useful bootstrap if the learner expects such mappings to occur. On the other hand, syntactic criteria remain the ultimate determinants of grammatical classes; any correlating phonological information can, at best, be supportive when both agree, or downright misleading when they do not. Hence, phonological cues are largely neglected as rare, unnecessary, unreliable and language-specific. Still, in Kelly (1992, 1996) a large body of evidence is presented in support of the claims that phonological correlates to grammatical classes do exist for English, and that people are sensitive to these correlates, even if they are only weakly diagnostic of grammatical classes. A fairly reliable correlate seems to be found in the stress pattern of disyllabic words: an examination of 3,000 disyllabic nouns and 1,000 disyllabic verbs, drawn from Francis and Kucera (1982), revealed that 94% of nouns have a trochaic (initial) stress pattern, whereas 69% of verbs display an iambic (final) stress pattern. More importantly, 85% of words with final stress are verbs and 90% of words with initial stress are nouns.
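The two directions of Kelly's figures are related by Bayes' rule. The sketch below inverts the class-conditional stress percentages into stress-conditional class probabilities; it uses only the rounded counts and percentages as cited above, not Kelly's raw data, so it is illustrative rather than a reanalysis.

```python
# Illustrative Bayes-rule inversion of the disyllabic stress figures
# reported from Kelly (1996). Inputs are the rounded numbers in the text.
n_nouns, n_verbs = 3000, 1000
p_trochaic_given_noun = 0.94   # 94% of nouns have initial stress
p_iambic_given_verb = 0.69     # 69% of verbs have final stress

# Expected counts per (class, stress) cell
trochaic_nouns = n_nouns * p_trochaic_given_noun       # 2820
iambic_nouns = n_nouns * (1 - p_trochaic_given_noun)   # 180
iambic_verbs = n_verbs * p_iambic_given_verb           # 690
trochaic_verbs = n_verbs * (1 - p_iambic_given_verb)   # 310

# Invert: how diagnostic is a stress pattern of word class?
p_noun_given_trochaic = trochaic_nouns / (trochaic_nouns + trochaic_verbs)
p_verb_given_iambic = iambic_verbs / (iambic_verbs + iambic_nouns)

print(f"P(noun | initial stress) = {p_noun_given_trochaic:.2f}")  # 0.90
print(f"P(verb | final stress)   = {p_verb_given_iambic:.2f}")    # 0.79
```

With these rounded inputs the initial-stress figure reproduces the reported 90%; the final-stress figure comes out somewhat below the reported 85%, presumably because Kelly's underlying counts are not exactly the rounded numbers cited here.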
Subsequent experiments in which subjects either had to construct sentences with a disyllabic word which could have either stress pattern, or read target sentences containing a disyllabic non-word in either nominal or verbal position, showed a pronounced preference for linking iambic words with the verb category and trochaic words with the noun category. These findings clearly indicate that a word’s phonological form cues its grammatical class and that speakers appear to be sensitive to those links between form and class. Another cue mentioned by Kelly (1996) is the number of syllables in a word: nouns tend to have more syllables than verbs, even when inflectional suffixes are added to the stem. In a corpus of English parental speech, the observed probability that a monosyllable is a noun was 38%. For disyllabic words this figure went up to 76% and for trisyllabic words to 94%. All words of
GERT DURIEUX & STEVEN GILLIS
four syllables were nouns. In a subsequent experiment, adults judged ambiguous (i.e., between noun and verb) monosyllables to be used more often as a verb and trisyllabic words to be used more often as a noun, when in fact both usages were equally likely. Other, less reliable, phonological correlates of the grammatical classes noun and verb in English include (a) duration, (b) vowel quality, (c) consonant quality and (d) phoneme number (Kelly 1996:252): (a) nouns are generally longer than verbs (controlling for syllable number, acoustic measurements show that the duration of nouns is generally longer than that of verbs), (b) nouns have more low vowels, (c) nouns are more likely to have nasal consonants, and (d) nouns contain more phonemes (again controlling for syllable number). Although these latter cues are deemed less reliable, Kelly (1992) does not preclude the possibility that, taken together, these individually weak cues may prove to be strong predictors of grammatical class. This brings us to a delineation of the specific research questions addressed in this paper. Assuming the existence of phonological correlates to grammatical classes and people’s sensitivity to them, we want to explore their potential value as predictive cues for category assignment, given a lexicon of reasonable size. This exploration will proceed by running machine learning experiments which involve the relevant cues. The specific machine learning algorithm used is introduced in Section 3. In Section 4 we test the predictive power of the phonological cues identified by Kelly (1996). Our first objective is to assess the predictive value of stress. In order to do so, we will report on a series of experiments covering increasingly larger segments of the English lexicon. A first test only includes disyllabic homographs with both a noun and a verb reading. 
A second test includes a larger number of disyllabic words, not all of which are necessarily ambiguous, and a third test includes a random sample of words containing one to four syllables. These experiments will allow us to assess whether the applicability of stress as a cue is restricted to the observed subregularity within the English lexicon, or whether it extends into broader regions of lexical space. A second objective is to assess the value of the “less reliable cues”. To this end, we have constructed a test for each of the observed minor cues, using a setup similar to the final test for stress. A related focus of interest is the question whether these cues prove more predictive when used in combination. A third objective is to determine whether the phonological makeup of words restricts their grammatical class without a priori identification of the relevant cue(s). In all previous experiments, higher order phonological cues such as vowel height, nasals versus other consonants, etc., were coded into the learning
material. Thus, it was taken for granted that the learner was able to extract these cues from the learning material or that the learner somehow had access to them. The final experiments reported in Section 4 drop this precondition from the experimental setup. More specifically, it is investigated to what extent the learner can detect the link between phonology and grammatical class when no a priori identification of the relevant phonological dimensions is performed and only syllabified strings of segments are supplied as learning material. In Section 5 we will lift some of the restrictions observed in Section 4: the experiments reported on in this section will neither be restricted to English nor to the noun/verb opposition. In a first batch of experiments, we investigate to what extent the observed link between phonology and grammatical class carries over to another language. Using Dutch as a testbed, we investigate whether Kelly’s cues lead to reasonable results in predicting Dutch nouns and verbs, and whether the segmental material contains enough indications for distinguishing those categories in Dutch. In a second set of experiments, the task is broadened from predicting nouns and verbs in English and Dutch to predicting all open grammatical classes in those languages. Finally, we will deal briefly with issues of learnability and explore the relationship between the amount of training data the machine learner receives and its success in predicting grammatical classes. A related question concerns the impact of providing the learner with items of varying frequency: does the learner’s performance on the task improve when high frequency items are provided? These latter issues form the bridge to the last section, in which the crucial issue of the use of phonological bootstrapping as a language acquisition strategy will be considered in the light of the experimental results.
3.
The learning algorithm
In this study we use a modified version of Instance-Based Learning (IBL, Aha, Kibler & Albert 1991). IBL is a ‘lazy learner’: no explicit abstractions such as rules are constructed on the basis of examples. This distinguishes ‘lazy learning’ from ‘eager learning’ approaches such as C4.5 (Quinlan 1993) or connectionist learning, in which abstract data structures are extracted from the input material (viz. decision trees in the case of C4.5, matrices of connection weights in the case of connectionist nets). IBL’s learning consists of storing examples (or instances) in memory. New items are classified by examining the examples stored in memory and determining the most similar example(s) according to a similarity metric. The classification of that nearest neighbor (or those nearest
neighbors) is taken as the classification of the new item. Thus, IBL assumes that similar instances have similar classifications. IBL falls within the class of supervised learning algorithms: the system is trained by presenting a number of input patterns (examples which are coded as a vector of features) together with their correct classification. Testing the system consists in presenting previously unseen word forms, suitably coded as feature vectors, and having the system predict their grammatical class. For the linguistic task in this study, viz. grammatical class assignment, IBL’s basic mode of learning is as follows: in the training phase, precategorized items are presented to the system in an incremental way. Thus, the system receives word forms and their grammatical class. These examples (the training items) are stored in memory. In a test phase, the system carries out the required task. In this case, IBL has to predict the grammatical class of a novel word form (a test item), i.e. a word form not encountered during training. For this prediction the system relies on an explicit procedure for determining the similarity of the test item with the training items present in memory. IBL determines the most similar training item in its memory, and the grammatical category of that nearest neighbor is predicted to be the grammatical category of the test item. The basic algorithm of IBL (Aha, Kibler & Albert 1991) determines similarity using a straightforward overlap metric for symbolic features: it calculates the overlap between a test item and each individual memory item on an equal/non-equal basis (see (1), where X and Y are examples or instances, and x_i and y_i are the values of the i-th attribute of X and Y; the per-attribute distance d is defined in (2)):

(1)  ∆(X, Y) = Σ_{i=1}^{n} d(x_i, y_i)

(2)  d(x_i, y_i) = 0 if x_i = y_i, else 1
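The overlap metric in (1)–(2) and the nearest-neighbor prediction step can be sketched as follows (a minimal illustration of our own, not the authors’ implementation; the toy stress-pattern memory is invented):

```python
# A minimal sketch (not the chapter's actual code) of the overlap metric of
# equations (1)-(2) and 1-nearest-neighbor classification over stored instances.

def overlap_distance(x, y):
    """Delta(X, Y): number of attributes on which x and y disagree."""
    return sum(0 if xi == yi else 1 for xi, yi in zip(x, y))

def classify(memory, test_item):
    """Predict the class of the most similar stored instance."""
    _, category = min(memory, key=lambda pair: overlap_distance(pair[0], test_item))
    return category

# Toy memory of disyllabic stress patterns ('2' = primary stress, '0' = none).
memory = [(('2', '0'), 'N'), (('0', '2'), 'V')]
```

On this toy memory, `classify(memory, ('2', '0'))` returns `'N'` and `classify(memory, ('0', '2'))` returns `'V'`.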
This similarity metric treats all attributes as equally important. Consequently, if there are irrelevant attributes, two similar instances may appear to be quite dissimilar because they have different ‘unimportant’ attributes. This is why we extended the basic algorithm with a technique for automatically determining the degree of relative importance of attributes. The basic idea is to modify the matching process of the test item with the memorized items in such a way that the importance of individual features is used in making the similarity judgment. In other words, features that are important for the prediction should be made to
bear more heavily on the similarity judgment. A weighting function G(a_i) was introduced in equation (1), yielding equation (3):

(3)  ∆(X, Y) = Σ_{i=1}^{n} G(a_i) · d(x_i, y_i)
The function computes for each attribute its information gain over the entire set of training items or memorized instances. This information theoretic measure is perhaps best known from Quinlan’s work on the induction of decision trees (Quinlan 1986, 1993). The information gain for a particular attribute a, or in other words, the information gained by knowing the value of attribute a, is obtained by comparing the information entropy of the entire training set (H(T)) with that of the training set restricted to a known attribute a (H_a(T)). The gain of information is the difference between these measures, as indicated in equation (4):
(4)  G(a) = H(T) − H_a(T)
The entropy of the training set is computed using equation (5): the entropy of the training set equals the average amount of information needed to identify the class of a single instance and is computed as the sum of the entropy of each class in proportion to its frequency in the training set.

(5)  H(T) = − Σ_{i=1}^{j} ( f(C_i) / |T| ) · log2( f(C_i) / |T| )
The entropy of the training set restricted to each value of a particular attribute is computed in a similar way, i.e., the average information entropy of the training set restricted to each possible value of the attribute is calculated using (5). As expressed in equation (6), the weighted sum of these entropy measures yields the expected information requirement.
(6)  H_a(T) = Σ_{i=1}^{n} ( |T_i| / |T| ) · H(T_i)
The information gain of an attribute (see (4)) expresses its relative importance for the required task. Used as a weighting function (as expressed in (3)) in determining similarity, attributes will not have an equal impact on determining
the nearest neighbor of a test item: instances that match on important attributes (attributes with a high information gain value) will eventually turn out to be nearer neighbors than instances that only match on unimportant attributes (attributes with a low information gain value). (For a more extensive discussion of our implementation of IBL, we refer to Daelemans & van den Bosch 1992; Daelemans et al. 1994; Gillis et al. in press).
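The information-gain weighting of equations (3)–(6) can be sketched as follows (our own stdlib-only illustration, not the authors’ implementation; the toy training data are invented):

```python
# Sketch of information-gain weighting (equations (3)-(6)); illustrative only.
from collections import Counter
from math import log2

def entropy(classes):
    """H(T) of equation (5): average information needed to identify a class."""
    total = len(classes)
    return -sum((n / total) * log2(n / total) for n in Counter(classes).values())

def information_gain(instances, classes, attribute):
    """G(a) = H(T) - H_a(T), equations (4) and (6)."""
    total = len(instances)
    # Partition the training set by the value of the attribute (the T_i of (6)).
    partitions = {}
    for inst, cls in zip(instances, classes):
        partitions.setdefault(inst[attribute], []).append(cls)
    h_a = sum(len(part) / total * entropy(part) for part in partitions.values())
    return entropy(classes) - h_a

def weighted_distance(gains, x, y):
    """Equation (3): gain-weighted overlap between two instances."""
    return sum(g for g, xi, yi in zip(gains, x, y) if xi != yi)

# Toy data: attribute 0 predicts the class perfectly, attribute 1 is noise.
X = [('2', 'a'), ('2', 'b'), ('0', 'a'), ('0', 'b')]
y = ['N', 'N', 'V', 'V']
gains = [information_gain(X, y, a) for a in range(2)]
```

On this toy data the perfectly predictive attribute receives gain 1.0 and the noise attribute gain 0.0, so only mismatches on the first attribute contribute to the weighted distance.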
4.
Phonological cues for English nouns and verbs
The experiments reported in this section are meant to investigate how accurately English word forms can be assigned to the classes noun and verb. First, the phonological cues identified by Kelly (1996) will be used in machine learning experiments in order to assess their predictive power. Next, alternative phonological cues — as represented in the training material provided to the algorithm — will be explored and their strength will be compared to that of Kelly’s phonological cues.

4.1 Data and method

All data for the experiments were taken from the CELEXv2 lexical database (Baayen, Piepenbrock & van Rijn 1991). This database was constructed on the basis of the Collins/Cobuild corpus (17,979,343 words), which was compiled at the University of Birmingham and augmented with material taken from both the Longman Dictionary of Contemporary English and the Oxford Advanced Learner’s Dictionary. The whole lexical database comprises 160,595 word forms, belonging to 52,447 lemmas. For the experiments, we restrict the database to nouns and verbs encountered at least once in the Collins/Cobuild corpus. We shall refer to this restricted database of nouns and verbs simply as ‘the database’. All experiments were run using the ‘leaving-one-out’ method (Weiss & Kulikowski 1991) to get the best estimate of the true error rate of the system. In this setup, each item in the dataset is in turn selected as the test item, while the remainder of the dataset serves as training set. This leads to as many simulations as there are items in the dataset: in each simulation the entire dataset is used for training, except for one item which is used for testing. The success rate of the algorithm is obtained by simply calculating the proportion of correct predictions over all test items.
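The leaving-one-out setup just described can be sketched as follows (our illustration, assuming any classifier with a `classify(memory, item)` interface such as the nearest-neighbor sketch; the four-item dataset is invented, not the CELEX material):

```python
# Sketch of the 'leaving-one-out' evaluation: each item is tested against a
# memory holding all remaining items. Illustrative data, not the real dataset.

def leave_one_out_accuracy(dataset, classify):
    """Proportion of items whose class is predicted from all other items."""
    correct = 0
    for i, (item, category) in enumerate(dataset):
        memory = dataset[:i] + dataset[i + 1:]  # train on everything else
        if classify(memory, item) == category:
            correct += 1
    return correct / len(dataset)

def nearest_neighbor(memory, item):
    """Plain overlap-metric 1-NN classifier used for the illustration."""
    dist = lambda x, y: sum(a != b for a, b in zip(x, y))
    return min(memory, key=lambda m: dist(m[0], item))[1]

data = [(('2', '0'), 'N'), (('2', '0'), 'N'), (('0', '2'), 'V'), (('0', '2'), 'V')]
```

Here `leave_one_out_accuracy(data, nearest_neighbor)` runs four simulations, one per held-out item, and returns the overall success rate.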
4.2 Experiment 1: Stress

In a first experiment, we investigate IBL’s ability to predict grammatical class using the stress pattern of word forms. Kelly (1996) claims that the large majority of disyllabic nouns are trochees while a majority of disyllabic verbs are iambs. For word forms such as “abstract”, which are orthographically ambiguous between a noun and a verb reading, not a single pair exists where the noun has iambic stress while the verb has trochaic stress. The experiment was set up to test Kelly’s claim that stress is a good predictor of grammatical class and to test the generality of that claim. For this purpose three datasets were constructed. The first dataset was restricted to orthographically ambiguous disyllabic words of the type “abstract”. The second dataset was compiled from all disyllabic word forms in the database, lifting the restriction that the noun and the verb should be orthographically identical. The third dataset was a selection from all noun and verb word forms in the database. Enlarging the dataset in this way will allow us to assess the predictive value of stress and to assess the generality of its predictive power. The first dataset consists of all disyllabic orthographical doublets found in the database (henceforth: “Disyllabic Homographs”). This dataset contains 212 nouns, 215 verbs and 16 ambiguous word forms. Each of these word forms is coded using two features, corresponding to the stress level of its syllables. In the encoding we use “2” to denote primary stress, “1” to denote secondary stress and “0” to indicate that the syllable bears no stress. The target categories, i.e., the grammatical classes, are coded as “N” for noun, “V” for verb, and “NV” for ambiguous word forms. For instance, in the training set, the word “abstract” is represented as the triple 〈2,0,N〉 for the noun reading and 〈0,2,V〉 for the verb reading.
The first element in the triple denotes the stress level of the first syllable, the second element the stress level of the second syllable, and the third element denotes the target category, viz. the grammatical class of the word. This means that in the learning phase the algorithm encounters the word form “abstract” twice, once as the pattern 〈2 0〉 with its target category 〈N〉, and once as the pattern 〈0 2〉 with its associated target category 〈V〉. The word form “uses” is presented only once, viz. as the pattern 〈2 0〉 and its associated target category 〈NV〉. The results in Table 1 indicate that the stress pattern strongly constrains the possible grammatical categories: solely on the basis of the stress pattern, the grammatical category can be accurately predicted in 82.6% of the cases. The number of correctly predicted nouns and verbs is almost identical: in both cases the success rate exceeds 84%. Not surprisingly, word forms which are phonologically indistinguishable, such as “being”, are very poorly predicted (NV in
Table 1). Although Kelly’s observation that not a single homograph exists where the noun has iambic stress and the verb trochaic stress applies to this dataset as well, this does not imply that stress makes perfect predictions. First of all, not all iambic word forms are verbs, and not all trochaic word forms are nouns: in word forms such as “uses”, the difference between the noun and verb reading lies in the (lack of) voicing of the first “s”, not in the stress pattern. In words such as “cashiers” the difference lies in the first vowel, which is reduced to schwa under the verb reading. Second, the presence of phonologically indistinguishable word forms in the dataset considerably complicates the prediction task.

Table 1. Success scores for Disyllabic Homographs

Category    # Word forms    # Correct    % Correct
N           212             179          84.43%
V           215             182          84.65%
NV           16               5          31.25%
Total       443             366          82.62%
These results indicate that word stress is a good predictor of a word’s grammatical class, provided that we restrict the dataset to disyllabic nouns and verbs which are orthographically ambiguous homographs (such as “abstract”). How general is this finding, or in other words, how robust is stress as a cue for predicting that a given word form is a noun or a verb? For this purpose we expanded the dataset to (a) a random selection of all disyllabic words, and (b) a random selection of all word forms of the database.

a. The dataset was expanded to include other disyllabic words than homographs (henceforth: “Disyllabic Word forms”). Since these are far more numerous, a random stratified selection of 5,000 items was made, consisting of 3,142 nouns (62.84%), 1,465 verbs (29.3%) and 393 ambiguous word forms (7.86%).

b. The dataset was expanded to include word forms of up to four syllables (henceforth: “All Word forms”): we selected a random stratified sample of 5,000 items from the database containing 3,136 nouns (62.72%), 1,457 verbs (29.14%) and 407 ambiguous word forms (8.14%). Since word forms were no longer of equal length, the coding scheme had to be adapted slightly: as in the previous experiments, we used one feature per syllable, indicating the syllable’s stress level (i.e., primary, secondary or no stress).
Words containing fewer than four syllables were padded to the left with null features (“-”). This implies that word forms are aligned to the right, which is consistent with current analyses in metrical phonology where stress in English is assigned from right to left. Thus, in the Disyllabic Homographs data “abstract” is represented as the triples 〈2,0,N〉 and 〈0,2,V〉. In the Disyllabic Word forms data “word form” is represented as 〈2,0,N〉. In the All Word forms data “phonology” is represented as the quintuple 〈0,2,0,1,N〉, in which the first four values represent the stress level of the first through the fourth syllable and the fifth element represents the target category of the word. A word form with fewer than four syllables such as “word form” is represented as the quintuple 〈−,−,2,0,N〉 in which the first two values represent empty slots, the third value represents the stress level of the prefinal syllable, the fourth value the stress level of the final syllable and the fifth value the target category of the word. Table 2 displays the results of the learning experiment with these two new datasets. In comparison with the Disyllabic Homographs (overall success score: 82.6%), the overall success scores for the Disyllabic Word forms and All Word forms are far inferior: 69.74% and 66.18% respectively. This drop in accuracy is most spectacular for verbs: whereas in the Disyllabic Homographs dataset verbs were correctly classified in almost 85% of the cases, this level of accuracy drops to 38.9% (Disyllabic Word forms) or less (All Word forms). In both cases, verbs were erroneously classified as nouns. The ambiguous NV category is never predicted correctly: apparently, stress is not an accurate diagnostic for this category.

Table 2. Comparison of success scores for Disyllabic Homographs, Disyllabic Word forms and All Word forms

Category    Disyllabic Homographs    Disyllabic Word forms    All Word forms
N           84.43%                   92.84%                   94.77%
V           84.65%                   38.91%                   23.13%
NV          31.25%                    0%                       0%
Total       82.62%                   69.74%                   66.18%
Taken together, these results show that stress is a good predictor in the case of disyllabic homographs, but already far less reliable a predictor when all disyllabic word forms are taken into account. When a still larger fragment of the lexicon is considered, the predictive value of stress further diminishes. It seems
then that Kelly’s characterization of stress as a reliable cue needs serious qualification: only in the case of disyllabic homographs can stress be labeled “reliable” as a predictor of grammatical class. For larger portions of the lexicon, the value of stress seems rather dubious. This conclusion is further strengthened when we study the Information Gain (see Section 3 for a formal definition) of the feature Stress in all three datasets. The Information Gain of stress is plotted in Figure 1 for the Disyllabic Homographs, Disyllabic Word forms and All Word forms datasets (restricted to the values for the final and the prefinal syllable).

[Bar chart: IG Value (0 to 0.8) for the Prefinal Syllable and Final Syllable, comparing Disyllabic Homographs, Disyllabic Word forms and All Word forms.]
Figure 1. Information Gain values for Disyllabic Homographs, Disyllabic Word forms and All Word forms
The Information Gain values for stress show a very clear picture: the value for Disyllabic Homographs is very high in comparison with the values for Disyllabic Word forms and All Word forms. This means that there is a high gain of information when the stress pattern of the word (at least the stress level of the last two syllables) is known in the case of the Disyllabic Homographs, while for the two other datasets the gain of information is far less. This difference in Information Gain value explains why the prediction of grammatical classes for
Disyllabic Homographs is significantly better than the prediction for Disyllabic Word forms and All Word forms. Stress is simply a far worse predictor in the latter two conditions than in the former condition.

4.3 Experiment 2: Less reliable cues

In addition to stress, Kelly (1996) identifies a number of “less reliable cues”: nouns (a) are generally longer than verbs (controlling for syllable number), (b) have more low vowels, (c) are more likely to have nasal consonants, and (d) contain more phonemes (again controlling for syllable number). In order to test the predictive value of these cues, we set up machine learning experiments similar to the ones in the previous section. Each of the cues was tested separately and an experiment with a combination of the cues was run. More specifically, the experiments cover vowel height (b), consonant quality (c) and number of phonemes (d). The first cue, duration (a), was not covered in our experiments, since the CELEX lexical database does not contain acoustic measurements that would allow a suitable encoding. For the sake of comparison, we use the same random stratified sample of 5,000 words in all the experiments we report on in this section, which is identical to the one from which the All Word forms dataset in the previous section was derived. The sample contains 3,136 nouns, 1,457 verbs and 407 ambiguous word forms. Word length varies from one to four syllables. We first describe the actual encoding of the various “less reliable cues” and then present a global overview of the results and a detailed comparison of the success scores.

(a) Vowel Height: Each syllable of a word form is coded for the vowel height of the nucleus. One feature per syllable is used, indicating vowel height of the syllable nucleus. Values for this feature are “high” (for the vowels /ɪ, iː, ʊ, uː/), “mid” (for the vowels /ɛ, ɜː/) and “low” (for the vowels /æ, ɑː, ɒ, ʌ, ɔː/).
For diphthongs, a difference is made between “closing” diphthongs, which involve tongue movement from mid or low to high, and “centering” diphthongs, where movement occurs from a peripheral to a central position. “Closing” diphthongs are /eɪ, aɪ, ɔɪ, əʊ, aʊ/, and “centering” diphthongs /ɪə, ɛə, ʊə/. For schwa and syllabic consonants the (dummy) feature value “neutral” was used. Words containing fewer than four syllables were padded to the left with null features.

(b) Consonant Quality: For the second experiment word forms are coded for consonant quality. Here, two features per syllable are used, indicating the presence (“true”) or absence (“false”) of nasals in the onset and the coda of the syllable. As in the previous experiment, shorter word forms are left-padded with null features, yielding a total of eight features.
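The extraction of the Vowel Height and Consonant Quality features can be sketched as follows (our own illustration; the vowel symbols are simplified ASCII stand-ins for the phoneme classes above, not the database’s actual transcription):

```python
# Simplified cue extraction; vowel classes use ASCII stand-ins ('I' = short i,
# 'i:' = long i, '&' = ash, etc.), not the real CELEX coding.
HIGH = {'I', 'i:', 'U', 'u:'}
MID = {'E', '3:'}
LOW = {'&', 'A:', 'Q', 'V', 'O:'}

def vowel_height(nucleus):
    """Vowel Height feature for a syllable nucleus."""
    for label, cls in (('high', HIGH), ('mid', MID), ('low', LOW)):
        if nucleus in cls:
            return label
    return 'neutral'  # schwa, syllabic consonants (diphthongs handled separately)

def has_nasal(cluster, nasals=('m', 'n', 'N')):
    """Consonant Quality feature: does the onset/coda contain a nasal?"""
    return any(seg in nasals for seg in cluster)
```

For a toy syllable with onset `('m',)`, nucleus `'&'` and coda `('n',)` (roughly “man”), `vowel_height` yields `'low'` and `has_nasal` is true for both onset and coda.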
(c) Number of Segments: For the third experiment, four features are used, indicating the number of segments per syllable.

(d) Combined Cues: For the fourth experiment all these cues were combined: word forms were coded for vowel height, consonant quality, number of segments and stress (the latter as explained in the previous section).

Table 3 shows the results of these experiments: in the second column the success rates for the Stress encoding are displayed (see Table 2 in the previous section), followed by those for Vowel Height, Consonant Quality, and Number of Segments. The last column contains the success scores for the combination of these cues (Combined Cues).

Table 3. Success scores for individual cues and a combination of all the cues

Category    Stress     Vowel Height    Consonant Quality    Number of Segments    Combined Cues
N           94.77%     91.61%          87.66%               94.64%                77.59%
V           23.13%     34.04%          33.91%               10.50%                61.37%
NV           0%         0%              0%                   0%                   11.96%
Total       66.18%     67.38%          64.86%               62.42%                67.68%
The success scores in Table 3 show that, taken individually, Stress, Vowel Height, Consonant Quality and Number of Segments are good predictors for the category of nouns. They are poor cues for verbs and completely non-diagnostic for the ambiguous noun/verb category. An analysis of the global success scores (‘Total’ in Table 3) reveals that Stress is not a significantly better cue than Vowel Height (χ² = 1.623, p = 0.2027) or Consonant Quality (χ² = 1.928, p = 0.1649). The result for the Number of Segments is significantly different from that for Stress (χ² = 15.397, p < 0.0001). Consequently, Kelly’s (1996) characterization of Stress as a robust cue and the three other cues as fairly weak ones is clearly contradicted by the outcome of these experiments. A second finding established by the total success scores in Table 3 is that, as hypothesized by Kelly (1996), a combination of the cues predicts the grammatical class of a word significantly better than any single cue in isolation (see the column Combined Cues in Table 3). On the basis of a combination of the four cues used in the experiment, the grammatical class of a word can be predicted with an accuracy of more than 67%. This score is significantly better than the score for Stress (χ² = 6.552, p = 0.0105), Consonant Quality (χ² = 15.587,
p < 0.0001) and Number of Segments (χ² = 42.022, p < 0.0001). However, there is no significant difference between the predictive power of Vowel Height and that of the Combined Cues (χ² = 1.654, p = 0.1984). We will come back to this result later. A third outcome is that no single cue is especially powerful in predicting a particular grammatical class. When we compare the accuracy of the predictions for the individual grammatical classes, all cues are far better in predicting nouns than in predicting the other two categories. It is not the case that a particular cue is especially reliable in predicting one category and another in predicting another category. Irrespective of what cue is used, the success score for nouns is higher than that for the other categories. Nouns can be most accurately identified: up to 94% of the nouns were correctly classified. Verbs gain most from a combination of the individual cues: the best score for the individual cues is 34% for Vowel Height, which is significantly less than the 61.37% for the Combined Cues. This means that a combination of the cues appears to be necessary for distinguishing nouns and verbs: when the cues are combined the success score for verbs attains a level approaching that for nouns. And herein lies the difference between the cue Vowel Height and the Combined Cues: their global success scores do not differ significantly, but verbs cannot be identified with any reasonable accuracy on the basis of Vowel Height alone, while this does appear to be the case when the cues are combined. The high success score for nouns (around 90%) in the single cue conditions and the decrease of the success score for nouns in the Combined Cues condition, taken together with the increase of the success score for verbs in that condition, suggests that individual cues are insufficient for distinguishing the categories of nouns and verbs, while differentiation takes place when the cues are combined.
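The pairwise significance comparisons above are chi-square tests on 2×2 tables of correct vs. incorrect predictions; the statistic can be sketched as follows (our illustration, with one degree of freedom and the p-value lookup omitted):

```python
# Sketch of a Pearson chi-square statistic for comparing two success rates
# (no continuity correction); illustrative, not the authors' analysis script.

def chi_square_2x2(correct_a, total_a, correct_b, total_b):
    """Chi-square statistic for a 2x2 correct/incorrect contingency table."""
    observed = [
        [correct_a, total_a - correct_a],
        [correct_b, total_b - correct_b],
    ]
    grand = total_a + total_b
    row_sums = [total_a, total_b]
    col_sums = [observed[0][j] + observed[1][j] for j in range(2)]
    stat = 0.0
    for i in range(2):
        for j in range(2):
            expected = row_sums[i] * col_sums[j] / grand
            stat += (observed[i][j] - expected) ** 2 / expected
    return stat
```

Assuming the reported percentages translate to 3,309 (Stress) and 3,369 (Vowel Height) correct items out of 5,000 each, this reproduces χ² ≈ 1.62, in line with the value reported above.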
In the Combined Cues condition, the ambiguous noun/verb category cannot be distinguished from the other categories on the basis of the cues selected. In conclusion, the results reported in this section indicate that there is more than an arbitrary relationship between English nouns and verbs and their phonological form. If our artificial learner had only made an ‘educated guess’, such as always predicting the most frequent category, a success rate of 62.72% (the percentage of nouns in the dataset) was to be expected. The mere fact that IBL reaches a score of more than 67% is suggestive of a closer link between grammatical classes and their phonological form. However, it may be argued that this conclusion is heavily biased because Kelly’s a priori analysis informed the coding of the learning material. In other words, IBL’s task may have been simplified because (the) relevant phonological cues were ‘precoded’ in the learning material so that the learner ‘knew’ in advance what information was
relevant for the task at hand, and this knowledge may have guided the discovery of the relevant cuts between grammatical classes. This issue will be taken up in the next experiment: if we do not provide the relevant phonological cues and give the learner only a string of phonemes as input, can the learner discover the relevant phonological cues for assigning words to their grammatical class?

4.4 Experiment 3: Phonological encoding

In Experiment 2 all encodings were obtained by extracting specific features from the syllabified phonological string: in the Vowel Height encoding, syllabic nuclei were examined for one particular dimension, viz. vowel height. In the Consonant Quality encoding, the same was done with onsets and codas for the dimension nasality. The results for the Combined Cues indicate that considering more than one dimension at the same time gives rise to more accurate predictions. To investigate the extent to which relevant dimensions are picked up by the learner without their a priori identification, we set up an experiment in which the ‘raw’ phonological material is used. In order to assess the importance of the cues identified by Kelly (1996), these cues are now contrasted with a plain segmental encoding. If the a priori cues define the only relevant dimensions, we expect equal or poorer performance from the segmental encoding. Equal performance would indicate that the phonological a priori cues provide all the information necessary for category assignment, and that substitution of these cues by the segmental material from which they were derived does not add any relevant information. Poorer performance may occur when the relevant oppositions, as defined by the cues, are obscured by the introduction of other, less relevant aspects of the segmental material. In this case, the learning algorithm would simply be unable to uncover the important dimensions.
If, on the other hand, the a priori cues do not exhaust all relevant oppositions, the impact on the results may go in either direction. In the worst case, a scenario similar to the one described above may occur: although all potentially useful information is somehow present in the encoding, the algorithm is unable to single out the relevant dimensions and performs less well than it should. If, however, the algorithm is capable of capitalizing on the extra information supplied in the segmental encoding, equal or better performance is expected. Equal performance would then indicate that the other oppositions, although relevant, do not enhance overall prediction accuracy. Better performance would indicate that the cues identified by Kelly (1996) leave out some important dimensions, which adversely affects the algorithm's success rate.
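The learning algorithm used throughout is instance-based learning (IBL): store the training instances and classify new items by similarity to stored ones. As a rough illustration of this family of methods, here is a minimal 1-nearest-neighbour classifier with a feature-overlap distance; the actual IBL variant, its distance metric and any feature weighting are not specified in this excerpt, so every detail below is an illustrative simplification.

```python
# Minimal instance-based (memory-based) classifier in the spirit of IBL:
# store all training instances; classify a new item with the category of
# the stored instance at the smallest feature-overlap distance.
# This is a sketch, not the authors' exact algorithm.

def overlap_distance(a, b):
    """Number of feature positions on which two instances disagree."""
    return sum(1 for x, y in zip(a, b) if x != y)

class NearestNeighbourLearner:
    def __init__(self):
        self.memory = []  # list of (feature_tuple, category) pairs

    def train(self, instances):
        self.memory.extend(instances)

    def classify(self, features):
        # pick the stored instance with minimal overlap distance
        _, category = min(self.memory,
                          key=lambda item: overlap_distance(item[0], features))
        return category

# toy features (hypothetical): (stressed_first_syllable, ends_in_nasal)
learner = NearestNeighbourLearner()
learner.train([(("yes", "no"), "N"),
               (("no", "yes"), "V")])
print(learner.classify(("no", "yes")))  # prints "V"
```

With realistic data the memory would hold thousands of encoded word forms, and ties between equally distant neighbours would need an explicit policy; the sketch simply takes the first minimum.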
PREDICTING GRAMMATICAL CLASSES
In Experiment 3 the same random stratified data set of 5,000 items as in Experiment 2 was used, to facilitate comparison of results. The dataset was coded as follows: (a) For the first encoding, three features per syllable were used, corresponding to the onset, the nucleus and the coda (henceforth: ONC), yielding twelve features per word. As in the previous experiments, words containing fewer than four syllables were left-padded with null features. (b) For the second encoding, stress was added to the ONC encoding (henceforth: ONC + Stress), to allow comparison with Experiment 1. (c) For the third encoding, the Combined Cues of Experiment 2 were added to the ONC encoding (henceforth: ONC + Combined Cues). The results of the experiments are displayed in Table 4. The total success scores range from 73.86% for ONC, over 78.04% for ONC + Stress, to 78.16% for ONC + Combined Cues. For the sake of comparison, Table 4 also includes the success scores of the Combined Cues (Experiment 2, see Table 3). A comparison of the ONC encoding with the Combined Cues encoding reveals that a plain phonemic representation of the word forms results in substantially superior predictions. The global success score of the ONC encoding (73.86%) is significantly better than the one for the Combined Cues (67.68%; χ2 = 34.039, p < 0.0001). This also holds for the success scores for the individual categories (N: 79.24% vs. 77.59%; V: 75.7% vs. 61.37%; NV: 25.8% vs. 11.96%). Thus, significantly better results are obtained when the learning material is presented as a syllabified string of segments, which implies that the cues identified by Kelly do not provide all the relevant information, since otherwise the encoding of the word forms as strings of segments would not have resulted in a significant increase of the success scores.
Further qualitative analyses will have to reveal whether IBL indeed uses cues similar to those established by Kelly, and/or whether the algorithm bases its predictions on (entirely) different information.

Table 4. Success scores for phonological encodings

Category   ONC      ONC + Stress   ONC + Combined Cues   Combined Cues
N          79.24    84.18          83.74                 77.59%
V          75.70    79.41          80.65                 61.37%
NV         25.80    25.80          26.29                 11.96%
Total      73.86    78.04          78.16                 67.68%
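The significance comparisons above contrast the success counts of two encodings on equally sized samples. A generic 2×2 chi-square statistic (without continuity correction) over correct/incorrect counts can be computed as follows; the chapter does not state the exact test variant used, so values obtained this way need not reproduce the reported statistics.

```python
# 2x2 chi-square statistic (no continuity correction) comparing the
# success counts of two classifiers/encodings. Illustrative only: the
# exact test variant used in the chapter is not specified here.

def chi_square_2x2(correct1, total1, correct2, total2):
    a, b = correct1, total1 - correct1
    c, d = correct2, total2 - correct2
    n = a + b + c + d
    numerator = n * (a * d - b * c) ** 2
    denominator = (a + b) * (c + d) * (a + c) * (b + d)
    return numerator / denominator

# toy example: 30/100 correct vs. 50/100 correct
print(round(chi_square_2x2(30, 100, 50, 100), 3))  # prints 8.333
```

The resulting statistic is compared against the chi-square distribution with one degree of freedom (the conventional 5% threshold is 3.84).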
A second finding is that stress is indeed a relevant factor for predicting nouns and verbs: the success score for ONC, viz. 73.86%, increases to 78.04% (χ2 = 23.914, p < 0.0001) when, in addition to the segmental material, suprasegmental information (viz. word stress) is added to the encoding of the training material. On the other hand, adding in the other cues (viz. Vowel Height, Consonant Quality and Number of Segments) does not bring about a significant increase in accuracy (ONC + Stress vs. ONC + Combined Cues: 78.04 vs. 78.16, χ2 = 0.021, p = 0.8846). This observation indicates that those cues do not add any relevant information beyond that which IBL already uses in the ONC + Stress encoding. In other words, the higher-level phonological information that these cues bring to the task of category assignment does not significantly affect performance, which implies that this information is, in fact, redundant with respect to the segmental encoding. In conclusion, the experiments in this section show that the best performance in grammatical class prediction is attained by presenting the 'raw' phonological facts to the learning algorithm, i.e., syllabified strings of segments and the stress pattern of the word form. Adding higher-level phonological information does not lead to significantly better predictions. The fact that using the a priori cues results in inferior performance indicates that, in abstracting away from the actual phonological facts, important information for solving the task is lost. Now that we have shown that there is a close link between the segments and the stress pattern of English nouns and verbs, two questions about the generalizability of this finding come to mind. First, can this link also be shown to exist for other languages and, if so, how tight is it? Secondly, can the link be shown to exist for all open class words, i.e., also for adjectives and adverbs in addition to nouns and verbs?
The following sections address these questions. After extending the approach to another language, viz. Dutch, we broaden the set of grammatical classes to be predicted to all open classes.
5. Generalization
In the experiments reported on in the previous section, the predictability of English nouns and verbs (and the ambiguous noun/verb category) was investigated. It was shown empirically that the 'raw' phonemic encoding supplemented with prosodic information (the stress pattern) yields the highest success score, and that adding higher-level phonological information does not improve the success score significantly. In this section, we will investigate to what extent these findings can be generalized. First, it will be examined whether
similar results can be obtained for another language. Mainly due to the immediate availability of appropriate data in the CELEX database, Dutch was chosen for this purpose. Secondly, it will be examined whether the predictability of grammatical classes can be extended to all open classes, in English as well as in Dutch.

5.1 How general are Kelly's phonological cues?

In the previous section, the use of phonological cues for grammatical class assignment was investigated. More specifically, we investigated Kelly's (1996) claim that there are cues with varying predictive power for the categories noun and verb in English. Kelly (1996) assumes that the phonological cues are likely to be language-specific. Intuitively plausible as this assumption may be, it seems to be contradicted by the investigations of Morgan, Shi and Allopenna (1996) and Shi, Morgan and Allopenna (1998). They investigated whether various "presyntactic cues" (such as number of syllables, presence of a complex syllable nucleus, presence of a syllable coda, and syllable duration, to name only a few of the phonologically relevant cues used by Shi et al. (1998: 174)) are sufficient to guide the assignment of words to rudimentary grammatical categories. Their investigation of English (Morgan et al. 1996), Mandarin Chinese and Turkish (Shi et al. 1998) shows that "sets of distributional, phonological, and acoustic cues distinguishing lexical and functional items are available in infant-directed speech across such typologically distinct languages as Mandarin and Turkish" (Shi et al. 1998: 199). Thus it may well be the case that the cues identified by Kelly are crosslinguistically valid, albeit to a different extent for each language. This would be in line with the findings of Shi et al.
(1998: 169): "Despite differences in mean values between categories, distributions of values typically displayed substantial overlap." In order to explore this issue, we conducted experiments similar to those reported in the previous section, but using Dutch word forms instead of English ones. In Experiment 4, the cues identified by Kelly (1996) are used in a grammatical class assignment task involving Dutch nouns and verbs. In Experiment 5, the conclusion from Experiment 3, namely that predictions based on "raw" phonemic material yield better results than predictions based on higher-level phonological cues, is tested on the Dutch material.

5.1.1 Experiment 4: Predicting grammatical classes in Dutch using phonological cues

The aim of this experiment is the same as that of the second experiment: we investigate IBL's ability to predict grammatical class using the phonological cues identified by Kelly (1996), viz. the stress pattern of a word form, the quality of
the vowels, the quality of the consonants and the number of phonemes (controlling for syllable number). These cues are represented in the training material of the machine learning experiments, individually as well as in combination, as was the case for the experiments with English word forms. In Experiment 4, Dutch word forms are used. All the Dutch data for the experiments were taken from the CELEX lexical database. This database was constructed on the basis of the INL corpus (42,380,000 word tokens) compiled by the Institute for Dutch Lexicology in Leiden. The whole database contains 124,136 Dutch lemmas and 381,292 word forms. A random stratified set of 5,000 word forms was selected from the CELEX lexical database for this experiment. The sample contains 3,214 nouns (64.28%), 1,658 verbs (33.16%) and 128 (2.56%) ambiguous word forms. Word length varies from one to four syllables. The encoding schemes for the training data closely mirror those used in Experiments 1 and 2. For the cue Stress, each word form was encoded using four features, the value of which indicated the stress level of the syllable (primary or no stress). Since CELEX, unfortunately, does not code Dutch word forms for secondary stress, this feature value was lacking from our encoding as well. For the cue Vowel Height, each syllable was coded for a single feature corresponding to the syllable nucleus. Values for this feature were based on Kager's (1989) description of the Dutch vowel system: "high" (/i˜, i˜˜, u˜, y˜, y˜˜, I, }/), "mid" (/7˜, œ˜, #˜, e˜, f˜, o˜, 7, œ, f/), "low" (/a˜, "/), "diph" (/7I, "u, œy/), "neutral" (/6/). Standard Dutch does not have syllabic consonants. For the cue Consonant Quality, two features per syllable were used, one each for the presence or absence of nasals in the onset and in the coda. For the cue Number of Segments, the number of segments in each syllable was coded.
Moreover, as was the case in the experiment with English word forms, word forms with fewer than four syllables were left-padded with null features, so as to satisfy the fixed-length input required by the implementation of the algorithm. Table 5 shows the results of these experiments. The first column shows the target categories. The second column gives the success scores for the cue Stress, followed by Vowel Height, Consonant Quality and Number of Segments. The last column contains the success scores for the combination of the cues.

The algorithm reaches a total success score that ranges from 58% for Stress to 75% for Combined Cues. If we take as a base line the success score of the algorithm when it would always predict the most frequent category, i.e., the category Noun, the base line would be 64%. Stress and Consonant Quality score significantly below this base line (Stress: χ2 = 42.2832, p < 0.0001; Consonant Quality: χ2 = 28.8183, p < 0.001), whereas Vowel Height and Number of Segments score above the base line (Vowel Height: χ2 = 5.2483, p = 0.0221; Number of Segments: χ2 = 7.0294, p = 0.0080). The Combined Cues show a considerable increase of the success score, i.e., the success score for the combined cues is significantly higher than that for the individual cues (Stress–Combined Cues: χ2 = 335.488, p < 0.0001; Vowel Height–Combined Cues: χ2 = 295.920, p < 0.0001; Consonant Quality–Combined Cues: χ2 = 295.920, p < 0.0001; Number of Segments–Combined Cues: χ2 = 86.099, p < 0.0001).

Table 5. Success scores for individual cues and a combination of all the cues (Dutch data)

Category   Stress    Vowel Height   Consonant Quality   Number of Segments   Combined Cues
N          76.88%    75.67%         85.19%              74.92%               81.77%
V          23.70%    52.29%         11.58%              56.21%               67.97%
NV         25.78%    18.75%         17.97%               0.0%                 4.69%
Total      57.94%    66.46%         59.06%              66.80%               75.22%

A comparison of the results for Dutch with those for English (see Section 4.3) reveals some striking similarities. First of all, in both languages nouns appear to be much easier to predict than verbs on the basis of phonological cues. The ambiguous noun/verb category appears to be impossible to delineate and, hence, to predict. Secondly, a combination of the partially predictive cues yields a significantly better success score than any individual cue taken in isolation. This means that even cues that do not seem very informative in isolation contribute valuable information when combined with other information. A third and very remarkable finding is that, even though the cues were initially designed for English nouns and verbs, they yield a fairly good result in predicting Dutch nouns and verbs. The success score for Dutch is even better than that for English: 68% for English and 75% for Dutch. This suggests at least two interpretations. The cues described by Kelly (1996) may reveal more than mere idiosyncrasies of the English language: even though Dutch is typologically closely related to English, the cues identified for English also hold for Dutch. Further investigation of other languages could shed light on the question whether in the phonological systems of the languages of the world there are particular dimensions that correlate with particular grammatical classes.
Alternatively, it may well be the case that the relationship between Dutch phonology and syntax (grammatical classes) is so transparent that even very weak cues allow for fairly good predictions.
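The majority-class base line invoked above follows directly from the sample's category counts; using the counts of the Dutch sample described earlier (3,214 nouns, 1,658 verbs, 128 ambiguous forms out of 5,000):

```python
# Majority-class base line: the success rate of a learner that always
# predicts the most frequent category. Counts are those of the Dutch
# sample described in the text.
counts = {"N": 3214, "V": 1658, "NV": 128}
baseline = max(counts.values()) / sum(counts.values())
print(f"{baseline:.2%}")  # prints "64.28%" (most frequent category: noun)
```

Any encoding whose total success score does not beat this figure carries no usable phonological information beyond raw category frequency.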
The first possibility, viz. that there are phonological cues to grammatical class that transcend language idiosyncrasies, requires a comparison of languages that goes far beyond the scope of this paper. Nevertheless, the results for Dutch are very striking in the sense that, in the literature reviewed in the introductory section, there appeared to be a consensus that phonological cues do not qualify for anything more than language-idiosyncratic tendencies. The experiments reported on in this section show that, even with cues defined for English, Dutch nouns and verbs can be predicted correctly in three out of four cases, a success score that is higher, incidentally, than the one obtained for English. This seems to suggest a more than language-specific link between phonological and syntactic structure. At this point we do not want to suggest that Kelly's cues qualify for universal validity. On the contrary, the only suggestion is, in line with Shi et al. (1998), that it may be fruitful to start the quest for cues that link phonology and syntax in a more than idiosyncratic way. That Kelly's cues may be improved upon in such an undertaking can be exemplified by a simple additional experiment we performed: if we reformulate the cue Consonant Quality (the presence or absence of nasals in the onset and coda of syllables) in terms of the cluster types that occur in those positions, the success rates for both English and Dutch improve significantly: from 64.86% to 68.70% for English and from 59.06% to 72.20% for Dutch. In other words, simply taking into account what qualifies as a legal syllable onset or coda permits the algorithm to predict the grammatical class membership of 74.8% of the Dutch nouns and 71.2% of the Dutch verbs, and of 75.67% of the English nouns and 66.3% of the English verbs. The second possibility hinted at above was that the link between phonology and parts of speech is simply more transparent in Dutch than in English.
This brings us to the question of how well nouns and verbs can be predicted in Dutch. In the experiments on grammatical class assignment in English, it was shown that using "raw" segmental material instead of cues abstracted from that material yielded significantly better results in terms of success scores. In the following experiment, this segmental encoding of the training material was applied to Dutch word forms.

5.1.2 Experiment 5: Phonological encoding of Dutch word forms

Experiment 5 was designed to test whether, as in Experiment 3 involving English word forms, a phonological encoding of Dutch word forms yields a better success score than the other encodings, in which abstract features such as vowel height were used to code the learning material. The same random stratified data set of 5,000 items as in Experiment 4 was used. The data were coded in a similar way as indicated for the experiment with English word forms (see
Section 4.4), viz. (a) an ONC encoding in which the segments in the onset, the nucleus and the coda of each syllable are taken as the values of the features Onset, Nucleus and Coda; (b) an ONC + Stress encoding in which, in addition to the ONC encoding, the stress level (primary or no stress) of each syllable is coded; and (c) an ONC + Combined Cues encoding in which, in addition to the ONC encoding, the cues identified by Kelly were used. The results of the experiment are displayed in Table 6. It appears that the three encodings yield very similar total success scores: 82% for the ONC encoding and around 83% for the ONC + Stress and the ONC + Combined Cues encodings. The main finding of the experiment is that in more than 80% of the cases the segmental material suffices to classify nouns and verbs appropriately. Only the difference between the ONC encoding and the ONC + Combined Cues encoding is statistically significant (χ2 = 4.459, p = 0.0347). This means that adding the suprasegmental feature stress to the ONC encoding does not lead to a significantly better prediction of the grammatical classes noun and verb in Dutch, in contrast to the results obtained for English. This may be due to the fact that the stress encoding for Dutch, which only indicates primary or no stress, is less informative than the one for English, which also indicated secondary stress. The individual categories are identified quite accurately on the basis of the segmental material as well: nouns (N) and verbs (V) are classified correctly in approximately 85% of the cases. However, the ambiguous NV category hardly reaches 30%. In other words, IBL is able to detect the relevant cues for assigning nouns and verbs to their appropriate class in more than eight out of ten cases, which is well above chance level. The ambiguous words are hard to classify: there does not appear to be robust information in the segmental material for distinguishing nouns and verbs from the ambiguous noun/verb word forms.
A second important finding is that a segmental encoding yields significantly better results than an encoding in terms of more abstract phonological features. The column Combined Cues in Table 6 is taken from Experiment 4 (see Table 5), in which the data were encoded with Kelly's cues. The success score of the ONC encoding, 82%, is significantly higher (χ2 = 76.79, p < 0.0001) than the success score for the Combined Cues encoding (68%). This finding replicates the one found for English: in both languages the segmental encoding yields significantly better results than the encoding in terms of more abstract phonological features.

Table 6. Success scores for Dutch word forms: Phonemic encoding of nouns and verbs

Category   ONC       ONC + Stress   ONC + Combined Cues   Combined Cues
N          83.82%    85.28%         85.97%                77.59%
V          83.66%    84.08%         84.26%                61.37%
NV         29.69%    29.69%         29.69%                11.96%
Total      82.38%    83.46%         83.96%                67.68%

A comparison of the results for English and Dutch reveals that the relationship between the phonological form of a word and its grammatical class is more transparent in Dutch than in English. The ONC encoding yields a success score of 74% for English and 82% for Dutch, and adding stress to the encoding increases the success score to 78% for English and 84% for Dutch.

5.2 Experiment 6: Predicting open classes in English and Dutch

In the experiments presented in the previous sections, only nouns and verbs were considered. We investigated to what extent segmental and suprasegmental information was sufficient to predict whether a particular word is a noun, a verb, or belongs to the ambiguous noun/verb category. In this section, we consider all open grammatical classes, viz. verbs, nouns, adjectives, and adverbs, and all ambiguous categories (such as noun/verb, verb/adjective, noun/verb/adjective, etc.). In this sense, the experiments reported in this section are complementary to those reported by Morgan et al. (1996) and Shi et al. (1998), who show that there are phonological (as well as other) cues that make open class and closed class items (lexical and functional items) detectable in principle. In our experiments, we investigate whether, on the basis of phonological information, a further differentiation of the open class items is possible in principle. The data for these experiments were extracted from the CELEX lexical database. Again a random stratified sample of 5,000 items was selected for each language.
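The "random stratified sample" used throughout keeps each category's share in the sample equal to its share in the full database. A minimal sketch, with illustrative population counts (not the CELEX figures):

```python
# Random stratified sampling: draw a fixed-size sample in which each
# category keeps its relative frequency from the full lexicon.
# Population counts below are illustrative, not the CELEX figures.
import random

def stratified_sample(items_by_category, sample_size):
    total = sum(len(items) for items in items_by_category.values())
    sample = []
    for category, items in items_by_category.items():
        # proportional allocation, rounded per category
        k = round(sample_size * len(items) / total)
        sample.extend(random.sample(items, k))
    return sample

population = {"N": [f"noun{i}" for i in range(6400)],
              "V": [f"verb{i}" for i in range(3300)],
              "NV": [f"ambi{i}" for i in range(300)]}
sample = stratified_sample(population, 1000)
print(len(sample))  # prints 1000
```

Per-category rounding can make the sample slightly smaller or larger than requested when proportions do not divide evenly; with the counts above the allocations (640, 330, 30) sum exactly to 1,000.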
Table 7 gives an overview of the frequency distribution of the different word classes. These distributions, as extracted from the CELEX lexical database, are based on a count of 42,380,000 Dutch word forms and 17,979,343 English word forms. Table 7 shows that in both languages more than half of the word forms are nouns and around one quarter are verbs. For these categories, Dutch has approximately 4% more word forms than English. In both languages there are around 12% adjectives and fewer than 5% adverbs. An important difference is the relative frequency of the ambiguous categories: 94.5% of the word forms are unambiguous in Dutch, against 90% in English. The only ambiguous category in Dutch that outnumbers its English counterpart is the Adj/V category: Dutch has relatively many V/Adj word forms, mainly participles.
Table 7. Frequency distribution of English and Dutch word forms in CELEX

Category       English   Dutch
N              50.46%    54.4%
V              23.14%    27.28%
Adj            12.14%    11.92%
Adv             4.22%     0.92%
N/Adj           1.66%     0.84%
N/V             6.44%     2.08%
N/Adv           0.06%     0.04%
Adj/V           0.98%     2.28%
Adj/Adv         0.42%     0.02%
Adv/V           0.04%     0%
N/V/Adj         0.22%     0.22%
N/Adj/Adv       0.14%     0%
V/Adj/Adv       0.06%     0%
N/V/Adv         0%        0%
N/V/Adj/Adv     0.02%     0%
Each word was encoded according to the ONC + Stress scheme, which emerged as the most powerful in the previous experiments; that is, every item was represented as a syllabified string of phonemes, and the stress level of each syllable was added. Table 8 displays the success scores for English and Dutch. The global success score is 66.62% for English and 71.02% for Dutch, which is well above chance level in both cases. The unambiguous categories reach an accuracy of 71.77% in English and 74.21% in Dutch. In both languages ambiguous categories are hard to predict: a success score of 20.6% in English and 15.69% in Dutch. For the ambiguous categories, Table 8 contains a second success score between brackets. That score is calculated using the CELEX frequencies for word forms: if a word form is ambiguous between two categories, e.g., it could be a noun as well as a verb, the noun and the verb reading are usually not equally frequent. In calculating the bracketed success score, we took this frequency difference into account in the sense that if the algorithm predicted the most frequent category, that prediction was taken to be correct. In doing so, we allowed for underextension in category assignment, a phenomenon documented in children's language by Nelson (1995). When we allow for underextension, the algorithm's success score increases to 68.42% for English and to 71.78% for Dutch. The results of this experiment show that the segmental material, enriched with information about a word form's stress pattern, allows its word class to be predicted with an accuracy of up to approximately 70%.

Table 8. Success scores for all open classes (Dutch and English data)

Category       English              Dutch
N              76.30%               78.60%
V              71.48%               75.37%
Adj            50.58%               55.70%
Adv            80.09%               21.74%
N/Adj           1.20% (40.96%)       4.76% (45.24%)
N/V            29.50% (42.24%)      17.31% (35.58%)
N/Adv           0.00% (0.00%)        0.00% (50.00%)
Adj/V           8.16% (28.57%)      20.18% (20.18%)
Adj/Adv        14.29% (28.57%)       0.00% (0.00%)
Adv/V           0% (0%)             /a
N/V/Adj         0.00% (27.27%)       0.00% (9.09%)
N/Adj/Adv       0.00% (0.00%)       /
V/Adj/Adv       0.00% (0.00%)       /
N/V/Adv        /                    /
N/V/Adj/Adv     0.00% (0.00%)       /
Total          66.62% (68.42%)      71.02% (71.78%)

a A slash means that this category was not represented in the data.

In comparison with the previous experiments, in which only the categories noun and verb were involved, this success score is significantly lower (approximately 78% for English and 84% for Dutch). However, a success rate of 7 out of 10 means, at least, that there is a more than arbitrary relationship between phonology and grammatical class, a relationship that the algorithm can exploit given all open class categories in the language. A consistent finding in the experiments is that in Dutch the link between phonology and grammatical class is significantly more transparent than in English. When we compare the success scores for nouns and verbs (for instance in the ONC + Stress encoding, see Table 4 for English and Table 6 for Dutch), Dutch scores consistently higher than English (nouns: 84% for English vs. 85% for Dutch; verbs: 79% for English vs. 84% for Dutch; noun/verb: 26% for English vs. 30% for Dutch; overall success score: 78% for English vs. 83% for Dutch). The results in Table 8, which comprise all open class word forms, point in the same direction: the overall success score for Dutch is significantly higher than the one for English, and the same also holds for the success scores for the individual categories. Adverbs are the main exception to this observation: here
English (80%) scores significantly better than Dutch (22%), which is mainly due to the fairly consistent marking of English adverbs with the suffix -ly. The difference in transparency between English and Dutch was also found in related work (see Gillis, Durieux & Daelemans 1996), where the main focus was on differences in morphological structure and their impact on the phonology/grammatical class connection in English and Dutch. In machine learning experiments using the IBL algorithm, Gillis et al. (1996) investigated the predictability of the grammatical class of morphologically simplex and complex word forms. More specifically, (a) monomorphemes and morphologically complex words (compounds and derivations) and (b) uninflected and inflected word forms were compared. A combination of these factors yielded four categories for each language. The results of the machine learning experiments, in which an ONC + Stress encoding was used with 8,000 training items in each condition, are summarized in Table 9.

Table 9. Success scores for word forms of different morphological complexity in English and Dutch

English                                        Uninflected   Inflected
Morphologically simplex:
monomorphemes                                  51%           64%
Morphologically simplex & complex:
monomorphemes & compounds & derivations        58%           59%

Dutch                                          Uninflected   Inflected
Morphologically simplex:
monomorphemes                                  79%           74%
Morphologically simplex & complex:
monomorphemes & compounds & derivations        89%           68%

A comparison of the results in Table 9 shows that in each corresponding cell Dutch scores significantly higher than English. Irrespective of the level of morphological complexity represented in the learning material, the relationship between the phonological structure of word forms and their grammatical class was easier to detect in Dutch than in English. Hence, the interface appears to be more transparent in Dutch. In addition, it can be seen in Table 9 that in both languages compounding and derivation lead to better results: in English the grammatical class of morphologically simplex words can be accurately predicted in 51% of the cases, and the success rate increases to 59% for morphologically complex words. A similar picture holds for Dutch: an increase from 79% for simplex words to 89% for complex words. Inflection does not have the same effect in both languages. In English it has a disambiguating effect: the success rate of simplex and complex words increases when inflection is added, from 51% to 64% for simplex words and from 58% to 59% for complex words. Dutch shows the reverse picture: inflection appears to add ambiguity, and hence the success rates decrease, from 79% to 74% for simplex words and from 89% to 64% for complex words.

In all the previous experiments, two important factors were disregarded, viz. (a) the amount of training data and (b) the frequency of the word forms. Indeed, all the experiments were performed with 5,000 training items. It may well be that, with 5,000 items, the algorithm reached its peak performance in the noun–verb experiments, in which only two unambiguous and one ambiguous category had to be predicted. However, it is quite possible that, when all open classes are involved, the maximum accuracy has not yet been reached, e.g. because certain categories are underrepresented in the training data. In the next section, we report on a learning experiment in which the number of training items is systematically increased, thus allowing a study of the algorithm's learning curve. A second factor that was neglected in the previous experiments was the frequency of the word forms.
The correlation between irregular forms and their relative frequency is a well-known phenomenon: irregular forms (e.g., irregular past tense verb forms) tend to be situated in the higher frequency classes, while the lower the frequency of a word form, the more likely it is to be regular. In the previous experiments, the frequency of the word forms in the training sets was not controlled for. In the next section, the role of frequency will be further investigated.

5.3 Learning curves and issues of item frequency

In the previous experiments (2–6), all datasets contained the same 5,000 word forms. This methodology was used in order to reliably compare the impact of different coding schemes. In Experiment 7 we turn to the relationship between the number of items in the training set and the accuracy of the predictions: what is the evolution of the algorithm's predictions vis-à-vis the number of training items? In other words, is there a learning effect when more items are added to the
training set, or does the algorithm hit upon the right cues with only a relatively small set? A second issue that will be investigated concerns the relationship between the frequency of words and the accuracy of predictions: not only the number of training items but also their frequency may influence the algorithm's performance. In Experiment 8, the relationship between several frequency regions in the lexicon and the algorithm's performance will be examined.
5.3.1 Experiment 7: Learning curve

The data for this experiment were once more collected from the CELEX lexical database. Datasets with English and Dutch word forms were formed incrementally, starting with 500 items. At each step 500 items were added; from 10,000 items onwards, 1,000 items were added per step. Each dataset represented a random stratified selection of word forms in which the relative frequencies of the various categories remained constant and in agreement with their relative frequencies in the entire CELEX database. Word forms were coded according to the ONC + Stress scheme, i.e., a word form was represented as a syllabified string of segments and for each syllable the stress level was added. The results are displayed in Figure 2 and Figure 3.

Figure 2. Global success scores for English and Dutch open class word forms (y-axis: % correctly predicted; x-axis: number of training items × 1000)
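The incremental design just described can be sketched as follows. This is a minimal illustration, not the authors' code: `lexicon` is a stand-in for the CELEX word-form/category pairs, and the actual experiments classified each sample with the IBL algorithm.

```python
import random
from collections import defaultdict

def stratified_sample(lexicon, n):
    """Draw about n items from lexicon (a list of (word_form, category) pairs)
    while keeping each category's relative frequency constant, as in the
    stratified selections of Experiment 7."""
    by_cat = defaultdict(list)
    for pair in lexicon:
        by_cat[pair[1]].append(pair)
    total = len(lexicon)
    sample = []
    for cat, items in by_cat.items():
        k = round(n * len(items) / total)  # this category's proportional share
        sample.extend(random.sample(items, min(k, len(items))))
    return sample

# Training-set sizes: steps of 500 up to 10,000 items, steps of 1,000 thereafter.
sizes = list(range(500, 10_001, 500)) + list(range(11_000, 20_001, 1_000))
```

A learning curve like the one in Figure 2 would then be obtained by training and testing once per size in `sizes`.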
GERT DURIEUX & STEVEN GILLIS
Figure 3. Success scores for English and Dutch open class word forms (a: nouns, b: verbs, c: adjectives, d: adverbs, e: ambiguous)
Figure 2 shows the global success scores for English and Dutch as a function of the number of training items (range 500–20,000). Figure 3 shows the results for the individual grammatical categories (nouns, verbs, adverbs, adjectives) and the ambiguous categories. Figure 2 shows that for English as well as for Dutch there is a significant increase of the success score in predicting open class categories: for English the score increases from 57% with 500 training items to 69% when trained with 20,000 items, where it stabilizes. For Dutch there is an increase from 59% with 500 items to 75% with 20,000 items; even at that last point, the success score is still rising slightly. The latter point is clearly shown in Figure 3: while the success scores for Dutch and English nouns, verbs and ambiguous categories have stabilized well before the 20,000 end point (Dutch nouns: 83%, English nouns: 79%, Dutch verbs: 78%, English verbs: 74%, Dutch ambiguous: 15%, English ambiguous: 19%), adjectives and adverbs show a different picture. At least for Dutch, the success score is still increasing at the end point (64% for adjectives and 40% for adverbs). English adverbs, on the other hand, reached their peak success score (82%) with as few as 2,500 items, and the success score for English adjectives has stabilized at around 57%.

5.3.2 Experiment 8: Word form frequency

Experiment 8 was designed to assess the influence of a word form's frequency class on the predictability of its grammatical category. In the previous experiment the relative frequencies of the open classes were kept constant, viz. in each training set they reflected the frequency of each category in the entire database. However, the grammatical categories are not equally distributed over the entire lexicon. This can easily be shown as follows. We divided the open class word forms of the CELEX lexical database into frequency classes according to the scheme developed by Martin (1983), as specified by Frauenfelder, Baayen, Hellwig and Schreuder (1993: 786). Six frequency classes are identified, ranging from very frequent word forms to extremely rare ones: "very frequent/common" (f ≥ 1:10,000), "frequent/common" (1:100,000 ≤ f < 1:10,000), "upper neutral" (1:500,000 ≤ f < 1:100,000), "lower neutral" (1:1,000,000 ≤ f < 1:500,000), "rare" (1:10,000,000 ≤ f < 1:1,000,000) and "extremely rare" (f < 1:10,000,000). Table 10 gives an overview of how the open classes are distributed over these frequency classes. For the sake of convenience the frequency classes were labeled "n1" ("very frequent/common") through "n6" ("extremely rare"), and due to the sparsity of the data the two highest-frequency classes, viz. n1 and n2, were merged (hence n1/2). First of all, Table 10 shows a close parallel between English and Dutch in that, in all frequency classes, nouns outnumber all other grammatical categories.
Moreover, the most dramatic differences between the frequency classes lie in the percentage of nouns and the percentage of the ambiguous category (in which all combinations such as N/V, N/Adj, V/Adv, etc. are collected): the percentage of nouns increases from n1/2 to n6 (English: 37.9% to 65.2%; Dutch: 39.4% to 60.2%), while the percentage of the ambiguous categories decreases from n1/2 to n6 (English: 30.76% to 0%; Dutch: 21.76% to 0.72%). This means that the high frequency classes contain many more grammatically ambiguous words than the low frequency classes, and that the low frequency classes contain relatively more nouns than the high frequency classes. Given the result of the previous experiments that nouns are the easiest category to predict while ambiguous categories are very hard to predict, this picture leads to the following prediction: success scores in the grammatical class prediction task will increase as frequency decreases; in other words, the success score will be lower for high frequency items than for low frequency items.
Table 10. Distribution of grammatical classes over frequency classes

English
Frequency Class   n1/2    n3      n4      n5      n6
%Nouns            37.9    43      46.48   54.28   65.2
%Verbs            17.64   23.9    25.3    24.72   17.26
%Adjectives        8.64   11.6    14.52   13.02   12.18
%Adverbs           5.06    3.68    3.4     3.68    5.36
%Ambiguous        30.76   17.82   10.3     4.3     0

Dutch
Frequency Class   n1/2    n3      n4      n5      n6
%Nouns            39.4    45.4    49.4    58.52   60.2
%Verbs            21      25.72   27.56   25.16   26.84
%Adjectives       13.2    14.96   15.24   12.7    11.9
%Adverbs           4.64    1.58    0.94    0.54    0.34
%Ambiguous        21.76   12.34    6.86    3.08    0.72
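The binning into frequency classes described above can be sketched as a simple threshold function. The class boundaries are taken directly from the text, with relative frequencies expressed as proportions (so 1:10,000 = 1e-4); this is an illustrative sketch, not the authors' implementation.

```python
def frequency_class(f):
    """Map a relative frequency f (a proportion) to the Martin (1983)
    frequency classes described above, with n1 and n2 merged into n1/2."""
    if f >= 1 / 100_000:       # n1 "very frequent" and n2 "frequent", merged
        return "n1/2"
    if f >= 1 / 500_000:       # "upper neutral"
        return "n3"
    if f >= 1 / 1_000_000:     # "lower neutral"
        return "n4"
    if f >= 1 / 10_000_000:    # "rare"
        return "n5"
    return "n6"                # "extremely rare"
```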
This prediction was tested in an experiment. For each frequency class 5,000 word forms were randomly selected from the CELEX lexical database. The relative frequencies of the grammatical categories were in agreement with the ones shown in Table 10. The word forms were coded according to the ONC + Stress coding scheme. Figure 4 shows the success scores per frequency class. It clearly appears that the success score of the algorithm increases as the frequency of the word forms decreases: for both the English and the Dutch data, the success rate is significantly lower for the most frequent word forms (n1/2) than for the most infrequent word forms (n6). The trend is monotonic: each step (from n1/2 to n3 to n4 …) brings about an increase in the global success score. This means that in both English and Dutch, more frequent word forms appear to have a less transparent mapping between their phonology and their grammatical class.
Figure 4. Global success scores for Dutch and English word forms relative to the frequency classes (y-axis: % correctly predicted; x-axis: frequency class, n1/2 through n6)
6. Discussion and conclusion
We set out to investigate to what extent grammatical classes can be predicted from the phonological form of a word. Correlations between a word form's phonological form and its grammatical class have been indicated by, amongst others, Kelly (1996), and the potential of phonological bootstrapping as a useful strategy for cracking the grammatical code has been suggested in the language acquisition literature. From the point of view of acquisition, phonological bootstrapping seems to be a helpful strategy in principle. There are some examples scattered throughout the literature of how phonological bootstrapping may work in first language acquisition. A case in point is provided by De Haan, Frijn and De Haan (1995), who show that in disentangling the intricate system of verb-second placement in Dutch, children appear to use the syllable structure of the verb in discovering the relationship between verb placement and the verb's morphology. At some point in the acquisition process children work with the (partially correct) idea that disyllabic trochaic verbs (non-finite verb forms in the adult language) are to be placed in final position and monosyllabic verbs (finite verbs) are to be placed in second position. This strategy is an overgeneralization, but it is a good example of how phonology may be helpful in solving a morphosyntactic problem at a particular point in the acquisition process (see also Wijnen & Verrips 1998). In this paper, we addressed the question of how far the language learner can get in exploiting phonological bootstrapping as a strategy in acquisition. If phonological bootstrapping is a useful strategy, to what extent can it assist the child in cracking the grammatical code? For this purpose we conducted machine learning experiments that enable us to draw the boundaries of the strategy: if phonological material can be used by the learner to predict the grammatical category (or categories) of the word forms he encounters, how accurate will these predictions be? In machine learning experiments specific variables can be systematically varied and the consequences of the variation can be closely monitored, so that the ultimate quantitative consequences of particular hypotheses can be investigated. The first part of this paper explicitly addressed such a hypothesis, viz.
Kelly's (1996) overview of phonological cues that distinguish nouns and verbs in English. The predictive power of those cues was tested in machine learning experiments, in which the learning system had to predict the grammatical class of unseen word forms based on prior exposure to a representative sample of word form/category pairs. By varying the number and nature of the phonological cues in the input representation of word forms, the relative impact of each cue on prediction accuracy could be evaluated. In a first experiment, the predictive value of stress was investigated, since this feature has been claimed to be a reliable indicator of grammatical class. Our results indicated that this claim could only be supported for a very limited subset of the lexicon, viz. for disyllabic word forms which are orthographically ambiguous between a noun and a verb reading. For larger subsets of the lexicon, the predictive value of stress was shown to be considerably lower. The "less reliable cues", viz. Vowel Height, Consonant Quality, and Number of Segments, were also tested. When considered individually, none of these cues turned out to be a good predictor of grammatical class, although vowel height (a cue denoted by Kelly as "weak") was found to yield better predictions than the "strong" cue stress. A combination of these cues, however, resulted in a significant increase in predictive accuracy, which confirms Kelly's (1996) hypothesis that individually weak cues may put stronger constraints on grammatical class when considered collectively. A similar conclusion was reached by Shi et al. (1998) with respect to the identification of open and closed class words. These experiments relied on the a priori identification of the relevant cues: the learner 'knew' in advance which dimensions were relevant for solving the task of grammatical category assignment, since the cues were precoded in the learning material. When that information was removed from the learning material, a 'raw' phonological encoding remained: a syllabified string of segments. This 'raw' phonological encoding yielded significantly better results than any of the encodings in terms of more abstract phonological cues. Augmenting this phonological encoding with stress information brought about a significant increase in performance; adding the other cues had no such effect. These findings are taken to indicate that the cues which had been identified in the literature do not cover all relevant dimensions of the problem domain. Overall, the results of these experiments strongly support the claim that for English nouns and verbs there is a more than arbitrary relation between phonological form and grammatical category. In almost eight out of ten cases the algorithm can predict the grammatical category of a word form when the only available information is a syllabified string of segments and the stress pattern of the word.
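The prediction set-up can be illustrated with a minimal memory-based (nearest-neighbour) sketch. The feature vectors and training items below are invented for illustration only; the actual experiments used CELEX word forms in the ONC + Stress encoding and the IBL algorithm of Aha, Kibler and Albert (1991).

```python
def overlap_distance(a, b):
    """Number of mismatching positions between two equal-length feature tuples."""
    return sum(x != y for x, y in zip(a, b))

def predict(features, training):
    """Return the category of the nearest stored instance (1-NN).
    training is a list of (features, category) pairs."""
    nearest = min(training, key=lambda pair: overlap_distance(features, pair[0]))
    return nearest[1]

# Toy memory of (final syllable rhyme, stress pattern) -> category pairs.
training = [
    (("li", "trochee"), "Adverb"),  # e.g. 'quickly'
    (("@r", "trochee"), "Noun"),    # e.g. 'teacher'
    (("eIt", "iamb"),   "Verb"),    # e.g. 'relate'
]
```

An unseen word form whose encoding most closely resembles a stored adverb would then be classified as an adverb.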
In Experiments 4 and 5 these findings were replicated for Dutch: for Dutch nouns and verbs, too, a combination of phonological cues yields more accurate predictions than each cue individually, and better performance was obtained with a 'raw' segmental encoding than with an encoding that relies on phonological cues abstracted from the segmental material. What is the validity of the cues identified by Kelly (1996)? As the comparison of the different encodings showed, valuable information is lost by abstracting away from the segmental details. This could mean either that the segmental representation is the most appropriate one for the task of predicting grammatical category membership, or that the cues identified by Kelly constitute only a subset of the relevant dimensions. In the latter case, additional cues would have to be identified in order to obtain an accuracy comparable to that reached with the segmental encoding. That this may prove to be a valuable enterprise is suggested by the encoding of the Dutch material in terms of Kelly's cues. The outcome of Experiment 4, in which the Dutch data were coded in terms of the cues originally designed for English, was, to say the least, surprising: the predictions for Dutch nouns and verbs were significantly better than those for their English counterparts. Of course, Dutch and English are typologically very close, and thus far-reaching conclusions about the cross-linguistic validity of Kelly's cues should be avoided given the present evidence. It is clear that experiments similar to the ones reported in this paper should be carried out with data from a typologically more varied sample of languages. Nevertheless, the outcome of our experiments shows that any statement about the idiosyncrasy of the link between phonology and grammatical class should be made with care. A second general conclusion that can be drawn from our experiments (especially Experiments 6 and 7) is that the phonology–grammatical class link generalizes to all open class items in English and Dutch. For Dutch, the prediction of the categories noun, verb, adjective and adverb reached an accuracy of 71% with 5,000 training items and 75% with 20,000 training items. For English, prediction accuracy was 67% with 5,000 items and 69% with 20,000 items. At the same time, the ambiguous categories prove to be very hard to predict and cannot be identified above chance level. For the unambiguous categories, the results are very promising: with a relatively small number of examples, fairly accurate identification of the main parts of speech is possible. Adding more examples further improves the delineation of the various categories. A third general conclusion is that the accuracy of predicting grammatical classes on the basis of phonological characteristics differs from language to language. In all the experiments, the success scores for Dutch are superior to those obtained for English.
This means that the mapping from segmental representations to grammatical classes is more transparent in Dutch than it is in English. A counterexample to this general trend is the class of English adverbs: the success score for English adverbs is above 80% early in the learning curve. The well-known fact that a high number of English adverbs end in -ly, which is reflected as such in the segmental encoding of the learning material, provides a very straightforward mapping between phonology and part-of-speech, a transparent mapping that is lacking for Dutch adverbs. The conclusion of our machine learning experiments is that ‘in principle’ the link between phonology and grammatical class can be exploited with a certain degree of confidence. The question now is whether and how children exploit this link. Our experiments cannot provide an adequate answer to this question, because in the present set of experiments a number of abstractions were made that prevent us from going beyond the ‘in principle’ answer.
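The transparency of the English -ly cue is easy to demonstrate: a single string-final check already picks out many adverbs. The word list below is invented for illustration, and the check is orthographic for readability, whereas the experiments operated on phonological forms.

```python
# Word-final -ly as a near-transparent adverb cue (toy orthographic demo).
words = ["quickly", "table", "softly", "run", "barely", "happy"]
adverb_guesses = [w for w in words if w.endswith("ly")]
print(adverb_guesses)  # -> ['quickly', 'softly', 'barely']
```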
The approach taken in this study involves a number of abstractions. First of all, the machine learning algorithm used in the experiments incorporates a supervised learning technique. The learner is first trained with precategorized material and is then tested on its ability to generalize from that material. Of course, children do not get similar input: they do not receive a properly tagged corpus from which they can generalize; they do not get a part-of-speech label with every single word they hear. An unsupervised approach will have to be taken in order to lift this constraint (see, e.g., Shi et al. 1998). A second abstraction is that the phonological information is presented in isolation. The learner only 'sees' or 'hears' a word form's phonological form and does not have access to other information, such as the syntactic patterns in which it occurs, or its meaning. It may well be the case that a child uses all these knowledge sources concurrently as a bootstrap: De Haan et al. (1995) and Krikhaar and Wijnen (1995) have shown that children are able to use correlated syntactic, semantic and phonological cues in solving particular grammatical problems. Taken together, these two abstractions may point us in the right direction for future experiments in a supervised framework. Indeed, children do not get a part-of-speech label with each word they hear, but at least for some words they get appropriate contextual information to make sense of the type of entity involved (i.e., semantic information). At the same time they get structural information from the sentences they hear: those sentences carry distributional information about target words. These phonological, structural and semantic cues may not be reliable in isolation, but taken together, they may lead the learner towards a reliable delineation of the major word classes. This, of course, remains a research direction that needs to be explored further.
The experiments reported in this paper raise still other important questions that were sharply identified and illustrated with quantitative data. First of all, the algorithm was tested with several thousand word forms. Does this mean that phonological bootstrapping only becomes a useful strategy when a child 'knows' (presumably, comprehends) a few thousand words? Given the evidence on bootstrapping suggested in the studies of De Haan et al. (1995) and Krikhaar and Wijnen (1995), this does not appear to be the case: even with a relatively modest lexicon, two-year-olds appear to rely on phonological characteristics of words to resolve structural problems. However, they do not appear to display an across-the-board strategy (e.g., they do not use phonology to solve grammatical class membership, as illustrated in this paper). Instead, their use of phonological cues appears to be geared towards solving particular structural problems in a task-oriented manner, such as relating the phonological characteristics of particular words to a specific structural puzzle, as illustrated by De Haan et al. (1995) and Krikhaar and Wijnen (1995). Another problem raised by our experiments concerns the role of frequency. In Experiment 8 the role of token frequency was investigated. The main result was that the more frequent a word form is, the less transparent the mapping between its phonology and its grammatical category. For phonological bootstrapping in language acquisition this means that infrequent words are better suited for discovering the intricacies of the phonology–syntax interface than high frequency words. At first sight this poses a serious problem, since it is at least intuitively clear that the words children hear stem from the high frequency regions of the lexicon. It is exactly in the high frequency regions that most ambiguous word forms appear (see Table 10): up to 30% in English and more than 20% in Dutch, and IBL's performance was poor on ambiguous word forms. On closer inspection, however, the algorithm opted for a strategy of predicting unambiguous categories instead of ambiguous ones; moreover, IBL predicted the more frequent category that agreed with the phonological form of the word. This also seems to be the strategy adopted by children: children appear to bypass the problem of ambiguous word forms in that they employ a 'one form–one function' strategy. Nelson (1995), who specifically addresses the issue of dual category forms, notes: "For most forms, most children seemed to obey the one form–one function principle in production, …" (Nelson 1996: 246). This result is promising, but it leaves unanswered the question of how ambiguous categories are eventually acquired.
The outcome of the experiments reported in this paper should thus be seen as an indication of how far the principle of phonological bootstrapping can lead the child in discovering grammatical categories. This is only a first step towards showing how phonological information is used together with distributional patterns, semantic information, and other kinds of information.
Acknowledgments Preparation of this paper was supported by a VNC project of FWO-NWO (contract number G.2201.96) and by a GOA grant (contract number 98/3). Thanks are due to Walter Daelemans, Frank Wijnen and the participants in the TROPICS conference for interesting discussions of phonological bootstrapping, and to Annick De Houwer for her critical reading of the manuscript.
References

Aha, D., Kibler, D. and Albert, M. 1991. "Instance-based learning algorithms." Machine Learning 6: 37–66.
Baayen, H., Piepenbrock, R. and van Rijn, H. 1993. The CELEX Lexical Database (CD-ROM). Philadelphia, PA: Linguistic Data Consortium.
Bates, E. and MacWhinney, B. 1989. "Functionalism and the competition model." In The Crosslinguistic Study of Language Processing, B. MacWhinney and E. Bates (eds.). New York: Cambridge University Press.
Cartwright, T. and Brent, M. 1996. "Early acquisition of syntactic categories." Ms.
Daelemans, W. and van den Bosch, A. 1992. "Generalisation performance of backpropagation learning on a syllabification task." In TWLT3: Connectionism and Natural Language Processing, M. Drossaers and A. Nijholt (eds.). Enschede: Twente University.
Daelemans, W., Gillis, S. and Durieux, G. 1994. "The acquisition of stress: a data-oriented approach." Computational Linguistics 20: 421–451.
De Haan, G., Frijn, J. and De Haan, A. 1995. "Syllabestructuur en werkwoordsverwerving." TABU 25: 148–152.
Finch, S. and Chater, N. 1992. "Bootstrapping syntactic categories." In Proceedings of the Fourteenth Annual Conference of the Cognitive Science Society of America, 820–825.
Francis, W. and Kucera, H. 1982. Frequency Analysis of English Usage: Lexicon and Grammar. Boston: Houghton Mifflin.
Frauenfelder, U., Baayen, H., Hellwig, F. and Schreuder, R. 1993. "Neighborhood density and frequency across languages and modalities." Journal of Memory and Language 32: 781–804.
Gentner, D. 1982. "Why nouns are learned before verbs: linguistic relativity versus natural partitioning." In Language Development: Language, Culture, and Cognition, S. Kuczaj (ed.). Hillsdale, NJ: Erlbaum.
Gillis, S. and Durieux, G. 1997. "Learning grammatical classes from phonological cues." In Language Acquisition: Knowledge Representation and Processing, A. Sorace, C. Heycock and R. Shillcock (eds.). Edinburgh: HCRC.
Gillis, S., Durieux, G. and Daelemans, W. 1996. "On phonological bootstrapping: l'arbitraire du signe revisited." Paper presented at the VIIth International Congress for the Study of Child Language, Istanbul.
Gillis, S., Daelemans, W. and Durieux, G. 2000. "Lazy Learning." In Cognitive Models of Language Acquisition, J. Murre and P. Broeder (eds.). Oxford: Oxford University Press.
Gleitman, L., Gleitman, H., Landau, B. and Wanner, E. 1988. "Where learning begins." In The Cambridge Linguistic Survey, Vol. 3, F. Newmeyer (ed.). New York: Cambridge University Press.
Kager, R. 1989. A Metrical Theory of Stress and Destressing in English and Dutch. Dordrecht: Foris.
Kelly, M. 1992. "Using sound to solve syntactic problems." Psychological Review 99: 349–364.
Kelly, M. 1996. "The role of phonology in grammatical category assignment." In From Signal to Syntax, J. Morgan and K. Demuth (eds.). Hillsdale, NJ: Erlbaum.
Krikhaar, E. and Wijnen, F. 1995. "Children's categorization of novel verbs: syntactic cues and semantic bias." In The Proceedings of the Twenty-Seventh Annual Child Language Research Forum, E. Clark (ed.). Stanford Linguistics Association, Center for the Study of Language and Information.
Maratsos, M. and Chalkley, M. A. 1980. "The internal language of children's syntax." In Children's Language, Vol. 2, K. Nelson (ed.). New York: Gardner Press.
Martin, W. 1983. "On the construction of a basic vocabulary." In Proceedings of the 6th International Conference on Computers and the Humanities, S. Burton and D. Short (eds.).
Mintz, T., Newport, E. and Bever, T. 1995. "Distributional regularities in speech to young children." In Proceedings of NELS 25, 43–54.
Morgan, J., Shi, R. and Allopenna, P. 1996. "Perceptual bases of rudimentary grammatical categories." In From Signal to Syntax, J. Morgan and K. Demuth (eds.). Hillsdale, NJ: Erlbaum.
Nelson, K. 1995. "The dual category problem in the acquisition of action words." In Beyond Names for Things, M. Tomasello and W. Merriman (eds.). Hillsdale, NJ: Erlbaum.
Pinker, S. 1987. "The bootstrapping problem in language acquisition." In Mechanisms of Language Acquisition, B. MacWhinney (ed.). Hillsdale, NJ: Erlbaum.
Quinlan, J. R. 1986. "Induction of decision trees." Machine Learning 1: 81–106.
Shi, R., Morgan, J. and Allopenna, P. 1998. "Phonological and acoustic bases for earliest grammatical category assignment: a cross-linguistic perspective." Journal of Child Language 25: 169–201.
Weiss, S. and Kulikowski, C. 1991. Computer Systems that Learn. San Mateo: Morgan Kaufmann.
Wijnen, F. and Verrips, M. 1998. "The acquisition of Dutch syntax." In The Acquisition of Dutch, S. Gillis and A. De Houwer (eds.). Amsterdam: Benjamins.
Pre-lexical Setting of the Head–Complement Parameter through Prosody

Maria Teresa Guasti
University of Siena
Marina Nespor
HIL and University of Amsterdam, Amsterdam
Anne Christophe
LSCP, EHESS-CNRS, Paris, and MRC-CDU, London
Brit van Ooyen
LSCP, EHESS-CNRS, Paris
1. Introduction
The linear order of constituents is one of the language-specific properties that has to be acquired by young language learners through exposure to the linguistic environment. For example, while in Italian the typical relative order of verb and object is VO, as in Ho scritto il libro '(I) wrote the book', in Turkish it is the reverse, i.e., OV, as in Kitabı yazdım '(I) the book wrote'. In this article, we discuss how infants come to learn about this property of their mother tongue with the help of prosodic information. Syntactic phrasal constituents in all human languages share a basic configuration expressed by the X′ structure (Chomsky 1981). This structure includes a head, X0, which determines the phrasal type, its complement and its specifiers. X0 ranges over syntactic categories, such as N(oun), V(erb), P(reposition), A(djective). Inside the maximal projection of X, the relative order of heads and complements and of heads and specifiers varies across languages. In this article we will concentrate on the relative order of heads and complements. In certain languages, such as Italian and French, the complements follow their head, regardless of the phrasal type. In other languages, such as Turkish and Japanese, the complements precede their head, again regardless of the phrasal type. This clear-cut picture does not hold in every language. In some languages, in fact, the order of heads and complements is not uniform, but is sensitive to the phrasal type. This is the case in Dutch and German, where complements follow the head in the case of NPs, APs and most PPs, and precede it in the case of VPs and of certain PPs. In the unmarked case, however, a language has a uniform relative order of heads and complements. It has been suggested that a mixed order of heads and complements represents the marked case in that it stems from a diachronic change in progress (Feng, in press). The crosslinguistic variation in the relative order of heads and complements is captured by the Head–Complement parameter. In the Principles and Parameters framework (Chomsky 1981), the assumption is that at some point of linguistic development a child sets the value of the various parameters on the basis of the language she or he is exposed to. The implicit assumption in the literature is that babies learn some words in isolation and that, when they then hear these words in sentences, they can set the Head–Complement parameter. For example, if a child has learned the words close and door separately, then, upon hearing a sentence such as close the door, she or he will deduce that in her/his language the object follows the verb (Radford 1990). However, as pointed out by Mazuka (1996), this would already require a lot of sophisticated processing: the baby has to be able to segment the sentence into words and to know the category of the individual words.
HEAD-DIRECTION PARAMETER

Notice, however, that this knowledge requires that the child already has a syntactic representation and that she or he knows how phrases are internally organized in her/his language. But if she or he has this knowledge, then she or he must already have set the Head–Complement parameter. This leads to a paradox: the prerequisites for the setting of the Head–Complement parameter presuppose that the parameter has already been set. To break this circularity, Mazuka (1996) proposes that the Head–Complement parameter is set on the basis of prosodic cues. This view is further developed in Nespor, Guasti and Christophe (1996). In addition to these theoretical arguments, there is some empirical evidence that the Head–Complement parameter is set very early. Children do not seem to make word order mistakes when they start combining words at about 20 months of age (Hirsh-Pasek & Golinkoff 1996 and references cited there). Even earlier, one-word stage children, at about 17 months of age, appear to use knowledge of word order in comprehending sentences (Hirsh-Pasek & Golinkoff 1996). Using the intermodal preferential looking paradigm, Hirsh-Pasek and Golinkoff tested children to determine whether they use word order to comprehend active reversible sentences. While seated in front of two television screens, infants heard a sentence such as "Big Bird is tickling Cookie Monster". One screen showed Big Bird tickling Cookie Monster and the other Cookie Monster tickling Big Bird. The result of the experiment is that infants prefer to watch the screen that matches the sentence they hear rather than the non-matching one. This converging evidence leads us to hypothesize that the X′ configuration, with its language-specific settings, in particular the relative order of heads and complements, is known to the child at the beginning of lexical and syntactic acquisition, towards the end of the first year of life. It is clear that such knowledge would greatly help lexical and syntactic acquisition. Let us take the example of close the door! again. If babies knew about the X′ structure of English, then relatively little additional information would suffice for them to figure out the whole structure of the sentence, together with the meaning and the syntactic category of its words. For instance, if babies know that the sentence is about doors closing (knowledge of the situation), they may infer that it contains a verb and a direct object, and therefore that the first word must be the verb (close) and the second one the object (door). Or, if they already know the meaning of the word door, they may infer from the relative position of close and door that close is a verb that may take door as its object. This knowledge, together with the context, may be enough to infer what the sentence means. To summarize the argument, the task of acquiring a language would be facilitated if babies knew about the order of heads and complements before multi-word utterances are produced. As a matter of fact, there is evidence that this may be the case.
In this paper, following Nespor, Guasti and Christophe (1996), we wish to examine how babies may learn about the word order of their native language at the prelexical stage. Following Mazuka (1996), we would like to propose that babies may use prosodic information to reach this goal. While Mazuka argues that the relevant prosodic cue for Japanese is the one associated with complex sentences, we propose that the relevant prosodic information is included in much smaller chunks of utterances. We will substantiate our prosodic bootstrapping hypothesis with empirical evidence obtained in an experiment with babies. Finally, we will discuss the residual problem posed by languages with a mixed word order and by sign languages (Nespor & Sandler 1999) and we will make a suggestion as to another syntactic parameter that may be set on the basis of prosodic information.
M.T. GUASTI, A. CHRISTOPHE, B. VAN OOYEN & M. NESPOR

2. The prosodic hierarchy
In this section, we present the notions of the theory of prosodic phonology that are needed to spell out our proposal (see Nespor & Vogel 1986; Selkirk 1981, 1986; Truckenbrodt 1999). The prosodic hierarchy is a representation built on the basis of information contained in the syntactic tree and organized in various levels, each of which exhaustively includes the constituents of the preceding level. These constituents are the syllable, the foot, the word, the phonological phrase, the intonational phrase and the utterance. The two constituents that concern us here are the phonological and the intonational phrase. We refer the reader to the authors cited above for a definition of the other constituents. The phonological phrase (F) (adapted from Nespor & Vogel 1986) is defined as follows:

(1) The F domain consists of a lexical head (X0), all clitics that it phonologically hosts and all the material on its nonrecursive side up to another lexical head outside the maximal projection of X0.
It roughly corresponds to a syntactic phrase, although generally it does not include the complements of the lexical head. Consider the structure in (2), whose prosodic phrasing is given in (3).

(2) [IP Gianni avrà [VP già mangiato [le belle mele]]]
    'Gianni will have already eaten the good apples'

(3) [Gianni]F [avrà già mangiato]F [le belle mele]F
As can be seen, the complement and the subject are phrased by themselves; the verb is phrased with adverbs located in its specifier and with the auxiliary, which may belong to the extended projection of the verb in the sense of Grimshaw (1991). The second constituent that is relevant for our discussion is the Intonational phrase (I), whose domain consists of:

(4) a. all F's of a string consisting of parentheticals, nonrestrictive relative clauses and extraposed constituents;
    b. every other sequence of adjacent F's inside a root sentence.

In (5), we give an example of a sentence with the intonational phrase bracketing.

(5) [I mesi estivi]I, [che sono quelli che godo di più]I, [passano velocissimi]I
    'Summer months, which are those I like the most, run away very quickly'
Constituents of the prosodic representation are assigned relative main prominence. F-prominence is assigned depending on the branching direction of the language, as seen in (6) (from Nespor & Vogel 1986):

(6) In a language whose syntactic trees are right branching, the rightmost node of F is labelled strong; in languages whose syntactic trees are left branching, the leftmost node of F is labelled strong. All sister nodes of strong are labelled weak.
Let us consider first how main prominence is assigned in right recursive languages, like Italian. In (7) we can see that main prominence is assigned to the rightmost word of each F. Obviously, if a F includes only one lexical item, this will get the main prominence.

(7) [Gianni]F [avrà già mangiato]F [mele]F
    Gianni will have already eaten apples
In (7), the complement consists of just one word. In this case, the F containing it can restructure with the F containing the verb to form a single F. Inside the restructured F, only one word bears the main prominence, as seen in (8).1

(8) [Gianni]F [avrà già mangiato mele]F
    Gianni will have already eaten apples
Consider now a structure in a left recursive language, like Turkish. In this case main prominence is assigned to the leftmost word in a F, as seen in (9).

(9) [Mehmet]F [cumartesinden sonra]F [buraya]F [gelecek]F
    Mehmet Sunday after here will come

As in the Italian example, restructuring can apply, as in (10), and main prominence is assigned to only one word of the restructured F.

(10) [Mehmet]F [cumartesinden sonra]F [buraya gelecek]F
     Mehmet Sunday after here will come
It is clear that main prominence inside the F gives information about the branching direction of the language. Beyond F, I is also assigned relative prominence, as indicated in (11) (from Hayes & Lahiri 1991).

(11) a. A F with narrow focus receives the strongest stress of its I-phrase.
     b. Under neutral focus, the rightmost F-phrase within I is the strongest.
Notice that within prosodic phonology, the accent inside the I(ntonational) phrase and the main prominence within F are two distinct phenomena. The clauses in (11) state that under broad focus, main prominence inside I falls on the rightmost F within I; under narrow focus, main prominence falls on the focalized element (see Guasti & Nespor 1999). Only main prominence inside F is relevant to the present discussion. Main prominence inside I has to do with the information structure conveyed by a sentence. In (12) and (13) we give examples of prosodic representations with main prominence of I.

(12) [[Gianni]F [avrà già mangiato]F [le belle mele]F]I

(13) [[Marina'nin dilbilimci]F [oldugunu biliyorum]F]I
     Marina-GEN linguist be know-1SG
     'I know that Marina is a linguist'
Unlike main prominence within F, main prominence within I does not discriminate between Italian and Turkish; in both cases it is assigned to the rightmost F.
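Purely as an illustration (ours, not part of the original proposal), the labelling rule in (6) can be sketched in code. The function name and the w/s encoding below are our own assumptions; real prominence assignment applies to prosodic trees, not flat word lists.

```python
# Illustrative sketch of rule (6): in a right-branching language the
# rightmost word of a phonological phrase (F) is strong, in a
# left-branching language the leftmost; all its sisters are weak.

def label_phrase(words, branching="right"):
    """Return (word, 'w'/'s') pairs for one phonological phrase."""
    if not words:
        return []
    strong_index = len(words) - 1 if branching == "right" else 0
    return [(w, "s" if i == strong_index else "w")
            for i, w in enumerate(words)]

# Italian (right branching), as in (7): prominence on the rightmost word.
print(label_phrase(["avrà", "già", "mangiato"], branching="right"))
# Turkish (left branching), as in (9): prominence on the leftmost word.
print(label_phrase(["buraya", "gelecek"], branching="left"))
```

A single-word F, such as [mele]F in (7), receives strong on its only word under either setting, a point that becomes relevant for the control experiment discussed in Section 4.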
3. Prosodic bootstrapping and the rhythmic activation principle

As we have seen, the Head–Complement parameter has a prosodic correlate: within phonological phrases, prominence systematically falls on the rightmost word in head-initial languages, such as French or Italian, and on the leftmost word in head-final languages, such as Turkish or Bengali (Nespor & Vogel 1986; Hayes & Lahiri 1991). Perceiving prominence inside phonological phrases may thus help babies to infer the relative order of heads and complements in their native language (Nespor 1994; Nespor, Guasti & Christophe 1996). For this hypothesis to be plausible, it must be shown that infants are able to perceive the relative prominence of the elements that constitute the phonological phrase, as well as the boundaries of the intonational phrase, the unit that directly dominates several phonological phrases. The rhythm within an intonational phrase is a sequence of constituents containing either one or several weak words followed by a strong word (the word that has main prominence) in Head–Complement languages, and a sequence of constituents containing a strong word followed by a weak word in Complement–Head languages, as seen in (14) and (15), respectively.2

(14) [(w)* s]F [(w)* s]F … (Head–Complement languages)

(15) [s (w)*]F [s (w)*]F … (Complement–Head languages)
Following Nespor, Guasti and Christophe (1996), we propose that the Head–Complement parameter may be set through the Rhythmic Activation Principle, stated in (16).

(16) Rhythmic Activation Principle
     When you hear sequences of (w)* s within an Intonational Phrase, set the Head–Complement parameter with the value Head–Complement. When you hear sequences of s (w)* within an Intonational Phrase, set the Head–Complement parameter with the value Complement–Head.

The Rhythmic Activation Principle (RAP) is proposed to be an innate principle that guides language acquisition. It exploits the correlation that exists between prosodic and syntactic structure. When an infant hears sequences of weak–strong elements, she or he will decide that her/his language is head-initial; when she or he hears strong–weak sequences, she or he will decide that her/his language is head-final. RAP does not make any claim about how the parameter is de facto set. We are not arguing that a single w s (or s w) sequence is enough to trigger the setting of the parameter. It is more likely that infants set the value of the parameter on the basis of the most frequent pattern inside I that they hear, i.e., either sequences of (w)* s or of s (w)*. For one thing, there is evidence that children are sensitive to the frequency of stress patterns at the word level (see Jusczyk, Cutler & Redanz 1993). Setting the parameter on the basis of the most frequent trigger has the merit of preventing the child from making an incorrect choice based on a single sequence that happens to have a stress pattern which is not the most common in the language. It should be noticed that the Head–Complement parameter is also responsible for the relative order of main and subordinate clauses: in the unmarked case, head-initial languages have main clauses that precede subordinate clauses, while in head-final languages the opposite order represents the unmarked case (see below for the discussion of exceptions). Therefore, once the Head–Complement parameter is set through RAP, the child is in a position to deal with many word order phenomena. A potential problem for prosodic bootstrapping of syntactic parameters is represented by sign languages. Notice, however, that RAP is stated in a modality-independent way, that is, it does not refer to the spoken modality.
Thus, if we find the gestural correlates of the relative prominence of phonological phrases, we may hypothesize that RAP guides the acquisition of the Head–Complement parameter in sign languages as well.
Nespor and Sandler (1999) have shown that phonological phrases in Israeli Sign Language, a head-initial language, have the same constituency as phonological phrases in Head–Complement oral languages and, in addition, that a signed correlate of prominence exists: the strong sign of a phonological phrase is characterized by a hold and/or reduplication. There is an asymmetry between the development of vision and the development of hearing. In the development of vision, the full capacity for focussing is reached at about one year of age. Hearing, instead, is fully developed at birth, so that an infant is capable of hearing all the linguistic distinctions of a phonological system. One might thus draw the conclusion that the Head–Complement parameter in sign languages should be set at a later stage of development. Notice, however, that newborns have the capacity to focus within a short distance, and this may be enough to develop sensitivity to signed prominence. In summary, the crucial aspect of our prosodic bootstrapping hypothesis is that babies should be able to perceive prominence within phonological phrases. The experiments described in the next section were designed to test this hypothesis.
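Before turning to the experiments, the frequency-based reading of RAP sketched above can be illustrated in code. This toy version is our own formulation, not the authors': it tallies (w)* s against s (w)* phonological-phrase rhythms across intonational phrases and sets the parameter to the majority pattern, so that a single atypical sequence cannot mistrigger the setting.

```python
# Hypothetical sketch of frequency-based RAP: each intonational phrase (I)
# is a list of F rhythm strings over {'w', 's'}, e.g. 'wws' = weak-weak-strong.

def set_head_complement(intonational_phrases):
    ws = sw = 0
    for i_phrase in intonational_phrases:
        for f in i_phrase:
            if len(f) < 2:
                continue  # a single-word F is uninformative: its only word is strong
            if f.endswith("s") and set(f[:-1]) == {"w"}:
                ws += 1   # (w)* s rhythm: evidence for head-initial
            elif f.startswith("s") and set(f[1:]) == {"w"}:
                sw += 1   # s (w)* rhythm: evidence for head-final
    if ws == sw:
        return "undecided"        # no majority evidence yet
    return "Head-Complement" if ws > sw else "Complement-Head"

# French-like input: multi-word Fs are weak*-strong
print(set_head_complement([["ws", "wws"], ["s", "wws"]]))   # Head-Complement
# Turkish-like input: multi-word Fs are strong-weak*
print(set_head_complement([["sw", "sww"], ["s", "sw"]]))    # Complement-Head
```

The majority rule mirrors the point made in the text that the parameter is unlikely to be set by a single trigger, in the spirit of children's sensitivity to frequent stress patterns at the word level.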
4. Experimental evidence

As a first step towards verifying the RAP hypothesis, we carried out a categorization experiment with 5 adult native speakers of French, to determine whether prominence within phonological phrases can be perceived by human ears. The material used for this experiment consisted of 44 matched pairs of French and Turkish sentences. These languages were chosen because they have a similar phonological structure: they both have final word stress, a relatively simple syllabic structure with resyllabification between words, a partly similar set of phonotactic constraints and no vowel reduction. But while French is head-initial, Turkish is head-final. Phonological phrase prominence is thus final in French and initial in Turkish. The original sentences, pronounced by one native speaker of each language, were resynthesized through diphone synthesis using an algorithm developed at Eindhoven (Holland), so that the Turkish and the French sentences became similar in their phonemic content (both sets of sentences consisted of Dutch phonemes) but remained different in their prosody (which was copied from the original sentences using the PSOLA algorithm; see also Ramus & Mehler, in press).3 This change was introduced to make sure that speakers were not classifying the languages on the basis of phonemic properties. Specifically, all vowels were mapped to schwa, and consonants were mapped by manner of articulation: stops were mapped to [p], fricatives to [s], nasals to [m],
liquids to [l] and semivowels to [j]. A pair of original as well as resynthesized French and Turkish sentences is given in (17) and (18), respectively. The sentences in (a) are the original ones and those in (b) the resynthesized versions. Bracketing indicates F boundaries. The underlined words bear the main prominence of their phonological phrase.

(17) a. [Le grand orang-outang]F [était énervé]F
        'The big orang-outang was nervous'
     b. [leplempelem epem]F [epe pemelse]F

(18) a. [Yeni kitabami]F [almak istiyor]F
        '(She or he) wants to buy my new book'
     b. [jeme pepepeme] [elmep espejel]
Adult speakers of French were told that they would hear resynthesized French and Turkish sentences and were instructed to categorize each sentence as either French or Turkish. Subjects first went through a training phase during which they received feedback on the correctness of their categorization. In order to move on to the test phase, they had to categorize correctly at least 75% of the sentences. If the criterion was not reached upon hearing the training sentences once each, the training phase could be repeated two more times. We observed that all subjects but one performed better than chance (66% correct) during the test phase. These results, reported in Figure 1, show that adult native speakers of French can learn to categorize French and Turkish sentences on the basis of the limited information preserved in the resynthesized sentences (mainly prosodic information). Therefore we can conclude that there is some perceivable difference between the resynthesized French and Turkish sentences. The next step is to establish whether infants are similarly sensitive to prosody. Preliminary results, presenting French 2-month-olds with the same resynthesized sentences in a high-amplitude sucking paradigm, suggest that babies are sensitive to the prosodic information, possibly the location of prominence within phonological phrases (Christophe et al., in preparation). To further assess the possibility that babies distinguish French from Turkish on the basis of the prosodic cue relevant for setting the Head–Complement parameter, two control experiments have been considered. One uses matched pairs of sentences in two languages that, besides sharing many phonological properties, as French and Turkish do, also have an identical value for the Head–Complement parameter, and thus an identical location of prominence within the phonological phrase. Such a pair of languages is represented by Argentinian Spanish and Greek. A sample pair of matched sentences in these languages, as well as their resynthesized counterparts, is given in (19) and (20), respectively.
Figure 1. Results of a categorization experiment with 5 adult native speakers of French. The y-axis represents the percentage of correct responses for each phase of the experiment. The training phase (with feedback) is represented in pale grey; it could be repeated 3 times, until the subject exceeded 75% correct classification on the training sentences. The test phase (without feedback, different sentences) is represented in dark grey. All subjects but one classified the sentences better than chance (66% correct, see dotted line) on the test phase. The results indicate that it is possible to learn to classify French and Turkish sentences on the basis of their prosody, although it is by no means a simple task.
(19) a. [No puedo decirte]F [cuantos hombres]F [estabamos aqui]F
        [not can-1 tell-to+you] [how many men] [be-1 here]
        'I cannot tell you how many men were here'
     b. [mepjepe peselpe]F [pjempes emples]F [espepemes epe]F

(20) a. [den kséro akoma]F [posus mines]F [a minume eki]F
        [not know yet] [how many months] [will stay here]
        'I don't know yet how many months we will stay here'
     b. [sempsele epeme]F [peses memes]F [semememe epe]F
If the languages are indeed well matched in their prosody, we expect that Spanish 2-month-olds should not be able to distinguish between these sets of stimuli. This result in turn would make it plausible that in the case of French and
Turkish sentences, babies reacted to the prosodic difference correlated with the Head–Complement parameter. The second control consists of using matched pairs of sentences in which every phonological phrase contains only one word: in such cases, phonological phrase prominence falls on the single word in both French and Turkish, which thus become indistinguishable as far as prominence is concerned. Therefore, if babies can discriminate between French and Turkish sentences which contain at least one multi-word phonological phrase, but not between French and Turkish sentences which contain only single-word phonological phrases, it must be that they react to prominence within phonological phrases. Such results, if obtained, would make plausible the hypothesis that babies use prominence within phonological phrases in order to decide about the word order of their language. This second control experiment is in progress.
5. Setting the Head–Complement parameter in mixed languages
In some languages, e.g., Dutch and German, the Head–Complement parameter is not set uniformly for all phrases but takes a different value depending on the type of syntactic phrase. The main prominence inside F varies accordingly. For concreteness, we base our discussion on Dutch; however, most of the points extend to German as well. NPs (or DPs in most current approaches) as well as Adjectival Phrases (APs) are uniformly head-initial, and the main prominence falls on the rightmost word of the F, as seen in (21) and (22), respectively.

(21) a. [de dikke Kees]F
        the fat Kees
     b. [een donker pak]F
        a dark suit

(22) a. [erg interessant]F
        very interesting
     b. [zeer vrolijk]F
        very cheerful
PPs may be either head-initial or head-final and the prominence varies accordingly. In the former case, prominence is on the rightmost element and in the latter case on the leftmost one. The choice between the two options is lexically determined.
(23) [op de tafel]F
     on the table

(24) [de trap op]F
     the staircase up
VPs are head-final and, as expected, the main prominence falls on the leftmost element of the F. In the example in (25), the NP complement belongs to the same F as the embedded verb. This is the result of a restructuring process by which the complement of the verb is joined with the F including the verb. Since VPs are head-final, the main prominence is on the leftmost node.

(25) (Ik geloof dat hij) [gedichten leest]F
     (I believe that he) poems reads
     '(I believe that he) reads poems.'
In (26a) we have a verb with a separable prefix. Such verbs can be analyzed as heads of VPs that select an intransitive PP headed by the particle (van Riemsdijk 1988), as seen in (26b). At the prosodic level, the particle and the verb belong to the same F. Since VPs are head-final, main prominence is expected to fall on the leftmost node of F, i.e., on the particle, and this is precisely what happens, as seen in (26a) (see Cinque 1993 for German and references cited there).

(26) a. Ik heb de taart opgegeten
        I have the cake eaten
     b. [VP de taart [[PP [P op]] [V gegeten]]]
However, there are some intricacies in Dutch, though not in standard German, due to structures with verb raising. Verb raising is a process that applies obligatorily to the complement verb governed by a range of verbs including modals, perception verbs and causatives. It moves the complement verb from its base-generated position to the left of the main verb, as in (27a), to the right of it, as in (27b). In (27b) the verb schrijven has undergone verb raising. Before restructuring, kunnen and schrijven belong to two separate phonological phrases. As a consequence of restructuring, they form a single phonological phrase and the main prominence falls on the rightmost element of F, i.e., the raised verb.

(27) a. (Jan had een boek) schrijven kunnen
     b. (Jan had een boek) [kunnen schrijven]F
        (Jan had a book) can write
        'Jan could have written a book.'
Sentences with verb raising seem to indicate that VPs may be head-initial. Another complexity is due to V2 sentences. All matrix clauses display an order in which the verb appears in the second clausal position and is preceded by a constituent that can be the subject, but also any other clausal constituent. An example is given below:

(28) Vanavond schrijft Jan veel gedichten
     this evening writes Jan many poems
     'This evening, Jan will write many poems'
The V2 order is distinct from the order governed by the Head–Complement parameter, as witnessed by the fact that there are V2 languages with the Head–Complement order, such as Swedish, and V2 languages with the Complement–Head order, such as German and Dutch. The V2 property of a language is governed by a distinct parameter, which we will call the V2 parameter, and which is also likely to be cued by prosodic information, as argued by Prinzhorn (p.c.). It is assumed that in V2 languages the first clausal constituent is focalized, at least when it is not the subject, and bears stress. This prosodic feature may somehow be the trigger for the V2 parameter. Notice that the stress on the first non-subject constituent is assigned at the level of I, and as such it is distinct from main prominence inside F (see the definition in (11); see also below).

(29) [Vanavond]F [schrijft]F [Jan]F [veel gedichten]F
     this evening writes Jan many poems
     'This evening, Jan will write many poems'
Summarizing, thus far we have seen that while NPs and APs are uniformly head-initial, PPs and VPs, though to different degrees and often in lexically determined ways, are mixed. In each case, the main prominence reflects the head-initial/final character of the phrase. Languages like Dutch pose a problem for our proposal: how can a Dutch child discover the order of words at the prelexical stage on the basis of main prominence within F, since its location varies depending on the category? Given that at the two-word stage word order errors are not reported in the acquisition literature, either for Dutch or for German (see Penner & Weissenborn 1996; Schoenenberger, Penner & Weissenborn 1997; see also the discussion in Hirsh-Pasek & Golinkoff 1996), it must be the case that the child has the means to solve the problem at the one-word stage in Dutch as well. A way out of the puzzle may be found by looking at the general phrasal rhythm of Dutch. It may be observed that the pattern is by and large a sequence of phonological phrases with prominence on the right, with the pattern weak–strong having a widespread
diffusion. Since infants are sensitive to the frequency of occurring prosodic patterns (see Jusczyk, Cutler & Redanz 1993), it is likely that just by hearing the general rhythm of Dutch they come to the conclusion that Dutch is basically Head–Complement, though not all phrases are uniformly headed. Infants may get some additional information if they confine their attention to the stress pattern inside the I phrase. In fact, inside I there is just one location where a phonological phrase with main prominence on the left may occur, and this is the I-final position. Thus, in Dutch the rhythm of an intonational phrase, which is constituted by a sequence of Fs, is the following:

(30) [(w)* s]F … [(w)* s]F ([s (w)*]F)

In other words, one phonological phrase has prominence on the left, and this may occur only in sentence-final position, which is always also the I-final position. On the basis of the rhythm in (30), the Dutch infant may come to know that her/his language is generally head-initial, i.e., most phrases have the order Head–Complement. As a matter of fact, all but one phonological phrase are right-prominent. By hearing that a F at the end of an I is s (w)* (strong followed by weak), Dutch infants realize that the Head–Complement order is not the same in all phrases. The Dutch infant may initially set the parameter with the value Head–Complement, but at the same time she or he must keep track of the fact that this does not hold for all phrases. She or he has to discover which phrase deviates. Our proposal for Dutch implies that learners are sensitive not just to the accentual pattern of F, but also to the general rhythm of an I, which roughly corresponds to a clause. This additional requirement is not crucial for 'regular' languages, because in that case the general rhythm and the rhythm of a F always coincide, and one does not see why, once the rhythm in F is known, the rhythm within I would be crucial as well.
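As a rough sketch (again our own formulation, not the authors'), the extra step needed for Dutch can be rendered as an inspection of the rhythm of a whole intonational phrase: are all multi-word Fs right-prominent, or does a single left-prominent F occur, and only in I-final position?

```python
# Classify the rhythm of one intonational phrase (I), given its F rhythm
# strings over {'w', 's'} in left-to-right order. Single-word Fs ('s') are
# ignored, since their only word is strong under either headedness.

def classify_rhythm(fs):
    left = [i for i, f in enumerate(fs) if len(f) > 1 and f.startswith("s")]
    if not left:
        return "uniformly head-initial rhythm"
    if left == [len(fs) - 1]:
        return "head-initial rhythm with one I-final head-final phrase"
    return "other rhythm"

# French/Italian-like I: every multi-word F is weak*-strong.
print(classify_rhythm(["ws", "wws", "ws"]))
# Dutch-like I, as in the schema above: one strong-weak F, I-finally.
print(classify_rhythm(["ws", "wws", "sw"]))
```

The second outcome is the signature of a basically head-initial language with one head-final phrase type, which the child then still has to identify.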
If learners are sensitive to the general pattern within I, they must also be sensitive to I boundaries. In fact, at 6 months infants are sensitive to cues for intonational phrases (see Hirsh-Pasek et al. 1987 and Christophe 1993). At 9 months, but not at 6, infants are sensitive to prosodic cues signaling boundaries of smaller constituents, likely Fs (Jusczyk, Hirsh-Pasek, Kemler Nelson, Kennedy, Woodward & Piwoz 1992), which roughly correspond to syntactic phrases. How and when could a child understand which phrases are head-final? We have seen in Section 4 that French babies can discriminate head-final from head-initial languages at around 2 months of age. This does not necessarily imply that they set the Head–Complement parameter at this age; they may well set it later. Our result simply implies that infants have the perceptual capabilities to use prosody to set a syntactic parameter. As for Dutch, the infant's task is more
complex and requires some additional capabilities to set the word order parameter for all phrases: one is the ability to detect I boundaries, which seems to be in place at 6 months. The other is the capability of discovering which phrase is exceptional. This discovery may be accomplished, before children start to combine words, in either of two ways. As proposed in Nespor, Guasti and Christophe (1996), a deductive mechanism can help in figuring out which is the deviant phrase. Recall that there is just one deviant phrase per I (where I corresponds roughly to a clause) and it is in the final portion of I. Given that, NP is not a suitable candidate, because it may occur more than once in a sentence; for example, with transitive verbs there are two NPs. The same considerations extend to APs. Hence, the child can come to the conclusion that the single constituent that occurs at the end of the intonational phrase with a s (w)* (strong–weak) pattern is the VP. Alternatively, the child may use some piece of lexical information, such as knowledge of the category of words. As a matter of fact, the child comprehends some words already at 12 months, well before she or he starts to put words together (see Jusczyk 1997 for a review). Hence, she or he may use this information and come to the conclusion that VPs are head-final and are associated with a s (w)* pattern. In summary, on the basis of the general rhythm of word primary stresses, the learner of Dutch comes to the conclusion that in her/his language complements generally follow the head. Since at the end of an I there is a F that does not have the general (w)* s rhythm, she or he has to discover which head takes its complement to the left. We suggested that she or he can do this around 12 months, either through a deductive mechanism or when she or he gains some lexical information. We may thus draw the conclusion that when the child reaches the lexical stage she or he already knows the basic properties concerning the order of words. Therefore, when she or he combines words, she or he puts them in the right order. At the two-word stage, she or he may thus produce both w s and s w sequences.
6. Conclusion

In this paper, we presented a prosodic bootstrapping hypothesis for the setting of the Head–Complement parameter: if babies can perceive prominence within phonological phrases, they may decide whether their maternal language is head-final or head-initial on the basis of prosodic information only. We presented an experiment suggesting that babies may be sensitive to precisely that prosodic information well before the end of the first year of life: 2- to 3-month-old babies
were able to distinguish between sentences in French (head-initial) and Turkish (head-final), even though these sentences had been resynthesized to take most of the phonemic information away, leaving mainly prosodic information. One advantage of our proposal is that this prosodic information is easily available to babies, in that it is present in most of the utterances they hear, even short ones. Importantly, the Head–Complement parameter could be set prelexically, i.e., without any knowledge of the lexicon. This implies that as soon as babies comprehend some words (around the age of 12 months, cf. Oviatt 1980), they may start working on the meaning of whole sentences. If our proposal is to be of any help in solving the bootstrapping problem for syntactic acquisition, one expects that other syntactic parameters should have a phonological correlate and thus be cued by phonological information available in the speech signal itself, such as prosodic information. One parameter for which this may be the case is the Verb Second parameter. Some languages, such as German, Dutch and the Scandinavian languages, require that the finite verb of the main clause be in the second clausal position and that a subject or a nonsubject constituent be in the first clausal position. Since the subject is initial in most languages, whether they are Verb Second or not, only sentences with a nonsubject constituent in the initial position are relevant for the setting of the parameter. It has been proposed that exactly this type of sentence has a marked prosody (Prinzhorn, p.c.). It is conceivable that this cue may be used during the development of language to assign a positive value to the Verb Second parameter.
Notes

1. Restructuring of the verb and the complement is an optional prosodic process that joins the verb and its complement into a single phonological phrase under conditions that vary from one language to the other. Nespor and Vogel (1986) claim that in Italian a complement can restructure with the verb only if it is non-branching, i.e., if it is a bare noun. According to Hayes (1989), in English branching complements can restructure with the verb if they do not contain more than one clitic group. Dutch is likely to be like English, although a more precise investigation is needed. The (non-)occurrence of restructuring is evidenced by the (non-)applicability of sandhi rules.

2. There is an asymmetry between the pattern in (14) and that in (15): unlike in (15), in (14) it is often the case that more than one weak element intervenes between two strong elements. For a discussion of this asymmetry, cf. Nespor, Guasti and Christophe (1996).

3. Many thanks to René Collier and Jan Roelof de Pijper for allowing us to use the facilities at IPO, Eindhoven, Holland.
References

Chomsky, N. 1981. Lectures on Government and Binding. Dordrecht: Foris.
Christophe, A. 1993. Rôle de la prosodie dans la segmentation en mots. Ph.D. Dissertation, EHESS, Paris.
Christophe, A., Nespor, M., Guasti, M. T. and van Ooyen, B. (in preparation). "Prosodic bootstrapping of a syntactic parameter."
Cinque, G. 1993. "A null theory of phrase and compound stress." Linguistic Inquiry 24: 239–297.
de Pijper, J. R. and Sanderman, A. A. 1994. "On the perceptual strength of prosodic boundaries and its relation to suprasegmental cues." Journal of the Acoustical Society of America 96: 2037–2047.
Feng, S. In press. "Prosodically constrained syntactic changes in early archaic Chinese." Journal of East Asian Linguistics.
Fodor, J. 1998. "Unambiguous triggers." Linguistic Inquiry 29: 1–36.
Gerken, L., Jusczyk, P. W. and Mandel, D. 1994. "When prosody fails to cue syntactic structure: 9-month-olds' sensitivity to phonological versus syntactic phrases." Cognition 51: 237–265.
Grimshaw, J. 1991. "Extended projection." Ms., Brandeis University.
Guasti, M. T. and Nespor, M. 1999. "Is syntax phonology free?" In Phrasal Phonology, W. Zonneveld (ed.). Nijmegen: Nijmegen University Press.
Hayes, B. 1989. "The prosodic hierarchy in meter." In Phonetics and Phonology. Rhythm and Meter, P. Kiparsky and G. Youmans (eds.). New York: Academic Press.
Hayes, B. and Lahiri, A. 1991. "Bengali intonational phonology." Natural Language and Linguistic Theory 9: 47–96.
Hesketh, S., Christophe, A. and Dehaene-Lambertz, G. 1997. "Non-nutritive sucking and sentence processing." Infant Behavior and Development 20: 263–269.
Hirsh-Pasek, K., Kemler Nelson, D. G., Jusczyk, P. W., Wright Cassidy, K., Druss, B. and Kennedy, L. 1987. "Clauses are perceptual units for young infants." Cognition 26: 269–286.
Hirsh-Pasek, K. and Golinkoff, R. M. 1996. The Origins of Grammar: Evidence from Early Language Comprehension. Cambridge, Mass.: MIT Press.
Jusczyk, P. W., Hirsh-Pasek, K., Kemler Nelson, D. G., Kennedy, L., Woodward, A. and Piwoz, J. 1992. "Perception of acoustic correlates of major phrasal units by young infants." Cognitive Psychology 24: 253–393.
Jusczyk, P. W., Cutler, A. and Redanz, N. 1993. "Preference for the predominant stress patterns of English words." Child Development 64: 675–687.
Jusczyk, P. W. 1997. The Discovery of Spoken Language. Cambridge, Mass.: MIT Press.
Mazuka, R. 1996. "How can a grammatical parameter be set before the first word?" In Signal to Syntax: Bootstrapping from Speech to Grammar in Early Acquisition, J. L. Morgan and K. Demuth (eds.). Mahwah, NJ: Lawrence Erlbaum Associates.
M.T. GUASTI, A. CHRISTOPHE, B. VAN OOYEN & M. NESPOR
Morgan, J. L. and Demuth, K. 1996. "Signal to Syntax: an overview." In Signal to Syntax: Bootstrapping from Speech to Grammar in Early Acquisition, J. L. Morgan and K. Demuth (eds.). Mahwah, NJ: Lawrence Erlbaum Associates.
Nespor, M. 1994. "Setting syntactic parameters at a pre-lexical stage." Proceedings of ABRALIN XXV, Salvador, Brasil.
Nespor, M., Guasti, M. T. and Christophe, A. 1996. "Selecting word order: the Rhythmic Activation Principle." In Interfaces in Phonology, U. Kleinhenz (ed.). Berlin: Akademie Verlag.
Nespor, M. and Sandler, W. 1999. "Prosody in Israeli sign language." In Language and Speech — Special issue edited by Wendy Sandler.
Nespor, M. and Vogel, I. 1986. Prosodic Phonology. Dordrecht: Foris.
Oviatt, S. L. 1980. "The emerging ability to comprehend language: an experimental approach." Child Development 51: 97–106.
Penner, Z. and Weissenborn, J. 1990. "Strong continuity, parameter setting and the trigger hierarchy. On the acquisition of the DP in Bernese Swiss German and High German." In Generative Perspectives on Language Acquisition: Empirical Findings, Theoretical Considerations, Crosslinguistic Comparisons, H. Clahsen (ed.). Amsterdam and Philadelphia: John Benjamins.
Radford, A. 1990. Syntactic Theory and the Acquisition of English Syntax. Cambridge, England: Basil Blackwell.
Ramus, F. and Mehler, J. In press. "Language identification with suprasegmental cues: A study based on speech resynthesis." Journal of the Acoustical Society of America.
Riemsdijk van, H. 1988. "Remark on incorporation." Paper presented at the University of Maryland, College Park.
Saffran, J. R., Aslin, R. N. and Newport, E. L. 1996. "Statistical learning by 8-month-old infants." Science 274: 1926–1928.
Schoenenberger, M., Penner, Z. and Weissenborn, J. 1997. "Object placement and early German grammar." In Proceedings of the 21st Annual Boston Conference on Language Development, Vol. 2, M. Hughes and A. Greenhill (eds.). Somerville, Mass.: Cascadilla Press.
Truckenbrodt, H. 1999. “On the relation between Syntactic Phrases and Phonological Phrases”. Linguistic Inquiry 30: 219–255.
Discovering Word Order Regularities
The role of prosodic information for early parameter setting

Barbara Höhle, Jürgen Weissenborn & Michaela Schmitz
University of Potsdam

Anja Ischebeck
University of Nijmegen
Introduction

Children seem to obey language-specific word order regularities as soon as they produce their first multi-word utterances (cf., e.g., Braine 1976; Pinker 1984; Penner & Weissenborn 1996). Considered within a parameter-setting account of language acquisition, this pattern of almost error-free production from the outset suggests that the basic configurational parameters that are relevant for word order — such as Head Direction or Branching Direction — must already have been set at earlier stages of language acquisition, i.e. before the child actively produces these structures. Findings from various areas of language acquisition do in fact show that language-specific grammatical knowledge is present long before it is manifested in the child's own utterances (Gerken & McIntosh 1993; Golinkoff et al., this volume; Santelmann & Jusczyk 1998), thus supporting the assumption of very early parameter setting. In a recent paper, Nespor, Guasti and Christophe (1996; cf. Guasti et al., this volume) suggested that specific prosodic features play a critical role as trigger information for the early setting of configurational parameters. In the following study of the sensitivity of 18- to 21-month-old German children to word order violations, we will provide new evidence for this type of prosodic bootstrapping, suggesting in addition that prosodic information may also be used by the child to distinguish between different
types of syntactic structures like verb-complement vs. verb-modifier constructions.
The acquisition of verb placement in German: Production

Although German presents conflicting input with respect to the position of the finite verb in main clauses and subordinate clauses, verb placement is practically correct from the outset (e.g. Clahsen 1982; Penner & Weissenborn 1996; Rothweiler 1989; Weissenborn 1991, 1994). These observations from spontaneous speech are supported by our own experimental data. We conducted a sentence repetition task with German children from two and a half to five years of age (for details see Weissenborn et al. 1998). In this study we presented subordinate clauses with grammatical and ungrammatical verb placement to the children (see Table 1).

Table 1. Sentence types used in the repetition task

subordinate clause with complementizer (n = 60):
  grammatical: Bert sagt, dass Lisa Oma hilft ('Bert says that Lisa grandmother helps.')
  ungrammatical: *Bert sagt, dass Lisa hilft Oma ('Bert says that Lisa helps grandmother.')
subordinate clause without complementizer (n = 60):
  grammatical: Bert sagt, Lisa spielt draussen ('Bert says Lisa plays outside.')
  ungrammatical: *Bert sagt, Lisa draussen spielt ('Bert says Lisa outside plays.')
All the sentences were matrix–subordinate clause constructions with sagen 'to say' as the verb of the matrix sentence. This verb takes complement sentences that may be introduced by a complementizer (usually dass), but, like other verbs of the same type (e.g. glauben 'to believe', denken 'to think'), it also allows complement sentences without a complementizer. The presence or absence of a complementizer determines the position of the finite verb of the subordinate clause. If the subordinate clause is introduced by a complementizer, the finite verb appears in clause-final position (e.g. Bert sagt, dass Lisa Oma hilft 'Bert says that Lisa grandmother helps'). If there is no complementizer, the finite verb appears in second position (e.g. Bert sagt, Lisa hilft Oma 'Bert says Lisa helps grandmother'). The ungrammatical sentences used in the experiment differed from the grammatical ones only with respect to the position of the finite verb: it either appeared incorrectly in second position of a subordinate clause introduced by a complementizer (e.g. *Bert sagt, dass Lisa hilft Oma) — or it
appeared incorrectly in final position of a subordinate clause without a complementizer (e.g. *Bert sagt, Lisa Oma hilft). We found that as soon as the children were able to repeat at least the crucial part, i.e. the subordinate clause of the complex sentences, their responses showed a clear sensitivity to the grammaticality or ungrammaticality of the sentences. That is, we found fewer exact repetitions of the ungrammatical stimulus sentences than of the grammatical ones. This difference was observed in both contexts, that is, for the incorrect second position of the verb in the sentences with complementizers as well as for the incorrect final position of the verb in the sentences without complementizer (Figure 1).

Figure 1. Percentages of literal responses to grammatical and ungrammatical stimulus sentences (2- to 3-year-olds)
The clearest evidence for the children's knowledge of the correct verb position was the type of changes they made to the ungrammatical stimulus sentences: as shown in Figure 2, the majority (almost 50%) of the responses to the ungrammatical stimulus sentences were in fact corrections. These results clearly indicate that children from two and a half years of age on already systematically differentiate between the grammatical and the ungrammatical position of the finite verb. This shows that the knowledge of the language-specific word order regularities must have been acquired earlier than can be shown with this kind of experimental task. In order to investigate the question from which age on children are able to discriminate between sentences
with grammatical and ungrammatical positions of the finite verb, we conducted an experiment with the head-turn preference paradigm.

Figure 2. Percentages of different responses to ungrammatical sentences
The acquisition of verb placement: Perception

The head-turn preference paradigm has mainly been used to investigate infants' sensitivity to prosodic and phonotactic features of their target language and the development of word segmentation skills during the first year of life (see e.g. Jusczyk, this volume). Only recently, experiments by Santelmann and Jusczyk (1998) have shown that the head-turn preference paradigm can also be used to study the morphosyntactic knowledge of children in their second year of life. During the experiment the child is seated on the lap of a caregiver sitting on a chair in a test booth. The test booth is equipped with a centrally located green lamp and a red lamp on each of the sidewalls. Two loudspeakers are mounted outside the sidewalls behind the red lamps. Each experimental trial is started by the blinking of the green lamp. When the child focuses on the green lamp, the lamp goes out and one of the red lamps on the sidewalls starts to blink. When the child turns her head towards the blinking red lamp, the presentation of the speech stimulus starts. The speech stimulus is presented from only one of the two loudspeakers, namely on the side where the red lamp is blinking. The presentation
of the speech stimulus is stopped when the child turns her head away for more than 2 seconds or when the end of the speech file is reached. The dependent variable is the amount of time the child keeps her head turned towards the loudspeaker presenting the speech stimulus; this is what we call the orientation time. In our experiment we used stimulus material containing the same grammaticality contrast as the material for the sentence repetition task described in the preceding section. We restricted the stimulus sentences to grammatical and ungrammatical subordinate clauses with complementizers. Overall, 120 different subordinate clauses were constructed. Half of them contained a transitive verb combined with an object — for example Bert sagt, dass Lisa Oma hilft — the other half contained an intransitive verb combined with an adverb — for example Bert sagt, dass Lisa draussen spielt (see Table 2). To avoid too much prosodic variation between the sentences, all the subordinate clauses contained only monosyllabic verbs and bisyllabic trochaic objects or adverbs. The object nouns were never combined with an article.

Table 2. Types of sentences used in the head-turn preference experiment

transitive (n = 60):
  grammatical: Bert sagt, dass Lisa Oma hilft ('Bert says that Lisa grandmother helps.')
  ungrammatical: *Bert sagt, dass Lisa hilft Oma ('Bert says that Lisa helps grandmother.')
intransitive (n = 60):
  grammatical: Bert sagt, dass Lisa draussen spielt ('Bert says that Lisa outside plays.')
  ungrammatical: *Bert sagt, dass Lisa spielt draussen ('Bert says that Lisa plays outside.')
As in the repetition task, the ungrammatical sentences differed from the grammatical ones only by the position of the finite verb, which wrongly appeared in second position in these sentences. The test sentences were presented in blocks of six sentences each. Only sentences of the same structural type were combined in a block. There were 5 blocks for each type of sentence, i.e. 5 blocks of grammatical sentences with objects, 5 blocks of grammatical sentences with adverbs, etc. The 20 blocks were presented to the children in randomized order. 64 children from 18 to 21 months of age completed the experiment with head-turn durations of at least 3 seconds on average for every sentence type. With mean head-turn durations of 7571 ms for the grammatical sentences and of 8151 ms for the ungrammatical sentences, we failed to find any significant difference between the grammatical and the ungrammatical sentences (F(1,63) = 2.64; p > 0.10) for the whole group of children (Figure 3).
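The group-level comparison just reported can be sketched as a short computation. This is an illustration of the statistics, not the authors' analysis code: with a single two-level within-subject factor, the repeated-measures F equals the square of the paired t statistic, and all per-child orientation times below are invented.

```python
# Illustrative sketch only: one-factor repeated-measures comparison of mean
# orientation times. With one two-level within-subject factor, F(1, n-1)
# equals the square of the paired t statistic. Numbers are invented.
import math

def paired_f(cond_a, cond_b):
    """Return F(1, n-1) for two repeated measures (F = t^2 of a paired t-test)."""
    diffs = [a - b for a, b in zip(cond_a, cond_b)]
    n = len(diffs)
    mean_d = sum(diffs) / n
    var_d = sum((d - mean_d) ** 2 for d in diffs) / (n - 1)
    t = mean_d / math.sqrt(var_d / n)
    return t ** 2

# hypothetical per-child mean orientation times in ms (one pair per child)
grammatical = [7200, 7600, 7500, 7900, 7300, 7700]
ungrammatical = [8100, 8000, 8400, 8600, 7900, 8300]

print(sum(grammatical) / len(grammatical))    # condition mean
print(paired_f(grammatical, ungrammatical))   # F(1, 5)
```

A larger F indicates a more reliable within-child difference between the two sentence types; the reported effect would correspond to F(1,63) with 64 children.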
Figure 3. Mean orientation times to grammatical and ungrammatical sentences

Figure 4. Mean orientation times to grammatical and ungrammatical sentences for younger and older children
In order to find out whether our failure to find a grammaticality effect was the result of the rather broad age range of the children, we split the whole group into a group of younger children from 18 to 19 months and a group of older children from 20 to 21 months. Both groups comprised 32 children. As Figure 4 shows, no age effect was found. Both subgroups show exactly the same pattern as the whole group, with no significant differences between the grammatical and the ungrammatical sentences.

In a further analysis, we looked at the data for the sentences with objects and for the sentences with adverbs separately. Taking again the whole group of children together, the following picture emerged (Figure 5). We found a clear grammaticality effect for the object sentences, with mean orientation times of 7477 ms for the grammatical sentences and of 8668 ms for the ungrammatical ones (F(1,63) = 6.03; p < 0.05). For the adverb sentences no significant difference between the grammatical (7663 ms) and the ungrammatical sentences (7469 ms) was observed (F(1,63) < 1).

Figure 5. Mean orientation times to different sentence types

Looking again at the results of our two age groups separately, we see that both groups react in the same way to the sentences with objects, with longer orientation times to the ungrammatical as compared to the grammatical sentences (F(1,62) = 6.09; p < 0.05) (Figures 6 and 7). Even though the difference in orientation times between the grammatical and the ungrammatical sentences is much smaller
for the younger than for the older children, there is no significant interaction between age group and grammaticality (F(1,62) = 1.67; p > 0.10). For the adverb sentences, neither the younger age group (F(1,31) = 1.15; p > .10) nor the older group (F(1,31) = 2.05; p > .10) shows a significant difference between the grammatical and the ungrammatical sentences.

Figure 6. Mean orientation times to the different sentence types for younger children

Figure 7. Mean orientation times to the different sentence types for older children

How can these differences in the reactions to the sentences with objects and those with adverbs be explained? It has been shown for several languages (Cinque 1993; Nespor et al. 1996; Schmerling 1976; Selkirk 1984) that head–argument relations are clearly marked by differences in prosodic prominence between the head and the argument, with the argument being more prominent than the head. According to Nespor et al. (1996), this association between structural and prosodic properties leads to constant rhythmical patterns within branching phonological phrases, and furthermore within intonational phrases, at least in languages that are uniform with respect to the position of the head in different types of syntactic phrases. In head-initial languages, like Italian, the prosodically most prominent element appears at the right side of a phrase. In contrast, head-final languages like Turkish have their prosodically most prominent element on the left side of a phrase. The examples in Table 3 from these languages show these different patterns.

Table 3. Head-initial and head-final languages

Italian (head initial): [Gianni] [avrà già mangiato] [dolci]
  (Gianni) (will already have eaten) (sweets)
Turkish (head final): [Mehmet] [Cumartesinde sonra] [buraya gelecek]
  (Mehmet) (Sunday after) (here will come)

[ ] = boundaries of phonological phrases; bold = prosodically prominent element. Examples taken from Nespor, Guasti & Christophe (1996)
Nespor et al. (1996) assume that the systematic relationship between syntactic headedness and prosodic prominence plays an important role in the acquisition of word order regularities. Their so-called "Rhythmic Activation Principle" assumes that configurational parameters like the Branching Direction and the Head Direction Parameter are set on the basis of this type of prosodic information in the input. If the input contains phonological phrases that are most prominent on the right side, the head direction parameter is set to "head initial". If the input contains only phonological phrases that are most prominent on the left side, the head direction parameter is set to "head final". There is
evidence that 5-month-old infants are sensitive to these rhythmic differences between languages (Guasti, Nespor, Christophe & van Ooyen, this volume). In the case of head–modifier relations there is no such clear prosodic relation between the syntactic elements. According to Truckenbrodt (1998), in German there is no systematic prominence difference between head and modifier. His analysis suggests that in German head–argument relations are prosodically distinguished from head–modifier relations by their prominence patterns: whereas different degrees of prominence are expected for head–argument relations, there are no prominence differences between the head and its modifiers. Given the great sensitivity of infants to prosodic information, it may be the case that the different results for the sentences with objects and the sentences with adverbs in our head-turn preference experiment are related to prosodic differences between the sentences. In order to test this hypothesis we conducted a detailed phonetic analysis of the prosodic features of the verb–object and the verb–adverb combinations.
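As a toy illustration of how such a prominence-based trigger could work, the sketch below infers a head-direction value from the position of the prosodically most prominent word in branching phonological phrases. The data encoding and the function are our own simplification of Nespor et al.'s proposal, not their formalism.

```python
# Toy sketch of the Rhythmic Activation Principle (after Nespor et al. 1996):
# infer the head-direction parameter from where the prosodically most
# prominent word sits inside branching phonological phrases. The encoding
# below (word list plus index of the prominent word) is our simplification.

def set_head_direction(phrases):
    """phrases: list of (words, index_of_prominent_word) tuples."""
    votes = []
    for words, prominent in phrases:
        if len(words) < 2:  # non-branching phrases carry no cue
            continue
        votes.append("head-initial" if prominent == len(words) - 1
                     else "head-final")
    if not votes:
        return None
    # consistent rightmost prominence -> head-initial; leftmost -> head-final
    return votes[0] if all(v == votes[0] for v in votes) else "mixed"

# Italian-like input: prominence on the right edge of each phrase
italian = [(["avra", "gia", "mangiato"], 2), (["mangia", "dolci"], 1)]
# Turkish-like input: prominence on the left edge of each phrase
turkish = [(["buraya", "gelecek"], 0), (["Cumartesinde", "sonra"], 0)]

print(set_head_direction(italian))  # head-initial
print(set_head_direction(turkish))  # head-final
```

The point of the sketch is that a learner need only track which edge of a phonological phrase carries prominence; no lexical or syntactic analysis of the words themselves is required.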
Phonetic cues to structural properties

The perception of phonological prominence depends on different phonetic parameters, namely duration, pitch and intensity (e.g. Ladd 1996). Stressed syllables are generally perceived as longer, louder and higher than unstressed syllables. Since it has been shown that these parameters can dissociate (Lehiste 1973), all three parameters were included in our analysis. We measured the following features:

(a) the duration of the verbs, objects and adverbs in the grammatical and the ungrammatical sentences;
(b) the maximal fundamental frequency, as a measure of pitch, on the verbs, objects and adverbs of the grammatical and the ungrammatical sentences;
(c) the intensity in decibels, as a measure of perceived intensity, on the verbs, objects and adverbs of the grammatical and the ungrammatical sentences.

For this phonetic analysis the same recordings of the stimulus sentences were used as in the head-turn preference experiment. These sentences had been recorded by a single female native speaker of German. The speaker had only been instructed to pronounce the sentences in a child-directed manner. Since a phonetic analysis of the sentences was not intended at the time of recording, we can be sure that the prosodic features of the sentences were not specifically enhanced by the speaker's knowledge of an intended phonetic analysis. The
following description of the results of this phonetic analysis focuses on a comparison of the values of the respective phonetic parameters for the penultimate and the ultimate words of the different sentence types. One has to keep in mind that these positions are filled by words of different syntactic categories (verbs, nouns, adverbs), depending on sentence type and grammaticality.

Duration

In all sentences the last word was numerically longer than the preceding word. This difference in duration was highly significant for all sentences with adverbs (grammatical: F(1,48) = 29.45; p < .01; ungrammatical: F(1,48) = 90.39; p < .01) and for the ungrammatical sentences with objects (F(1,48) = 216.6; p < .01), but not for the grammatical sentences with objects (F(1,48) < 1) (Figure 8).
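The three word-level measurements described above can be sketched as follows. The `Word` container, the frame-based tracks, and all numbers are hypothetical stand-ins for the actual acoustic analysis, which is not specified in detail here.

```python
# Hypothetical sketch of the word-level measurements: duration, maximal
# fundamental frequency (pitch) and maximal intensity, assuming each word
# has been segmented and comes with frame-level F0 (Hz) and intensity (dB)
# tracks. The Word class and the example values are invented.
from dataclasses import dataclass

@dataclass
class Word:
    label: str
    start: float        # seconds
    end: float          # seconds
    f0_track: list      # Hz per frame; 0 marks unvoiced frames
    db_track: list      # dB per frame

    def duration_ms(self):
        return (self.end - self.start) * 1000

    def max_f0(self):
        voiced = [f for f in self.f0_track if f > 0]
        return max(voiced) if voiced else None

    def max_db(self):
        return max(self.db_track)

oma = Word("Oma", 1.20, 1.62, [290, 300, 295, 0], [-10, -9, -11, -12])
hilft = Word("hilft", 1.62, 1.95, [260, 250, 0, 0], [-14, -13, -15, -16])

for w in (oma, hilft):
    print(w.label, round(w.duration_ms()), w.max_f0(), w.max_db())
```

Comparing these three values for the penultimate and the ultimate word of each sentence yields exactly the contrasts discussed in the following paragraphs.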
Figure 8. Mean duration of the penultimate and ultimate words
The parameter of duration does not systematically reflect the expected prominence relations, since at least for the grammatical sentences with objects higher duration values were expected for the penultimate words as compared to the ultimate words. But exactly for this sentence type no difference in duration between the penultimate and the ultimate word was found. Duration is the basis for the marking of prosodic boundaries through final lengthening (e.g. Klatt 1975; Scott 1982). Since in all our sentences the analysed words appeared at the end of the sentence, the longer durations of the ultimate words probably reflect final lengthening. The fact that in the grammatical sentences with objects there is no duration difference between the ultimate and the penultimate word may indicate prosodic prominence of the penultimate word, which leads to equal duration values.

Fundamental frequency

For this analysis we chose the parameter of maximal fundamental frequency, measured for the penultimate word and for the ultimate word. The results were the following: In the sentences with an adverb, the penultimate word on average had a significantly higher maximal fundamental frequency than the final word, in the grammatical as well as in the ungrammatical sentences (grammatical sentences: F(1,48) = 4.17; p < .05; ungrammatical sentences: F(1,48) = 7.28; p < .01). In the sentences with an object this difference between the penultimate and the ultimate word was statistically significant only in the grammatical sentences (F(1,48) = 34.7; p < .01) but not in the ungrammatical ones (F(1,48) < 1) (Figure 9).

Intensity

As a third parameter, intensity measured in decibels was analysed. Again, maximal decibel values for the penultimate and the ultimate words were considered. For this parameter we found the following pattern. For the grammatical sentences with objects we found significantly higher decibel values for the penultimate as compared to the ultimate words (F(1,48) = 39.7; p < .01). The reverse pattern, i.e. a significantly higher decibel value for the ultimate than for the penultimate word, was found in the ungrammatical sentences (F(1,48) = 5.39; p < .01). For the sentences with adverbs, a significant difference between the penultimate and the ultimate words appeared only in the grammatical (F(1,48) = 10.05; p < .05) but not in the ungrammatical sentences (F(1,48) < 1) (Figure 10).
Figure 9. Maximal fundamental frequency in the penultimate and the ultimate words
Summarizing the results for the three phonetic parameters, we see differences between the object and the adverb sentences. For the grammatical sentences with objects we found strong prominence differences between the object and the verb, phonetically realized by a higher pitch and a higher intensity of the object noun as compared to the verb. In the grammatical sentences with adverbs we see the same tendencies, but the differences between adverb and verb with respect to these two parameters are much weaker. These findings support the assumption that prosodic differences reflect the structural differences between verb–argument and verb–modifier structures. Furthermore, if one compares the prosodic patterns of the grammatical and the ungrammatical sentences, one can see that the prosodic differences between the grammatical and the ungrammatical sentences are greater for sentences with objects than for sentences with adverbs. Especially with respect to intensity and maximal fundamental frequency, we see clear differences between the penultimate and the ultimate words for the
Figure 10. Mean decibel values for the penultimate and the ultimate words
grammatical sentences with objects, but only small differences for the ungrammatical sentences with objects. For the sentences with adverbs these differences are relatively small, irrespective of the grammaticality status of the sentences.
General discussion

Our phonetic analysis has shown prosodic differences between grammatical sentences with objects and grammatical sentences with adverbs. Furthermore, we found greater prosodic differences between grammatical and ungrammatical sentences with objects than between the grammatical and the ungrammatical sentences with adverbs. That is, the results of the phonetic analysis show the expected prominence differences in the case of head–argument relations. In the case of modifier–head constructions this prominence difference was much
weaker. These findings provide evidence for the assumption that the input to the child may contain prosodic information that (a) helps the child to determine the word order regularities of the target language and (b) helps the child to discriminate between structures containing head–argument relations and structures containing head–modifier relations, i.e. to discriminate between arguments and modifiers. Furthermore, it is possible that the results of our head-turn preference experiment are related to the prosodic features of the sentences presented to the children. The younger children of 18 and 19 months already showed a tendency to discriminate the grammatical from the ungrammatical sentences in the case of the object sentences, a tendency which was even more pronounced in the older children of 20 and 21 months. In contrast, the younger children did not discriminate between the grammatical and the ungrammatical versions of the adverb sentences, whereas the older children showed at least a tendency to do so. This suggests that children differentiate earlier and more consistently between grammatical and ungrammatical sentences that differ clearly in their prosodic features. At this point it is still an open question whether our findings can be explained by the prosodic properties of our test sentences alone. An alternative explanation would be that our results are due to the children's sensitivity to the structural differences between the test sentences. That is, it could be that children are initially more sensitive to word order violations involving elements of the subcategorisation frame of the verb, i.e. its arguments, than to word order violations involving adjuncts. The earlier sensitivity to word order regularities concerning the verb and its arguments may in turn be related to the more prominent prosodic structure of the object–verb constructions as compared to the adverb–verb constructions.
Thus, to summarize, both explanations support the idea of prosodic bootstrapping into language specific word order. Which of these explanations is to be preferred will be the object of future investigations.
Acknowledgments

We want to thank Caroline Féry and Susan Powers for helpful comments on earlier versions of this paper. The usual disclaimers apply. The study was supported by a grant from the German Science Foundation (DFG) to Jürgen Weissenborn within the framework of an interdisciplinary research project on 'Formal Models of Cognitive Complexity'.
References

Braine, M. 1976. "Children's first word combinations." Monographs of the Society for Research in Child Development 41.
Cinque, G. 1993. "A null theory of phrase and compound stress." Linguistic Inquiry 24: 239–297.
Clahsen, H. 1982. Spracherwerb in der Kindheit. Eine Untersuchung zur Entwicklung der Syntax bei Kleinkindern. Tübingen: Narr.
Gerken, L. A. and McIntosh, B. J. 1993. "Interplay of function morphemes and prosody in early language." Developmental Psychology 29: 448–457.
Klatt, D. H. 1975. "Vowel lengthening is syntactically determined in a connected discourse." Journal of Phonetics 3: 129–140.
Ladd, D. R. 1996. Intonational Phonology. Cambridge: Cambridge University Press.
Lehiste, I. 1973. "Rhythmic units and syntactic units in production and perception." Journal of the Acoustical Society of America 54: 1228–1234.
Nespor, M., Guasti, M. T. and Christophe, A. 1996. "Selecting word order: The Rhythmic Activation Principle." In Interfaces in Phonology, U. Kleinhenz (ed.). Berlin: Akademie Verlag.
Penner, Z. and Weissenborn, J. 1996. "Strong continuity, parameter setting and the trigger hierarchy. On the acquisition of the DP in Bernese Swiss German and High German." In Generative Perspectives on Language Acquisition: Empirical Findings, Theoretical Considerations, Crosslinguistic Comparisons, H. Clahsen (ed.). Amsterdam: John Benjamins.
Pinker, S. 1984. Language Learnability and Language Development. Cambridge, MA: Harvard University Press.
Rothweiler, M. 1989. Nebensatzerwerb im Deutschen. Eine empirische Untersuchung zum Primärspracherwerb. Ph.D. Dissertation, University of Tübingen.
Santelmann, L. M. and Jusczyk, P. W. 1998. "Sensitivity to discontinuous dependencies in language learners: Evidence for limitations in processing space." Cognition 69: 105–134.
Scott, D. R. 1982. "Duration as a cue to the perception of a phrase boundary." Journal of the Acoustical Society of America 71: 996–1007.
Schmerling, S. 1976. Aspects of English Sentence Stress. Austin: University of Texas Press.
Selkirk, E. 1984. Syntax and Phonology. The Relation between Sound and Structure. Cambridge, MA: MIT Press.
Truckenbrodt, H. 1998. "Phrasale Betonung im Deutschen." Paper presented at the Berlin Phonology Workshop, Zentrum für Allgemeine Sprachwissenschaft, Berlin.
Weissenborn, J. 1991. "Functional categories and verb movement in early German. The acquisition of German syntax reconsidered." In Spracherwerb und Grammatik. Linguistische Untersuchungen zum Erwerb von Syntax und Morphologie, M. Rothweiler (ed.). Linguistische Berichte, Special Issue 3/1990.
Weissenborn, J. 1994. "Constraining the child's grammar: Local well-formedness in the development of verb movement in German and French." In Syntactic Theory and
Language Acquisition: Crosslinguistic Perspectives. Vol. 1: Phrase Structure, M. Suner, B. Lust and J. Whitman (eds.). Hillsdale, NJ: Lawrence Erlbaum.
Weissenborn, J., Höhle, B., Kiefer, D. and Cavar, D. 1998. "Children's sensitivity to word-order violations in German: Evidence for very early parameter-setting." In Proceedings of the 22nd Annual Boston University Conference on Language Development, A. Greenhill et al. (eds.). Somerville, MA: Cascadilla Press.
On the Prosody–Lexicon Interface in Learning Word Order
A study of normally developing and language-impaired children

Zvi Penner, Karin Wymann
University of Konstanz
Jürgen Weissenborn
University of Potsdam
Recent literature in developmental psycholinguistics has emphasized the role of prosodic bootstrapping in discovering syntactic constituents and word order regularities in early grammar.1 One of the puzzles this theoretical paradigm leaves open is the "perception (or comprehension) prior to production" maxim.2 In many cases a puzzling discrepancy between the child's perceptual capacity and speech production is observed which goes beyond the well-known phenomenon of "truncated grammatical morphemes". It is, for instance, by no means clear why children are capable of establishing trochees as a prosodic unit for the purpose of speech segmentation at the age of 7 months, but fail to establish the trochaic template at the level of production until late in the second year.3 The present paper addresses the problem of "perception prior to production". It focuses on the relationship between access to prosodic information in the input for the purpose of discovering the rule of object placement in early German and the application of the same rule in speech production. A careful, computer-aided evaluation of the data from both normally developing and language-impaired children reveals a surprising mismatch between the syntactic and the prosodic data. While all children target-consistently apply the OV rule of object placement at the syntactic level from early on, they all fail to apply the prosodic rule of relative prominence within the same phrases in speech production.
Assuming the Rhythmic Activation Principle of Nespor et al. (1996) and Guasti et al. (this volume), we will argue that these data reflect a genuine discrepancy between perception and production. We will claim that this discrepancy is not simply attributable to a performance-motivated lag, but rather follows in a predictable way from learnability constraints such as the "Avoid Irreversible Wrong Decisions" maxim. The paper is organized as follows. Section 1 introduces the learning algorithm, focusing on the prosody/syntax mapping account of Nespor et al. (1996) and Guasti et al. (this volume). Section 2 is an analysis of the earliest object–verb constructions in German. The data base consists of 7 corpora (3 normally developing children and 4 language-impaired children). Section 3 proposes an account of the discrepancy between syntax and prosody in speech production in terms of learnability constraints.
1. An algorithm for learning the object placement rule
One of the word order rules the child obeys from the beginning of the two-word stage is the directionality of object placement. The syntactic rule of object placement is best captured in terms of parametrization (cf. Koopman 1984; Travis 1984): a given complement C in a language L either precedes its head (head-final) or follows it (head-initial). This parameter is responsible for the difference between languages like English and German. As can be seen from (1), the verbal phrase in English is head-initial, while its German counterpart is head-final: (1)
German:  [VP [NP Brot] [V essen]]
English: [VP [V eat] [NP bread]]
From a learning-theoretical point of view, the acquisition task of the child in German is by no means trivial, given that the syntactic information she or he is exposed to is notoriously ambiguous with regard to object placement. This is due to the fact that the V2 rule in main clauses raises the verb to a position preceding the object. This is shown in (2):
(2) a. Brot essen (O > V in root and embedded infinitives)
       bread eat
    b. jeden Tag essen wir Brot (V > O in main clauses (V2))
       every day eat we bread
    c. … dass wir Brot essen (O > V in subordinate clauses)
       … that we bread eat
Given these data, the child might mistakenly conclude that object placement in German is underlyingly variable, that is, that the object may freely occur either on the right or on the left side of the verb. This conclusion might have far-reaching negative consequences for the child's grammar as a whole, given that the underlying directionality parameter determines the language-specific syntactic behavior at higher levels. For instance, as suggested in Koster (1987) and Bayer (1990), the applicability of wh-extraction rules is one kind of syntactic phenomenon which seems to depend on the underlying directionality parameter. In this vein, we will assume that the child has to find out that the OV ordering in the infinitival and embedded structures (2a, c) reflects the underlying directionality parameter, whereas the V2 rule of root clauses (which allows the VO pattern) emerges secondarily from the application of an independent parameter. The fact that exposure to purely syntactic configurations as in (2a–c) may be misleading for the acquisition of the object placement parameter in German raises the question of whether the child resorts to a source of input information other than finite main clauses. Nespor et al. (1996) and Guasti et al. (this volume) propose that the child succeeds in setting the directionality parameter by virtue of the Rhythmic Activation Principle (RAP). The RAP is based on the relative prominence between the daughter nodes within prosodic constituents and its syntactic correlates. Confining the discussion to the phonological phrase (Φ), the rule of prosody/syntax mapping can be reduced to (3) (simplified version): (3)
Φ relative prominence
In head-first languages the rightmost node of Φ (the complement) is labeled s; in head-last languages the leftmost node of Φ (the complement) is labeled s; all sister nodes of s are labeled w.
This generalization is confirmed by verbal phrases in German where the object is the more prominent sister (cf. Wiese 1995:302 ff.). The syntax/prosody correspondence for verbal phrases in German is given in (4):
(4) Syntax:  [VP [NP Brot] [V essen]]
    Prosody: [ϕ [ω Brot]s [ω essen]w]
Informally, the RAP-based algorithm for prosodic bootstrapping is given in (5): (5)
The Rhythmic Activation Principle (RAP)
If the child hears a weak–strong pattern within the phonological phrase, she or he will set the parameter on [right recursive] (head-initial), while the opposite pattern will give rise to left-recursive structures (head-final).
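Read procedurally, the RAP in (5) amounts to a one-bit decision rule over prominence patterns. The toy sketch below is our own illustration, not the authors' model: all names are hypothetical, and it assumes each phonological phrase in the input has already been reduced to an "sw" or "ws" pattern.

```python
# Toy sketch of the Rhythmic Activation Principle (RAP) in (5).
# Each phonological phrase is represented by its prominence pattern:
# "ws" (weak-strong) or "sw" (strong-weak). All names are hypothetical.

def set_directionality(phrase_patterns):
    """Return the directionality value triggered by the first informative phrase."""
    for pattern in phrase_patterns:
        if pattern == "ws":          # weak-strong: complement on the right
            return "head-initial"    # right-recursive (e.g. English)
        if pattern == "sw":          # strong-weak: complement on the left
            return "head-final"      # left-recursive (e.g. German)
    return "unset"                   # no informative phrase encountered

# German-like input (object strong, verb weak):
print(set_directionality(["sw", "sw"]))  # prints: head-final
# English-like input:
print(set_directionality(["ws"]))        # prints: head-initial
```

The sketch also makes the learnability worry discussed in Section 3 concrete: a single misanalyzed pattern (e.g. one produced by a stress shift rule) would set the parameter wrongly on first exposure.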
Given these considerations, the RAP seems to be a powerful tool for the purpose of prosodic bootstrapping in German. At first glance, there is no fundamental reason not to conceive of the RAP as a simple comprehension/production homology model which makes a twofold prediction with regard to the acquisition of OV structures in German: (6)
a. On the assumption that the learner has access to the rhythmic structure in the pre-linguistic period, we expect the child to be capable of setting the directionality parameter very early. In other words, the RAP predicts that object placement in the child's production be target-consistent from the onset of the two-word stage.
b. Given that the RAP algorithm refers to the prosody/syntax mapping, we expect the target-consistent word order (OV) to co-occur with the correct production of the prosodic strong–weak pattern.
Let us now systematically examine these two predictions.
2. The data
In this section we will explore the earliest object–verb constructions in infinitives of the type Brot essen ‘bread eat’ (with and without verb prefixation). Special attention will be paid to the correlation between rhythm (the assignment of relative prominence) and word order. In addition, we will examine the parallel
development of stress assignment in compounds. As will be shown in the discussion section, compounds and object–verb structures in German are closely related in terms of word order and prominence pattern. It will be argued that this overlap exerts a considerable influence on the acquisition of the prominence relationship in object–verb constructions. The syntactic study of early word order in German in Schönenberger, Penner and Weissenborn (1997) clearly confirms the prediction in (6a). The data are summarized in (7). Note that in all the data examined here the verb is non-finite. (7)
a. S.'s corpus 1;10–2;2: 618 non-finite utterances
   98.4% OV:
     Zähne putze(n) 'teeth (to) brush' (S., 1;10,22)
     Schuh ausziehn 'shoe (to) off-take' (S., 1;11,13)
   1.6% VO:
     *nich buttmache(n) lumlum 'not (to) destroy balloon' (S., 2;00,01)
b. J.'s corpus 1;2–2;4: 374 non-finite utterances
   98.7% OV:
     Bauue hole 'ball (to) get' (J., 1;08,22)
     Dip fundet 'jeep found' (J., 1;08,25)
   1.3% VO:
     *hole Buech '(to) fetch book' (J., 2;01,25)
These data show that the parameter of head directionality in German is acquired extremely early. These findings confirm previous work on the acquisition of German (Stern & Stern 1928; Roeper 1972). Interestingly, comparable numbers are also found in a study of 10 language impaired children. An examination of 84 transitive infinitives in the corpus (Penner 1998) shows that target-inconsistent VO patterns occur only in two utterances (2.4%). That is, the target-consistent OV pattern seems to be extremely stable not only in early child grammar, but also in the speech production of language impaired children. So much for the word order data. We now turn to a more detailed examination of the prosody/syntax interface in the domain of object placement.4 We will be mainly concerned with the prediction (6b) of the RAP account. For the purpose of answering this question, we will explore the earliest data of 3 normally developing children and 4 young, language impaired children. The prosodic analysis of the data is based on CSL representations (Computerized
Speech Lab, Kay Elemetrics Corp., 5.05).5 Using the parameter "intensity", we evaluated each relevant utterance according to two features, namely [Stress] and [Break]. The basis of the evaluation was a comparison with the control data of language-unimpaired children in the university day care as well as with the adult pronunciation of the target expression. While the primary data of this study are the OV structures, we will add data concerning the parallel development of compounds in the survey of 5 out of the total of 7 children. As for the feature [Stress], three values have been distinguished, namely:

[s(trong) > w(eak)] (target-consistent)
[w(eak) > s(trong)] (target violation)
[Level Stress]
The notion of "level stress" (or equal stress) refers to the absence of relative prominence and has a default character. "Level stress" is typical of intermediate stages in the acquisition of prosody at both word and compound level (cf. Fikkert 1994 and this volume) as well as of stagnation in the prosodic development of language-impaired children (cf. Fikkert & Penner 1998; Fikkert, Penner & Wymann 1998; Penner, Wymann & Dietz 1998; Penner & Wymann 1999). In agreement with our control data, we define Level Stress as "less than 7 dB difference between the s peak of Word1 and the s peak of Word2". The CSL representation in Figure 1 is a typical and representative example of "level stress" and is taken from the production data of the language-impaired child N.Q. The feature [Break] measures the duration of phonetic inactivity between two units (e.g. between the object and the verb). As confirmed by the control data, the basic assumption is that there is no such inactive period between object and verb within OV constructions in the target language. We will treat items separated by a break as pausal forms. The definition of canonical phrasal prosody is based on the two parameters [Break] and [Stress]: (8)
Canonical Phrasal Prosody (German)
The phonological phrase is prosodically canonical iff the energy difference between the s peaks is not lower than 7 dB and there is no pause (phonetic inactivity) between the complement and the head.
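The 7 dB threshold for Level Stress together with the criteria in (8) can be read as a small decision procedure over two measurements per utterance. The following sketch is purely illustrative (it is not the authors' actual CSL evaluation routine; the function name, the dB inputs, and the added requirement that a canonical phrase be strong–weak rather than weak–strong are our assumptions):

```python
# Illustrative sketch of the [Stress]/[Break] evaluation and the
# canonicity criterion in (8). Names and data layout are hypothetical.

def classify(peak1_db, peak2_db, break_ms):
    """Classify an object-verb utterance from the intensity peaks (dB) of
    the two words and the duration (ms) of any pause between them."""
    diff = peak1_db - peak2_db
    if abs(diff) < 7.0:
        stress = "Level Stress"      # < 7 dB between the two s peaks
    elif diff > 0:
        stress = "sw"                # object more prominent (target pattern)
    else:
        stress = "ws"                # target violation
    has_break = break_ms > 0         # phonetic inactivity between O and V
    # assumption: canonicity requires the target sw pattern and no pause
    canonical = stress == "sw" and not has_break
    return stress, has_break, canonical

# e.g. an utterance like N.Q.'s 'wasser holen': near-equal peaks, no pause
print(classify(62.0, 59.5, 0))  # prints: ('Level Stress', False, False)
```

On this reading, the tables below simply report, for each utterance, the triple returned by such a procedure.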
Figure 1. CSL representation of an utterance from N.Q.'s production (wasser holen [water get] 'get water') as an example of level stress
2.1 Normally developing children

2.1.1 H.'s data

22 transitive infinitives are attested in H.'s corpus until the age of 2;0.6 As can be seen from the data in Table 1, there is a clear discrepancy between the prosodic and the syntactic data. While none of the utterances violates the OV rule, the target pattern (without a break) is found in only 4 cases (18%) (repetitions are left out). Based on the analysis of H.'s data and in line with our definition of canonical phrasal prosody in (8), we will assume that H.'s OV structures are pre-canonical.
Table 1. H.'s OV structures

 1. 1;06,13 blau nehmen [+OV; +Level Stress; −Break]
 2. 1;08,27 Fleisch gekauft [+OV; +Level Stress; +Break 350 ms]
 3. 1;08,27 Ball malen [+OV; +Level Stress; +Break 1400 ms]
 4. 1;09,03 des waschen [+OV; +Level Stress; +Break 150 ms]
 5. 1;09,06 Fisch essen [+OV; ; ?Break]
 6. 1;09,07 Milch trinken [+OV; ; −Break]
 7. 1;10,16 Auto gemalt [+OV; +Level Stress; −Break]
 8. 1;10,20 auto mal [+OV; ; +Break 450 ms]
 9. 1;10,26 Brot essen [+OV; +Level Stress; −Break]
10. 1;11,19 eile machen (heile machen) [+OV; +Level Stress; −Break]
11. 1;11,19 eile machen [+OV; +Level Stress; −Break]
12. 1;11,26 ei:s daufen (Eis kaufen) [+OV; +Level Stress; −Break]
13. 1;11,28 ei machen (streicheln) [+OV; ; −Break]
14. 1;11,29 Hände waschen [+OV; ; −Break]
15. 1;11,29 eimer o:l (Eimer holen) [+OV; ; −Break]
16. 1;11,29 eimer o:l (Eimer holen) [+OV; +Level Stress; −Break]
2.1.2 K.'s data

K.'s first object–verb constructions are summarized in Table 2.

Table 2. K.'s OV structures (sessions 1;09,15, 1;09,21, 1;09,28, 1;10,25 and 1;11,21)

 1. Hände putzen [+OV; +Level Stress; −Break]
 2. Hand putzen [+OV; ; +Break 600 ms]
 3. Bein waschen [+OV; ; +Break 500 ms]
 4. Buch gucken [+OV; ; +Break 740 ms]
 5. (Mund?) Hand putzen [+OV; ; +Break 900 ms]
 6. Auto holen [+OV; ; +Break 300 ms]
 7. Markt schauen [+OV; ; +Break 500 ms]
 8. Zug zeigen [+OV; ; +Break 600 ms]
 9. dieda bauen [+OV; +Level Stress; +Break 350 ms]
10. Tücher putzen [+OV; ; +Break 400 ms]
11. beiden bauen [+OV; ; +Break 300 ms]
12. Haus machen [+OV; ; +Break 350 ms]
13. Fernsehen gucken [+OV; ; +Break 100 ms]
14. alle mischen [+OV; +Level Stress; −Break]
15. Keks essen [+OV; +Level Stress; −Break]
16. dreirad fahren [+OV; ; −Break]

K.'s data display a clear discrepancy between word order and prosodic representation. While none
of the utterances in Table 2 violates the object placement rule, the first and only OV construction with a canonical prosodic representation is utterance no. 16 (6%). The main characteristic of K.'s developmental path is the feature [Break], which decreases from file to file (Figure 2).

Figure 2. Decrease of [Break] time in the OV structures of K. (plotted values: 0.6 s — 100%; 0.5 s — 83%; 0.48 s — 79%; 0.25 s — 42%; 0 s — 0%)
As can be seen from Table 2, the feature [+Level Stress] is attested in only 4 cases (1; 9; 14; 15), with no clear pattern of decrease over time. In line with our definition of canonical phrasal prosody in (8), we will assume that K.'s OV structures are non-canonical. In order to examine whether or not the acquisition of the compound stress rule (cf. (9)) correlates with the development of OV structures, we regularly elicited compounds. The results are summarized in Table 3. The feature [+Level Stress] is attested in 8 cases (66%). Interestingly enough, the 4 instances of sw are disyllabic (either in the (truncated) child's template or in the target word itself). The feature [Break] is assigned [+] in 8 cases (5 multi-syllabic; 3 disyllabic). Note that the duration of the break in the
Table 3. K.'s compounds (sessions between 1;09 and 2;00)

 1. Regenjacke [+Level Stress; +Break 80 ms]
 2. Hasentier [+Level Stress; +Break 100 ms]
 3. Ringelschwanz [+Level Stress; −Break]
 4. Motorrad [; −Break]
 5. Blumentopf [; +Break 200 ms]
 6. Fahrradschlüssel [+Level Stress; +Break 350 ms]
 7. Flugzeug [+Level Stress; +Break 150 ms]
 8. Autogarage [+Level Stress; +Break 200 ms]
 9. Holzsteine [+Level Stress; −Break]
10. Babycousin [+Level Stress; +Break 700 ms]
11. Tierhaus [; −Break]
12. Tierbuch [; +Break 200 ms]
compounds is significantly shorter than in the (early) OV structures. Given these data, K.'s compounds, on a par with his OV structures, are prosodically non-canonical.

2.1.3 E.'s data

E.'s first object–verb constructions are summarized in Table 4 below.7

Table 4. E.'s OV structures (sessions 1;05,07, 1;05,29 and 1;06,09)

1. Nuss malen (baln) [+OV; +Level Stress; −Break]
2. Fische angeln (nisse) [+OV; +Level Stress; −Break]
3. Turm bauen [+OV; ; +Break 50–100 ms]
4. Turm bauen [+OV; ; −Break]
5. Gitarre spielen [+OV; +Level Stress; −Break]
6. desse malen [+OV; +Level Stress; −Break]
7. Turm bauen [+OV; +Level Stress; −Break]
E.’s data show the same discrepancy between word order and prosodic representation already observed in H. and K. While all the relevant utterances adhere to the target rule (i.e. they are target-consistent OV’s), none of the OV structures is prosodically canonical according to our definition in (8).
A similar non-canonical prosodic representation with the predominant pattern [+Level Stress] is also found in E.'s compounds. The data are summarized in Table 5.

Table 5. E.'s first compounds (sessions 1;05,29 and 1;06,09)

1. Motorrad (motogolat) [; −Break]
2. Fussball [+Level Stress; −Break]
3. Spielplatz [+Level Stress; −Break]
4. Kokoadeeis (Schokoladeneis) [+Level Stress; −Break]
5. Neckehaus (Schneckenhaus) [+Level Stress; −Break]
2.2 Language impaired children

This subsection outlines the phonology of the earliest object–verb constructions and compounds (if at hand) in 4 language-impaired children.8

2.2.1 V.'s data

The OV constructions of V.9 show the typical "level stress" pattern. The data are summarized in Table 6.

Table 6. V.'s OV structures

1. 2;02;23 Be(tt) ge(hen) [+OV; +Level Stress; −Break]
2. 2;02;23 grossen machen [+OV; ; −Break]
3. 2;02;23 Kuche(n) machen [+OV; ; +short Break 50 ms]
4. 2;03;06 Mittag essen [+OV; +Level Stress; +Break 100 ms]
5. 2;03;26 Wasser (ge)holt [+OV; +Level Stress; +short Break 50 ms]
6. 2;04;04 Zug holen [+OV; ; +Break 600 ms]
Note that although the OV pattern in item 3 (Table 6) is , the object noun itself is , thus violating the trochaic pattern of the target word. We will conclude that only item 2 in V.'s early object–verb constructions is canonical (17%). There is thus a clear discrepancy between the target-consistent word order and the corresponding prosodic pattern in V.'s data. To examine whether the acquisition of the compound stress rule correlates with the development of the OV constructions, we summarize V.'s compounds in
Table 7. As can be seen from Table 7, V.'s first compounds are all prosodically non-canonical.

Table 7. V.'s early compounds

1. 2;03;06 Autobahn [+Level Stress; +short Break 100 ms]
2. 2;03;06 Rucksack [; +short Break 80 ms]
3. 2;03;26 Fischbrunnen [+Level Stress; +Break 300 ms]
4. 2;03;26 Regenschirm [+Level Stress; +Break 500 ms]
5. 2;04;04 Güterzug [; +Break 250 ms]
2.2.2 N.W.'s data

N.W.'s first object–verb constructions are summarized in Table 8.10

Table 8. N.W.'s OV structures

1. 3;00;28 "mämmäm" neh(men) (Essen) [+OV; +Level Stress; −Break]
2. 3;01;11 Auto neh(men) [+OV; +Level Stress; −Break]
3. 3;02;00 "täfi" neh(men) (Bonbon) [+OV; ; −Break]
4. 3;02;00 Schneemann bau(en) [+OV; +Level Stress; +short Break 75 ms]
The early OV data of N.W. are typical examples illustrating the feature [+Level Stress]. Note that, as in the case of V.'s corpus, the pattern in 3 (Table 8) is not entirely target-consistent due to the fact that the object noun is erroneously . Under these circumstances, all OV constructions are target-consistent with respect to word order, while none of them is prosodically canonical. There are only two compounds in the early data of N.W., both of which are [+Level Stress] (Table 9).

Table 9. N.W.'s compounds

1. 3;00;28 Schneemann [+Level Stress; −Break]
2. 3;00;28 Schneemannhund [+Level Stress; +Break 780 ms]
2.2.3 D.'s data

D.'s first object–verb constructions are summarized in Table 10 (repetitions are left out).11

Table 10. D.'s first OV structures

1. 2;03;17 Ei neh(men) [+OV; +Level Stress; −Break]
2. 2;03;17 Foto schauen [+OV; +Level Stress (slightly ); +Break 150 ms]
3. 2;03;17 Velo fahren [+OV; +Level Stress; −Break]
The early OV data of D. regularly display the feature [Level Stress], while the feature [Break] appears in only one of D.'s earliest object–verb constructions. The non-canonicity of the prosodic pattern contrasts with the target-consistent word order. As in the OV structures, the predominant pattern in D.'s early compounds is [Level Stress] (Table 11).
Table 11. D.'s first compounds

1. 2;03;17 Auspuff [+Level Stress; +Break 240 ms]
2. 2;03;17 Os(ter)has(e) [+Level Stress; +Break 200 ms]
3. 2;03;17 Postauto [+Level Stress; +Break 150 ms]
4. 2;03;17 Kohlewagen [; −Break]
2.2.4 N.Q.'s data

N.Q.'s object–verb constructions are summarized in Table 12.12

Table 12. N.Q.'s earliest OV structures

1. 3;05;14 Augen zu zu (machen) [+OV; +Level Stress; +Break 100 ms]
2. 4;03;28 Velo fahren [+OV; +Level Stress; +Break 100 ms]
3. 4;03;28 Brot essen [+OV; +Level Stress; −Break]
There is no violation of the word order rule in N.Q.’s OV structures. This contrasts with the fact that they are all prosodically non-canonical.
2.3 Summary: Target-consistent word order vs. the non-canonicity of the phrasal prosody

To sum up this section, the earliest object–verb structures in both normally developing and language-impaired children display a clear discrepancy between word order and prosody. While all the children correctly produce OVs (with a single exception in V.'s corpus), the corresponding prosodic representation of these structures is unequivocally non-canonical in our terminology. The data are summarized in Table 13.

Table 13. Object placement vs. phrasal prosody: A summary

Ch.   Language development  Syntactic representation              Prosodic representation  Prevalent pattern
H.    normally developing   OV target-consistent                  non-canonical            [Level Stress]
K.    normally developing   OV target-consistent                  non-canonical            [Break]
E.    normally developing   OV target-consistent                  non-canonical            [Level Stress]
V.    language impaired     OV target-consistent (one exception)  non-canonical            [Level Stress/ws]
D.    language impaired     OV target-consistent                  non-canonical            [Level Stress]
N.W.  language impaired     OV target-consistent                  non-canonical            [Level Stress]
N.Q.  language impaired     OV target-consistent                  non-canonical            [Level Stress]
These findings go hand in hand with the observation that the prosodic representation of compounds is non-canonical as well. This correlation holds primarily for E.’s data and for the data of all language impaired children. In these children, both compounds and object–verb constructions display level stress. In K.’s data the state of affairs is somewhat more intricate due to the fact that disyllabic compounds are more likely to display the correct sw pattern.
3. Discussion
The data of both language impaired and young, normally developing children exhibit an evident discrepancy between word order and prosodic representation. All the children adhere to the object placement rule (OV), regardless of whether or not they produce the canonical prosodic representation of VPs. These robust findings confirm the prediction in (6a), but disconfirm (6b), repeated here for convenience: (6)
a. On the assumption that the learner has access to the rhythmic structure in the pre-linguistic period, we expect the child to be capable of setting the directionality parameter very early. In other words, the RAP predicts that object placement in the child's production be target-consistent from the onset of the two-word stage.
b. Given that the RAP algorithm refers to the prosody/syntax mapping, we expect the target-consistent word order (OV) to co-occur with the correct production of the prosodic strong–weak pattern.
In this section we will address the question of how this discrepancy emerges in the child's early speech production.13 In their pilot study, Guasti et al. (this volume) report that French babies from 6 to 12 weeks of age are capable of discriminating between French and Turkish utterances on the basis of relative prominence within phonological phrases. The authors conclude that the directionality parameter can be set extremely early by referring to the prominence relationship between head and complement as the main prosodic cue. In the light of these findings it would not be entirely implausible to assume that this kind of early sensitivity to the distribution of strong and weak elements plays a central role in discovering the object placement rule in the pre-linguistic period. This would account for the target-consistent production of OV structures from the onset in both language impaired and normally developing children. If this conclusion is basically correct, Guasti et al.'s theory implies that the child is capable of mapping rhythmic data onto the syntax of phrases at a very early stage (for recent work on this issue with respect to the acquisition of German word order see Höhle, Weissenborn, Schmitz & Ischebeck, this volume). Unfortunately, this hypothesis leaves the glaring syntax/prosody asymmetry at the production level unexplained. These considerations raise the question whether the syntax/prosody asymmetry in our findings can be adequately accounted for as a special case of the "perception prior to production" maxim in language acquisition. This hypothesis
refers to the observation that infants are sensitive to grammatical morphemes long before they produce them (for a detailed overview of this issue cf. Golinkoff et al., this volume). For instance, children not yet producing bound morphemes like English -ing or determiners are nonetheless capable of analyzing their function in the speech flow and of using them in sentence comprehension and segmentation (for recent work on the perception of determiners in German see Höhle & Weissenborn 2000). Thanks to recent experimental techniques like the Head Turn Preference procedure, an impressive body of evidence has been gathered in support of the "perception prior to production" hypothesis. The assumption that infants have considerably more grammatical knowledge than is reflected in their early utterances seems to be empirically established. Within the framework of the "perception prior to production" hypothesis, the acquisition of the object placement rule in German would be best captured as an epiphenomenon of two independent performances, with one performance taking precedence over the other. However, this assumption is not at all trivial. While it seems plausible that no prosodically correct OV structures may occur prior to the two-word stage, it is not at all clear why children go through a long period during which they adhere to the word order rule but systematically fail to apply the corresponding prosodic rule which they are supposed to have mastered. This asymmetry deserves further explanation. A detailed analysis of object placement in German from a learnability point of view may shed some light on this issue.14 Let us assume that the apparent discrepancy between the word order data and the prosodic representation in the child's production can best be accounted for in terms of input opacity and learnability constraints.
In order to understand how this discrepancy emerges, we first need a more detailed analysis of the input with regard to the interaction of different stress rules. A more careful examination of the stress rules in German reveals some potential difficulties for the RAP as an algorithm for setting the directionality parameter.15 This is due to the fact that the "triggering domain" the child is supposed to pay special attention to, namely the phonological phrase, involves not just the basic representation of the relative prominence of heads vs. complements, but also various stress shift rules. The latter cause changes in the prominence representation, due to which the information needed in order to successfully map prosodic data onto syntactic configurations may become opaque for the child. One such mechanism of stress shift takes place in compounds. Basically, compounds (like full-fledged VPs) in German are taken to be phonological phrases (cf. Wiese 1995:298 ff.). From the point of view of relative prominence, compounds indeed behave like object–verb constructions of the type Brot essen
'bread eat' in (4), displaying the sw pattern. That is, on a par with (4), the complement (which is the leftmost element) is s, whereas the head of the compound is w. This holds for both verbal and nominal compounds, as shown in (9):

(9) a. [ϕ [ω Staub]s [ω saugen]w] 'dust suck'
    b. [ϕ [ω Klavier]s [ω spielen]w] 'piano play'
    c. [ϕ [ω Wein]s [ω glas]w] 'wine glass'
    d. [ϕ [ω Fuss]s [ω ball]w] 'foot ball'
Being exposed to an input fragment which includes utterances such as (9a–d), it seems at first glance that the RAP-learner would successfully derive the correct complement > head order by referring to the compound as the relevant triggering domain. The following factors, however, indicate that this cannot be the correct generalization and that the distribution of strong and weak elements is subject to more complex interface rules than just the basic prominence relationship. Note first that the compound stress rule generally holds for simple, two-word compounds such as Fussball 'foot-ball'.16 However, things become more intricate once compounding involves more than two simple members. In this case, the compound stress rule is sensitive to branchingness. That is, it is regularly the case that the branching sister is assigned s, while the non-branching sister is w, regardless of its position within the compound.17 Within the branching segment of the compound the sw pattern remains unchanged: the complement is marked s, while the head is w. This is shown in (10) ([Stadtplanungs[büro]] 'office for city planning' vs. [Stadt[planungsbüro]] 'planning office of the city'):
(10) a. [ϕ [ϕ [ω Stadt]s [ω planungs]w]s [ω Büro]w]
     b. [ϕ [ω Stadt]w [ϕ [ω planungs]s [ω Büro]w]s]
In such cases, one may assume that the embedded ϕ (i.e. the branching constituent) is automatically assigned s, while the ω constituent is w. Interestingly enough, this is not what happens in VPs. At this level of constituency the object is always more prominent (s), irrespective of whether or not the head branches. This is shown in (11):
(11) [ϕ [ω den Teppich]s [ϕ staub saugen]w]
Another type of stress shift rule is the so-called Rhythmic Reversal. This rule, which is motivated by the well-formedness principle “Avoid Clash” (e.g. two
subsequent s's), applies equally in both verbal phrases and compounds. An example with a simplified grid notation is given in (12):18

(12) Ausziehen 'off-put' → den Rock ausziehen '(the) skirt off-put' → Accent shift: den Rock ausziehen (metrical grids omitted)
The same rule also applies in compounds (with an additional step):

(13) Hand ‘hand’ + Arbeit ‘work’ → Handarbeit ‘handwork, handmade’: compounding first adds a grid level, and the resulting clash is again removed by accent shift [metrical grids omitted]
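The “Avoid Clash” configuration and its repair can be schematized over a simplified grid representation. This sketch is ours, not the authors’: the integer heights, the clash threshold, and the rightward direction of the repair are illustrative assumptions, not the full parametrized system (cf. note 18).

```python
# Minimal "Avoid Clash" checker and schematic repair on a metrical grid.
# A grid is a list of (syllable, height) pairs; height = number of asterisks.

def clashes(grid):
    """Indices i where syllables i and i+1 are both stressed (height >= 2)."""
    return [i for i in range(len(grid) - 1)
            if grid[i][1] >= 2 and grid[i + 1][1] >= 2]

def rhythmic_reversal(grid):
    """Resolve a clash by demoting the right-hand beat and promoting the
    next syllable to its right (one possible repair, assumed here)."""
    grid = list(grid)
    for i in clashes(grid):
        j = i + 1
        if j + 1 < len(grid):
            syl, h = grid[j]
            grid[j] = (syl, h - 1)
            nsyl, nh = grid[j + 1]
            grid[j + 1] = (nsyl, nh + 1)
    return grid

# den Rock ausziehen: the beats of 'Rock' and 'aus' are adjacent and clash
phrase = [("den", 1), ("Rock", 3), ("aus", 2), ("zie", 1), ("hen", 1)]
print(clashes(phrase))            # the clash sits at index 1 ('Rock'/'aus')
print(rhythmic_reversal(phrase))  # the offending beat is shifted away
```

After the repair the grid contains no adjacent stressed syllables, which is all the well-formedness principle demands in this simplified setting.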
Interestingly, although branchingness and Rhythmic Reversal trigger stress shifts, the edges of the phonological phrase seem to preserve the underlying sw pattern of compounds and VPs. As can be seen from (14), although the leftmost syllable may be less prominent than the second one (due to the branchingness effect), it is always more stressed than the rightmost edge:

(14) Welt[spar[tag]] ‘world saving day’ (Weltspartag): spar is the most prominent syllable, but the left edge Welt still outranks the right edge tag [metrical grid omitted]
This “Prominence Preserving Principle” is crucial for understanding the bootstrapping procedure proposed below. This concludes our brief overview of the stress shift rules of the target language.19 Returning to the RAP algorithm, it is obvious that stress shift phenomena in phonological phrases may render the relevant prosodic cues opaque for the RAP-learner. This follows automatically from the assumption that if the child
applied the RAP to any F in her input, stress shift effects might erroneously give rise to wrong decisions with regard to the procedure of the prosody-to-syntax mapping. We know, however, that children systematically avoid wrong decisions. It has been repeatedly proposed in the recent literature on language acquisition that children, given the inaccessibility of negative evidence, must obey certain continuity restrictions in order to avoid irreversible wrong decisions (cf., i.a., Wexler & Manzini 1987; Roeper & de Villiers 1992; Weissenborn 1994; Penner 1996; Penner et al. 1998). This state of affairs yields a familiar learnability problem: in order to avoid wrong decisions, the child has to know which kinds of stress shift rules may modify the underlying prominence relationship. The crucial question is thus: to what extent does the child have access to the basic ingredients of the stress shift rules? Take, for example, the categorial labeling of F-trees. As can be seen from the examples discussed above, the child’s knowledge with regard to the distribution of the stress shift rules presupposes the ability to distinguish between X0-F’s (compounds) and XP-F’s (e.g. full VPs).20 However, this task is by no means trivial for the child. In fact, there is substantial evidence that German-speaking, normally developing children acquire the distinction between (verbal) compounds and verbal phrases long after the emergence of OV structures, namely around 2;3–6. A detailed discussion of this issue would take us too far afield. For a comprehensive analysis the reader is referred to Penner, Wymann and Dietz (1998), who argue that this delay is connected to a long procedure of acquiring features like aspectuality, event structure, and genericity. The inability to assess the categorial identity of the phonological phrase at the onset of the two-word stage obscures the information the child has to refer to as a RAP-learner.
This raises the question of how the child can overcome this opacity without risking wrong decisions. The issue of input opacity has been extensively discussed with respect to the notions of “triggering domain” and “underspecification” in the recent literature. Roeper and Weissenborn (1990), Roeper and de Villiers (1992), and Penner (1994) propose that, in the absence of negative evidence, children must adhere to principles such as the “Avoid Irreversible Wrong Decisions” maxim. This maxim can be maintained if the child systematically succeeds in restricting the triggering domain to designated contexts of minimal ambiguity. This is, for instance, the case with embedded clauses as a triggering domain for the pro-drop parameter. Roeper and Weissenborn (1990) argue that main clauses constitute unreliable contexts for the pro-drop parameter. Their basic argument is that subject pronouns in main clauses may undergo deletion in
specific contexts of discourse representation. (15) is a good example of this phenomenon:

(15) raining out today
This kind of discourse-driven subject drop is restricted to main clauses. Given that discourse cannot influence subordinate clauses, subject drop of this kind is ruled out in non-root environments:

(16) *I think raining out today

In this regard, English differs from true null-subject languages like Italian, in which the subject may equally be dropped in both root and non-root contexts, with a missing (or pro) subject in the subordinate clause:

(17) Pia ha detto che è andata al cinema
     Pia said that (pro) went to the movies
In other words, root clauses are structurally epiphenomenal, given that not only syntactic rules but also discourse-representational mechanisms apply in this domain. This is not the case in embedded clauses, which seem to be resistant to this kind of additional discourse effect. In order not to draw wrong conclusions with regard to sentential structures, the child initially has to ignore the obscuring data of the root clause and confine herself to the material in embedded sentences as a default choice. A similar argument can be made with regard to the initial stage of learning the OV rule on the basis of the RAP algorithm. We assume that, in order to overcome the opacity problem caused by stress shift phenomena without risking (irreversible) conflicts with the target language, the child must systematically reduce the range of prosodic data she is exposed to to a specific subset. One possible solution in applying the RAP algorithm at the level of the phonological phrase would be for the child to limit her attention to the edges of the phonological phrase, while ignoring the grid-internal distribution of weak and strong syllables. If the “Prominence Preserving Principle” alluded to in (14) above is indeed a stable rule of the ambient language, then the output of this procedure should be uniform for the child: the leftmost edge is invariably stronger than the rightmost one. In mapping this information onto the corresponding syntactic configuration, the child can successfully derive the value of the head directionality parameter on the basis of the RAP algorithm. In agreement with the “Avoid Irreversible Wrong Decisions” maxim, reference to the edges is selective enough to enable the child to add more rhythmic information in the course of development without having to revise the basic configuration.
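The edge-based procedure just described can be sketched as a toy learner. The encoding is our own assumption (prominence as integer grid heights, one list per phonological phrase), and the function and its return values are hypothetical illustrations, not the authors’ formalization:

```python
# Sketch of an edge-based application of the RAP: the learner compares only
# the two edges of each phonological phrase and ignores grid-internal
# prominence, so branchingness and Rhythmic Reversal cannot mislead it.

def edge_based_rap(phrases):
    """Each phrase is a list of syllable prominence heights.
    Returns the inferred head-directionality value, or None if the
    input is inconsistent or uninformative."""
    votes = set()
    for grid in phrases:
        left, right = grid[0], grid[-1]
        if left > right:
            votes.add("complement-head (OV)")   # s...w edges: head-final
        elif right > left:
            votes.add("head-complement (VO)")   # w...s edges: head-initial
        # equal edges are uninformative and skipped
    return votes.pop() if len(votes) == 1 else None

# Schematic German input: in the second phrase an internal syllable outranks
# the first (branchingness), but the left edge still beats the right edge.
german_input = [
    [3, 1, 1],   # e.g. den Teppich staubsaugen (schematic heights)
    [2, 3, 1],   # e.g. Welt-spar-tag: internal shift, edges preserved
]
print(edge_based_rap(german_input))
```

Because both grids keep a stronger left edge, the learner converges on the OV value even though their internal prominence patterns differ, which is exactly the robustness the edge-based restriction is meant to buy.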
The F-tree representation the child is initially forced to refer to is radically underspecified. We assume that the observed deviations from the pattern of the target language in the children’s production are attributable to this kind of underspecification of the F-tree. More technically, as long as the F-tree is radically underspecified, the child avoids the projection of stress patterns above the word level. The initially typical prosodic “errors” such as level stress and illicit breaks are the overt expression of this limitation. They indicate that the child realizes the constituents of the F-tree as if they were independent words. We assume that the bootstrapping procedure proposed here is extremely robust. As can be seen from the data, the RAP algorithm is successfully applied both by very young, normally developing children and by language impaired children. The main difference between these groups is the duration of the interim solution. While the underspecified representation is a short-lived stage in normally developing children, it seems to become persistent in language impaired children. To conclude this chapter, let us briefly summarize our arguments. The point of departure in our discussion has been the RAP algorithm. This algorithm says that the child derives head directionality by mapping the pattern of relative prominence in the phonological phrase onto the corresponding complement–head configurations in the syntax. An examination of the production data revealed an unexpected but clear-cut asymmetry between systematic violations of the sw pattern in speech production and target-consistent word order. To account for the data, we have argued that this discrepancy is rooted in the opacity of the input, which emerges due to the interaction of the underlying pattern of relative prominence with additional stress shift rules. The full representation of this interaction is not accessible to the child during the initial period.
Adhering to the “Avoid Irreversible Wrong Decisions” maxim, the child is forced to resort to a radically underspecified representation of the phonological phrase, filtering out non-edge stress. The deviations from the sw pattern (level stress, breaks) overtly express this underspecification. If our account of the learning procedure is basically correct, then the child’s initial failure to produce sw phonological phrases is not attributable to mysterious performance factors, but rather to a conservative learning principle, namely the “Avoid Irreversible Wrong Decisions” maxim. Within this theoretical framework, the notion of “perception precedes (prosodic) production” becomes epiphenomenal. That is, it is not simply the case that there is some arbitrary lag between perception and production of the same structures. The delay at the production level follows from the fact that intricate interface data force the child to resort to intermediate representations in which the affected modules are reduced in a predictable way.
Acknowledgments

We thank Andreas Fischer, Cornelia Dietz, Sandra Kieseheuer and Uli Scharnhorst for their continuing support in collecting and analyzing the data. We are grateful to Christiane von Stutterheim for putting H.’s data at our disposal. Special thanks to Caroline Féry, Paula Fikkert, and Barbara Höhle for helpful comments. The usual disclaimers apply. The work of Jürgen Weissenborn has been supported by a grant from the German Science Foundation (DFG) in the framework of an interdisciplinary research project on ‘Formal Models of Cognitive Complexity’.
Notes

1. For a detailed overview see Golinkoff & Hirsh-Pasek (1996), Jusczyk (1997), and Höhle & Weissenborn (1999).
2. For an overview cf. Golinkoff et al. (this volume).
3. The fact that children first go through a period of subminimal words in speech production is well documented in Fikkert (1994) and related work.
4. The data base of this study consists of 3 corpora of normally developing children (H. (f.), K. (m.), and E. (m.)) and 4 corpora of young, language impaired children (D. (m.), V. (f.), N.W. (m.), and N.Q. (f.)). H.’s corpus documents the child’s development from 1;0 to 3;04. It comprises 236 transcribed recordings with 24,687 utterances (72,900 words). H.’s corpus has been recorded and prepared for analysis by Ch. von Stutterheim and U. Scharnhorst. K.’s and E.’s corpora document the children’s language development from 1;6 and 0;11 onwards, respectively. All the data, with the exception of H.’s corpus, are part of the data bank of the research project “Normaler und Gestörter Erwerb der Lexikon/Syntax-Schnittstelle und die Entstehung lexikalischer Variation” (Sonderforschungsbereich “Variation und Entwicklung im Lexikon”, University of Konstanz). The recording and analysis of the data of the language impaired children forms part of a longitudinal study. D.’s corpus documents the child’s language development from 1;09 to 4;11. V.’s corpus documents the language development from the age of 1;09 to 5 years and older. N.W.’s corpus documents the child’s development from 2;02 to 5 years and older. The earliest part of N.Q.’s data has been recorded by her parents. The study of N.Q.’s corpus begins at the babbling stage and documents the child’s language development until the age of 5;6 and older.
5. With only very few exceptions (in cases where background noise made the computerized analysis impossible), the entire relevant data of all children tested was analysed and carefully cross-checked on the basis of CSL representations.
6.
The relatively low score of object–verb constructions in H.’s large corpus is attributable to the fact that in many utterances the verb and its object display a complementary distribution (i.e. either the object or the verb is dropped). An account of this phenomenon is found in Penner, Wymann and Dietz (1998).
7. The following OV constructions could not be completely evaluated due to background noise: Katze malen (1;05;07); Sonne malen (1;06;09); Augen malen (1;06;09); (K)atze malen (1;06;09); Wasser trinken (1;06;09).
8. The analysis presented here is based on early production data within a longitudinal study. The survey of the entire data corpus clearly indicates that these young language impaired children suffer from a severe language acquisition delay. According to the literature, 50% of the group
of 13–20% language acquisition delayed children develop persistent language disorders, i.e. become dysphasic. Their lexical and grammatical development often stagnates after short periods of active learning. In the period between 2;0 and 3;0 these late talkers typically begin to produce multiple word utterances (cf. Fikkert, Penner & Wymann (1998) and Penner, Wymann & Dietz (1998)). These children have not caught up on their delay compared to normally speaking children. The data of these language acquisition delayed children form part of a larger data corpus which contains data from 6 late talkers. The children are speakers of Bernese, a Swiss German dialect.
9. V.’s onset of language production was at the age of 21 months, which is very typical of late talkers. V.’s mother reports that V. never babbled and that the communication between mother and child was non-verbal. The first verbal particles occur at the age of 2;01. V. employs one deictic particle as a place holder for all deictic particles. The grammar also shows a considerable delay in several modules, namely phonology, prosody, and syntax. V.’s comprehension is very poor (especially in wh-questions).
10. N.W. is a very seriously language acquisition delayed child. The development of his vocabulary stagnated up to the age of 2;11. The verbal part of the lexicon was distinctively particle-oriented. At the age of 3;0 N.W. reached the period of the “vocabulary spurt”, which normally developing children typically reach at the age of 1;06, only to a reduced extent. The prosodic delay of N.W. is still manifest at the age of 3;05. N.W.’s question comprehension is, as in V.’s case, extremely reduced. The first focus particle auch ‘also, too’ and the first verbal particles appear at 2;10. At the age of 3;04 the first sentential wh-questions appeared. At this age the development of N.W.’s verb lexicon stagnates.
11. D. is a language acquisition delayed child with a late onset of language production. Compared to other late talkers, the development of D.’s lexicon is faster. The OV structures are more advanced, but the utterances are prosodically and syntactically not target-consistent. D. might develop into a so-called “late bloomer”, but D. has not caught up on the delay compared to normally developing children.
12. N.Q. is an extremely seriously language acquisition delayed child. N.Q.’s prosody and grammar show a considerable delay of two years in all modules (cf. Fikkert, Penner & Wymann 1998). At the age of 3;04 N.Q.’s vocabulary consists only of monosyllabic CV structures. N.Q.’s object–verb constructions show the typical level stress pattern with a short break. If syllables are doubled, N.Q. often uses the weak instead of the target-consistent strong syllable.
13. We will leave open the question of to what extent the prosodic ill-formedness of early OV constructions is attributable to a delay in the development of motor skills. However, it should not go unnoticed that rhythmicity in the child’s speech production is already observed during the stage of canonical babbling (cf. Vihman 1996:109 ff.). This is especially true with regard to the sw pattern at the foot level.
14. We will leave open the question of how the child can identify the VP within the clause structure. Being aware of this problem, Nespor et al. (1996) and Guasti et al. (this volume) stipulate that, in applying the RAP, the child must be sensitive not only to single F’s in isolation, but also to the general rhythmic pattern of F sequences at the level of the Intonational Phrase. Within the Intonational Phrase, F’s occur recursively. Putting aside focus configurations, this results in prominence patterns which display a sequence of right-prominent F’s with a single “exceptional” F at the right edge of the Intonational Phrase. Given that the exceptional F is just one per Intonational Phrase, the child can come to the conclusion that the latter is likely to be the embedded OV structure (e.g. the bare infinitive complement of an
auxiliary verb). The remaining F’s, on the other hand, are the head-initial NPs, APs, and PPs, which occur multiply at the level of the Intonational Phrase.
15. For reasons of space we ignore here the fact that in German the VP is head-last, while PPs and DPs are either head-last or head-first. Cf. Nespor et al. (1996) and Guasti et al. (this volume) for a detailed discussion of this issue in Dutch.
16. We disregard coordinative compounds of the type Baden-Württemberg.
17. For word-specific exceptions cf. Wiese (1996:308 ff.).
18. Cf. Wiese (1995:306 ff.) and the literature he cites. An additional complication is the fact that rules that eliminate arhythmies are subject to parametrization (cf. Nespor 1990). We will not discuss this issue here, although it may turn out to be crucial for the acquisition of rhythm.
19. Note that, in addition to the categorially driven distribution of the stress shift rules, arhythmies are subject to parametric variation (cf. Nespor 1990). This is an additional factor in the child’s learning procedure which we will ignore here.
20. We will leave open the question of how the distinction in the prosodic status of compounds and OV structures in German emerges. Possible explanations can be connected either to the role of branching (cf. Inkelas & Zec 1996) or to the distinction between lexical and postlexical rules in the sense of Kiparsky’s model of Lexical Phonology (as adapted to German; cf. Wiese 1995). For a detailed discussion of accent shift in German cf. Féry (1986).
References

Bayer, J. 1990. Directionality of Government and Logical Form. Habilitation Thesis. University of Konstanz.
Féry, C. 1986. “Metrische Phonologie und Wortakzent im Deutschen.” Studium Linguistik 20: 16–43.
Fikkert, P. 1994. On the acquisition of prosodic structure. Dordrecht: ICG Printing.
Fikkert, P. 1995. “Models of Acquisition: How to Acquire Stress.” NELS Proceedings.
Fikkert, P. (this volume). “Prosodic structure and compounds.”
Fikkert, P. and Penner, Z. 1998. “Stagnation in prosodic development of language-disordered children.” In Proceedings of the 22nd Annual Boston Conference on Language Development, Vol. 1, A. Greenhill, M. Hughes, H. Littlefield, and H. Walsh (eds.). Somerville, Mass.: Cascadilla Press.
Fikkert, P., Penner, Z. and Wymann, K. 1998. “Das Comeback der Prosodie.” Logos Interdisziplinär 6/2: 84–97.
Golinkoff, R. and Hirsh-Pasek, K. 1996. The Origins of Grammar. Cambridge, Mass.: The MIT Press.
Golinkoff, R., Hirsh-Pasek, K. and Schweissguth, M. (this volume). “A reappraisal of young children’s knowledge of grammatical morphemes.”
Guasti, T., Nespor, M., Christophe, A. and van Ooyen, B. (this volume). “Pre-lexical setting of the head-complement parameter through prosody.”
Höhle, B. and Weissenborn, J. 1999. “Discovering grammar. Prosodic and morphosyntactic aspects of rule formation in first language acquisition.” In Learning: Rule Abstraction and Representation, A. Friederici and R. Menzel (eds.). Berlin: W. de Gruyter.
Höhle, B. and Weissenborn, J. 2000. “The origins of syntactic knowledge: Recognition of determiners in one year old German children.” In Proceedings of the 24th Annual Boston Conference on Language Development, S.C. Howell, S.A. Fish and T. Keith-Lucas (eds.). Somerville, Mass.: Cascadilla Press.
Höhle, B., Weissenborn, J., Schmitz, M. and Ischebeck, A. (this volume). “Discovering word order regularities: The role of prosodic information for early parameter setting.”
Inkelas, S. and Zec, D. 1996. “Syntax-Phonology Interface.” In The Handbook of Phonological Theory, J. Goldsmith (ed.). Oxford: Blackwell.
Jusczyk, P. 1997. The Discovery of Spoken Language. Cambridge, Mass.: The MIT Press.
Jusczyk, P. (this volume). “Bootstrapping from the signal. Some further directions.”
Koopman, H. 1984. The Syntax of Verbs. Dordrecht: Foris.
Koster, J. 1987. Domains and Dynasties: The Radical Autonomy of Syntax. Dordrecht: Foris.
Nespor, M. 1990. “On the Separation of Prosodic and Rhythmic Phonology.” In The Phonology-Syntax Connection, S. Inkelas and D. Zec (eds.). CSLI Chicago.
Nespor, M., Guasti, M. and Christophe, A. 1996. “Selecting Word Order: The Rhythmic Activation Principle.” Studia Grammatica 41: 1–26.
Penner, Z. 1994. Ordered Parameter Setting in First Language Acquisition. The Role of Syntactic Bootstrapping and the Triggering Hierarchy in Determining the Developmental Sequence in Early Grammar. Habilitation Thesis. University of Berne.
Penner, Z. 1996. From Empty to Doubly-Filled Complementizers. A Case Study in the Acquisition of Subordination in Bernese Swiss German. Arbeitspapier Nr. 77, Fachgruppe Sprachwissenschaft der Universität Konstanz.
Penner, Z.
1998. “Learning-Theoretical Perspectives on Language Disorders in the Childhood.” In Normal and Impaired Language Acquisition. Studies in Lexical, Syntactic, and Phonological Development, Z. Penner and K. Wymann (eds.). Fachgruppe Sprachwissenschaft, Universität Konstanz, Arbeitspapiere Nr. 89.
Penner, Z. and Roeper, T. 1998. “Trigger Theory and the Acquisition of Complement Idioms.” In Issues in the Theory of Language Acquisition. Essays in Honor of Jürgen Weissenborn, N. Dittmar and Z. Penner (eds.). Bern: Peter Lang.
Penner, Z., Wymann, K. and Dietz, C. 1998. “From Verbal Particles to Complex Object–Verb Constructions in Early German.” In Normal and Impaired Language Acquisition. Studies in Lexical, Syntactic, and Phonological Development, Z. Penner and K. Wymann (eds.). Fachgruppe Sprachwissenschaft, Universität Konstanz, Arbeitspapiere Nr. 89.
Penner, Z. and Wymann, K. 1999. “Constraints on Word Formation and Prosodic Disorders.” In Normal and Impaired Language Acquisition II. Studies in Lexical, Syntactic, and Phonological Development, Z. Penner, P. Schulz and K. Wymann (eds.). Fachgruppe Sprachwissenschaft, Universität Konstanz, Arbeitspapiere Nr. 105.
Pustejovsky, J. 1995. The Generative Lexicon. Cambridge, Mass.: The MIT Press.
Roeper, T. 1972. Approaches to a Theory of Language Acquisition with Examples from German Children. Ph.D. Dissertation. Harvard University.
Roeper, T. and Weissenborn, J. 1990. “How to make parameters work.” In Language Processing and Language Acquisition, L. Frazier and J. de Villiers (eds.). Dordrecht: Kluwer.
Roeper, T. and de Villiers, J. 1992. The one feature hypothesis for acquisition. Ms., University of Massachusetts, Amherst.
Roeper, T. 1996. “The Role of Merger Theory and Formal Features in Acquisition.” In Generative Perspectives on Language Acquisition, H. Clahsen (ed.). Amsterdam: John Benjamins.
Schönenberger, M., Penner, Z. and Weissenborn, J. 1997. “Object placement and early German grammar.” In Proceedings of the 21st Annual Boston University Conference on Language Development, Vol. 2, M. Hughes and A. Greenhill (eds.). Somerville, Mass.: Cascadilla Press.
Stern, W. and Stern, C. 1928. Die Kindersprache. Leipzig: Barth.
Travis, L. 1984. Parameters and Effects of Word Order Variation. Unpublished doctoral dissertation. MIT.
Vihman, M. M. 1996. Phonological Development. The Origins of Language in the Child. Oxford: Blackwell.
Weissenborn, J. 1994. “Constraining the child’s grammar: Local well-formedness in the development of verb movement in German and French.” In Syntactic Theory and Language Acquisition: Crosslinguistic Perspectives, Vol. 1: Phrase Structure, B. Lust, M. Suner and J. Whitman (eds.). Hillsdale, NJ: Lawrence Erlbaum.
Wexler, K. and Manzini, R. 1987. “Parameters and Learnability in Binding Theory.” In Parameter Setting, T. Roeper and E. Williams (eds.). Dordrecht: Reidel.
Wiese, R. 1995. The Phonology of German. Oxford: Clarendon Press.
Wymann, K. (in preparation). Profiles and Stages in the Language Development of Late Talkers. A Longitudinal Study on Delayed Language Acquisition of Bernese Swiss German. Ph.D. Thesis.
Zec, D. and Inkelas, S. 1990.
“Prosodically Constrained Syntax.” In The Phonology-Syntax Connection, S. Inkelas and D. Zec (eds.). CSLI Chicago.
Index
A adult-directed speech x, 34, 35 allophonic cues x, 10 ambiguous category 216, 220 amplitude 34, 36, 239 auditory word priming x, 54–57, 59–61, 63, 64 B bootstrap 74, 75, 80, 92, 190, 191, 226 bootstrapping vii, viii, ix, x, xi, 3, 4, 15, 19, 47, 76, 79, 80, 87, 88, 140, 147, 167, 190, 193, 222, 223, 226, 227, 233, 236–238, 245, 246, 249, 264, 267, 270, 285, 288 boundary cues 148, 150, 159 branching direction 154, 235, 249, 257 C case particles 126, 129, 135–137, 140, 141 category assignment 156, 157, 192, 204, 206, 213, 224 CELEX 201, 207, 208, 212, 213, 217, 220, 221 child-directed speech (CDS) x, xi, 19, 52, 53, 61, 71, 73, 74, 77, 153 clause-level syntax 89, 93 closed class elements 125–131, 139 closed class morphemes 182 coalition of cues 184, 185 coarticulation 59
competence and performance 125, 126, 128 comprehension 97–103, 111, 112, 119, 120 consistency xii, 41, 128, 134 cues vi, x, xii, xv, 3, 7, 9, 10, 13–16, 25, 26, 30, 33, 42, 43, 47, 49, 52, 62, 71, 76, 83, 88, 90, 92, 145, 148, 150–157, 159, 167–169, 184, 185, 189–193, 196, 201–212, 217, 223–226, 232, 244, 258, 285 D directionality 268–271, 281, 282, 287, 288 distributional bootstrapping 147, 190 distributional properties viii, xii, 3, 13, 14, 18, 52, 172, 173 duration 34, 35, 52, 103, 109, 117, 119, 175, 192, 201, 207, 258, 259, 260, 272, 275, 288 E early child vocabulary 80 early speech 81, 85, 86 encoding xiii, 7, 9, 57, 58, 82, 114, 148, 197, 201, 202, 204–206, 208, 210, 211, 212, 214, 215, 224, 225 explicit memory 54, 57, 58
F final lengthening 34, 36, 41, 151, 260 final syllables x, 26, 33–43, 59, 149 foreign language 10 frequency viii, xiv, 8, 41, 50, 54, 72, 74, 75, 80, 81, 83, 84, 140, 149, 184, 193, 195, 212, 213, 216, 217, 220–222, 227, 237, 244, 258, 261, 263 functional categories 126 functors 171 G German viii, xiii, xvi, 16, 232, 241–243, 246, 249, 250, 258, 267–272, 281, 282, 286, 289–291 given-new distinction 74, 77 grammatical categories 167, 169–171 grammatical morphemes vi, xi, xii, xiii, 16, 19, 148, 152, 153, 155–157, 159, 167–173, 178–185, 267, 281 H head-complement parameter 234, 239, 243 head direction parameter 257 headturn preference procedure 4, 5, 27 human simulations 83 I iambic 30–32, 42, 191, 197, 198 IBL 193, 194, 196, 197, 203, 205–207, 211, 215, 227 imageability 85 implicit memory v, 47, 54 infant-directed speech 34–37, 41, 99, 115, 176, 207 information gain 195, 196, 200 intermodal preferential looking paradigm 168, 174, 175, 182, 183, 185, 232
intonation 53, 115, 116 Italian 130, 131, 137, 140, 231, 232, 235, 236, 246, 257, 287 J Japanese xii, 128, 129, 131–136, 140, 154, 155, 232, 233 L language comprehension xv, 174, 184 language impaired children 271, 272, 277–281, 288, 289 language processing 47, 99, 139 lemma v, xi, 125–132, 134, 136, 139, 140 level stress 272–274, 276–280, 288, 290 lexical representation 99–102, 106, 107, 111–114, 119, 120 lexical segmentation 74, 148–151, 159 linguistic signal 147, 151, 160 M machine learning 192, 196, 201, 208, 215, 226 maternal speech 83, 85, 87 Metrical Segmentation Strategy (MSS) 11–14 metrical template 26, 42 morphology xiii, xv, 89, 90, 130, 131, 147, 169, 174, 179, 223 morphosyntax 131 multiple-cue system 80 N native language vii, 3, 5, 9, 10, 16, 19, 29, 30, 32, 36, 42, 43, 47, 58, 233, 236 noun bias 85, 86 O object placement 267–271, 275, 280–282
open class xv, 126, 127, 189, 206, 212, 214, 217, 219, 220, 225 P parameter setting vi, xii, xiv, 249 perception prior to production 267, 281 Perceptual Representations System (PRS) 53, 55 perceptual salience 36, 37 phonological bootstrapping 147, 190, 193, 222, 223, 226, 227 phonotactic constraints x, 10, 238 phrase segmentation 151–153, 155, 157 pitch 34–36, 53, 140, 148, 151, 258, 263 procedural knowledge 127, 131, 136 prominence 235–239, 241–245, 257, 258, 260, 263, 267, 269–272, 281, 282, 283, 285–288, 290 prosodic bootstrapping 15, 47, 147, 167, 233, 236–238, 245, 249, 264, 267, 270 prosodic cues x, xii, 25, 26, 42, 43, 151, 154, 190, 232, 244, 285 prosodic hierarchy 234 prosodic organization 3, 15, 19 prosody v, vi, xiii, 15, 25, 87, 115, 151–153, 155, 170, 171, 183, 231, 238–240, 244, 246, 267–273, 275, 279–281, 286, 290 R reaction time 103, 106, 108, 109 rhythm 25, 27, 30, 32, 39, 40, 42, 43, 87, 236, 243–245, 270, 291 rhythmic xii, 9, 26, 27, 30, 32, 33, 36, 53, 236, 237, 257, 268–270, 281, 284, 285, 287, 290 Rhythmic Activation Principle 236, 237, 257, 268–270
S salient syllables x, 25, 26, 38, 39, 43 segmentation v, viii, x, xi, 4, 6, 8, 11, 13–15, 19, 25–27, 30, 32, 33, 36, 39–43, 47, 51, 52, 71, 72, 74, 76, 78, 79, 99, 116, 119, 148–153, 155, 157–159, 168, 172, 184, 185, 252, 267, 281 semantic bootstrapping 88, 190 sound patterns of words 5, 8, 9, 47, 58 speech errors 126, 133–135, 140 speech processing 97–100, 107, 109, 112, 113, 119, 120 stress xii, xiii, xiv, 10, 11, 14, 15, 26, 27, 29, 33, 34, 36–39, 41–43, 48, 49, 59, 60, 100, 149, 150, 171, 176, 178, 189, 191, 192, 197–202, 205–209, 211–215, 217, 221, 223, 224, 235, 237, 238, 243, 244, 271–280, 282–288, 290, 291 stress pattern xiii, 11, 14, 26, 34, 41, 49, 100, 191, 197, 198, 200, 206, 207, 213, 224, 237, 244 syntactic bootstrapping 80, 190 syntactic categories xii, 140, 155, 156, 185, 231, 259 syntactic information 90, 92 syntactic organization 3, 15, 19 syntactic structure vii, xii, 25, 82, 152–155, 210, 237 syntax-semantics relation 90 T trochaic x, 9, 26–33, 38–42, 191, 197, 198, 223, 253, 267, 277 Type Token Ratio 72–74 Type-Token 72, 75 U underspecification 286, 288
V V2 parameter 243 verb placement 223, 250, 252 verb-object agreement 130, 131, 137 verbs xi, xii, 18, 75, 80–93, 126, 138, 141, 152, 155, 156, 167–169, 174, 176, 177–181, 184, 186, 189–193, 196–199, 201–203, 206, 207, 208–212, 214, 219, 221, 223–225, 242, 245, 250, 253, 258, 259 very short utterances (VSUs) 72–76 vocabulary learning 80, 82, 86 W weak syllable omissions 26 word boundaries 9, 10, 14, 15, 33, 52, 77, 149, 185
word identification x, 42, 43, 48, 49, 52, 54–56, 58, 59, 62–64, 85 word learning 82 word meaning 80, 83, 84, 91 word order vi, xiii, xiv, 3, 118, 119, 232, 233, 237, 241, 243, 245, 249, 251, 257, 263, 264, 267, 268, 270, 271, 274, 276, 277–282, 288 word recognition x, xv, 8, 48–51, 56, 58, 59, 61, 63, 64, 97–101, 103, 105, 107–110, 112–116, 119, 120 word segmentation viii, x, 4, 6, 8, 15, 19, 51, 52, 76, 116, 252 word-to-world pairing 84, 86, 87