Gestures in Language Development
Benjamins Current Topics Special issues of established journals tend to circulate within the orbit of the subscribers of those journals. For the Benjamins Current Topics series a number of special issues have been selected containing salient topics of research with the aim to widen the readership and to give this interesting material an additional lease of life in book format.
Volume 28
Gestures in Language Development
Edited by Marianne Gullberg and Kees de Bot
These materials were previously published in Gesture 8:2 (2008)
Gestures in Language Development

Edited by
Marianne Gullberg, Lund University
Kees de Bot, University of Groningen

John Benjamins Publishing Company
Amsterdam / Philadelphia
The paper used in this publication meets the minimum requirements of American National Standard for Information Sciences – Permanence of Paper for Printed Library Materials, ANSI Z39.48-1984.

Library of Congress Cataloging-in-Publication Data
Gestures in language development / edited by Marianne Gullberg, Kees de Bot.
p. cm. (Benjamins Current Topics, ISSN 1874-0081 ; v. 28)
Includes bibliographical references and index.
1. Communicative competence in children. 2. Language acquisition. 3. Gesture. 4. Semantics. 5. Nonverbal communication. I. Gullberg, Marianne. II. De Bot, Kees.
P118.4.G47 2010
401'.93--dc22
2010043360
ISBN 978 90 272 2258 9 (Hb ; alk. paper)
ISBN 978 90 272 8744 1 (Eb)

© 2010 – John Benjamins B.V.
No part of this book may be reproduced in any form, by print, photoprint, microfilm, or any other means, without written permission from the publisher.
John Benjamins Publishing Co. · P.O. Box 36224 · 1020 ME Amsterdam · The Netherlands
John Benjamins North America · P.O. Box 27519 · Philadelphia PA 19118-0519 · USA
Table of contents

About the authors
Preface
Gestures and some key issues in the study of language development (Marianne Gullberg, Kees de Bot, and Virginia Volterra)
Before L1: A differentiated perspective on infant gestures (Ulf Liszkowski)
The relationship between spontaneous gesture production and spoken lexical ability in children with Down syndrome in a naming task (Silvia Stefanini, Martina Recchia, and Maria Cristina Caselli)
The effect of gestures on second language memorisation by young children (Marion Tellier)
Gesture and information structure in first and second language (Keiko Yoshioka)
Gesture viewpoint in Japanese and English: Cross-linguistic interactions between two languages in one speaker (Amanda Brown)
Author index
Subject index
About the Authors
Kees de Bot received his PhD in Applied Linguistics from the University of Nijmegen. He is Chair of Applied Linguistics and Director of the Research School for Behavioral and Cognitive Neuroscience at the University of Groningen. His main interest is in the application of Dynamic Systems theory to second language development and bilingual processing.

Amanda Brown received her PhD from the Program in Applied Linguistics at Boston University and the Multilingualism Project at the Max Planck Institute for Psycholinguistics. She is currently Assistant Professor of Linguistics in the Dept. of Languages, Literatures and Linguistics at Syracuse University.

Maria Cristina Caselli, senior researcher at the Italian National Research Council (CNR), currently coordinates the "Language Development and Disorders" Laboratory at the CNR Institute of Cognitive Sciences and Technologies. Her research focuses on communication and language in typical and atypical development, neuropsychological developmental profiles, language assessment, and early identification of children at risk for language development. She is the author or co-author of many national and international publications in psycholinguistics, developmental psychology, and neuropsychology.

Marianne Gullberg received her PhD in Linguistics from Lund University, Sweden. She was a staff member at the Max Planck Institute for Psycholinguistics, the Netherlands, 2000-2009, where she launched and headed the Nijmegen Gesture Centre with Dr A. Özyürek. She is now Professor of Psycholinguistics and Director of the Humanities Lab at Lund University. Her research targets bilingual, second and first language acquisition and use, with particular attention to processing, semantics, discourse, and the production and perception of gestures.

Ulf Liszkowski received his PhD in Psychology from the University of Leipzig, Germany. He is head of the Max-Planck Independent Junior Research Group Communication Before Language at the MPI for Psycholinguistics in Nijmegen, The Netherlands. His research addresses infants' prelinguistic communication and their social and cognitive development.

Martina Recchia holds a degree in psychology from the University of Rome and is a doctoral student at the University of Rome "La Sapienza". She collaborates with the Institute of Cognitive Sciences and Technologies of the Italian National Research Council with the financial support of the Fondation Jerome Lejeune, Project "Lexical abilities in children with Down syndrome: the relationship between gestural and spoken modalities".

Silvia Stefanini holds a degree in psychology from the University of Padua. She is currently at the University of Parma, Department of Neuroscience, where she obtained her PhD in Neuroscience in 2006. She has collaborated with the Institute of Cognitive Sciences and Technologies of the Italian National Research Council since 2002. Her main interest is first language acquisition in typical and atypical conditions, focusing on the link between motoric and linguistic development, in particular the gesture-speech system.

Marion Tellier received her PhD in Linguistics in 2006 at University Paris 7 – Denis Diderot. She has since conducted research on embodied conversational agents at the IUT de Montreuil, University Paris 8, and is Maître de Conférences at the University of Provence – Aix-Marseille I. Her research interests include 'teaching gestures', second language teaching to children, teacher training, and gesture perception and recognition.

Virginia Volterra received her "laurea" in Philosophy from the University of Rome La Sapienza in 1971. She is Research Director of the Italian National Research Council, associated with the Institute of Cognitive Sciences and Technology. Her research focuses on the early stages of language acquisition in children with typical and atypical development. She has also conducted pioneering studies on Italian Sign Language (LIS).

Keiko Yoshioka obtained her PhD in Applied Linguistics from Groningen University, the Netherlands, and currently lectures in Japanese Language and Second Language Acquisition at Leiden University. Her research interests include speech and gesture in second language acquisition and use.
Preface
Perhaps surprisingly, researchers working on language development in children and adults generally consider themselves to be working in different disciplines, pursuing different research questions. They do not necessarily publish in the same journals, attend the same conferences, or discuss issues of (cross-linguistic) language development more generally across the disciplinary divide. This state of affairs holds even for those researchers who take a common interest in gestural aspects of communication and language development. The workshop "Gestures in Language Development", held at Rijksuniversiteit Groningen, the Netherlands, in April 2006, aimed at bringing together researchers working on aspects of language development in both traditions to help establish new networks and to encourage cross-disciplinary exchange and discussions of the common themes and key issues to be explored in the realm of gesture research. The papers in this volume reflect some of the themes and concerns debated over the two days of the workshop.

We extend our heartfelt thanks to all the participants in the workshop for their stimulating discussions and thought-provoking contributions, and to Marjolijn Verspoor for her hospitality. We also thank the editors of GESTURE and John Benjamins Publishers for giving us an opportunity to share some of the discussions with the wider gesture community through the special issue of GESTURE of which this volume is a reprint, the external reviewers for their time and expertise, and Nienke Hoeven van der Houtzager for help in the preparation of this book version. We gratefully acknowledge funding for the workshop from the Nederlandse Organisatie voor Wetenschappelijk Onderzoek (NWO).
Gestures and some key issues in the study of language development

Marianne Gullberg (1,2), Kees de Bot (3), and Virginia Volterra (4)

1 Max Planck Institute for Psycholinguistics / 2 Lund University / 3 Rijksuniversiteit Groningen / 4 Istituto di Scienze e Tecnologie della Cognizione, CNR
The purpose of the current paper is to outline how gestures can contribute to the study of some key issues in language development. Specifically, we (1) briefly summarise what is already known about gesture in the domains of first and second language development, and development or changes over the life span more generally; (2) highlight theoretical and empirical issues in these domains where gestures can contribute in important ways to further our understanding; and (3) summarise some common themes in all strands of research on language development that could be the target of concentrated research efforts. Keywords: first language, second language, development, acquisition, ageing
Introduction

In recent years the scope of studies on language development has broadened from a fairly narrow focus on lexical and syntactic aspects at the sentence level to an interest in structures and processes at higher levels such as discourse and the interaction with other semiotic systems in communication. In parallel, studies on communication systems across modalities have provided growing empirical evidence supporting the view that gestures are a mode of expression tightly linked to language and speech (e.g. Goldin-Meadow, 2003; Kendon, 2004; McNeill, 1992, 2005). Gestures are spatio-visual phenomena influenced by contextual and socio-psychological factors, and also closely tied to sophisticated speaker-internal, linguistic processes. Under this view of speech and gesture as an inter-connected system, the study of gestures in development and the study of the development of gestures are natural extensions of research on language development, be it phylogenetically,
ontogenetically, or during the lifespan of an adult. Moreover, given their properties and dual role as interactive, other-directed vs. internal, speaker-directed phenomena, gestures allow for a fuller picture of the processes of language acquisition in which the learner's individual cognition is situated in a social, interactive context. The role of gestures in language development can be studied from various perspectives:

1. Gestures as a medium of language development. We can examine the role gestures play in interaction to mediate the acquisition of spoken language, their general role in communication, in establishing the socio-cognitive prerequisites for the development of language, in conveying and possibly entrenching meaning, and their connection to cognitive capacities such as working memory, etc.

2. Gestures as a reflection of language development. We can further investigate the way in which gestures develop and change in parallel to spoken language development, and the ways in which they shed light on both the product and process of language acquisition.

3. Gestures as language development itself. This approach studies the acquisition of gestures as an expressive system in its own right.

Traditionally the term language development has implicitly focused only on the gradual growth or progression of a first or second language towards the (idealised) stable model of an adult or native system. However, phenomena such as decline or regression in ability are clearly related (see papers in Viberg & Hyltenstam, 1993). For instance, regression as attested in attrition, or language loss, in adoptees, ageing bilinguals, and immigrants who stop using their first language, seems to affect the lexicon and grammar in ways similar to progression. Not all shifts in ability lead to loss, however. Bilingual speakers may experience a decline in ability in one language when they do not use it, without this leading to ungrammaticality. Moreover, they regain the ability when the language is brought back into use. Shifts in language dominance due to usage highlight the dynamic nature of language abilities. Development can thus usefully be seen not only as a linear process of progression, but as a complex, dynamic process that encompasses growth, decline, and any shift in both first and second languages (de Bot, 2007; de Bot, Lowie, & Verspoor, 2005). We will use the term development in this more general sense of change throughout.

The purpose of the current paper, then, is to outline how gestures can contribute to the study of some central issues in language development. Specifically, we aim (1) to briefly summarise what is already known about gesture in the domains of first and second language development, and development over the life span more generally; (2) to highlight theoretical and empirical issues in these domains
where gestures can contribute to further our understanding; and (3) to summarise some common themes in all strands of research on language development that could be the target of concentrated research efforts.
Gesture and language

In the contemporary gesture literature arguments are made for viewing gestures, language and speech as intimately linked or as forming an 'integrated system', an audiovisual 'ensemble', or a 'composite signal', depending on the theoretical approach (Clark, 1996; Engle, 1998; Kendon, 2004; McNeill, 1998). The arguments for integration come from studies of both language production and comprehension. First, in production, gestures have been found to fill linguistic functions like providing referential content to deictic expressions (this wide), filling structural slots in an utterance ("GIVE! [gesture: 'the book']": Slama-Cazacu, 1976, p. 221), and acting as or modifying speech acts (e.g. Bühler, 1934; Slama-Cazacu, 1976; Kendon, 1995, 2004). Second, the observed semantic-pragmatic and temporal coordination between speech and gesture lies at the heart of all theories and models concerning the relationship. Although the precise relationship between the modalities is not entirely straightforward, particularly with regard to meaning and co-expressivity, there is a general consensus that gesture and speech express closely related meanings selected for expression (see de Ruiter, 2007; Kendon, 2004; Holler & Beattie, 2003, for overviews). A third argument for integration is that speakers deliberately distribute information across both modalities depending on spatial and visual properties of interaction (e.g. Bavelas, Kenwood, Johnson, & Phillips, 2002; Holler & Beattie, 2003; Melinger & Levelt, 2004; Özyürek, 2002a). Finally, a fourth frequent argument is that gestures and speech develop together in (first) language acquisition (e.g. Mayberry & Nicoladis, 2000; Volterra, Caselli, Capirci, & Pizzuto, 2005), and that they break down together in disfluency, in aphasia, etc. (e.g. Feyereisen, 1987; Lott, 1999; McNeill, 1985). This last argument is further discussed in the papers in this volume.

In language comprehension, there is considerable evidence that gestures affect perception, interpretation of, and memory for speech (Beattie & Shovelton, 1999; Graham & Argyle, 1975; Kelly, Barr, Breckinridge Church, & Lynch, 1999; Riseborough, 1981). Further to this, recent neurocognitive evidence shows that the brain integrates speech and gesture information, processing the two in similar ways to speech alone (e.g. Bates & Dick, 2002; papers in Özyürek & Kelly, 2007; Wu & Coulson, 2005). Overall, then, there is good reason to consider gestures, language, and speech as a closely-knit system.
The models attempting to formalise the relationship between gestures and speech differ in their views of the locus and the nature of the link. As suggested by Kendon (2007), some see speech as primary and gesture as auxiliary. Others regard gestures and speech as equal partners. The first set either considers gestures to facilitate lexical retrieval (the Lexical Retrieval Hypothesis, Krauss, Chen, & Gottesman, 2000) or views gestures as instrumental in the process of representing and packaging imagistic thought for verbalisation (the Information Packaging Hypothesis, Alibali, Kita, & Young, 2000; Freedman, 1977). The second set of theories regards gestures as an integral part of an utterance. Beyond this starting point, they differ in focus. Either they concentrate on gestures as a window on (linguistic and non-linguistic) thought (the Growth Point Theory, McNeill, 1992, 2005; McNeill & Duncan, 2000), or they target the interplay between imagistic and linguistic thinking (the Interface Hypothesis, Kita & Özyürek, 2003), or, finally, they centre on the communicative intention driving both modalities to form a deliberately coherent multimodal utterance (de Ruiter, 2000, 2007; Kendon, 1994, 2004; Schegloff, 1984). All existing accounts model the adult stable system. No theory has yet undertaken to account for development either in children or in adults.
Gesture and first language development

The field of First Language Development (FLD) has a long-standing interest in gestures. Infants' gestures have traditionally been explored primarily as relevant features of a prelinguistic stage, as behaviours that precede and prepare for the emergence of language, identified exclusively with speech. More recently, the view of adult language as a gesture-speech integrated system has prompted the need to understand how the gesture-speech relationship is established in infancy and how it evolves towards the adult system.
The earliest development

Infants begin to communicate intentionally through gestures and vocalisations and later with words (see Liszkowski, Stefanini et al., this volume). Gestures and speech are equal partners — in the majority of cases the communicative signals produced by children are expressed in both modalities, gestural and vocal. A key question is whether the two modalities are integrated from the very beginning, or are initially separate to become an integrated system only with development (McNeill, 1992, 2005). Some studies indicate that the gestural and vocal modalities are semantically
and temporally integrated from the earliest stages (Capirci, Contaldo, Caselli, & Volterra, 2005; Iverson & Thelen, 1999; Pizzuto, Capobianco, & Devescovi, 2005), while others report that asynchronous combinations of gestures and words are more frequent than synchronous ones in an initial developmental period (Butcher & Goldin-Meadow, 2000; Goldin-Meadow & Butcher, 2003). Despite these differences, all agree that deictic gestures appear before the end of the first year and that they fulfil the basic function of drawing the interlocutor's attention to something in the environment. These gestures include requesting (extending the arm toward an object, location or person, sometimes with a repeated opening and closing of the hand), showing (holding up an object in the adult's line of sight), giving (transferring an object to another person) and pointing (index finger or full hand extended towards an object, location, person, or event). The referents of these gestures can be identified only in the physical context in which communication takes place.

Around 12 months children start to produce other more content-loaded types of gestures, referring, like first words, to action schemes usually performed at this age with or without objects (e.g. bringing the handset or an empty fist to the ear for telephone/phoning). Some gestures refer to action schemes that are non-object-related (e.g. moving the body rhythmically without music for dancing to request that music be turned on) or to conventional actions (waving the hand for bye-bye) with forms more arbitrarily related to their meaning. The terminology used for these gestures ("conventional", "referential", "symbolic", "iconic", "characterising", "representational") is variable, and has changed considerably over the years, even in the work of the same author(s), reflecting changes both in methodology and theoretical perspectives. The communicative function of such gestures appears to develop within routines similar to those considered to be fundamental for the emergence of spoken language. Their forms and meanings are established in the context of child–adult interaction. The first gestures and the first words involve the same set of concerns: eating, dressing, exchange games, etc., and they are initially acquired with prototypical objects, in highly stereotyped routines or scripts. At roughly parallel rates, they gradually "decontextualise" or extend out to a wider and more flexible range of objects and events.
The role of input

The remarkable similarities between production in the gestural and the vocal modalities during the first stages of language acquisition raise interesting issues regarding the communicative and linguistic role of early words and gestures. Symbolic actions produced in the gestural modality have often been seen as communicative and referential irrespective of the contexts of use (for a discussion,
see Caselli, 1994). Around 13 months there is a basic equipotentiality between the vocal and the gestural channels (Erting & Volterra, 1990). Differences in the type of input to which children are exposed influence the extent to which the manual or spoken modality is used for representational purposes and assumes linguistic properties. For example, children systematically exposed to sign language input acquire and develop a complete language in the visual gestural modality (see Schick, Marschark, & Spencer, 2006). Comparisons between deaf and hearing children suggest that all children, regardless of whether their primary linguistic input is spoken or signed, use gestures to communicate, in particular in the transition stage to symbolic communication (Volterra, Iverson, & Castrataro, 2006). Although the relationship between gesture and sign language in general and in development has received little attention to date, recent research suggests that gesture is as essential a part of sign language as it is of spoken communication (Emmorey, 1999; Liddell, 2003).

Typically developing children are clearly encouraged by parents to rely much more on vocal symbols for communication. However, it has been suggested that gestural input may facilitate the acquisition of spoken words, as in the case of "baby signs" or 'enhanced gestures' used in conjunction with speech (Goodwyn & Acredolo, 1998; Goodwyn, Acredolo, & Brown, 2000). A possible explanation for this effect, found also in children with developmental disorders, is that exposure to enhanced gesturing provides children with opportunities to master new forms in both the vocal and manual modalities (Abrahamsen, 2000). Culture and adult input may influence both the form and the frequency of representational gestures. Many studies have reported more frequent production of representational gestures by Italian children who are immersed in a 'gesture-rich' culture (see the discussion in Kendon, 2004, Ch. 16). In particular, the representational gestures produced by Italian children include numerous object/action gestures (e.g. eating, phoning) and attributive gestures (e.g. big, hot), whereas American children almost exclusively produce conventional gestures (e.g. hi, yes, all gone) (Iverson, Capirci, Volterra, & Goldin-Meadow, 2008). Cross-cultural longitudinal studies of spontaneous interaction should reveal similarities and differences in the way object/action gestures versus more conventional social gestures develop.
The relationship between speech and gesture

Interesting findings on the relationship between children's production of action and gestures and early (receptive and expressive) word repertoires have been collected through the MacArthur-Bates Communicative Development Inventory
(MBCDI). This is an instrument designed to explore and assess typically developing children's early communicative and linguistic development (Fenson et al., 1993). In particular, it has been shown that there is a complex relationship between early lexical development in comprehension and production, and action-gestures (Caselli & Casadio, 1995). Around 11–13 months, the productive repertoire of action-gestures appears to be larger than the vocal repertoire, but in the following months the mean numbers of words and action-gestures are more similar. More interestingly, at this early age there is a significant correlation between words comprehended and action-gestures produced (Fenson et al., 1994). These findings suggest that the link between real actions, actions represented via gestures, and children's vocal representational skills may be stronger than has been assumed thus far.

Another important finding is that in all cultures investigated to date the first utterances (combinations of two or more meaningful communicative elements) are cross-modal. Various studies highlight that deictic gestures (notably pointing) play a special role in two-element utterances. Combinations of a pointing gesture with a representational word are the most productive types of child utterances. These gesture-speech combinations can refer to a single element or to two distinct elements. Complementary and supplementary gesture-speech combinations reliably predict the onset of two-word combinations, underscoring the robustness of gesture as a harbinger of linguistic development (Butcher & Goldin-Meadow, 2000; Capirci, Iverson, Pizzuto, & Volterra, 1996; Iverson et al., 2008; Iverson & Goldin-Meadow, 2005). Many constructions (e.g. predicate + argument, as when a child points to a chair while saying "mommy" to ask mommy to sit on the chair) appear in supplementary gesture-speech combinations several months before the same construction appears in speech (e.g. "sit mommy" or "mommy sit"). The production of a supplementary deictic gesture-word combination appears early, whereas supplementary representational gesture-word or two-word combinations, which require the child to retrieve two symbols each conveying a different piece of semantic content, appear later. The production of a single word and identification of another referent in the context through a deictic gesture supposedly places fewer cognitive demands on the child than the combination of two representational elements and presumably fits the child's current cognitive capacities (Özcaliskan & Goldin-Meadow, 2005).

The study of children with atypical input or development can further illustrate how gesture appears to be related to cognitive and linguistic development in infancy. An example of how gesture may compensate for specific impairments of spoken abilities is provided by children with Down syndrome (DS). The neuropsychological profile of DS children is characterised by a lack of developmental homogeneity between cognitive and linguistic abilities. The linguistic abilities of DS children are
poorer than expected based on their overall cognitive level (e.g. Chapman & Hesketh, 2000). These children appear to compensate for poor productive language abilities through greater production of gestures. There is ample evidence that the gap between cognition and productive language skills becomes progressively wider with development among DS children (Chapman, 1995; Franco & Wishart, 1995). However, with increasing cognitive skills and social experience these children also develop relatively large repertoires of gestures (Caselli et al., 1998; Stefanini, Caselli, & Volterra, 2007; Stefanini, Recchia, & Caselli, this volume). The compensatory use of gesture can be enhanced, particularly if children are encouraged through the provision of signed language input (cf. Abrahamsen, 2000). Higher gesture rates associated with speech difficulties have also been reported for other clinical populations such as children with specific language impairment (Evans, Alibali, & McNeil, 2001; Fex & Månsson, 1998).
Later development

Given that gesture usage appears to be related both to the general cognitive level and to phono-articulatory abilities, it is important to examine children in later childhood and at different stages of linguistic development. The development whereby children's gestures become organised into the adult speech-gesture system has not been fully described. Very few studies have explored the development of this system after the two-word stage when other types of gestures, such as 'rhythmic' or 'emphatic' gestures, start to appear. Mayberry & Nicoladis (2000) followed 5 French-English bilingual boys longitudinally (from 2 years to 3;6 years), showing that children from age 2 onwards largely gesture like adults with regard to gesture rate and meaning. Interestingly, different gesture types developed differently such that the use of iconic and beat gestures correlated with language development, whereas the use of pointing gestures did not. Children between 16 and 36 months use gestures and speech in agreement and refusal constructions with their mothers somewhat differently from adults (e.g. Guidetti, 2005). Looking at more sophisticated language use, children from 4 to 5 years productively use idiosyncratic, content-loaded gestures during narratives (McNeill, 1992). Colletta (2004), recording adult-child spontaneous interactions, has described the development of conversational abilities in school-age children. Younger children produce very few metaphoric, abstract deictic gestures and beats, which become more frequent in the production of older children.

Finally, research investigating gesture production in school-aged children in problem-solving tasks, reasoning about balance or mathematical equivalence, indicates that children convey a substantial proportion of their knowledge through
speech-accompanying gestures (Alibali & Goldin-Meadow, 1993; Church & Goldin-Meadow, 1986; Pine, Lufkin, & Messer, 2004). In some cases children's gesture-speech 'mis-matches' predict learning. Children whose speech and gestures 'mis-match' are more likely to benefit from instruction than children whose speech and gestures match. These studies indicate that gestures can reveal not only what children are thinking about but also their learning potential.

In sum, even if differences in data sets (e.g. ages considered, gesture types described), in methodology and terminology make it challenging to compare findings across studies, the available data suggest that the role of gesture in spoken language acquisition and development changes according to different stages and communicative/interactional contexts. Around one year of age gesture plays a crucial role in the construction and expression of meaning. In the following stages gesture production develops together with speech. At later stages still, gesture production appears to decrease in some linguistic contexts (e.g. naming tasks) although it is frequent with speech in others (e.g. narratives). These findings together indicate that any study on the development of language should include and pay particular attention to gestures.
Gesture and second language development

In recent years the interest in the relationship between gestures and Second Language Development (SLD or L2D) has grown considerably. Studies suggest that gestures play an important role in SLD and should be seen both as a resource in learning and as a component of language proficiency in its own right (cf. Gullberg, 2006b, 2008; Gullberg & McCafferty, 2008). Again, if gestures and speech are seen as an integrated system, then factors that play a role in SLD in general may also play a role in the development of gesture, and conversely, gestures may provide further information on the effects of such factors. Therefore, a large part of the SLD research agenda is also relevant for gesture, and a number of traditional topics can fruitfully be addressed taking gestures into account.
Cross-linguistic influence (CLI) or transfer

One of the most widely studied aspects of SLD is cross-linguistic influence, that is, the impact of existing languages on the acquisition and use of new ones. Traditionally this research has been concerned with the effect of the first language (L1) on later learned languages, but research on lexical processing in bilinguals and research on language attrition and language loss has shown that later learned
languages may influence the first language (Cook, 2003; Costa, 2005; de Bot & Clyne, 1994; Köpke, Keijzer, & Weilemar, 2004; van Hell & Dijkstra, 2002). Recent studies have also demonstrated an impact of the L2 on the L1 in gestures (e.g. Brown, 2007; Brown, this volume; Brown & Gullberg, 2008; Pika, Nicoladis, & Marentette, 2006). A growing body of work suggests that native speakers of typologically different languages, such as English on the one hand, and Spanish and Turkish on the other, gesture differently, both in terms of gestural form and timing, as a reflection of how these languages encode and express meaning components of motion like path and manner (e.g. Duncan, 1996; Kita & Özyürek, 2003; McNeill, 1997; McNeill & Duncan, 2000; Özyürek, Kita, Allen, Furman, & Brown, 2005). Further studies have also shown that L2 learners of these languages do not necessarily gesture like target language speakers, but display traces of their L1s in their gesture production either in terms of timing, aligning their gestures with different elements in speech compared to native speakers (e.g. Choi & Lantolf, 2008; Kellerman & van Hoof, 2003; Negueruela, Lantolf, Rehn Jordan, & Gelabert, 2004; Stam, 2006), or in terms of gestural forms, expressing different semantic content in gestures compared to native speakers (e.g. Brown, 2007; Brown & Gullberg, 2008; Gullberg, submitted; Negueruela et al., 2004; Özyürek, 2002b; Yoshioka & Kellerman, 2006). Such findings are often discussed in terms of Slobin’s notion of ‘thinking for speaking’ (e.g. Slobin, 1996), that is to say, ways in which linguistic categories influence what information you attend to and select for expression when speaking. The argument for L2 is that L1-like gesture patterns may reveal whether L2 speakers continue to think for speaking in the L1 rather than in L2-like ways. A number of questions need to be addressed in this domain. A crucial issue concerns how to identify and study gestural practices typical of a given language and culture. It is a real difficulty that so little is known about language-specific gesture patterns in terms of frequency, gestural forms, use of gesture space, and semantic expression. An absolute prerequisite for the study of CLI in gestures in L2 is therefore a better understanding of gestural practices across languages in native performance. Currently, any study on L2 behaviour is a triple study where the native behaviour in both source and target language needs to be described before learner behaviour can be considered. If gestures and L2 studies are to follow in the steps of general SLD research, effects of other known languages (L3, Ln) should also be taken into account, pushing the boundaries even further. It is equally important to point out that in contrast to the traditional focus on ‘errors’ in SLD (see papers in Richards, 1974; van Els, Extra, van Os, & Janssen van Dieten, 1984), a different approach is necessary when considering gestures in L2 production. Since there can be no absolute ‘grammaticality’ of gesture
performance, preferential usage patterns must instead be established with corresponding gradient native scales of appropriateness or acceptability. For instance, Duncan (2005) examined 20 native English speakers retelling a cartoon and found that 64% of the manner gestures coincided with manner verbs, while 33% of the manner gestures were linked to other elements such as ground or path. In contrast, 20 Spanish speakers engaged in the same task aligned only 23% of their manner gestures with manner verbs, while 58% coincided with ground or path elements. The range of variation defines what is 'nativelike' and allows for an equal range of possible behaviours for L2 learners that would still qualify as 'nativelike'. This opens the way for a more gradient and sophisticated view of L2 performance in general beyond the narrow domain of target-like gestures.

CLI effects have mainly been studied looking at representational (iconic) gestures. It is unknown whether effects of CLI can be found for other types of gesture practices. For instance, given that gestures supposedly align with speech rhythms and language-specific prosodic patterns, it seems plausible that rhythmic patterns of gesturing will transfer into an L2 along with a foreign accent. Similarly, it is possible that cross-linguistic differences in ways of managing interaction might transfer into an L2 in the use of interactive and 'pragmatic' gestures (e.g. Bavelas, Chovil, Lawrie, & Wade, 1992; Kendon, 2004). To date, no study has examined these issues.

The studies of L2 gestures occasionally display dissociation between surface form and gesture whereby L2 learners say one thing (in L2-like fashion) and gesture another (in L1-like fashion) (e.g. Özyürek, 2002b; Stam, 2006). In most studies gesture is more conservative than speech, such that speech seems to change more readily towards the L2 target than gestures. This phenomenon is mainly interpreted as indicating transfer of L1 representations, perspectives, or thinking for speaking. However, similarly to the study of CLI in spoken language, to determine whether a particular phenomenon is caused by CLI/transfer, or whether it is a general learner phenomenon, requires methodological triangulation (cf. Jarvis, 2000). At the very least, it is necessary to examine learners from two different source languages learning the same target language to tease apart such effects. Further, very few attempts have been made to theoretically account for the fact that L2 speakers do and say different things, an L2-specific form of speech-gesture discrepancy. A question that arises is what representations actually underpin L2 surface forms, especially when these look target-like but gesture does not, and why it should be that speech changes before gesture. Do gestures have a privileged link to conceptual representations relative to speech? How dissociated can speech and gestures be and still be said to reflect the same representation?

A different set of questions pertains to how gestures that seem not quite target-like from a native speaker's point of view are perceived by native speakers. The
inclusion of gesture in assessments of L2 speakers expands the number of dimensions along which learners’ production can vary relative to native speakers. In this sense, gesture data raise important questions concerning the ‘native speaker standard’ (cf. Davies, 2003), crucial in many studies of SLD. The discussion of critical periods for language learning and the degree to which adult learners can become nativelike is central to theories of adult L2 acquisition (cf. Birdsong, 2005). Gestures definitely raise the stakes for learners. However, no studies have systematically examined native perception of ‘foreign gesture’, nor its potential interactional consequences. Although a number of studies show that learners’ gesture production affects assessments positively such that learners are deemed more proficient if they gesture than if they do not (Gullberg, 1998; Jenkins & Parra, 2003; Jungheim, 2001; McCafferty, 2002), no studies so far have directly tested for effects of ‘foreign gesture’.
Gesture and learner-general phenomena

SLD research does not restrict explanations of properties of the L2 to effects of the L1 or other languages learned. SLD studies also look at learner behaviour as a systematic and regular variety in its own right, as an interlanguage (Selinker, 1972), with properties determined both by general learning mechanisms and by the specific languages involved. Again, in such a perspective, a number of issues arise where gestures might provide important insights. One such issue concerns how language learners handle different types of difficulties at a given proficiency level, such as managing lexical, grammatical, and discourse-related problems at the same time in real time. The analysis of gestures and speech in conjunction provides a fuller picture of such problem-solving. For instance, studies of Moroccan and Japanese learners of French show how learners move from using mainly representational gestures, complementing the content of speech, towards more emphatic or rhythmic gestures related to discourse (Kida, 2005; Taranger & Coupier, 1984). This suggests a transition from essentially lexical difficulties and lexically based production to more grammatical problems related to discourse. More careful charting of what gestures are produced by learners with particular proficiency profiles has potential pedagogical and diagnostic applications.

The acquisition of gestures can and should also be studied in its own right. Just as we need to find out how children come to gesture in adult-like and culture-specific ways, so we need to know whether L2 learners ever come to gesture like native speakers. Although some attention has been given to L2 users' comprehension of conventional or quotable gestures ('emblems') (e.g. Jungheim, 1991; Mohan & Helmer, 1988; Wolfgang & Wolofsky, 1991), nothing is known about whether L2
learners ever produce such culture-specific gestures, which may show the same acquisition difficulties as idiomatic expressions (e.g. Irujo, 1993). For instance, do L2 learners learn to produce appropriate gestural forms such as distinguishing the head toss from the headshake (Morris, Collett, Marsh, & O'Shaughnessy, 1979), do they learn to point in culturally appropriate ways (see papers in Kita, 2003), and do they learn to respect handedness taboos (e.g. Kita, 2001)? Even less is known about whether L2 learners acquire and produce language-specific non-conventionalised gestural practices. If they do, this raises important questions about implicit learning of both form and meaning, crucial to the domain of SLD. If they do not, it raises familiar SLD issues about why learners do not notice or 'take in' certain aspects of the input despite extended exposure (e.g. Robinson, 2003). It is perhaps particularly interesting to consider visual phenomena like gestures since they are often assumed to be inherently 'salient', and to have an attention-directing, enhancing effect in their own right. If they did, they should be easy to acquire. Again, next to nothing is known about this question.

A closely related issue is what might be learnable and indeed teachable (and therefore assessable) in terms of gesturing. While it may be possible to teach forms and meanings of emblems, it is much less clear that other aspects of gestural practices are teachable. Even when gestures are on the classroom agenda, an explicit link is seldom made between language and gesture. Furthermore, research in this domain should consider the possible differences and similarities between spontaneously produced gestures and gestures explicitly deployed for teaching purposes (e.g. Lazaraton & Ishihara, 2005; Tellier, 2006). It is possible that features noted for 'instructional discourse' like child- or foreigner-directed gestures share properties with gestures employed in language classrooms. A further step is to consider learners' interpretations of teachers' gestures rather than examining teachers' gestures in social isolation (cf. Sime, 2006). Answers to questions concerning learnability and teachability are wide open.
Gesture across the lifespan

Under the view that language development encompasses all shifts, a number of further domains become relevant such as the development of rhetorical styles and registers, but also language attrition in bilinguals, and changes in language related to ageing. Changes in language can of course also be related to disease, as in aphasia, split-brain surgery, etc., but we leave those changes aside in this overview (but see e.g. Feyereisen & de Lannoy, 1991; Goodwin, 2002; Lausberg, Zaidel, Cruz, & Ptito, 2007; Lott, 1999; Rose, 2006).
With regard to the development of rhetorical styles and gestures, something is known about the development of narrative skills and concomitant changes to gesture in later childhood. For instance, Cassell (1988) demonstrated that children’s production of beats becomes adult-like only with increasing development of narrative skills, specifically when children can alternate between different narrative levels. Very little is known, however, about the development of other rhetorical skills such as gestures in different registers, sermons, public speeches, etc. Although a small literature explores politicians’ gesture practices (e.g. Calbris, 2003), the focus is typically on the accomplished speaker, not on the development of the speech-gesture repertoire. In the domain of language attrition due to immigration or bilingualism, nothing at all is known about gesture practices. Assuming that gesture and speech are connected, it seems plausible that the gesture practices might also be affected if skills in the spoken first language are lost. However, given that gestures can also be recruited for other purposes, it is an empirical question whether this happens or not.
Gestures and ageing

A recent overview of research on gestures over the lifespan suggests that there is very little research on gestures in older age groups (Tellier, 2009). There is a substantial body of research on non-verbal communication and ageing, and some of these studies have also considered gesture use and interpretation (Montepare & Tucker, 1999). The perspective taken is often a compensatory one. That is, communication problems emerge with age due to a decline in speech-motor skills and hearing. The assumption is that these problems are compensated for by gesturing (e.g. Cohen & Borsoi, 1996; Feyereisen & Havard, 1999). There are several problems with this approach. First, the decline of speech production in ageing is not well established. Second, any decline seems to be co-affected by variables such as continuous use of the language and level of education. Third, the groups considered are typically fairly young (60s and early 70s) and comparisons between age groups are cross-sectional. Age-related language problems are more likely in the 75+ age group, in particular when there are other health problems and the level of education is low (de Bot & Makoni, 2005). Finally, there is considerable variation within and between age groups, so a simple young/old comparison may not be informative.

It is possible that there are specific age-related types of gesturing, probably due more to specific motor patterns than to language issues. For instance, the control of small movements may be reduced, leading to larger movements. It is also possible that with decreasing flexibility of joints, changes in spinal curvature, etc.,
there is a reduction in gesture size, gesture speed, etc. (cf. Laver & Mackenzie Beck, 2001). Both changes may be given (un-intended) semiotic importance by onlookers. The field of gestural practices in ageing is desperately under-researched.
Common themes

The preceding sections have briefly outlined some of what is known about gestures and language development, with some emphasis on questions that remain open to investigation in each domain. There are, however, clearly general themes that are common to all studies of language development and gesture.
The role of gestures in the input

In studies on language development the precise role of input, that is, what language users hear and see, is hotly debated. In studies of both FLD and SLD a familiar debate concerns whether input is simply a trigger of innate knowledge and structures (Pinker, 1989; Wexler & Culicover, 1980; White, 2003), or whether language development is based on detailed properties of the input such as frequencies and on usage (Ellis & Larsen-Freeman, 2006; Tomasello, 2003). In SLD the role of input is debated partly because L2 learners seem not to attend to what is in the input, namely 'correct' pronunciation, grammar, etc., as seen in their tendency to maintain foreign accents and grammatical peculiarities even after many years of teaching and exposure. A well-known hypothesis states that a prerequisite for input to be useful to learning is that it is comprehensible (e.g. the Comprehensible Input Hypothesis, Krashen, 1994). In this perspective, gestures seem to play an important role. Interlocutors are known to attend to and make use of gestural information, for instance, to improve comprehension in noise (Rogers, 1978). It is also clear that gestures in the input can improve learning in general such as the learning of maths and symmetry (Singer & Goldin-Meadow, 2005; Valenzeno, Alibali, & Klatzky, 2002). A natural assumption is therefore that gestures that convey speech-related meaning should improve language learners' comprehension and possibly also learning of language. Indeed, adults, teachers and other 'competent' speakers seem to think so. All forms of didactic talk or 'instructional communication' studied — whether by adults to children ('motherese') or by adult native speakers to adult L2 users ('foreigner/teacher talk', Ferguson, 1971) — are characterised by an increased use of representational and rhythmic gestures (e.g. Adams, 1998; Allen, 2000; Iverson, Capirci, Longobardi, & Caselli, 1999; Lazaraton, 2004). However, few studies test
actual effects on language learning. There is some evidence that gestures improve the learning of new adjectives in English children (O'Neill, Topolovec, & Stern-Cavalcante, 2002). Very few studies empirically test the connection between gestural input and learning outcomes in SLD (for exceptions, see Allen, 1995; Sueyoshi & Hardison, 2005; Tellier, this volume). Moreover, facilitative effects of gestures may differ depending on the linguistic units tested and be more evident for lexical than grammatical material (e.g. Musumeci, 1989). Different types of gesturing may also have different effects. Again, all these issues remain wide open.

It is also an empirical question to what extent children and adult learners mirror the gesture input in their own gesture production. A related question is to what extent learners affect their own input by their spoken and gestural practices in interaction. It has been suggested that learners' gestures might help promote positive affect between learner and adult/native speaker, which might ultimately promote learning (e.g. Goldin-Meadow, 2003; McCafferty, 2002). It has also been suggested that adult and native listeners in general tailor their production to learners based on the learners' gestures (e.g. Goldin-Meadow, 2003). This is in line with the well-documented observation that interlocutors synchronise or accommodate to each other in interaction, also as regards gestures (Bavelas, Black, Chovil, Lemery, & Mullett, 1988; Condon & Ogston, 1971; Kimbara, 2006; Wallbott, 1995). It is an open question to what extent such synchronisation might affect language learning (cf. discussions of structural priming as a means of learning, e.g. Bock & Griffin, 2000; Branigan, Pickering, & Cleland, 2000).
The role of gestures in the output

The complementary notion also plays a role in development, namely that production is crucial to acquisition. Bruner (1983) suggested that (first) language is learned through use, and a similar notion is present in the 'output hypothesis' in SLD. This states that new language knowledge only becomes automatised if used for production (Gass & Mackey, 2006; Swain, 2000). In a parallel fashion, it has been shown that the production of gestures promotes learning of other skills, such that adults and children who gesture while learning about maths and science do better than those who do not (Alibali & DiRusso, 1999). General recall also improves when participants enact events (e.g. Frick-Horbury, 2002). Evidence for an effect of gesturing on the acquisition of language is again much scarcer. Although it has been suggested on theoretical grounds that gesturing might help L2 learners internalise new knowledge (Lee, 2008; McCafferty, 2004; Negueruela et al., 2004), and although teaching methods relying on embodiment exist (e.g. Total Physical Response, Asher, 1977), it remains an empirical question whether any real, long-term learning effects can be demonstrated for gesture production in L1 or L2 (for short-term effects in L2, see Tellier, 2006).
Variation and individual differences

All language development is characterised by individual variation. First language development is relatively uniform — at least regarding final outcome — in comparison to SLD, which is characterised by highly variable outcomes. In SLD the effects of a range of psycho-social factors have been explored, such as intelligence, language aptitude, memory capacity, attitudes, motivation, personality traits, and cognitive style (e.g. de Bot et al., 2005, pp. 65–75; Dörnyei, 2006; Verspoor, Lowie, & van Dijk, 2008). For instance, intelligence matters more in tutored than in untutored SLD, and more in grammar learning than in other skills. The correlations between language aptitude tests and free oral production and general communicative skills are generally low. Working memory capacity seems to be generally lower in L2 than in L1 (Miyake & Friedman, 1999), etc. No study of such factors in SLD has to date considered gestures either as a co-variable or as a measure of any of the factors, despite the fact that the influence of some of these factors on gestures has been extensively studied. For instance, effects of personality and psychological types (e.g. introvert vs. extrovert) on non-verbal behaviour have received a lot of attention (see Feyereisen & de Lannoy, 1991, for an overview), and effects of verbal vs. spatial fluency have also been documented (Hostetter & Alibali, 2007). However, no studies have combined these perspectives although a number of possible links can be hypothesised.

Recent studies have suggested that gestures help reduce cognitive load (e.g. Goldin-Meadow, Nusbaum, Kelly, & Wagner, 2001; Wagner, Nusbaum, & Goldin-Meadow, 2004). Such an effect would be important in L2 production (cf. Gullberg, 2003, 2006a) where individual differences in working memory and proficiency might conspire to make such effects more important. A key expansion on the hitherto rather uninformative observations that L2 learners gesture more in the L2 than in the L1 would be to examine the relationship between fluency, processing units, and gesture production more closely in these terms. For instance, at stages where L2 learners are not very fluent and proceed almost word by word, they seem to produce one gesture for every unit/word. Once they start stringing together more material in chunks, the gesture rate also goes down (Gullberg, 1998, 2006a; Nobe, 2001). This suggests a possible link between working memory, fluency and gesture production. Similarly, individual differences in cognitive style and personality affect interaction patterns and thereby the extent to which L1 and L2 learners create situations
of rich input for themselves (cf. Goldin-Meadow, 2003). While this has been examined in FLD, no studies to date have explored such issues in SLD. Finally, there is inter- and intra-individual variation in adult, native gesturing, depending on social setting, degree of formality, shared knowledge, ambiguity, expertise, the content of speech, etc. Many aspects of individual variation in adult, native gesturing are not well understood, such as why some speakers gesture more than others, and why the same speaker sometimes chooses to gesture and sometimes not (Kendon, 1994). To qualify the possible range of behaviours in adult native speakers while allowing for variation is crucial to studies of language development and gesture. Rather than looking at behaviour outside of the ‘typical’ as ‘noise’ in the data, a more productive approach is to look at variation as a meaningful source of information. This is not to say that we need to explain every single instance of a deviation from a general pattern. As in other areas of language development, variation is a reflection of the developmental process resulting from the interaction of many internal variables that cannot be taken apart to study the impact of each individual factor (van Dijk & van Geert, 2005; Verspoor et al., 2008). Studies of gestures and language development will have to be methodologically creative to find ways of taking variation into account.
Gesture as compensation

In many parts of the language development literature, a general and often tacit assumption is that children and adults alike produce gestures mainly to overcome the gap between their communicative intentions and the expressive means at their disposal. That is to say, gestures are viewed as a compensatory mode of expression. However, the theoretical issues underlying such a view are rarely discussed. First, compensation as a notion is often ill- or undefined. For instance, spoken language acquisition research shows that not all learner behaviour is best characterised as strategic problem-solving. Children and adult learners all over-generalise, not as a means of compensation, but as part of the developmental process. Furthermore, adult learners are often communicatively fluent in an L2 even though their systems do not look like those of native speakers. Conversely, not all difficulties are overt. Learners may avoid difficulties by changing their intention when the expressive means do not match. The general difficulties involved in identifying and defining compensatory behaviour have received attention in SLD studies (see papers in Kasper & Kellerman, 1997), but much less so in studies of FLD, and are virtually absent from studies considering gesture as compensation. A related issue relevant both to acquisition and gesture studies is the question of whether compensation is intended for the speaker or for the addressee. That is,
is it a speaker-internal solution to a problem, an interactional solution, or both? These questions echo familiar debates in the gesture literature regarding gesture production (cf. the input/output distinction above), but they are equally relevant for developmental, compensatory issues (e.g. Gullberg, 1998). A third question concerns what parts of spoken language gestures can compensate for. The focus has traditionally been on lexis and meaning, but lexical access, grammar, discourse, conceptualisation, and problems of linearising global information have all been implicated in gestural compensation (Alibali et al., 2000; Gullberg, 1999, 2006a; Hostetter, Alibali, & Kita, 2007; Pine, Bird, & Kirk, 2007). Finally, of theoretical relevance for gesture studies is the question how gestures can compensate for linguistic expressions, and how compensatory gestures are defined and function. In adult, ‘competent’ users, the speech-gesture integration is multifaceted and may not be obligatory and automatic. ‘Competent’ speakers can choose to decouple speech and gesture. This raises important questions about co-expressivity, however that is defined. Gestures that express non-redundant meaning from speech are not typically considered ‘compensatory’ in cases of mature, adult native speakers, whereas such instances are often seen as compensatory in developing speakers. Further, a number of familiar questions in the debate on gesture production could be cast in terms of compensation, such as whether gestures help lexical retrieval (activate word forms) (Krauss et al., 2000), or help with conceptualisation or information packaging (Goldin-Meadow, 2003; Kita, 2000). However, surprisingly, these theoretical notions are rarely touched upon in discussions of ‘compensatory’ gestures in development (for notable exceptions, see Nicoladis, 2007; Nicoladis, Mayberry, & Genesee, 1999). Although there are exceptions in the literature on children’s development, notably the literature on ‘mis-matches’ (e.g. Goldin-Meadow, 2003) and on lexical access in children (e.g. Pine et al., 2007), even these studies do not typically discuss explicitly what defines some gestures as compensatory. In studies of adult L2 users’ gestural behaviours, theoretical discussions of gestural compensation are almost entirely absent. The properties that make some gestures compensatory and others not need to be discussed and elucidated if we are to form a better understanding of the role of gesture in language development. In sum, the notion of compensation raises important theoretical issues both for studies of language development and for gesture studies. We need to consider how and when to view the function of gesture as mainly compensatory, to formulate independent defining criteria, etc. (e.g. Goodwin & Goodwin, 1986). Developmental data that raise important issues for compensation are to be seen in the context of theories concerning the relationship between speech and gesture. Conversely, developmental studies may need to be more specific about their view of how gestures can serve compensatory functions.
Conclusions and introduction to this volume

The issues regarding language development and gesture raised in this review are far from exhaustive. A range of other questions can be asked, with regard to methodology, to interaction, and concerning the relationship between language, gestures, and culture. Are some types of gesture related to characteristics of the language system while others are more cultural (e.g. gesticulation vs. emblems), and if so, what does that mean for the parallel development of the two modalities? Is there anything in culture-specific communication that affects the emergence and use of gestures, such as the presence of semi-conventionalised, recurrent hand shapes (see Kendon, 2004)? How does lack of contact with a language and culture affect gesture use? Are there differences in gesture practices between tutored and untutored learners? What is the gestural behaviour of early simultaneous bilinguals? How might learners use gestures to express group affiliation (e.g. Efron, 1972 [1941])? Can language development and gesture be modelled together?

The papers in this volume span both first and second language development. They all exemplify how studies of language development can gain insights from taking gestures into account. The first two papers focus on first language development. Liszkowski's paper examines the gestures of pre-linguistic infants who have not yet developed their first language. He reviews and assesses what is known about pointing and representational gestures. The paper re-evaluates current findings and takes a new stance, upgrading the role of pointing and downgrading the role of representational gestures in infants, thereby re-assessing the role of such gestures for the emergence of human communication.

The second paper by Stefanini, Recchia, & Caselli focuses on the relationship between gesture production and spoken lexical capacity in children with Down syndrome compared to typically developing children. Drawing on data from a naming task, the authors show that, although children with Down syndrome do not differ quantitatively in gesture production from developmentally-matched controls, they do differ qualitatively in the distribution of information across the modalities. The study sheds important light on the ways in which gestures come into play when cognitive abilities outstrip productive spoken language skills.

In the transition between first and second language studies, Tellier's paper investigates the popular assumption that gestures improve the acquisition of a new word in a foreign language by looking at French children who are taught English. The study compares the effect of seeing vs. both seeing and producing gestures. The results indicate that (producing) gestures affects the productive retention of new vocabulary. The study thus lends support to the notion that gestures are implicated in learning language specifically, not only learning in general.
In the domain of adult second language development, the paper by Yoshioka examines how adult Dutch learners of Japanese construct narrative discourse in speech and gesture. In particular, the paper investigates how learners deal with cross-linguistic differences in how entities are referred to, for instance by lexical means (e.g. the frog, it) or by ellipsis. The results show that learners display both general and target language-specific means of structuring information in discourse in the two modalities. In this sense, the study adds to the evidence suggesting that gestures reflect language-specific speech patterns. It also contributes to the study of cross-linguistic influence in SLD. Brown investigates the interaction between first and second languages in adult speakers, specifically comparing the use of character- and observer-viewpoint in English and Japanese. Japanese speakers with some knowledge of English gesture differently in their native language from Japanese speakers without any knowledge of English, showing patterns similar to those of monolingual English speakers. Although traditionally only the effect of the L1 on the L2 has been considered in studies of SLD, this paper interestingly suggests that the L2 might also affect the L1. This perspective has important implications for what is considered the native standard in studies of language development.
Acknowledgements

We gratefully acknowledge support from the Nederlandse Organisatie voor Wetenschappelijk Onderzoek (NWO) for a grant awarded to Kees de Bot and Marianne Gullberg to fund an International Workshop, "Gesture in Language Development", held at Rijksuniversiteit Groningen, the Netherlands, April 20–22, 2006. We also thank Adam Kendon for helpful comments and discussions.
Note

1. For an overview of critiques of this hypothesis, see Ellis, 1994, pp. 273–280.
References

Abrahamsen, Adele (2000). Explorations of enhanced gestural input to children in the bimodal period. In Karen Emmorey & Harlan Lane (Eds.), The Signs of Language revisited: An anthology to honor Ursula Bellugi and Edward Klima (pp. 357–399). Mahwah, NJ: Erlbaum.
Adams, Thomas W. (1998). Gesture in foreigner talk. Unpublished PhD diss., University of Pennsylvania.
Alibali, Martha W. & Alyssa A. DiRusso (1999). The function of gestures in learning to count: More than keeping track. Cognitive Development, 14 (1), 37–56. Alibali, Martha W. & Susan Goldin-Meadow (1993). Gesture-speech mismatch and mechanisms of learning: What the hands reveal about a child’s state of mind. Cognitive Psychology, 25, 468–523. Alibali, Martha W., Sotaro Kita, & Amanda J. Young (2000). Gesture and the process of speech production: We think, therefore we gesture. Language and Cognitive Processes, 15 (6), 593– 613. Allen, Linda Q. (1995). The effect of emblematic gestures on the development and access of mental representations of French expressions. Modern Language Journal, 79 (4), 521–529. Allen, Linda Q. (2000). Nonverbal accommodations in foreign language teacher talk. Applied Language Learning, 11, 155–176. Asher, James (1977). Learning another language through actions. Los Gatos: Sky Oaks Productions, Inc. Bates, Elizabeth & Frederic Dick (2002). Language, gesture, and the developing brain. Developmental Psychobiology, 40, 293–310. Bavelas, Janet B., Alex Black, Nicole Chovil, Charles R. Lemery, & Jennifer Mullett (1988). Form and function in motor mimicry. Topographic evidence that the primary function is communicative. Human Communication Research, 14 (3), 275–299. Bavelas, Janet B., Nicole Chovil, Douglas A. Lawrie, & Allan Wade (1992). Interactive gestures. Discourse Processes, 15 (4), 469–489. Bavelas, Janet B., Christine Kenwood, Trudy Johnson, & Bruce Phillips (2002). An experimental study of when and how speakers use gestures to communicate. Gesture, 2 (1), 1–17. Beattie, Geoffrey & Heather Shovelton (1999). Mapping the range of information contained in the iconic hand gestures that accompany spontaneous speech. Journal of Language and Social Psychology, 18 (4), 438–462. Birdsong, David (2005). Nativelikeness and non-nativelikeness in L2A research. International Review of Applied Linguistics, 43 (4), 319–328. Bock, Katherine & Zensi Griffin (2000). The persistence of structural priming: Transient activation or implicit learning. Journal of Experimental Psychology: General, 129 (2), 177–192. Branigan, Holly P., Martin J. Pickering, & Alexandra A. Cleland (2000). Syntactic co-ordination in dialogue. Cognition, 75 (2), B13–B25. Brown, Amanda (2007). Crosslinguistic influence in first and second languages: Convergence in speech and gesture. Unpublished PhD diss., Boston University, Boston, and MPI for Psycholinguistics, Nijmegen. Brown, Amanda & Marianne Gullberg (2008). Bidirectional crosslinguistic influence in L1-L2 encoding of manner in speech and gesture: A study of Japanese speakers of English. Studies in Second Language Acquisition, 30 (2), 225–251. Bruner, Jerome (1983). Child’s talk: Learning to use language. Oxford: Oxford University Press. Bühler, Karl (1934). Sprachtheorie. Jena: Fischer. Butcher, Cynthia & Susan Goldin-Meadow (2000). Gesture and the transition from one- to twoword speech: When hand and mouth come together. In David McNeill (Ed.), Language and gesture (pp. 235–257). Cambridge: Cambridge University Press. Calbris, Geneviève (2003). L’expression gestuelle de la pensée d’un homme politique. Paris: CNRS Editions.
Capirci, Olga, Annarita Contaldo, Maria Cristina Caselli, & Virginia Volterra (2005). From action to language through gesture: A longitudinal perspective. Gesture, 5 (1/2), 155–177. Capirci, Olga, Jana M. Iverson, Elena Pizzuto, & Virginia Volterra (1996). Gestures and words during the transition to two-word speech. Journal of Child Language, 3, 645–675. Caselli, Maria Cristina (1990). Communicative gestures and first words. In Virginia Volterra & Carol J. Erting (Eds.), From gesture to language in hearing and deaf children (pp. 56–67). Washington, DC: Gallaudet University Press. Caselli, Maria Cristina & Paola Casadio (1995). Il primo vocabulario del bambino: Guida all’uso del questionario MacArthur per la valutazione della comunicazione e del linguaggio nei primi anni di vita. Milan: Franco Angeli. Caselli, Maria Cristina, Stefano Vicari, Emiddia Longobardi, Laura Lami, Claudia Pizzoli, & Giacomo Stella (1998). Gestures and words in early development of children with Down Syndrome. Journal of Speech, Language and Hearing Research, 41, 1125–1135. Cassell, Justine (1988). Metapragmatics in language development: Evidence from speech and gesture. Acta Linguistica Hungarica, 38 (1/4), 3–18. Chapman, Robin S. (1995). Language development in children and adolescents with Down Syndrome. In Paul Fletcher & Brian MacWhinney (Eds.), The handbook of child language (pp. 641–663). Oxford: Blackwell Publishers. Chapman, Robin S. & Linda Hesketh (2000). The behavioral phenotype of Down syndrome. Mental Retardation and Developmental Disabilities Research Review, 6, 84–95. Choi, Soojung & James P. Lantolf (2008). The representation and embodiment of meaning in L2 communication. Motion events in the speech and gesture of advanced L2 Korean and L2 English speakers. Studies in Second Language Acquisition, 30 (2), 191–224. Church, Ruth B. & Susan Goldin-Meadow (1986). The mismatch between gesture and speech as an index of transitional knowledge. Cognition, 23 (1), 43–71. Clark, Herbert H. (1996). Using language. Cambridge: Cambridge University Press. Cohen, Ronald L. & Diane Borsoi (1996). The role of gestures in description-communication: A cross study of ageing. Journal of Nonverbal Behavior, 20 (1), 45–63. Colletta, Jean-Marc (2004). Le développement de la parole chez l’enfant âgé de 6 à 11 ans. Corps, language et cognition. Sprimont: Mardaga. Condon, William S. & William D. Ogston (1971). Speech and body motion synchrony in the speaker-hearer. In David L. Horton & James J. Jenkins (Eds.), Perception of language (pp. 150–173). Columbus, OH: Merrill. Cook, Vivian (Ed.) (2003). Effects of the second language on the first. Clevedon: Multilingual Matters. Costa, Albert (2005). Lexical access in bilingual production. In Judith F. Kroll & Annette M. De Groot (Eds.), Handbook of bilingualism. Psycholinguistic approaches (pp. 308–325). Oxford: Oxford University Press. Davies, Allan (2003). The native speaker: myth and reality. Clevedon: Multilingual Matters. De Bot, Kees (2007). Dynamic systems theory, life span development and language attrition. In Barbara Köpke, Monika Schmid, Merel Keijzer, & Susan Dostert (Eds.), Language attrition: Theoretical perspectives (pp. 53–68). Amsterdam & Philadelphia: Benjamins. De Bot, Kees & Michael Clyne (1994). A 16-year longitudinal study of language attrition in Dutch immigrants in Australia. Journal of Multilingual and Multicultural Development, 15 (1), 17–28.
De Bot, Kees, Wander Lowie, & Marjolijn Verspoor (2005). Second language acquisition: An advanced resource book. London: Routledge. De Bot, Kees & Sinfree B. Makoni (2005). Language and ageing in multilingual contexts. Clevedon: Multilingual Matters. De Ruiter, Jan-Peter (2000). The production of gesture and speech. In David McNeill (Ed.), Language and gesture (pp. 284–311). Cambridge: Cambridge University Press. De Ruiter, Jan-Peter (2007). Postcards from the mind: The relationship between speech, gesture and thought. Gesture, 7 (1), 21–38. Dörnyei, Zoltan (2006). Individual differences in second language acquisition. AILA Review, 19, 42–68. Duncan, Susan D. (1996). Grammatical form and ‘thinking-for-speaking’ in Mandarin Chinese and English: An analysis based on speech-accompanying gesture. Unpublished PhD diss., University of Chicago, Chicago. Duncan, Susan D. (2005). Co-expressivity of speech and gesture: Manner of motion in Spanish, English, and Chinese. In Charles Chang et al. (Eds.), Proceedings of the 27th Annual Meeting of the Berkeley Linguistic Society (pp. 353–370). Berkeley, CA: Berkeley Linguistics Society. Efron, David (1972 [1941]). Gestures, race and culture. The Hague: Mouton. (first ed. 1941 as Gestures and environment. New York: King’s Crown Press.) Ellis, Nick C. & Diane Larsen-Freeman (2006). Language emergence: Implications for Applied Linguistics. Applied Linguistics, 27 (4), 558–589. Ellis, Rod (1994). The study of second language acquisition. Oxford: Oxford University Press. Emmorey, Karen (1999). Do signers gesture? In Lynn L. Messing & Ruth Campbell (Eds.), Gesture, speech and sign (pp. 133–159). Oxford: Oxford University Press. Engle, Randi A. (1998). Not channels but composite signals: Speech, gesture, diagrams, and object demonstrations are integrated in multimodal explanations. In Morton Ann Gernsbacher & Sharon J. Derry (Eds.), Proceedings of the 20th Annual Conference of the Cognitive Science Society (pp. 321–326). Mahwah, NJ: Erlbaum. Erting, Carol J. & Virginia Volterra (1990). Conclusion. In Virginia Volterra & Carol J. Erting (Eds.), From gesture to language in hearing and deaf children (pp. 278–298). Berlin: Springer-Verlag. Evans, Julia L., Martha W. Alibali, & Nicole M. McNeil (2001). Divergence of verbal expression and embodied knowledge: Evidence from speech and gesture in children with specific language impairment. Language and Cognitive Processes, 16 (2), 309–331. Fenson, Larry, Philip S. Dale, J. Steven Reznick, Elizabeth Bates, Donna Thal, & Stephen Pethick (1994). Variability in early communicative development. Monographs of the Society for Research in Child Development, 59 (5). Fenson, Larry, et al. (1993). The MacArthur Communicative Development Inventories: User’s guide and technical manual. San Diego: Singular Publishing Group. Ferguson, Charles A. (1971). Absence of copula and the notion of simplicity: A study of normal speech, baby talk, foreigner talk and pidgins. In Dell Hymes (Ed.), Pidginization and creolization of languages (pp. 141–150). Cambridge: Cambridge University Press. Fex, Barbara & Ann-Christine Månsson (1998). The use of gestures as a compensatory strategy in adults with acquired aphasia compared to children with specific language impairment (SLI). Journal of Neurolinguistics, 11 (1/2), 191–206. Feyereisen, Pierre (1987). Gestures and speech, interactions and separations: A reply to McNeill (1985). Psychological Review, 94 (4), 493–498.
Feyereisen, Pierre & Jacques-Dominique de Lannoy (1991). Gestures and speech: Psychological investigations. Cambridge: Cambridge University Press. Feyereisen, Pierre & Isabelle Havard (1999). Mental imagery and production of hand gestures while speaking in younger and older adults. Journal of Nonverbal Behavior, 23 (2), 153–171. Franco, Fabia & Jennifer Wishart (1995). The use of pointing and other gestures by young children with Down syndrome. American Journal of Mental Retardation, 100 (2), 160–182. Freedman, Norbert (1977). Hands, words, and mind: On the structuralization of body movements during discourse and the capacity for verbal representation. In Norbert Freedman & Stanley Grand (Eds.), Communicative structures and psychic structures: A psychoanalytic approach (pp. 109–132). New York: Plenum Press. Frick-Horbury, Donna (2002). The use of hand gestures as self-generated cues for recall of verbally associated targets. American Journal of Psychology, 115 (1), 1–20. Gass, Susan M. & Alison Mackey (2006). Input, interaction and output: An overview. AILA Review, 19, 3–17. Goldin-Meadow, Susan (2003). Hearing gesture: How our hands help us think. Cambridge, MA: The Belknap Press. Goldin-Meadow, Susan & Cynthia Butcher (2003). Pointing toward two-word speech in young children. In Sotaro Kita (Ed.), Pointing: Where language, culture, and cognition meet (pp. 85–107). Mahwah, NJ: Erlbaum. Goldin-Meadow, Susan, Howard Nusbaum, Spencer D. Kelly, & Susan Wagner (2001). Explaining math: Gesturing lightens the load. Psychological Science, 12 (6), 516–522. Goodwin, Charles (Ed.) (2002). Conversation and brain damage. Oxford: Oxford University Press. Goodwin, Marjorie H. & Charles Goodwin (1986). Gesture and coparticipation in the activity of searching for a word. Semiotica, 62 (1/2), 51–75. Goodwyn, Susan W. & Linda P. Acredolo (1998). Encouraging symbolic gestures: A new perspective on the relationship between gesture and speech. In Jana M. Iverson & Susan Goldin-Meadows (Eds.), The nature and functions of gesture in children’s communication (pp. 61–73). San Francisco: Jossey-Bass. Goodwyn, Susan W., Linda P. Acredolo, & Catherine A. Brown (2000). Impact of symbolic gesturing on early language development. Journal of Nonverbal Behavior, 24 (2), 81–103. Graham, Jean Ann & Michael Argyle (1975). A cross-cultural study of the communication of extra-verbal meaning by gestures. International Journal of Psychology, 10 (1), 56–67. Guidetti, Michèle (2005). Yes or no? How young French children combine gestures and speech to agree and refuse. Journal of Child Language, 32 (4), 911–924. Gullberg, Marianne (1998). Gesture as a communication strategy in second language discourse. A study of learners of French and Swedish. Lund: Lund University Press. Gullberg, Marianne (1999). Communication strategies, gestures, and grammar. Acquisition et Interaction en Langue Étrangère, 8 (2), 61–71. Gullberg, Marianne (2003). Gestures, referents, and anaphoric linkage in learner varieties. In Christine Dimroth & Marianne Starren (Eds.), Information structure and the dynamics of language acquisition (pp. 311–328). Amsterdam & Philadelphia: Benjamins. Gullberg, Marianne (2006a). Handling discourse: Gestures, reference tracking, and communication strategies in early L2. Language Learning, 56 (1), 155–196. Gullberg, Marianne (2006b). Some reasons for studying gesture and second language acquisition (Hommage à Adam Kendon). International Review of Applied Linguistics, 44 (2), 103–124.
Gullberg, Marianne (2008). Gestures and second language acquisition. In Peter Robinson & Nick C. Ellis (Eds.), Handbook of cognitive linguistics and second language acquisition (pp. 276–305). London: Routledge. Gullberg, Marianne (forthcoming). What learners mean. What gestures reveal about semantic reorganisation of placement verbs in advanced L2. Gullberg, Marianne & Stephen G. McCafferty (2008). Introduction: Gesture and SLA — Toward an integrated approach. Studies in Second Language Acquisition, 30 (2), 133–146. Holler, Judith & Geoffrey Beattie (2003). Pragmatic aspects of representational gestures. Do speakers use them to clarify verbal ambiguity for the listener? Gesture, 3 (2), 127–154. Hostetter, Autumn B. & Martha W. Alibali (2007). Raise your hand if you’re spatial: Relations between verbal and spatial skills and gesture production. Gesture, 7 (1), 73–95. Hostetter, Autumn B., Martha W. Alibali, & Sotaro Kita (2007). I see it in my hand’s eye: Representational gestures are sensitive to conceptual demands. Language and Cognitive Processes, 22 (3), 313–336. Irujo, Suzanne. (1993). Steering clear: avoidance in the production of idioms. International Review of Applied Linguistics, 31 (3), 205–219. Iverson, Jana M., Olga Capirci, Emiddia Longobardi, & Maria Cristina Caselli (1999). Gesturing in mother–child interactions. Cognitive Development, 14 (1), 57–75. Iverson, Jana M., Olga Capirci, Virginia Volterra, & Susan Goldin-Meadow (2008). Learning to talk in a gesture-rich world: Early communication of Italian vs. American children. First Language, 164–181. Iverson, Jana M. & Susan Goldin-Meadow (2005). Gesture paves the way for language development. Psychological Science, 16, 367–371. Iverson, Jana M. & Esther Thelen (1999). Hand, mouth and brain. The dynamic emergence of speech and gesture. Journal of Consciousness Studies, 6 (11/12), 19–40. Jarvis, Scott (2000). Methodological rigor in the study of transfer: Identifying L1 influence in the interlanguage lexicon. Language Learning, 50 (2), 245–309. Jenkins, Susan & Isabel Parra (2003). Multiple layers of meaning in an oral proficiency test: The complementary roles of nonverbal, paralinguistic, and verbal behaviors in assessment decisions. Modern Language Journal, 87 (1), 90–107. Jungheim, Nicholas O. (1991). A study on the classroom acquisition of gestures in Japan. Ryutsukeizaidaigaku Ronshu, 26 (2), 61–68. Jungheim, Nicholas O. (2001). The unspoken element of communicative competence: Evaluating language learners’ nonverbal behavior. In Thom Hudson & James D. Brown (Eds.), A focus on language test development: Expanding the language proficiency construct across a variety of tests (pp. 1–34). Honolulu: University of Hawai’i. Kasper, Gabriele & Eric Kellerman (Eds.) (1997). Communication strategies: Psycholinguistic and sociolinguistic perspectives. London: Longman. Kellerman, Eric & Anne-Marie van Hoof (2003). Manual accents. International Review of Applied Linguistics, 41 (3), 251–269. Kelly, Spencer D., Dale J. Barr, Ruth Breckinridge Church, & Katherine Lynch (1999). Offering a hand to pragmatic understanding: The role of speech and gesture in comprehension and memory. Journal of Memory and Language, 40 (4), 577–592. Kendon, Adam (1994). Do gestures communicate? A review. Research on Language and Social Interaction, 27 (3), 175–200.
Kendon, Adam (1995). Gestures as illocutionary and discourse structure markers in Southern Italian conversation. Journal of Pragmatics, 23 (3), 247–279. Kendon, Adam (2004). Gesture. Visible action as utterance. Cambridge: Cambridge University Press. Kendon, Adam (2007). Some topics in gesture studies. In Anna Esposito, Maja Bratanic, Eric Keller, & Maria Marinaro (Eds.), Fundamentals of verbal and nonverbal communication and the biometric issue (pp. 3–19). Amsterdam: IOS Press. Kida, Tsuyoshi (2005). Appropriation du geste par les étrangers: Le cas d’étudiants japonais apprenant le français. Unpublished PhD diss., Université de Provence, Aix-en-Provence. Kimbara, Irene (2006). On gestural mimicry. Gesture, 6 (1), 39–61. Kita, Sotaro (2000). How representational gestures help speaking. In D. McNeill (Ed.), Language and gesture (pp. 162–185). Cambridge: Cambridge University Press. Kita, Sotaro (2001). Pointing left in Ghana: How a taboo on the use of the left hand influences gestural practice. Gesture, 1 (1), 73–95. Kita, Sotaro (Ed.). (2003). Pointing: Where language, culture, and cognition meet. Mahwah, NJ: Erlbaum. Kita, Sotaro & Asli Özyürek (2003). What does cross-linguistic variation in semantic coordination of speech and gesture reveal? Evidence for an interface representation of spatial thinking and speaking. Journal of Memory and Language, 48 (1), 16–32. Krashen, Stephen D. (1994). The input hypothesis and its rivals. In Nick C. Ellis (Ed.), Implicit and explicit learning of languages (pp. 45–78). London: Academic Press. Krauss, Robert K., Yihsiu Chen, & Rebecca F. Gottesman (2000). Lexical gestures and lexical access: a process model. In David McNeill (Ed.), Language and gesture (pp. 261–283). Cambridge: Cambridge University Press. Lausberg, Hedda, Eran Zaidel, Robyn F. Cruz, & Alain Ptito (2007). Speech-independent production of communicative gestures: Evidence from patients with complete callosal disconnection. Neuropsychologia, 45 (13), 3092–3104. Laver, John & Janet Mackenzie Beck (2001). Unifying principles in the description of voice, posture and gesture. In Christian Cavé, Isabella Guaïtella, & Serge Santi (Eds.), Oralité et gestualité (pp. 15–24). Paris: l’Harmattan. Lazaraton, Anne (2004). Gesture and speech in the vocabulary explanations of one ESL teacher: A microanalytic inquiry. Language Learning, 54 (1), 79–117. Lazaraton, Anne & Noriko Ishihara (2005). Understanding second language teacher practice using microanalysis and self-reflection: A collaborative case study. Modern Language Journal, 89 (4), 529–542. Lee, Jina (2008). Gesture and private speech in second language acquisition. Studies in Second Language Acquisition, 30 (2), 169–190. Liddell, Scott (2003). Grammar, gesture and meaning in American Sign Language. Cambridge: Cambridge University Press. Lott, Petra (1999). Gesture and aphasia. Bern: Peter Lang. Mayberry, Rachel I. & Elena Nicoladis (2000). Gesture reflects language development: Evidence from bilingual children. Current Directions in Psychological Science, 9 (6), 192–196. McCafferty, Stephen G. (2002). Gesture and creating zones of proximal development for second language learning. Modern Language Journal, 86 (2), 192–203. McCafferty, Stephen G. (2004). Space for cognition: Gesture and second language learning. International Journal of Applied Linguistics, 14 (1), 148–165.
McNeill, David (1985). So you think gestures are nonverbal? Psychological Review, 92 (3), 271– 295. McNeill, David (1992). Hand and mind. What gestures reveal about thought. Chicago: University of Chicago Press. McNeill, David (1997). Growth points cross-linguistically. In Jan Nuyts & Eric Pederson (Eds.), Language and conceptualization (pp. 190–212). Cambridge: Cambridge University Press. McNeill, David (1998). Speech and gesture integration. In Jana M. Iverson & Susan GoldinMeadow (Eds.), The nature and functions of gesture in children’s communication (pp. 11–27). San Francisco: Jossey-Bass. McNeill, David (2005). Gesture and thought. Chicago: University of Chicago Press. McNeill, David & Susan D. Duncan (2000). Growth points in thinking-for-speaking. In David McNeill (Ed.), Language and gesture (pp. 141–161). Cambridge: Cambridge University Press. Melinger, Alissa & Willem J. M. Levelt (2004). Gesture and the communicative intention of the speaker. Gesture, 4 (2), 119–141. Miyake, Akira & Naomi Friedman (1999). Individual differences in second language proficiency: Working memory as language aptitude. In Alice Healy & Lyle Bourne (Eds.), Foreign language learning: Psycholinguistic experiments on training and retention (pp. 339–362). Mahwah, NJ: Erlbaum. Mohan, Bernard & Sylvia Helmer (1988). Context and second language development: Preschoolers’ comprehension of gestures. Applied Linguistics, 9 (3), 275–292. Montepare, Joann M. & Joan S. Tucker (1999). Aging and non-verbal behavior: Current perspectives and future directions. Journal of Nonverbal Behavior, 23 (2), 105–109. Morris, Desmond, Peter Collett, Peter Marsh, & Marie O’Shaughnessy (1979). Gestures, their origins and distribution. London: Cape. Musumeci, Diane M. (1989). The ability of second language learners to assign tense at the sentence level: A crosslinguistic study. Unpublished PhD diss., University of Illinois at UrbanaChampaign. Negueruela, Eduardo, James P. Lantolf, Susan Rehn Jordan, & Jaime Gelabert (2004). The “private function” of gesture in second language speaking activity: A study of motion verbs and gesturing in English and Spanish. International Journal of Applied Linguistics, 14 (1), 113–147. Nicoladis, Elena (2007). The effect of bilingualism on the use of manual gestures. Applied Psycholinguistics, 28 (3), 441–454. Nicoladis, Elena, Rachel I. Mayberry, & Fred Genesee (1999). Gesture and early bilingual development. Developmental Psychology, 35 (2), 514–526. Nobe, Shuichi (2001). On gestures of foreign language speakers. In Christian Cavé, Isabella Guaïtella, & Serge Santi (Eds.), Oralité et gestualité (pp. 572–575). Paris: l’Harmattan. O’Neill, Daniela K., Jane Topolovec, & Wilma Stern-Cavalcante (2002). Feeling sponginess: The importance of descriptive gestures in 2- and 3-year-old children’s acquisition of adjectives. Journal of Cognition and Development, 3 (3), 243–277. Özcaliskan, Seyda & Susan Goldin-Meadow (2005). Gesture is at the cutting edge of early language development. Cognition, 96 (3), B101–B113. Özyürek, Asli (2002a). Do speakers design their cospeech gestures for their addressees? The effects of addressee location on representational gestures. Journal of Memory and Language, 46, 688–704.
Özyürek, Asli (2002b). Speech-language relationship across languages and in second language learners: Implications for spatial thinking and speaking. In Barbora Skarabela (Ed.), BUCLD Proceedings (Vol. 26, pp. 500–509). Somerville, MA: Cascadilla Press. Özyürek, Asli & Spencer D. Kelly (2007). Special isssue ‘Gesture, brain, and language’. Brain and Language, 101 (3), 181–184. Özyürek, Asli, Sotaro Kita, Shanley E. M. Allen, Reyhan Furman, & Amanda Brown (2005). How does linguistic framing of events influence co-speech gestures? Insights from crosslinguistic variations and similarities. Gesture, 5 (1/2), 219–240. Pika, Simone, Elena Nicoladis, & Paula Marentette (2006). A cross-cultural study on the use of gestures: Evidence for cross-linguistic transfer? Bilingualism, 9 (3), 319–327. Pine, Karen J., Hannah Bird, & Elizabeth Kirk (2007). The effects of prohibiting gestures on children’s lexical retrieval ability. Developmental Science, 10 (6), 747–754. Pine, Karen J., Nicola Lufkin, & David Messer (2004). More gestures than answers: Children learning about balance. Developmental Psychology, 40 (6), 1059–1067. Pinker, Stephen (1989). Learnability and cognition: The acquisition of argument structure. Cambridge, MA: MIT Press. Pizzuto, Elena, Micaela Capobianco, & Antonella Devescovi (2005). Gestural-vocal deixis and representational skills in early language development. Interaction Studies: Social Behaviour and Communication in Biological and Artificial Systems, 6 (2), 223–252. Richards, Jack C. (Ed.) (1974). Error analysis. London: Longman. Riseborough, Margaret G. (1981). Physiographic gestures as decoding facilitators: Three experiments exploring a neglected facet of communication. Journal of Nonverbal Behavior, 5 (3), 172–183. Robinson, Peter (2003). Attention and memory during SLA. In Catherine J. Doughty & Michael H. Long (Eds.), The handbook of second language acquisition (pp. 631–678). Oxford: Blackwells. Rogers, William T. (1978). The contribution of kinesic illustrators toward the comprehension of verbal behavior within utterances. Human Communication Research, 5 (1), 54–62. Rose, Miranda L. (2006). The utility of arm and hand gesture in the treatment of aphasia. Advances in Speech-Language Pathology, 8 (2), 92–109. Schegloff, Emanuel A. (1984). On some gestures’ relation to talk. In J. Maxwell Atkinson & John Heritage (Eds.), Structures of social action (pp. 266–296). Cambridge: Cambridge University Press. Schick, Brenda, Marc Marschark, & Patricia E. Spencer (Eds.) (2006). Advances in the sign language development of deaf and hard-of-hearing children. New York: Oxford University Press. Schmid, Monika S., Barbara Köpke, Merel Keijzer, & Lina Weilemar (Eds.) (2004). First language attrition: Interdisciplinary perspectives on methodological issues. Amsterdam: Benjamins. Selinker, Larry (1972). Interlanguage. International Review of Applied Linguistics, 10 (3), 209–231. Sime, Daniela (2006). What do learners make of teachers’ gestures in the language classroom? International Review of Applied Linguistics, 44 (2), 209–228. Singer, Melissa A. & Susan Goldin-Meadow (2005). Children learn when their teacher’s gestures and speech differ. Psychological Science, 16 (2), 85–89. Slama-Cazacu, Tatiana (1976). Nonverbal components in message sequence: “Mixed syntax”. In William C. McCormack & Stephen A. Wurm (Eds.), Language and man: Anthropological issues (pp. 217–227). The Hague: Mouton.
Slobin, Dan I. (1996). From “thought and language” to “thinking for speaking”. In John J. Gumperz & Stephen C. Levinson (Eds.), Rethinking linguistic relativity (pp. 70–96). Cambridge: Cambridge University Press. Stam, Gale (2006). Thinking for Speaking about motion: L1 and L2 speech and gesture. International Review of Applied Linguistics, 44 (2), 143–169. Stefanini, Silvia, Maria Cristina Caselli, & Virginia Volterra (2007). Spoken and gestural production in a naming task by young children with Down syndrome. Brain and Language, 101 (3), 208–221. Sueyoshi, Azano & Debra M. Hardison (2005). The role of gestures and facial cues in second language listening comprehension. Language Learning, 55 (4), 661–699. Swain, Merrill (2000). The output hypothesis and beyond: Mediating acquisition through collaborative dialogue. In James P. Lantolf (Ed.), Sociocultural theory and second language learning (pp. 97–114). Oxford: Oxford University Press. Taranger, Marie-Claude & Christine Coupier (1984). Recherche sur l’acquisition des langues secondes. Approche du gestuel. In Alain Giacomi & Daniel Véronique (Eds.), Acquisition d’une langue étrangère. Perspectives et recherches (pp. 169–183). Aix-en-Provence: Université de Provence. Tellier, Marion (2006). L’impact du geste pédagogique sur l’enseignement/apprentissage des langues étrangères: Etude sur des enfants de 5 ans. Unpublished PhD diss., Université Paris VII — Denis Diderot, Paris. Tellier, Marion (2009). The development of gesture. In Kees de Bot & Robert Schrauf (Eds.), Language development over the life-span (pp. 191–216). New York: Routledge. Tomasello, Michael (2003). Constructing a language: A usage-based theory of language acquisition. Cambridge, MA: Harvard University Press. Valenzeno, Laura, Martha W. Alibali, & Roberta Klatzky (2002). Teachers’ gestures facilitate students’ learning: A lesson in symmetry. Contemporary Educational Psychology, 28, 187–204. Van Dijk, Marijn & Paul van Geert (2005). Disentangling behavior in early child development: Interpretability of early child language and the problem of filler syllables and growing utterance length. Infant Behavior and Development, 28, 99–117. Van Els, Theo, Guus Extra, Charles van Os, & Annemieke Janssen van Dieten (1984). Applied Linguistics and the learning and teaching of foreign languages. London: Edward Arnold. Van Hell, Janet G. & Ton Dijkstra (2002). Foreign language knowledge can influence native language performance in exclusively native contexts. Psychonomic Bulletin & Review, 9 (4), 780–789. Verspoor, Marjolijn, Wander Lowie, & Marijn van Dijk (2008). Variability in L2 development from a dynamic systems perspective. Modern Language Journal, 92 (2), 214–231. Viberg, Åke & Kenneth Hyltenstam (Eds.) (1993). Progression and regression in language. Cambridge: Cambridge University Press. Volterra, Virginia, Maria Cristina Caselli, Olga Capirci, & Elena Pizzuto (2005). Gesture and the emergence and development of language. In Michael Tomasello & Dan I. Slobin (Eds.), Beyond nature-nurture: Essays in honor of Elizabeth Bates (pp. 3–40). Mahwah, NJ: Erlbaum. Volterra, Virginia, Jana M. Iverson, & Marianna Castrataro (2006). The development of gesture in hearing and deaf children. In Brenda Schick, Marc Marschark, & Patricia E. Spencer (Eds.), Advances in the sign language development of deaf children (pp. 46–70). New York: Oxford University Press.
Wagner, Susan M., Howard Nusbaum, & Susan Goldin-Meadow (2004). Probing the mental representation of gesture: Is handwaving spatial? Journal of Memory and Language, 50 (4), 395–407. Wallbott, Harald G. (1995). Congruence, contagion, and motor mimicry: Mutualities in nonverbal exchange. In Ivana Marková, Carl Graumann, & Klaus Foppa (Eds.), Mutualities in dialogue (pp. 82–98). Cambridge: Cambridge University Press. Wexler, Kenneth & Peter Culicover (1980). Formal principles of language acquisition. Cambridge, MA: MIT Press. White, Lydia (2003). On the nature of interlanguage representation: Universal grammar in the second language. In Catherine J. Doughty & Michael H. Long (Eds.), The handbook of second language acquisition. Oxford. Wolfgang, Aaron & Zella Wolofsky (1991). The ability of new Canadians to decode gestures generated by Canadians of Anglo-Celtic backgrounds. International Journal of Intercultural Relations, 15 (1), 47–64. Wu, Ying Choon & Seana Coulson (2005). Meaningful gestures: Electrophysiological indices of iconic gesture comprehension. Psychophysiology, 42 (6), 654–667. Yoshioka, Keiko & Eric Kellerman (2006). Gestural introduction of Ground reference in L2 narrative discourse. International Review of Applied Linguistics, 44 (2), 171–193.
Before L1
A differentiated perspective on infant gestures

Ulf Liszkowski
Max Planck Research Group Communication Before Language, Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
This paper investigates the social-cognitive and motivational complexities underlying prelinguistic infants' gestural communication. With regard to deictic referential gestures, new and recent experimental evidence shows that infant pointing is a complex communicative act based on social-cognitive skills and cooperative motives. With regard to infant representational gestures, findings suggest the need to re-interpret these gestures as initially non-symbolic gestural social acts. Based on the available empirical evidence, the paper argues that deictic referential communication emerges as a foundation of human communication first in gestures, already before language. Representational symbolic communication, instead, emerges after deictic communication first in the vocal modality and, perhaps, in gestures through non-symbolic, socially situated routines.

Keywords: prelinguistic communication, pointing, representational gestures, infant
Before L1

Language is a hallmark of modern human communication, but language is perhaps best understood as an emergent property over historical time of more basic and non-linguistic yet unique forms of human communication. Language, whether spoken or signed, is a conventional code, but the code alone is not a sufficient basis for communication. For example, the statement "This is a concert" to a person whose mobile phone goes off in the middle of a symphony is meant to communicate much more than what is actually coded linguistically. Further, in the absence of a shared linguistic code speakers of different languages still communicate successfully with gestures. Even in the absence of conventionalized gestures (i.e., sign language) deaf-born humans who are deprived of spoken and signed linguistic
codes can still communicate successfully with creatively invented ‘home-sign’ gestures (Goldin-Meadow, 2003; Senghas, Kita, & Özyürek, 2004). The main point is that language, albeit special in itself, is only the tip of the iceberg of human communication, with all its complexities below the surface still waiting to be discovered. Human communication is an inferential process. A recipient tries to understand a sender’s intention, and the sender, in turn, knows this and intends the recipient to understand his intention (Grice, 1957; Sperber & Wilson, 1986). This recursive inferential model of human communication involves two main psychological components. First, social-cognitively, interlocutors need to form and understand intentions toward others’ intentions and understand epistemic states to transmit and infer referential content. Second, motivationally, human communication at its base is cooperative. The sender marks and formulates his utterances so that the receiver understands them, and the receiver tries to understand them as the sender intended her to. Communication takes place in a joint zone (‘common ground’; Clark, 1996). Communicative attempts outside the zone fail to communicate successfully, whether they are coded linguistically or not. Further, humans communicate with cooperative motives, for example to freely provide others with relevant information, not only to gain immediate direct benefit. Human communication thus involves complex social-cognitive and cooperative abilities, abilities which must develop somehow. Animal communication provides an informative contrast to the inferential and cooperative human communication model. Most of animals’ signals are reactions to stimuli, usually with little flexibility, often lacking referential or even communicative intent (e.g., Call & Tomasello, 2007). Dawkins and Krebs (1978) emphasized that ritualized combats, mating, courtship and so forth really are individualistic behaviors to maximize the benefit of the individual who emits the signal by literally using another individual’s muscles from afar. Animal communication thus seems to involve quite different processes compared to human communication. It mostly lacks a cooperative structure and is rather based on individualistic attempts of manipulating the environment to one’s own benefit, whether the environment is animate or inanimate. How and when in phylogeny cooperative behaviors did evolve is currently a topic of hot debate (e.g., Boyd, 2006). It would seem that it is only with the advent of cooperative motives that we would first see rudimentary forms of communication proper that go beyond the manipulation of others’ muscles for one’s own direct benefit. Infant gestural communication is a test case for models of communication. If language rests on some more fundamental yet already unique forms of human communication, the core infrastructure of human communication should already be present before language, and already be different from other species. This paper
takes an ontogenetic perspective on human communication that is largely independent of language and linguistic code models by investigating infant gestures before acquisition of a first language (L1). In contrast to adult gesture research, it does not address speech-accompanying gestures as part of or a complement to an existing linguistic system, but instead focuses on the emergence and use of core gestures before any speech or linguistic system has yet developed. The main question is: To what extent do infant gestures already share a common cognitive and motivational infrastructure with fully fledged adult communication before language has emerged? From a developmental perspective, a related question is: Where do infant gestures originate from ontogenetically? With regard to the gestural origins, two perspectives may broadly be distinguished. Classically, from a language acquisition perspective, infants’ gesturing has been interpreted as a kind of social tool-use, building on infants’ emerging intentionality (Bates, 1979). Infants’ intentionality, in turn, supposedly originates from infants’ individualistic sensorimotor schemes toward the physical environment. Infant gesturing, on this account, is a tool for individualistic problem-solving serving one’s own benefit, which rests on the emergent intentionality from individualistic object-directed action schemes. On this account, however, it is not clear to what extent infant gestures already reflect a psychology of intentions toward others’ intentional states, and cooperative motives to commune with and achieve mutual understanding involving benefits for the other. It is also not straightforward how inferential-cooperative communication would simply emerge on the heels of individualistic object-directed action schemes and egocentric motives. Other accounts instead emphasize humans’ ultra-sociality as the origins of infants’ gestures. Infants are entrenched in rich interactional contexts with competent adult communicators from the beginning, which provide a strong basis for the ontogenetic origins of cooperative human communication (Bruner, 1983; Werner & Kaplan, 1963). Infants are attuned to people from birth (e.g., their faces and voices) and an attachment system assures adults’ interest in interacting with their offspring on a psychological level beyond mere nurturing (Bowlby, 1969). Very young infants are already sensitive to several interaction cues like ostension and reference (see Csibra & Gergely, 2006), and to the contingencies of turn-taking as evidenced, for example, by their gaze aversion and reengagement behavior in the still-face paradigm (an adult interrupts an ongoing face-to-face interaction with a still face; see Adamson & Frick, 2003). Cognitively, however, on this account it is not so clear to what extent infant gestures are already under their control and intentionally directed at people. One perspective on the emergence of gestures thus emphasizes infants’ developing intentionality from individualistic object-directed sensorimotor schemes as the key feature in the emergence of human communication. In that perspective,
the origins are individualistic and the underlying motivation is egocentric and still lacks the cooperative structure of adult communication. Another perspective instead suggests that the roots of human communication are a primary motive for social contact within an ultra-social environment. It is less clear on this account, however, whether infants’ behaviors already involve communicative and referential intent, or whether adults only interpret and construct them as communicative. Few empirical studies have directly tested the underlying complexities of infants’ gestures with regard to communicative intent, social cognition and cooperative motives. In what follows, I first present recent findings on infants’ referential deictic gestures, in particular infant pointing. These findings constitute evidence for social-cognitively and motivationally ‘rich’ prelinguistic referential communication. Next, I review relevant findings on infants’ representational gestures emerging after pointing. These findings yield little support for a symbolic interpretation of infant gestural communication, in particular not before language. Instead, I propose a leaner re-interpretation of these gestures as non-symbolic gestural social acts.
Infant gestures

In infancy research, infants' gestures have been operationally defined as intentionally communicative based on infants' (1) looks to the adult, (2) persistence and flexibility to achieve a goal, and (3) conventionalization of behavioral forms (Bates, 1979). Intentionally communicative gestures have been classified into deictic and representational gestures (Bates, 1979). Deictic gestures show or present a referent in the environment (deixis, Greek "to show"), the most prominent gesture being pointing. Deictic gestures are thus used to communicate referentially. Representational gestures re-present a referent, either in a conventionalized arbitrary form with a gesture that is commonly associated with the idea it should trigger (e.g., thumbs up for 'good'), or in an iconic way by miming a referent or pretending to act out the content of a message (e.g., raising fist half open toward mouth for 'drinking'). Representational gestures are thus used to communicate referentially by means of a symbolic vehicle which represents the referent.
Deictic gestures and infant pointing

Classically, infant gestures such as showing, giving, reaching, and pointing have been classified as deictic gestures (Bates, 1979). In fact, infant reaching may better be described as a request rather than a showing gesture. It is ritualized from abbreviated grasping attempts, similar to begging gestures in apes (Call & Tomasello, 2007). However, infants from around 9 months also pick up objects and hold them
out with an outstretched arm, usually to the delight of their caregivers who then comment on them. Although infants are often less ready to let an adult take the object they are holding out (and so such ‘showing’ is not yet an ‘offer’), infants will sometimes also hand over objects, often by placing them in the parents’ lap or hands. Such ‘placing’ is deictic in the sense that the object becomes a referent by virtue of the specific place where it is put (Clark, 2003). In a sense, these gestures are thus referential because they bring specific objects to the attention of others. Further, they seem to be motivated cooperatively, to mutually engage about these objects. Showing and placing are thus good candidates for crediting infants with intentional deictic referential communication and may reflect foundations of uniquely human communication. However, it is not clear precisely how these gestures work from the infants’ point of view. There are no experiments to my knowledge which have directly tested referential intent underlying infants’ showing or placing. Since these gestures involve objects at hand, a leaner interpretation is that they originate from individualistic object-directed actions. For example, infants may shake objects as an exploratory activity, while parents interpret this as communicative object exposure. Based on parents’ reactions the activity then becomes ritualized into a social gesture. Once ritualized, social gestures may be interpreted as intentional communication. However, they need not yet be intentionally deictic and express referential intent on the infants’ part. Instead, showing and placing may simply reflect a way of interacting with others, non-referentially. These gestures are motivationally interesting because they afford and establish social contact. The underlying communicative and cognitive complexities, however, are not yet clear. Pointing emerges after showing and placing around 12 months. Pointing is interesting because it enables reference to things at a distance and does not require physical contact with objects. Its action scheme has no function outside communicative contexts, in particular not for individualistic actions on objects. Pointing is even used to refer to referents beyond the immediate perceptual ‘here and now’ as one can point to a chair to refer to the late grandfather who used to sit in it. Referring to entities displaced from the ‘here and now’ is clearly a distinguishing feature of human communication. Social-cognitively, communicating by pointing requires an understanding that people attend to things and that one can direct their attention to these. It also involves an understanding of the shared background against which the point’s referent must be interpreted. Motivationally, adults point with cooperative motives, for example, to engage about things together, or to help others notice what they need to know. Two core aspects of human communication, that is, social cognition and cooperation, are thus already reflected in the single special act of human pointing. But developmentally, we need to know how this gesture works from the point of view of prelinguistic infants.
In a series of recent experimental studies my colleagues and I have investigated infant pointing when it has just emerged around 12 months. These studies were designed to challenge communicatively ‘lean’ accounts. For example, developmental psychologists have proposed that (i) infants initially point non-communicatively (Desrochers, Morissette, & Ricard, 1995); (ii) pointing is non-referential as it does not involve a social-cognitive understanding of recipients’ attention (Moore & D’Entremont, 2001); and (iii) infants’ motivation is mainly egocentric, to obtain objects or attention to the self (Bates, Camaioni, & Volterra, 1975; Moore & D’Entremont, 2001; Gomez, Sarria, & Tamarit, 1993). We used different procedures to elicit infant pointing. Either interesting events happened (a light flashed; a puppet appeared from behind a curtain), or an adult searched for something she needed, or the infant desired something out of reach. What we systematically varied in all these studies was the social context of infant pointing. First, with regard to communicative intent, findings were that already at twelve months infants use their pointing gestures to communicate. For example, when a recipient did not react to their pointing, infants persisted in their communicative goal and augmented the signal as reflected in repeated pointing and increased vocalizations compared to a situation in which the adult reacted typically by sharing attention and interest (Liszkowski et al., 2004, 2007a). Even more clearly, before infants initiated a point, they considered whether the recipient attended to them and so could see their point. When an adult turned sideways and did not look at infants (and so could not possibly see a point), infants pointed less than when the adult was turned toward them and so could see and react to their visual gesture (Liszkowski, Albrecht, Carpenter, & Tomasello, 2008). These experimental results thus establish that 12-month-olds point with communicative intent. Second, with regard to reference, we found that infants point referentially, making reference even to absent entities. In two studies a recipient misidentified infants’ referents and either attended solely to the infants’ face, or to an irrelevant object nearby the intended referent. In both these cases of referential misunderstandings, infants attempted to redirect the recipient’s attention by repeating their pointing to their intended referent more often than when the recipient had correctly identified the referent (Liszkowski et al., 2004, 2007a). Infants thus point to refer to particular entities. In further studies, we found that infants even refer to ceased events and objects which are not present at the moment of testing. For example, when infants had attended to an interesting event and it had ceased, they then pointed to its previous, now-empty location depending on how a recipient had reacted to it before (Liszkowski et al., 2007b). Further, to obtain a desirable object that was absent at the moment of request, infants but not chimpanzees who were tested in the same study design pointed to the object’s usual but now-empty location, thus referring to the absent entity (Liszkowski, Schäfer, Carpenter, &
Tomasello, 2009). These studies thus establish that infants point to refer others to specific, and sometimes even absent, referents.

In further studies we tested infants' epistemic understanding underlying their referential pointing. We found that infants pointed significantly more often to an interesting event when the adult had not yet seen it than when she already had (Liszkowski et al., 2007b). Moreover, we established that infants point to inform an adult who is looking for an object (Liszkowski, Carpenter, Striano, & Tomasello, 2006). In this new search paradigm, an adult lost track of one of two boring objects which she (but not the infant) needed, and then searched around with a quizzical look. Infants readily pointed out the object the adult needed, without requestive accompaniments or personal interest in them, and more often when the adult was ignorant than knowledgeable of the objects' locations (Liszkowski, Carpenter, & Tomasello, 2008). These results thus reveal that infants point referentially with an understanding of the attentional and epistemic states of others.

Third, we found that infants point for others with cooperative and prosocial motives. The studies show that infants point at interesting events to share their interest about these with others. For example, when an adult only oriented to the infant's referent but then did not comment on it (Liszkowski et al., 2004), or when the adult's comment about a referent was unenthusiastic and therefore did not match the infant's interest (Liszkowski et al., 2007a), infants were dissatisfied, as reflected in their differential pattern of pointing. Crucially, when infants already shared an attentional focus with the adult, they then still pointed if the adult expressed interest in the referent, in order to express their alignment with the adult's expression of attitude (Liszkowski et al., 2007b). These findings show that infants do not only want to share the visual focus on a referent; they want to express and share their attitudes about a referent, too. Moreover, we demonstrated for the first time that infants also point to help others, which may be interpreted as the ontogenetically earliest evidence for altruistic helping without direct benefit for the self. In these studies, infants pointed to help an adult find things which they themselves did not request or find of particular interest (Liszkowski et al., 2006), and more so when the adult needed help to find it than when she did not (Liszkowski et al., 2008). The studies thus provide experimental evidence that infants point with cooperative and prosocial motives, i.e., to align with and to help others.

The new experimental findings provide a new look at infant pointing as a human communicative act including full-fledged reference on a mental level and cooperative motives like sharing and helping, all before language has emerged (see Tomasello, Carpenter, & Liszkowski, 2007). This interpretation is further supported by the fact that infants also comprehend the pointing of others in the same way that they themselves point (Camaioni, Perucchini, Bellagamba, & Colonnesi, 2004; Behne, Liszkowski, Carpenter, & Tomasello, submitted). The exact process of the emergence of pointing is not well understood at the moment (see also Lock, Young, Service, & Chandler, 1990). Since pointing is not functional as an object-directed action (unlike, e.g., reaching), and because it is used communicatively with cooperative motives from the beginning, pointing does not seem to simply originate from individualistic object-directed actions, in particular not from reaching (see also Franco & Butterworth, 1996). Presumably, as already suggested by Werner & Kaplan (1963), the ability to refer originates in interpersonal contexts from an emerging motive to share objects together as 'objects-of-regard'. On that account, infants' showing and placing may enhance object-involved interpersonal contexts and, through social scaffolding, lead to infants' comprehension and production of referential communication. It is not clear whether imitation or instead a more biological basis leads to the particular form of index-finger pointing. Given the communicative complexities of pointing when it has just emerged, however, it should be conceptualized as a developmental accomplishment of — and not a precursor to — referential communication.
Representational gestures and infant gestural social acts

Like words, representational gestures have semantic content and are used for symbolic reference. They are thus different from pointing which in itself does not represent in a symbolic way or carry meaning independent of its context. Infant representational gestures emerge after pointing, in the case of arbitrary gestures through imitation and, in the case of iconic gestures, also creatively from one's own action experiences (Bates, 1979; Capirci et al., 2005). Symbolic communication is a transformation of earlier forms of deictic communication which presupposes skills of reference and, in addition, cognitive skills for symbolizing. Representational gestures, especially iconic gestures, require the cognitive ability to decouple an action directed at an object from the communicative act of representing a referent with that action. Iconic gestures thus involve some kind of pretending or miming of an object-related action in order to represent a referent. Developmentally, however, it is possible that both arbitrary conventional and iconic gestures are initially simply reproduced via imitation. To merely reproduce an iconic gesture one need not understand the 'etymological' relation to its action scheme derivate and decouple action and representation. Indeed, studies show that there is no advantage for infants' comprehension of iconic over arbitrary gestures (Namy, Campbell, & Tomasello, 2004). It is thus possible that representational gestures initially rather reflect non-symbolic forms of participating in social situations, routines and game formats. On such an account, infant representational gestures may be re-interpreted as non-symbolic gestural social acts.
Gestural social acts are conceptually and developmentally different from fully symbolic representational gestures and best understood in terms of their origins. They originate in social routines, games, and contingencies through interactional processes and are mainly used to do what one does with others socially. Just like objects afford certain object-directed actions, for infants social situations and persons may afford certain social gestures. For example, a routine in which a mother sings ‘we are birds’ while flapping with her arms, may lead the infant to eventually flap arms too, initially just in the game context just with the mother, and then maybe as a way of initiating or maintaining contact with other social partners, out of an interest in the social world and a proclivity to interact. The point is that the infant initially need not know that the gesture is used to symbolically represent the referent ‘bird’ or ‘bird game’. Instead of representing anything symbolically, gestural social acts are about the direct social activity and interaction itself. It is not entirely clear what the convincing evidence for a symbolic understanding of representational gestures would be, but there are several co-occurring behaviors which could support a symbolic interpretation. Clearly, one would expect generalization of usage and decontextualization (Werner & Kaplan, 1963), since symbols are rather abstract and not bound to specific situations or recipients. Further, good evidence would be skills for creatively producing iconic gestures, which requires the cognitive ability to decouple actions from objects and, instead, use these actions to represent referents. Other supportive evidence for a symbolic interpretation of representational gestures is their combination with other deictic and representational gestures or words into symbolically communicated messages (for example, gesture for sleep + word ‘bed’, or gesture for sleep + point to bed). More independent support would be the cognitive ability to creatively extend pretense acts and understand symbols more generally, for example maps or scale models. In what follows I review and discuss findings on infant representational gestures which have been taken to support a symbolic interpretation. In light of the findings, I propose a leaner re-interpretation of these gestures as initially nonsymbolic gestural social acts. Classically, Bates (1979) proposed a transition from deictic to representational communication around 13 months when infants start to use their first words and representational gestures to name things and engage in symbolic play such as putting doll shoes on a doll’s feet, or stirring with a spoon to express something about ‘spoonness’. Such pretense or symbolic play has been interpreted as representational gesturing (‘gestural naming’). For example, Caselli (1990), based on a diary study, reported episodes of symbolic play at 9–12 months which she interpreted as communicative semantic acts similar to first words (e.g., holding empty fist to ear for ‘telephone’). However, most of these observations were about narrowly defined, context-bound social acts learned and reproduced in specific social interactions.
Further, these ‘naming’ gestures could be a form of individualistic pretense play instead of being communicative. Moreover, they may not even involve pretense, but instead only reflect individualistic trying and practicing of object-directed action schemes. In fact, more recent studies suggest that it is around 2 years of age that children differentiate pretense from serious trying (Rakoczy, Tomasello, & Striano, 2004) and creatively extend others’ pretense acts (for example, when an adult pretends to spill some coffee on a table, infants then pretend to clean the table; Harris & Kavanaugh, 1993). Acredolo and Goodwyn (1988, Study 2) conducted longitudinal interviews with parents of 16 infants at 11 months over a weekly assessment period of 9 months. They concluded that infants gesture representationally from around 14 to 15 months onwards, with a possible advantage in onset of representational gestures over words of about 3 weeks (only with gesture training; Goodwyn & Acredolo, 1993). They coded gestures if they occurred more than once (to exclude one-time off mimicking) and if they were clearly discernable from other (e.g., vocal) behaviors. They distinguished ‘object gestures’ denoting the presence of specific objects or events (e.g. sniffing for ‘flower’) if they were generalized from a real object to a picture of it or vice versa; ‘request gestures’ (like knob-turning for ‘open door’; arms-up for ‘pick me up’) which were specific to situations and contexts and, as the authors noted, not generalizable; and ‘attributes’ which were object descriptions, like blow for ‘hot’ or palms up for ‘all gone’, if they were not instrumental. The main findings were that request, attribute, and object gestures emerged between 14 and 15 months (in that order), and that object gestures were most frequent (38 types in 75% of the sample during 9 months of weekly observations, thus on average 2 gesture types per infant). Of these object gestures 32% originated within interactive routines, either through repeated exposure or explicit teaching. Instead, 58% of the object gestures were mimed actions without objects in hands, for example, rubbing the tummy for ‘soap’, panting for ‘dog’, or flapping arms for ‘bird’, which the authors argued had emerged outside interactive routines. Ten percent of the gestures depicted perceptual qualities of the referent (e.g., a cupped hand for ‘moon’). However, a number of issues challenge the authors’ interpretation of symbolic gesturing before language. Methodologically, parental weekly interviews may be limited with regard to issues of modes of acquisition (e.g., inside vs. outside interactive routines) and the extent of generalization. Operationally, request gestures are context-specific and ritualized abbreviated action attempts, observable also in non-linguistic apes, rather than symbolic (e.g., instead of climbing up mother’s leg, it becomes sufficient after a while to simply ‘raise arms’; see also Call & Tomasello, 2007). Further, attribute gestures also do not involve great generalization, because their occurrence is constrained to specific situations like feeding or clean-up/
hiding games (e.g., when eating, mum always blows, or when things are gone one always raises palms). They are thus rather prototypical gestural social acts, used to do what others do in specific social situations. With regard to object gestures, infants had a very small repertoire of on average 2 object gestures. This is in stark contrast to the rapid word growth at that age. Further, object gestures were generalized only to similar referents in similar contexts rather than being flexibly used. The degree of generalization was thus fairly small. A third of these gestures emerged inside interactive routines, consistent with the re-interpretation of these gestures as being gestural social acts. With regard to the remaining gestures, it is not clear that they really emerged outside a social context, especially not as individually created iconic symbols. To be parsimonious, it seems unnecessary to assume that young infants draw an analogy between birds’ wings and their own arms, then on this basis creatively mime birds by ‘flapping arms’, and finally use this pantomime with the intent to communicate something about a bird. Similarly, it is not clear that infants creatively produce gestures to depict perceptual qualities of objects. First, the very low frequency alone does not seem to suggest a general ability to create and flexibly communicate with iconic gestures. Second, if we imagine Acredolo and Goodwyn’s infant with cupped hands, it seems unlikely that the infant would invent this sign outside a social context by looking at the moon (on a sleepless night) and then for the first time communicate about the moon by creatively inventing the sign ‘cupped hands’. In another study, Iverson, Capirci, and Caselli (1994) collected observational data from 12 infants at 16 and 20 months in 45 minute play sessions to compare the development of gestural and vocal communication with regard to deictic and representational gestures. They measured ‘showing’, ‘pointing’, and ‘ritualized requesting (reach)’ as deictic gestures, and as representational gestures ‘predicates’ (e.g., hot; tall), ‘conventional gestures’ (e.g., no; bye-bye; all gone), and ‘nominal gestures’ (cf. Acredolo and Goodwyn’s ‘object gestures’; e.g., drinking from a cup, demonstrated with or without the object, or flapping hands for ‘birdie’). They found that most 16-month-olds communicated more frequently with gestures than vocally. However, the vast majority of all gestures at both ages were actually deictic, not representational. Moreover, the deictic gestures increased with age from 68% to 80% of all gestures, while representational gestures decreased correspondingly. Interestingly, the vast majority of deictic gestures at 16 and 20 months consisted of pointing alone (rising from about 60% to 80%, chance= 33%). Further, already at 16 months the total number of representational gestures was much smaller than the number of representational words (25% of the number of representational words). In addition, with regard to the types of representational gestures, of 14 nominal gestures observed at 16 months, half were done with an object in hands, thus not being clearly distinguishable from object-directed actions.
The study shows quantitatively that infants use representational gestures rather seldom compared to pointing and verbal communication. This suggests that representational gestures — in contrast to pointing — play only a small role in infants’ prelinguistic communication. Also, the number of representational gestures is much smaller than that of representational words from the outset, suggesting that infant representational communication is vocal rather than gestural throughout. Further, relative to deictic gestures, infants’ low-frequent representational gesturing even decreases during the transition to symbolic communication, which suggests that infants do not build their emerging symbolic verbal communication on skills of symbolic gestural communication. Instead, pointing alone is the most frequent gesture which increases even until the two-word stage relative to all other gestures (see also Lock et al., 1990). This suggests that it is actually pointing which leads infant communication throughout the prelinguistic period to the two-word stage, not representational gestures. To investigate the role of gestures in the transition from one to two-word utterances, further research has addressed gesture-word combinations in the second year (e.g., Capirci, Iverson, Pizzuto, & Volterra, 1996; Iverson & Goldin-Meadow, 2005). Gesture-word combinations in these studies are, for example, when infants point to a cup and say ‘drink’, or ‘cup’. The main findings in these studies are that the vast majority of gestures that co-occur with words are, in fact, pointing gestures only. Further, combinations in the gestural mode, for example, two representational gestures (gesture for cup + gesture for drink) or deictic and representational gestures (point to cup + gesture for drink) are virtually absent (see Pizzuto & Capobianco, 2005). These findings thus suggest that in the transition to symbolic (verbal) communication, infant representational gestures do not play a significant role. Instead, they are bypassed by pointing and first word utterances. The co-occurrences of point + word may in fact also be interpreted as single holistic utterances (albeit in two modalities) instead of true combinations of two different utterances, especially since a point itself does not carry semantic content independent of the communicative context. In a sense, this may suggest that also one-word utterances are a fragile form of verbal representational communication, which initially still relies heavily on pointing. The studies show that infants in their second year of life produce forms of representational gestures. Quantitatively, these gestures are small in number, they are low-frequent, and used much less than deictic gestures (in particular pointing) or vocal communication. Infants thus rarely use representational gestures for their prelinguistic communication. Further, there is no conclusive evidence that infants use these gestures symbolically, in particular not before language. Infants’ representational gestures may thus be reinterpreted as non-symbolic gestural social acts. Initially, infants use these gestures as a form of social activity and a way
of doing things together, mainly in play formats or routines. At this stage, infant representational gestures do not re-present anything other than the gestural activity itself. Instead, they rather present infants’ joint activity directly, their social acts done with gestures.
A differentiated perspective on infant gestures

Infant gestural communication provides important insights into the emergence and nature of human communication. It is a model for unique forms of human communication independent of linguistic codes. Findings show that infants communicate with gestures already before the acquisition of a first language, in ways that are already different from those of other species. If language is the tip of the iceberg of human communication, infant gestural communication is its base. Focusing on the base of infant gestures more specifically, findings suggest a differential picture. Deictic referential gestures (i.e., pointing) are foundational to human communication. Representational gestures are an emergent property of interactional processes in the transformation of gestural deictic toward verbal symbolic communication.

A new look at prelinguistic infants' pointing has revealed that infants point to communicate referentially — including reference to absent entities — in various and flexible ways, with cooperative motives and a social-cognitive understanding of others' epistemic states. Infant pointing thus bears core features of the infrastructure of human communication, already before language has emerged. In typically developing infants, pointing is foundational to language and mediates its acquisition. In children with autism, the absence of deictic pointing is source and symptom of their impaired communication (e.g., Baron-Cohen, 1989). Nonhuman primates, apes, who do not have language, also do not point for each other (Tomasello, 2006). Gestural deictic communication is thus primary in the emergence of human communication, predating language both ontogenetically and, perhaps, phylogenetically. It would be interesting to know from where pointing originates. Given that infants point socially from the beginning, its motivational background is presumably rooted in interpersonal contexts. It is possible that pointing is based on earlier interactive routines and play formats which involve objects and in which infants actively participate with gestures such as showing, and give-take exchanges.

The re-interpretation of infant representational gestures questions whether infants gesture symbolically before language, and whether they produce iconic gestures creatively from individualistic object-directed actions or, instead, acquire them socially from interaction. The findings suggest that infants rarely
communicate with such gestures, in particular when compared to pointing and words. Their usage is also still context-bound and not integrated into infants’ emerging combinations of communicative utterances, quite unlike infants’ usage of pointing. There is only little support that such gestures emerge as creatively produced pantomimes from individualistic object-directed action schemes. Cognitively, there is also little evidence that these gestures involve symbolic understanding, which emerges in related areas like pretense and scale-model games only around 2 years of age (DeLoache, 2004). Instead, these gestures may be reinterpreted as gestural social acts which emerge in interactive routines and game formats, mainly through observation and reproduction. They involve a bi-directionality in the sense that both infant and adult know how to react when being addressed, but infants presumably still use these gestures non-symbolically to initiate or maintain social interaction based on interactive routines. Gestural social acts build on infants’ earlier communication skills and social-cooperative motives. However, rather than being used as a symbolic vehicle to represent a referent in conventional or iconic ways, the interaction is about the gestural activity itself. Such types of nonsymbolic social gesturing may lead to the accumulation and extension of common grounds necessary for symbol acquisition. The social use suggests that gestural social acts originate in infants’ motivation for social participation and interaction. It is not entirely clear where representational gestures originate from and what role they play in the emergence of language. Phylogenetically, one possibility is that after pointing and before spoken language, there was a phase of creating iconic gestures from action schemes. Ontogenetically, however, this is not the case. Instead, infants acquire language before they creatively produce iconic gestures, and it is deictic, not representational gestures, that play a pivotal role in the acquisition of a first language. It would be interesting to know whether infants’ gestural social acts, like pointing, also mediate the acquisition of language or whether they have a more general social function, for example, in the emergence of conventionality and joint activities. It is an open question whether the absence of infant gestural social acts would hamper the transition from pointing to language in any specific way. Based on the available ontogenetic evidence, this paper proposes a differentiated perspective on infant gestures before language. Infant pointing is already a complex prelinguistic form of human cooperative referential communication. Infant representational gestures are still a form of non-symbolic gestural social acts which draw on earlier interaction skills and social-cooperative motives. This perspective emphasizes the interactional basis of language acquisition and symbol formation. Neither pointing nor representational gestures seem to simply emerge from individualistic object-directed action schemes. Instead, their emergence is presumably mediated by a primary motive for social contact and interaction. We need to know more about the origins of deictic gestures and about the role of
gestural social acts in the transition to language to better understand the nature and origins of human communication.
Acknowledgements

I thank Malinda Carpenter, Marianne Gullberg, Kees de Bot, Susan Schmidt, and two anonymous reviewers for helpful comments on an earlier draft.
References

Acredolo, Linda & Susan Goodwyn (1988). Symbolic gesturing in normal infants. Child Development, 59, 450–466.
Adamson, Laura & Janet Frick (2003). The still face: A history of a shared experimental paradigm. Infancy, 4 (4), 451–473.
Baron-Cohen, Simon (1989). Perceptual role taking and protodeclarative pointing in autism. British Journal of Developmental Psychology, 7 (2), 113–127.
Bates, Elizabeth (1979). The emergence of symbols: Cognition and communication in infancy. New York: Academic Press.
Bates, Elizabeth, Luigia Camaioni, & Virginia Volterra (1975). The acquisition of performatives prior to speech. Merrill-Palmer Quarterly, 21, 205–226.
Behne, Tanya, Ulf Liszkowski, Malinda Carpenter, & Michael Tomasello (submitted). Twelve-month-old infants comprehend the communicative intent behind others' pointing gestures.
Bowlby, John (1969). Attachment and loss. Vol. 1: Attachment. New York: Hogarth.
Boyd, Richard (2006). The puzzle of human sociality. Science, 314, 1553.
Bruner, Jerome (1983). Child's talk. New York: Norton.
Call, Josep & Michael Tomasello (Eds.) (2007). The gestural communication of apes and monkeys. New York: LEA.
Camaioni, Luigia, Paola Perucchini, Francesca Bellagamba, & Cristina Colonnesi (2004). The role of declarative pointing in developing a theory of mind. Infancy, 5 (3), 291–308.
Capirci, Olga, Annarita Contaldo, Maria Cristina Caselli, & Virginia Volterra (2005). From action to language through gesture: A longitudinal perspective. Gesture, 5 (1/2), 155–177.
Capirci, Olga, Jana Iverson, Elena Pizzuto, & Virginia Volterra (1996). Communicative gestures during the transition to two-word speech. Journal of Child Language, 23, 645–673.
Caselli, Maria Cristina (1990). Communicative gestures and first words. In Virginia Volterra & Carol J. Erting (Eds.), From gesture to language in hearing and deaf children (pp. 56–67). Berlin: Springer.
Clark, Herbert H. (1996). Using language. Cambridge: Cambridge University Press.
Clark, Herbert H. (2003). Pointing and placing. In Sotaro Kita (Ed.), Pointing: Where language, culture, and cognition meet (pp. 243–268). Mahwah, NJ: Lawrence Erlbaum.
Csibra, Gergely & Gyorgy Gergely (2006). Social learning and social cognition: The case for pedagogy. In Yuko Munakata & Mark H. Johnson (Eds.), Processes of change in brain and cognitive development. Attention and performance XXI (pp. 249–274). Oxford: Oxford University Press.
Dawkins, Richard & John Krebs (1978). Animal signals: Information or manipulation? In John Krebs & Nicolas Davies (Eds.), Behavioural ecology: An evolutionary approach (pp. 282–309). Oxford: Blackwell.
DeLoache, Judy (2004). Becoming symbol-minded. Trends in Cognitive Sciences, 8, 66–70.
Desrochers, Stephan, Paul Morissette, & Marcelle Ricard (1995). Two perspectives on pointing in infancy. In Chris Moore & Philip J. Dunham (Eds.), Joint attention: Its origins and role in development (pp. 85–101). Hillsdale, NJ: Lawrence Erlbaum.
Franco, Fabia & George Butterworth (1996). Pointing and social awareness: Declaring and requesting in the second year. Journal of Child Language, 23 (2), 307–336.
Goldin-Meadow, Susan (2003). The resilience of language: What gesture creation in deaf children can tell us about how all children learn language. New York: Psychology Press.
Gomez, Juan C., Encarnacion Sarria, & Javier Tamarit (1993). The comparative study of early communication and theories of mind: Ontogeny, phylogeny, and pathology. In Simon Baron-Cohen, Helen Tager-Flusberg, et al. (Eds.), Understanding other minds: Perspectives from autism (pp. 397–426). New York: Oxford University Press.
Goodwyn, Susan & Laura Acredolo (1993). Symbolic gesture versus word: Is there a modality advantage for onset of symbol use? Child Development, 64, 688–701.
Grice, Paul (1957). Meaning. The Philosophical Review, 64, 377–388.
Harris, Paul & Robert Kavanaugh (1993). Young children's understanding of pretense. Monographs of the Society for Research in Child Development, 58 (1) [231], v–92.
Iverson, Jana, Olga Capirci, & Maria Caselli (1994). From communication to language in two modalities. Cognitive Development, 9, 23–43.
Iverson, Jana & Susan Goldin-Meadow (2005). Gesture paves the way for language development. Psychological Science, 16 (5), 367–371.
Liszkowski, Ulf, Malinda Carpenter, Anne Henning, Tricia Striano, & Michael Tomasello (2004). Twelve-month-olds point to share attention and interest. Developmental Science, 7 (3), 297–307.
Liszkowski, Ulf, Malinda Carpenter, Tricia Striano, & Michael Tomasello (2006). Twelve- and 18-month-olds point to provide information for others. Journal of Cognition and Development, 7 (2), 173–187.
Liszkowski, Ulf, Malinda Carpenter, & Michael Tomasello (2007a). Reference and attitude in infant pointing. Journal of Child Language, 34 (1), 1–20.
Liszkowski, Ulf, Malinda Carpenter, & Michael Tomasello (2007b). Pointing out new news, old news, and absent referents at 12 months of age. Developmental Science, 10 (2), F1–F7.
Liszkowski, Ulf, Konstanze Albrecht, Malinda Carpenter, & Michael Tomasello (2008). Twelve- and 18-month-olds' visual and auditory communication when a partner is or is not visually attending. Infant Behavior and Development, 31 (2), 157–167.
Liszkowski, Ulf, Malinda Carpenter, & Michael Tomasello (2008). Twelve-month-olds communicate helpfully, and appropriately for knowledgeable and ignorant partners. Cognition, 108 (3), 732–739.
Liszkowski, Ulf, Marie Schäfer, Malinda Carpenter, & Michael Tomasello (2009). Prelinguistic infants, but not chimpanzees, communicate about absent entities. Psychological Science, 20, 654–660.
Lock, Andrew, Andrew Young, Valerie Service, & Paul Chandler (1990). Some observations on the origins of the pointing gesture. In Virginia Volterra & Carol J. Erting (Eds.), From gesture to language in hearing and deaf children (pp. 42–55). Berlin: Springer.
Moore, Chris & Barbara D'Entremont (2001). Developmental changes in pointing as a function of attentional focus. Journal of Cognition and Development, 2, 109–129.
Namy, Laura, Aimee Campbell, & Michael Tomasello (2004). The changing role of iconicity in non-verbal symbol learning: A U-shaped trajectory in the acquisition of arbitrary gestures. Journal of Cognition and Development, 5 (1), 37–57.
Pizzuto, Elena & Micaela Capobianco (2005). The link and differences between deixis and symbols in children's early gestural-vocal system. Gesture, 5 (1/2), 179–199.
Rakoczy, Hannes, Michael Tomasello, & Tricia Striano (2004). Young children know that trying is not pretending — a test of the "behaving-as-if" construal of children's early concept of "pretense". Developmental Psychology, 40 (3), 388–399.
Senghas, Ann, Sotaro Kita, & Asli Özyürek (2004). Children creating core properties of language: Evidence from an emerging sign language in Nicaragua. Science, 305 (5691), 1779–1782.
Sperber, Dan & Deirdre Wilson (1986/1995). Relevance: Communication and cognition. Oxford: Blackwell.
Tomasello, Michael (2006). Why don't apes point? In Nick Enfield & Steve Levinson (Eds.), The roots of human sociality: Culture, cognition, and interaction. Oxford: Berg.
Tomasello, Michael, Malinda Carpenter, & Ulf Liszkowski (2007). A new look at infant pointing. Child Development, 78, 705–722.
Werner, Heinz & Bernhard Kaplan (1963). Symbol formation: An organismic-developmental approach to language and the expression of thought. New York: Wiley.
The relationship between spontaneous gesture production and spoken lexical ability in children with Down syndrome in a naming task

Silvia Stefanini (a), Martina Recchia (b, c), and Maria Cristina Caselli (b)

(a) Department of Neurosciences, University of Parma, Italy / (b) Institute of Cognitive Sciences and Technologies, National Research Council, Italy / (c) Department of Development and Socialization Processes, University of Rome "La Sapienza", Italy
We examined the relationship between spontaneous gesture production and spoken lexical ability in children with Down syndrome (DS) in a naming task. Fifteen children with DS (3;8–8;3 years) were compared to 15 typically developing (TD) children matched for developmental age (DATD) (2;6–4;3 years of chronological age) and 15 matched for lexical ability identified by the MacArthur-Bates Communicative Development Inventory questionnaire (LATD) (1;9–2;6 years of chronological age). Children of the DATD group displayed a larger number of correct spoken answers compared to the other groups, while DS and LATD groups showed a similar naming accuracy. In comparison to both groups of TD children, a higher number of unintelligible answers was produced by children with DS, indicating that their spoken language is characterized by serious phono-articulatory difficulties. Although children with DS did not differ from DATD and LATD controls on the total number of gestures, they produced a significantly higher percentage of representational gestures. Furthermore, DATD children produced more spoken answers without gestures, LATD children produced more bimodal answers, while children with DS gestured more without speech. Results suggest that representational gestures may serve to express meanings when children's cognitive abilities outstrip their productive spoken language skills.

Keywords: children with Down syndrome, gestures, lexical abilities
Gesture and spoken language are closely linked in young typically developing (TD) children (Bates & Dick, 2002). At the end of the first year of life, the emergence
of first words is preceded and accompanied by deictic gestures, used to draw attention to objects, locations or events (Bates, Benigni, Bretherton, Camaioni, & Volterra, 1979; Capone & McGregor, 2004; Volterra, Caselli, Capirci, & Pizzuto, 2005). These gestures are ritualized requests, showing, giving and pointing. Their referents can only be identified in the physical context in which communication takes place (e.g., reaching for an object, opening and closing the palm, looking alternatively at the adult). Many authors have argued that deictic gestures, and specifically pointing, are used not only to communicate but also to influence the mental states of others. Pointing constitutes a pathway through which communication and language develop (Goldin-Meadow, 2007; Tomasello, Carpenter, & Liszkowski, 2007). At approximately 12 months of age, representational gestures emerge (Caselli, 1990; Goodwyn & Acredolo, 1993). These gestures (also defined as symbolic, characterizing, iconic and referential) differ from deictic gestures in that they denote a precise referent and their basic semantic content remains relatively stable across different situations (e.g., bringing a fist to the ear for telephone). Several studies have investigated the links between early language development and specific aspects of communicative and symbolic gestures, in an attempt to confirm Piaget’s ideas about the shared sensorimotor origins of linguistic and non-linguistic symbols (Piaget, 1945; Werner & Kaplan, 1963). This research has highlighted the strong relationships between gestural communication, specific language milestones and specific cognitive events, including the production of actions associated with specific objects and symbolic (pretend) play (Bates & Dick, 2002). Both deictic and representational gestures originate in action: initially children produce communicative gestures by touching or manipulating objects, e.g., children show and give an object before pointing, or bring a glass to the mouth for drinking before producing an empty-handed gesture (Capirci, Contaldo, Caselli, & Volterra, 2005). Moreover, both types of gestures appear to undergo a similar process of decontextualization: initially gestures are used to identify and recognize objects and events and are progressively employed outside of the specific context in which they have been learned (Goodwyn & Acredolo, 1993). According to Goodwyn and Acredolo (1993), “symbolic status” may be ascribed to a gesture when referring to multiple exemplars, when produced referring to pictures in the absence of the original exemplar, and when the object itself is not involved. This evidence supports the idea that these gestures are an early form of categorizing or naming and that gesture and spoken words tap the same cognitive and linguistic skills (Bates & Dick, 2002). Gestures not only precede and accompany early language development, but also predict progress in verbal language abilities (Capirci, Iverson, Pizzuto, & Volterra, 1996; Iverson & Goldin-Meadow, 2005). For instance, the onset of pointing is
a reliable predictor of the appearance of first words (Bates et al., 1979). Later, the production of gesture-word combinations that convey two distinct pieces of information predicts the emergence of two-word speech (Butcher & Goldin-Meadow, 2000; Camaioni, Caselli, Longobardi, & Volterra, 1991; Pizzuto & Capobianco, 2005). Moreover, some authors have reported that semantic relations conveyed in gesture-speech combinations are some of the first observed in two-word combinations (Capirci et al., 1996; Özcaliskan & Goldin-Meadow, 2005). When spoken language ability increases, gestures continue to be produced and become integrated with it (Jancovic, Devoe, & Wiener, 1975; Nicoladis, Mayberry, & Genesee, 1999). Studies aimed at exploring the development of the gesture-speech system beyond toddlerhood have demonstrated that preschool and school-age children produce gestures in combination with speech across contexts and tasks, i.e., in conversation and narratives and during explanations of the concepts/problems' solutions (Alibali, Kita, & Young, 2000; Colletta, 2004; Guidetti, 2002; Pine, Lufkin, Kirk, & Messer, 2007). In addition, Capone (2007) showed that toddlers produce more gestures in isolation when adult input is gesture rich and/or when the task is conducive to gesturing. These gestures may help children to display ideas that they cannot express in the spoken modality, conveying a substantial proportion of their knowledge.

Although many authors describe the development of the gesture-language system in TD children, relatively little is known about children with developmental disorders involving impaired linguistic abilities. These studies highlight that when children are limited in cognitive, linguistic, metalinguistic, and articulatory skills, they may use representational gestures more frequently to express meanings (Bello, Capirci, & Volterra, 2004; Capone & McGregor, 2004; Thal & Tobias, 1992). Some interesting results may come from research on children with Down syndrome (DS) due to a lack of developmental homogeneity between cognitive and linguistic abilities, the latter being more impaired (Chapman & Hesketh, 2000; Vicari, Albertini, & Caltagirone, 1992). Furthermore, several studies have found a specific asynchrony between language domains, i.e., verbal comprehension is better preserved than spoken production and the lexicon is less impaired than grammar, although the two domains are not dissociated (Vicari, Caselli, & Tonucci, 2000). Spontaneous speech is often less intelligible compared to controls (Abbeduto & Murphy, 2004).

Very few studies have investigated the use of gestures in DS children. Caselli and colleagues (Caselli, Vicari, Longobardi, Lami, Pizzoli, & Stella, 1998) administered the Italian version of the MacArthur-Bates Communicative Development Inventory (MBCDI) (Caselli & Casadio, 1995) to parents of 40 Italian children with DS having a mean age of 28 months. Comparing children's scores on the Actions and Gestures section of the questionnaire to those of a group of TD children
from the normative sample matched on the basis of comprehension vocabulary size, the authors reported that children with DS had significantly larger action/ gesture repertoires. In particular, they produced a greater percentage of representational gestures such as gone or good . In a second study, Iverson, Longobardi, and Caselli (2003) analyzed the frequency with which children produced gestures in mother-child spontaneous interactions. Five children with DS (mental age of around 22 months and language age of around 18 months) were matched to five TD children for sex, language age, and observed expressive vocabulary size. Relative to a matched TD group, children with DS displayed a significantly smaller repertoire of representational gestures, but produced them with similar frequency; they also exhibited fewer gesture and word combinations than TD children and did not produce any two-word speech combinations. The authors concluded that the relationship between gesture and language in children with DS is very similar to what has been observed in TD children with comparable language production abilities, but children with DS may have a specific delay in making the transition from one- to two-word speech. Stefanini, Caselli, and Volterra (2007) recently investigated the relationship between gestures and words in a more structured task. A picture-naming task was administered to a sample of children with DS (mean chronological age 6 years; mean mental age 4 years) and to two groups of TD children, one matched for developmental age and one for chronological age. The main finding was that children with DS gave fewer correct spoken answers compared to the TD groups. They gestured and produced bimodal and unimodal gestural responses more often than the chronological age-matched controls, who produced significantly more unimodal spoken responses. Further analysis showed that children with DS produced more representational gestures than both TD groups. These gestures were semantically related to the meaning represented in the pictures, thus children with DS could convey the correct information in their gestures even if they could not do so in speech. The results of Stefanini et al. (2007) seem to suggest that cognition and spoken language in children with DS may be partially disparate. When asked to name pictures, their lexical spoken competence appeared to be more impaired than would be expected on the basis of their cognitive development, but they used more representational gestures than their developmental age controls. The poor lexical repertoire of children with DS may be due to difficulties in phonological processes: many studies have shown that the increase of vocabulary in TD children is dependent on phono-articulatory skills (Gathercole, Willis, Emslie, & Baddeley, 1992). Children with DS are particularly interesting because of their specific difficulties with phonological processes, which allow us to study the relationship
between speech and the mental representations that might become visible through representational gestures. What is still unclear is whether this wide use of representational gestures characterizes an early stage of lexical acquisition in both DS and TD children. This current study represents a follow-up to the study by Stefanini et al. (2007). We compare a sample of children with DS with two groups of TD children: one matched for developmental age and one matched for lexical ability (measured on vocabulary size). If children with DS and their lexical age-matched controls show a similar use of representational gestures, we can hypothesize a strict link between representational gestures and the spoken lexicon. By contrast, if children with DS use more representational gestures than their lexical age-matched controls, we can conclude that the use of this type of gesture is more related to non-verbal cognition, i.e., that children with DS exploit representational gestures in order to express semantic knowledge and compensate for their limited speech abilities. Our prediction is that this second hypothesis will be confirmed by analyzing the behavior of children with DS compared to TD children matched for lexical age.
Method

Participants

Fifteen children with DS (7 females; 8 males) and thirty typically developing children (14 females; 16 males) participated in this study. The age range of participants with DS was 3;8 to 8;3 (M 6;1, SD 1;3) and their mental age range was from 2;6 to 4;3 (M 3;10, SD 0;7). Clinical psychologists assessed the mental age of children with DS with the Leiter International Performance Scale (LIPS; Leiter, 1979) or the Italian version of the L-M form of Stanford-Binet Intelligence Scale (Bozzo & Mansueto Zecca, 1993). Children exposed to other languages, children with recurrent serious auditory impairment, and children with epilepsy and psychopathological disorders were excluded from this study. The thirty TD children were individually matched to children with DS, resulting in two different control groups. The first group included 15 children between the ages of 2;6 and 4;4 (M 3;7, SD 0;7); each child in this group was individually matched to a child of the same sex in the DS group whose mental age corresponded to the TD child's chronological age. This group represented the "Developmental Age" control group (DATD). Preliminary analyses confirmed that the chronological age of this first group did not differ from the mental age of the DS group, t(28) = .12, p = .9. The second group, including 15 children between the ages of 1;9
and 2;6 (M 2;2, SD 0;2), represented the “Lexical Ability” control group (LATD): each child in this group was individually matched to a child of the DS group for sex and vocabulary size (number of words produced), calculated with the parent questionnaire “Il Primo Vocabolario del Bambino” (PVB; Caselli & Casadio, 1995), the Italian version of the MBCDI (Fenson et al., 1994). We used the “Words and Sentences” Short Form, which allows collection of data on lexical production and first grammar abilities (Caselli, Pasqualetti, & Stefanini, 2007). The user’s manual indicates that it is possible to administer the questionnaire to parents until the child reaches the score corresponding to the 50th percentile of TD 3 year-old children. Only the vocabulary repertoire out of the 100 words included in the questionnaire was calculated. On the basis of this measure the level of lexical development was established. Preliminary analyses showed that the two groups did not differ in vocabulary size, t(28) = 1.06, p = .3 (raw scores on the questionnaire: DS group: M 74.4, SD 17.9; LATD group: M 66.8, SD 21.1), confirming that the matching was appropriate.
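Purely as an illustration of the kind of matching check reported above (the analyses were run with the authors' own statistical software, and the age values below are invented placeholders rather than the study's data), an independent-samples t-test of this sort can be sketched as follows:

```python
# Hypothetical matching check: compare DS mental ages (months) with DATD
# chronological ages (months) using an independent-samples t-test.
# The two lists are invented placeholders, not the data reported in the paper.
from scipy import stats

ds_mental_age = [46, 50, 44, 52, 47, 45, 51, 48, 43, 49, 46, 50, 44, 47, 48]
datd_chron_age = [45, 51, 43, 52, 46, 44, 50, 49, 42, 48, 47, 51, 43, 46, 49]

t, p = stats.ttest_ind(ds_mental_age, datd_chron_age)
df = len(ds_mental_age) + len(datd_chron_age) - 2   # 15 + 15 - 2 = 28
print(f"t({df}) = {t:.2f}, p = {p:.2f}")
```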
Materials and procedure: Picture Naming Task (PNT)

The Picture Naming Task (PNT) was designed for very young children between the ages of 2 and 3 years. Lexical items were selected from the normative data of the PVB questionnaire on the basis of item frequency data. The standardization of the PNT with an Italian population was recently completed (Bello, Caselli, Pettenati & Stefanini, 2010). The version of the task employed consists of 77 colored pictures divided into two sets: a set of 44 pictures representing objects/tools (e.g., a comb) and a set of 33 pictures representing actions (e.g., eating) and characteristics (e.g., small). Children were assessed in a familiar setting (rehabilitation center, home or school). The two sets of pictures were presented separately in random order, but the order of picture presentation within each set was fixed. After a brief period of familiarization, the experimenter placed the pictures in front of the child one at a time. For pictures of body parts, animals, objects/tools, food, and clothing, the child was asked: "Che cos'è questo?" (What is this?). For pictures of actions, children were asked "Cosa sta facendo il bambino?" (What is the child doing?); and for pictures of characteristics, "Com'è/dov'è questo?" (How/where is this?). Two practice trials were given for each subtest. For the elicitation of characteristics, two pictures were put in front of the child: one representing the expected characteristic (e.g., a small ball) and another representing the opposite characteristic (e.g., a big ball). If the child did not provide the expected label as a first answer, the experimenter offered help to the child by saying: "This one is big (pointing to the picture of the big ball), and what is this one like?" (pointing to the picture of
the small ball). Occasionally the experimenter also pointed to the picture in order to help the child maintain focus, but otherwise avoided producing any other kind of gesture. The test was administered in two sessions, one dedicated to objects/tools, the other to actions/characteristics. The mean duration of the task was 25 minutes for the DS and LATD groups and 16 minutes for the DATD group. All sessions were videotaped for later transcription.
Coding

We transcribed the communicative exchanges between child and experimenter from the time a picture was placed in front of the child to when the picture was removed. During these exchanges, children could, in principle, produce multiple spoken utterances and multiple gestures. We examined children's responses in terms of modality of expression, accuracy of the spoken answer and types of gestures produced.
Modality of expression

All children's responses produced during communicative exchanges were tallied and classified into one of three categories on the basis of modality: (a) unimodal spoken productions included responses produced only in the spoken modality; (b) bimodal productions included all responses in which the child used both verbal and gestural modalities; (c) unimodal gestural productions included the responses produced only in the gestural modality.

Spoken responses

Answers in the naming task were classified as correct, incorrect, or no-response. An answer was coded as correct when the child provided the expected label for the picture. For some pictures, more than one answer was accepted as correct (e.g., "bag" may be called "sacchetto" or "busta" in Italian). Incorrect answers included words different from the target items the pictures were meant to elicit. We classified as incorrect: semantic errors (such as circumlocutions, use of general terms and semantic replacements), visual errors, off-target responses and unintelligible answers; for more details see Stefanini et al., 2007. This category also included unintelligible productions (e.g., "enno" instead of "telefono" for the picture of a telephone). Many phonologically-altered productions were found, especially in the DS and LATD groups. These were classified as correct answers (e.g., "lelefono" for "telefono" for the picture of a telephone) or incorrect answers (e.g., "olologio" (clock) for the picture of a telephone, intended to elicit the Italian word "telefono").
No-responses were coded when children either stated that they did not know the word corresponding to a picture or did not provide a spoken answer. When children gave an incorrect answer or a no-response on their first attempt, they were given a second chance to provide the correct answer, adopting a “best answer” criterion. If neither answer was correct, the first answer was tallied.
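As a minimal sketch of the "best answer" criterion just described (not the authors' actual coding procedure; the function name and the Italian example strings are hypothetical, and the coders' judgment of what counts as a correct label is abstracted into a simple set lookup):

```python
from typing import Optional, Set

def tallied_answer(first: str, second: Optional[str], targets: Set[str]) -> str:
    """Return the spoken answer tallied for one picture.

    A second attempt counts only if the first was not correct; if neither
    attempt is correct, the first answer is the one that is tallied.
    In the actual study, "correct" also covered intelligible phonologically
    altered forms, a judgment made by the coders rather than by exact match.
    """
    if first in targets:
        return first
    if second is not None and second in targets:
        return second
    return first

# Hypothetical examples for the picture of a telephone (target "telefono")
print(tallied_answer("enno", "telefono", {"telefono"}))   # second attempt correct -> "telefono"
print(tallied_answer("olologio", "enno", {"telefono"}))   # neither correct -> "olologio"
```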
Gestural production

All gestures produced by the children as they interacted with the experimenter were transcribed. These included gestures produced with and without speech, and those occurring both before and after the accepted spoken answer. This study was primarily limited to manual gestures and movements of the head, although occasional reference will be made to other kinds of non-manual gestures (e.g., posture, body movements, facial expressions). For the isolation and classification of gestures we referred partially to recent work conducted with young preschool children (Bello et al., 2004; Butcher & Goldin-Meadow, 2000; Stefanini et al., 2007). Our work differs from that of Butcher and Goldin-Meadow (2000) in that we did not require eye contact between the child and the observer. Given the specific nature of the task (asking children to name pictures), all of the children's productions were considered to be communicative. Each gesture was classified into one of the following three categories.

Deictic gestures included pointing, showing and giving. Most of the deictic gestures produced were pointing gestures either directed at or touching or patting the target picture. Instances of pointing with other fingers than the index or with the palm extended were included in this category. For the analyses we only took spontaneous pointing gestures into account, excluding cases in which the children's pointing gestures could have been elicited by the adult, i.e., when children produced a pointing gesture immediately following the same gesture performed by the experimenter. Showing was defined as an arm extension while holding an object (often the picture) in the hand. In the case of giving, the object (i.e., the picture) was transferred to another person.

Representational gestures are pictographic representations of the target picture's meaning (or meanings associated with the object or event represented in the picture). This category includes action gestures and size-shape gestures. Action gestures depict the action typically performed with the object, by an object, or by a character (e.g., picture of a comb: the child moves his fingers near his head as if combing his hair). Size-shape gestures depict the size, shape or other perceptual characteristics of an object or an event (e.g., picture of a table: the child opens his arms with the palms up, saying "big").
The category of other gestures included gestures that could not be classified as deictic or representational. This category included conventional interactive gestures (e.g., shaking the head for "no"); beat gestures (e.g., the hand moving in time with the rhythmic pulsation of speech, or in the air while pronouncing a particular word); and Butterworths gestures (e.g., supporting the head with the hand in the action of thinking).
Reliability

Reliability between two independent coders (the first and second authors) was assessed for modality of expression as well as for all spoken and gestural productions. Agreement between coders was 95.3% for response type (unimodal spoken, bimodal, unimodal gestural), 95.3% for accuracy of spoken answers (correct, incorrect, no-responses), and 90.5% for gesture type (deictic, representational, other). The instances of disagreement were identified and a third coder was requested to code the answers, choosing one of the two classifications proposed by the first two coders.
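The agreement figures above are simple percentage agreement; as an illustrative sketch only (whether the authors computed it in exactly this way is an assumption, and the category labels below are hypothetical), such a figure can be obtained as the share of responses assigned the same code by both coders:

```python
def percent_agreement(coder_a, coder_b):
    """Percentage of items given the same code by two coders."""
    assert len(coder_a) == len(coder_b), "both coders must rate the same items"
    same = sum(a == b for a, b in zip(coder_a, coder_b))
    return 100.0 * same / len(coder_a)

# Hypothetical gesture-type codes for five responses
coder_a = ["deictic", "representational", "other", "deictic", "representational"]
coder_b = ["deictic", "representational", "other", "deictic", "deictic"]
print(f"{percent_agreement(coder_a, coder_b):.1f}%")   # -> 80.0%
```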
Statistical Analysis

Differences between DS, DATD and LATD groups (between-subjects factor) in the task will be explored with respect to the following within-subjects factors: modality of expression (verbal, gestural or bimodal), naming accuracy (number of correct answers), phonological accuracy (proportion of intelligible phonologically altered answers and proportion of unintelligible answers) and number and type of gestures (proportion of deictic and representational) produced. The program used for statistical analysis was STATISTICA 6.1 and an alpha level of 0.05 was used to reject the null hypothesis. The primary statistical analyses were based on ANOVA models and the Duncan test was used for post-hoc analysis.
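The analyses were run in STATISTICA 6.1 and are not reproduced here; purely as a sketch of how a mixed design of this kind could be specified in open-source software, the snippet below uses the pingouin package with an invented long-format data frame and random placeholder counts, and substitutes Bonferroni-corrected pairwise comparisons for Duncan's post-hoc test, which pingouin does not provide:

```python
import numpy as np
import pandas as pd
import pingouin as pg

# Hypothetical long-format data: one row per child per modality of expression.
# The response counts are random placeholders, not the study's data.
rng = np.random.default_rng(0)
rows = []
for group in ["DS", "LATD", "DATD"]:
    for i in range(15):
        child = f"{group}_{i}"
        for modality in ["spoken", "bimodal", "gestural"]:
            rows.append({"child": child, "group": group, "modality": modality,
                         "n_responses": int(rng.integers(5, 80))})
df = pd.DataFrame(rows)

# Group (between-subjects) x Modality of Expression (within-subjects) mixed ANOVA
aov = pg.mixed_anova(data=df, dv="n_responses", within="modality",
                     subject="child", between="group")
print(aov.round(3))

# Pairwise follow-up comparisons (Bonferroni-corrected), standing in for Duncan's test
posthoc = pg.pairwise_tests(data=df, dv="n_responses", within="modality",
                            subject="child", between="group", padjust="bonf")
print(posthoc.round(3))
```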
Results

Modality of expression

Total responses. Many children produced multiple spoken utterances and multiple gestures for each item during the communicative exchange. Thus, the total number of responses produced (correct and incorrect) was calculated (DS: M 94.7, SD 12.9; LATD: M 92.9, SD 10.4; DATD: M 91.9, SD 8.2).

Figure 1. Mean numbers and standard deviations of modality of expression (Unimodal Spoken, Bimodal speech+gesture, Unimodal Gestural) exhibited by the three groups of children (DS: children with Down syndrome; LATD: typically developing children matched for lexical ability; DATD: typically developing children matched for developmental age).

An analysis of variance (ANOVA) with Group as a between-subjects factor (DS, LATD, DATD) and Total Responses
as the dependent variable showed that this difference was not statistically significant (F(2,42) = .25, p > .05).

Types of responses. We classified all of the children's productions according to modality of expression: unimodal spoken, bimodal (i.e., speech + gestural), or unimodal gestural. The mean numbers of each modality for the three groups are presented in Figure 1. A repeated measures ANOVA with Group (DS, LATD, DATD) as the between-subjects factor and Modality of Expression Type (unimodal spoken; bimodal; unimodal gestural) as the within-subjects factor was conducted. The difference between Groups was not significant (F(2,42) = .52, p > .05), but a main effect of Modality of Expression Type (F(2,84) = 71.28, p