THE PSYCHOLOGY OF LEARNING AND MOTIVATION
Advances in Research and Theory

VOLUME FIFTY-TWO

EDITED BY
BRIAN H. ROSS
Beckman Institute and Department of Psychology
University of Illinois at Urbana-Champaign
Urbana, Illinois
AMSTERDAM • BOSTON • HEIDELBERG • LONDON • NEW YORK • OXFORD • PARIS • SAN DIEGO • SAN FRANCISCO • SINGAPORE • SYDNEY • TOKYO

Academic Press is an imprint of Elsevier
525 B Street, Suite 1900, San Diego, CA 92101-4495, USA
30 Corporate Drive, Suite 400, Burlington, MA 01803, USA
32 Jamestown Road, London, NW1 7BY, UK
Radarweg 29, PO Box 211, 1000 AE Amsterdam, The Netherlands
Copyright © 2010, Elsevier Inc. All rights reserved.

No part of this publication may be reproduced, stored in a retrieval system or transmitted in any form or by any means electronic, mechanical, photocopying, recording or otherwise without the prior written permission of the publisher.

Permissions may be sought directly from Elsevier's Science & Technology Rights Department in Oxford, UK: phone (+44) (0) 1865 843830; fax (+44) (0) 1865 853333; email: [email protected]. Alternatively you can submit your request online by visiting the Elsevier web site at http://elsevier.com/locate/permissions, and selecting "Obtaining permission to use Elsevier material."

Notice: No responsibility is assumed by the publisher for any injury and/or damage to persons or property as a matter of products liability, negligence or otherwise, or from any use or operation of any methods, products, instructions or ideas contained in the material herein. Because of rapid advances in the medical sciences, in particular, independent verification of diagnoses and drug dosages should be made.

ISBN: 978-0-12-380908-7
ISSN: 0079-7421
For information on all Academic Press publications visit our website at elsevierdirect.com
Printed and bound in USA 10 11 12 13 10 9 8 7 6 5 4 3 2 1
CONTENTS

Contributors

1. Naming Artifacts: Patterns and Processes
   Barbara C. Malt
   1. Introduction
   2. How Are Artifact Nouns Extended?
   3. Interpretation Issues
   4. Implications of Artifact Naming Patterns for Other Aspects of Human Cognition
   5. Summary and Conclusion
   Acknowledgments
   References

2. Causal-Based Categorization: A Review
   Bob Rehder
   1. Introduction
   2. Assessing Causal-Based Classification Effects
   3. Computational Models
   4. The Causal Status Effect
   5. Relational Centrality and Multiple-Cause Effects
   6. The Coherence Effect
   7. Classification as Explicit Causal Reasoning
   8. Developmental Studies
   9. Summary and Future Directions
   References

3. The Influence of Verbal and Nonverbal Processing on Category Learning
   John Paul Minda and Sarah J. Miles
   1. Introduction
   2. Multiple Processes and Systems
   3. A Theory of Verbal and Nonverbal Category Learning
   4. Experimental Tests of the Theory
   5. Relationship to Other Theories
   6. Conclusions
   References

4. The Many Roads to Prominence: Understanding Emphasis in Conversation
   Duane G. Watson
   1. Introduction
   2. Continuous Representations of Prominence
   3. Acoustic Correlates of Prominence in Production
   4. Acoustic Correlates of Prominence in Comprehension
   5. Conclusions
   References

5. Defining and Investigating Automaticity in Reading Comprehension
   Katherine A. Rawson
   1. Introduction
   2. Defining Automaticity in Reading Comprehension
   3. Investigating Automaticity in Reading Comprehension
   4. Investigating Automaticity in Reading Comprehension: Outstanding Issues
   5. Redefining Automaticity in Reading Comprehension
   Acknowledgments
   References

6. Rethinking Scene Perception: A Multisource Model
   Helene Intraub
   1. Introduction
   2. Scene Perception as an Act of Spatial Cognition
   3. Multisource Scene Representation: Behavioral and Neuroimaging Picture Studies
   4. Multisource Scene Representation: Exploring Peripersonal Space
   5. Summary and Conclusions
   Acknowledgment
   References

7. Components of Spatial Intelligence
   Mary Hegarty
   1. Introduction
   2. Identifying Components of Spatial Thinking
   3. Two Components of Spatial Intelligence
   4. Flexible Strategy Choice as a Component of Spatial Intelligence
   5. Representational Metacompetence as a Component of Spatial Intelligence
   6. Concluding Remarks
   Acknowledgments
   References

8. Toward an Integrative Theory of Hypothesis Generation, Probability Judgment, and Hypothesis Testing
   Michael Dougherty, Rick Thomas, and Nicholas Lange
   1. Introduction
   2. A Computational Level of Analysis
   3. An Algorithmic Level of Analysis
   4. Discussion
   References

9. The Self-Organization of Cognitive Structure
   James A. Dixon, Damian G. Stephen, Rebecca Boncoddo, and Jason Anastas
   1. Introduction
   2. The Emergence of New Structure During Problem Solving
   3. Explaining the Relationship Between Action and New Cognitive Structure
   4. A Physical Approach to Cognition
   5. Gear-System Reprise: The Self-Organization of Alternation
   6. Dynamics of Induction in Card Sorting
   7. General Discussion
   References

Subject Index
Content of Recent Volumes
CONTRIBUTORS
Jason Anastas, Department of Psychology, University of Connecticut, Storrs, CT, USA
Rebecca Boncoddo, Department of Psychology, University of Connecticut, Storrs, CT, USA
James A. Dixon, Department of Psychology, University of Connecticut, Storrs, CT, USA
Michael Dougherty, Department of Psychology, University of Maryland, College Park, MD, USA
Mary Hegarty, Department of Psychology, University of California, Santa Barbara, CA, USA
Helene Intraub, Department of Psychology, University of Delaware, Newark, DE, USA
Nicholas Lange, Department of Psychology, University of Oklahoma, Norman, OK, USA
Barbara C. Malt, Department of Psychology, Lehigh University, Bethlehem, PA, USA
Sarah J. Miles, Department of Psychology, The University of Western Ontario, London, Ontario, Canada
John Paul Minda, Department of Psychology, The University of Western Ontario, London, Ontario, Canada
Katherine A. Rawson, Department of Psychology, Kent State University, Kent, OH, USA
Bob Rehder, Department of Psychology, New York University, New York, NY, USA
Damian G. Stephen, Department of Psychology, University of Connecticut, Storrs, CT, USA
Rick Thomas, Department of Psychology, University of Oklahoma, Norman, OK, USA
Duane G. Watson, Department of Psychology, University of Illinois, Champaign, IL, USA
CHAPTER ONE

Naming Artifacts: Patterns and Processes

Barbara C. Malt

Contents
1. Introduction
2. How Are Artifact Nouns Extended?
   2.1. What Are the Observed Instances and Patterns of Use of Artifact Names?
   2.2. How Can the Observed Instances and Patterns of Use be Accounted For?
3. Interpretation Issues
   3.1. Does Function Fully Constrain Some Artifact Names?
   3.2. Is the Apparent Complexity of Artifact Name Use Only Because of Polysemy?
   3.3. Is the Apparent Complexity of Artifact Name Use Only Because of Compounding or Conceptual Combination?
   3.4. Conclusion Regarding Interpretation Issues
4. Implications of Artifact Naming Patterns for Other Aspects of Human Cognition
   4.1. Implications for Views of Artifact Categorization
   4.2. Implications for Word Meanings Across Languages
   4.3. Implications for Developmental Trajectory
   4.4. Implications for Bilingualism
   4.5. Implications for the Whorfian Hypothesis
5. Summary and Conclusion
   5.1. Summary
   5.2. Conclusion: Not Amazing, Yet Still Amazing
Acknowledgments
References
Abstract

Nouns such as knife, fork, box, and bench that name artifacts (human-made objects) are applied to diverse sets of objects that cannot be fully predicted by any single type of dimension. Explaining the complexity of artifact naming
Psychology of Learning and Motivation, Volume 52
ISSN 0079-7421, DOI: 10.1016/S0079-7421(10)52001-2
© 2010 Elsevier Inc. All rights reserved.
patterns requires considering both how name extensions evolve over time and how the goal-driven nature of communication contributes to labeling choices by speakers. Because of these influences, an account of artifact naming will differ from an account of how people conceptualize the objects nonlinguistically. The complexity of naming patterns is not readily explained away by trying to limit the range of exemplars that should count in the analysis of a given name, because principled bases for limitations are lacking. The social nature of communication mitigates this complexity in language use: Interactions between speakers and addressees help ensure that artifact nouns in discourse are interpreted as intended despite the wide range of objects each can encompass. However, the complexity is further manifested in substantial variability in naming patterns for the same sets of objects across different languages. This cross-linguistic variability poses special challenges for the child language learner, learners of multiple languages, and researchers interested in understanding how language may influence thought.
1. Introduction

"Smart" phones today not only allow voice communication but take photos and video, play music, browse the web, send and receive e-mail, edit text, assist in way-finding, and launch a wide variety of other applications. A recent television ad for one such phone features the company CEO commenting, "It's amazing we still call it a phone." Amazing indeed, when we consider how different these phones are in their functional capabilities, appearance, and mode of transmission from the phones that have come and gone over the last 150 years, and even from those still attached by a cord to a phone jack in many households today. But it is not amazing at all in the context of a broader consideration of how names for artifacts (human-made objects) are used. In this chapter, I will illustrate the diversity of objects to which ordinary artifact nouns are extended. I will then present an account of how they come to be used in these ways. Next, I will discuss issues that arise in interpreting the examples, including whether alternative accounts are possible. I will also consider the implications of this diversity and how it arises for related areas of inquiry in cognitive psychology. By its focus on artifact naming, this chapter is most directly about language use and not about concepts. Although how people talk about artifacts is no doubt closely linked in some respects to how they think about them, each needs to be understood on its own terms, as I will argue. Implications from this discussion of language use for the psychology of concepts will be considered at several points.

Many words have multiple meanings that are quite distinct from one another. A classic example is the case of bank, which can mean both a financial institution and the ground sloping down to a creek or river.
Other sorts of examples include newspaper, which can refer to both a physical object that is read in the morning and the company that produces it (as in "The newspaper won't allow the staff to join the union"), and running, which can refer to human locomotion, water coming out of a tap, and a movie that is in progress, along with many other types of actions. Bank is considered a case of homonymy, in which the two meanings have no apparent relation to one another (and may even have come about independently of one another; see, e.g., etymology in Oxford English Dictionary on-line, http://dictionary.oed.com/entrance.dtl). The second case (newspaper) involves metonymy, in which one entity is used to stand for another (e.g., Lakoff, 1987; Nunberg, 1979), and the third (running) may involve metaphorical extensions of a word from a concrete meaning to more abstract ones. All of these cases demonstrate the great flexibility that words have to take on a wide range of meanings, and the corresponding flexibility that language users must have to use and interpret words appropriately despite the variations in what they are intended to convey on different occasions of use.

Yet none of these examples quite captures the case of the modern phone. It does not seem to be homonymy, metonymy, or metaphor when you call an object small enough to close your fist around, with all its "smart" capabilities, by the name phone, and at the same time use that name for the clunky thing attached to your phone jack at home, and for Alexander Graham Bell's 1876 brass cylinder with a flexible tube that transmitted a voice to the next room through a liquid medium (see http://en.wikipedia.org/wiki/History_of_the_telephone). These uses of the name are clearly related, and they all seem to name a concrete object, and the whole of the object, in a literal way.
Likewise, it is neither homonymy, metonymy, nor metaphor that allows you to use box for a small plastic container holding 100 thumbtacks that snaps shut as well as a large cardboard container with flaps that holds a computer, or that lets you use brush for both a long, thin thing with fine, soft hairs that applies watercolors and a rectangular thing with stiff wire bristles that scrapes off rust. It is these more ordinary uses of names that I will discuss. At first glance they may seem more mundane, but the naming patterns they reveal are complex and by no means trivial to explain.
2. How Are Artifact Nouns Extended?

I take as a starting assumption that names for artifacts generally have associated with them some elements of meaning that reflect typical, familiar, common uses. For instance, the meaning most closely associated with box might be that it is a squarish cardboard container with flaps meant for holding one or a few solid objects. These elements of meaning do not, by
themselves, account for many uses of artifact names, such as when box is applied to the plastic container with a snap lid for tacks (much less a child's juice box holding liquid and accessed only through a straw). So the question is how the noun is extended to other sorts of objects. This question of "how," itself, actually has two interpretations. The first is as a descriptive question: What are the observed instances and patterns of use of artifact names? The second is as a theoretical question: How can these observations be accounted for? I will address both parts of the "how" question.
2.1. What Are the Observed Instances and Patterns of Use of Artifact Names?

In this section, I will consider how the most obvious, observable properties of artifacts—their functions and their physical features—are related to the name they receive. These are the sorts of features that people generally produce when asked to describe an artifact or to list features associated with artifact names (e.g., Malt & Smith, 1984; Rosch & Mervis, 1975). Less observable factors may also contribute to naming patterns, and this possibility will be addressed in considering theoretical accounts for the patterns. I will draw on English noun uses only, although the naming patterns of English do not necessarily match the naming patterns of other languages. I will take up this cross-linguistic variation later.

2.1.1. Function-Based Extension

Many people have the intuition that artifact names must be extended based on function. Artifacts generally exist for some specific purpose (even if only decorative, but usually for more active use), and so it is not surprising that their function is central to how people think about them. As Kelemen (1999), Kemler Nelson and colleagues (e.g., Greif, Kemler Nelson, Keil, & Gutierrez, 2006), and others have pointed out, when encountering a new artifact, "What's it for?" is likely to be one of the first and most compelling questions asked about it. Miller and Johnson-Laird (1976) took it as a given that function is more basic than form in determining artifact categories, and this sentiment is frequently echoed by others (e.g., Medin & Ortony, 1989; Rips, 1989). Consistent with these intuitions, it is easy to provide examples of artifact nouns that are applied to objects differing substantially in form but sharing a function. Chair, for instance, is used to label objects for seating one person, whether they are large and stuffed, small and wooden, plastic and woven, or like a giant beanbag.
Key is used for objects that open locks on doors, whether made of metal and inserted to physically turn a deadbolt, or resembling a credit card with a magnetic stripe that is swiped, or operating a car door from a remote location by the push of a button. Fan is used for devices to move air for cooling people, including electric box fans with
blades and Japanese paper fans. Razor is used for objects that shave hair off a person, including devices having a straight blade and operated manually, and objects having several whirling circular cutting mechanisms powered by electricity. Camera is used for boxy objects that record images on film and need to be grasped with two hands as well as for tiny things embedded in cell phones that yield digital records of a scene. Such examples confirm that function is an important dimension along which artifact names can be extended. Yet a closer look reveals that function does not constrain the name an artifact receives in the sense of providing boundary conditions for use of a name. Artifacts with different functions sometimes are labeled by the same name, and artifacts with the same function sometimes are labeled by different names. To illustrate these two facts, I now turn to cases where form is integral to the use of an artifact name.

2.1.2. Form-Based Extension

The form of an artifact (by which I mean its shape, material, and other aspects of its physical make-up) has sometimes been characterized as a "superficial" aspect of the artifact that is secondary in importance to a "deeper" aspect, its function (e.g., Medin & Ortony, 1989; Miller & Johnson-Laird, 1976; Rips, 1989; Xu & Rhemtulla, 2005). Some researchers have argued that although early naming of artifacts may be based on form, children progress to a more mature focus on function by about age 4 or 5 (e.g., Diesendruck, Hammer, & Catz, 2003), and others have even argued that children use function from as early as age 2 (e.g., Kemler Nelson, Russell, Duke, & Jones, 2000). Nevertheless, it seems quite common for artifact names to be extended based heavily on form rather than function.
Brush is used for objects with handles and bristles or hairs including variations made to smooth and untangle hair, apply paint to a surface, push snow off a windshield, scrub dirt or rust off a surface, and produce soft sounds on a drum by being drawn across the surface. Bowl is used for deep, rounded dishes made for eating liquids such as soups and also for storing solids such as fruit and serving granules such as sugar. Rake is used for objects with long handles and tines including variations made to gather up leaves, break apart thatch, dig stones from within soil, level and create patterns in soil or sand, and pull snow off a roof. Knife is used for objects with handles and blades that are sharp for cutting or dull and flexible for applying frosting to a cake, putty to a window, and spackle to a wall. Sponge is used for objects with natural or artificial sponge material that wipe dirt off surfaces (cleaning sponges), sand old coverings off (sanding sponges with an abrasive coating) or apply paint to surfaces (painting sponges). Fork is used for objects that bring food to the mouth, hold food in place (carving fork), serve from a platter by supporting on the surface of the tines (fish-serving fork), take the temperature of grilled food (thermometer fork; see http://www.williamssonoma.com/products/7839004/index.cfm), scoop and move manure
(manure fork; see http://www.thefind.com/garden/info-5-tine-manurefork) and make a musical note (tuning fork). In all these cases, the functions of the objects sharing a name are less similar than their forms are.

2.1.3. Form and Function Together

Based on the previous two sets of examples, one might suggest that artifact names fall into two groups, with one set being extended based on function and another based on form. However, there are several arguments against this proposal. First, a single name can encompass some objects that are related to the more typical examples via form and others via function. Above, I noted that electric box fans and Japanese paper fans seem to share a name because of their shared function of moving air to cool people. However, other objects called fan function to suck water vapor, smoke, or odors out of an area and are not intended to cool anything, such as a ventilator fan in a bathroom and an exhaust fan in a stove hood. In this case, the bladed objects (the box, ventilator, and exhaust fans) all seem to warrant being called by the same name because of the physical resemblance among them. Similarly, some things called key may come in disparate forms that share a function, but others seem linked via the form. A radiator valve key, for instance, resembles a metal door key but functions to turn a valve to bleed air out of a radiator, and hex keys (see http://www.radioshack.com/product/index.jsp?productId=2062781, also known as hex wrenches) are used to turn hexagonal bolts holding parts together in all sorts of devices. Although most things called knife share some similarities of form, laser knives overlap with typical knives on the cutting function but are physically entirely distinct, with the laser beam as cutting device not the least of their differences (see http://www.freepatentsonline.com/4249533.html for a detailed description of the components of a laser knife).
Second, nouns sometimes seem to be extended to a particular object based on resemblance on a combination of form and function. In such cases, the overlap on each dimension with the more typical features associated with the name may be only partial. For instance, things called spoon most often have closed bowls for lifting liquid to the mouth to eat, but slotted spoons have openings in the bowl, and pasta spoons have tines around the edge. Both are for lifting something while leaving the liquid behind, and for preparing rather than eating foods. Things called chair typically provide backs, seats, and legs for sitting in while doing some task (eating, reading, working, etc.), but massage chairs, dentist chairs, and electric chairs have added elements for specialized functions. In the last two cases, it is an external party that performs the task; the chair only serves to hold the recipient of the task in place. Things called blanket typically are flat, flexible, and made of breathable materials to cover a person for warmth, but picnic blankets, while being flat and flexible, cover the ground for protection from dirt, moisture, and insects and come in waterproofed versions.
Finally, even in cases where one dimension seems to be the dominant basis for extending a particular artifact name, the general correlation between form and function means that the two dimensions cannot be fully dissociated. Where form differs, some difference in function usually follows. Although the various objects called rake seem to be linked more by form than by function, their similarity of form still makes their manner of functioning more similar to one another than to, say, the manner of functioning of the things called brush or bowl. Conversely, as Petroski (1993) points out, the saying that form follows function is only loosely true. The "general" in the statement that there is a general correlation between form and function also has an implication for the cases where function seems to be the dominant link among things called by a particular artifact name. There are many different forms by which a general function can be implemented, and names for artifacts acknowledge these differences. It is impossible to characterize the functional boundaries of artifact names without appealing to form in the process. Things called chair, bench, stool, and sofa are all for sitting on, but it is the particular form they take that distinguishes things called by each name from those called by another, at least as much as any finer discriminations of function. Even when functions are distinguished at a finer grain, shared function alone may not warrant use of a name if the form differs from that usually associated with the name. For instance, I observed the object shown in Figure 1 at a streetcar stop, used by passengers to sit on while waiting for the streetcar, but several observers have verified my intuition that it is too unlike normal benches to readily be called bench¹ (see Malt & Johnson, 1992; Hampton, Storms, Simmons, & Huessen, in press, for more systematic evidence).
Returning to broader characterizations of function, many types of objects function to contain—those called box, basket, bin, crate, carton, bowl, bottle, jar, and jug, for instance—and so things called by each share a broad function of containment, but again it seems to be their material and/or shape that distinguish them from one another as much as details of the function. Things called knife are typically for cutting, but many other objects with other names are also for cutting, including those called pizza cutter, paper cutter, saw, wire clippers, pruning shears, lopper, scissors, axe, sword, scalpel, machete, and scythe. Things called key may broadly be for opening and things called blanket for covering, but those functions do not discriminate key from can opener or blanket from plastic wrap. To narrow the function of each enough to discriminate among them by function, the function statement inevitably ends up entailing elements of form. A knife cuts by means of a single horizontally oriented blade whereas a pizza cutter cuts by means of a round blade, scissors cut by means of two blades that pass across each other, and so on. In the end,
¹ Names proposed by three respondents were leaning rail or sitting rail, butt-rest (which could evolve to buttress), and resting perch or person pedestal. A fourth suggested ergonomic bench.
Figure 1 This object has the function of things usually called bench but a form that falls outside the usual range associated with bench.
form seems to be integral to explaining what is excluded from being called by a certain name even when there is a strong component of shared function among things that are called by the name.

2.1.4. Conclusion Regarding Form and Function

Patterns of use of artifact names seem to entail overlap among the entities receiving a given name on both form and function. In some cases, the overlap between things receiving the same name may be most prominently on the dimension of function, and in other cases it may be most prominently on the dimension of form. However, it is difficult to find cases of artifact words where one could claim that function alone or form alone accounts for why the whole set of objects is called by a single name and why certain other objects are not called by that name. In one way or another, for any given artifact noun, it is most often necessary to appeal to both form and function to explain the full naming pattern.
2.2. How Can the Observed Instances and Patterns of Use be Accounted For?

By the focus so far on form and function, and the argument that both matter, it may sound like I am heading toward an argument for a "family resemblance" view of how artifact names are applied to objects. This is, of
course, the view that Rosch and Mervis proposed back in 1975, drawing on Wittgenstein's (1991; originally published in 1953) earlier analysis of the range of things called by the name game (or, in reality, a roughly comparable German term, Spiele/Spiel). In this view, an object can be called by a name if it overlaps in its features with other things called by the same name, even if it does not share any particular single feature or set of features with all other things called by that name. As a general description of the relation of objects to one another that are all called by the same name, it may not be far from appropriate. However, this view fell out of favor among cognitive psychologists in the 1980s and thereafter, who, following Goodman (1972), became concerned that it failed to explain what counted as "similar" (e.g., Murphy & Medin, 1985) and failed to capture other important aspects of how people thought about objects. Spurred in part by Putnam's (1977) writings on essentialism and Quine's (1969) remarks about superficial versus deep similarity within philosophy, psychologists turned their attention toward people's understanding of the more theoretical or causal connections among features (e.g., Ahn, Kim, Lassaline, & Dennis, 2000; Murphy & Medin, 1985) and to the possibility that there might be some single type of knowledge or belief that more fully constrains what things can be called by a name than the family resemblance view implies (e.g., Bloom, 1996; Gelman, 2003; Keil, 1989; Medin & Ortony, 1989). These later endeavors have brought to light many important aspects of how humans relate to objects, including that they seek to understand causal relations among object properties, they discriminate between more and less projectable features in inferring unseen properties, and they consider an object's history as well as its current state in how they value it.
But the wording used in these investigations of conceptual activities tends to conflate the problem of accounting for how names are used to label objects with their primary goal of characterizing people's conceptual, nonlinguistic understanding of entities in the world. The fact that Wittgenstein's famous analysis was actually of a German word, Spiele/Spiel, that is only roughly equivalent to English game (see http://en.wikipedia.org/wiki/Philosophical_Investigations) is rarely pointed out for the same reason that my statement above of Rosch and Mervis' (1975) proposal is in fact slightly inaccurate. Many cognitive psychologists writing about such topics in past and recent decades, including Rosch and Mervis, have tended to frame their topic in terms of concepts rather than names or word meanings per se. Thus although Rosch and others (e.g., Murphy, 2002; Smith & Medin, 1981) have used English words to identify concepts, they have more fundamentally been interested in explaining the nature of nonlinguistic representations. The presumption that words directly reveal concepts is debatable (e.g., Malt, Gennari, & Imai, 2010; Wolff & Malt, 2010), a point I will take up later, but the blurring of the distinction has led to loss of interest in family resemblance as it may apply to naming. For now, though, the key
point is that naming, as opposed to conceptualizing, is a linguistic phenomenon and must be considered in that context (Malt, Sloman, Gennari, Shi, & Wang, 1999).

Returning, then, to how to characterize and explain naming patterns in particular, the family resemblance view may have some merit as a general description of the relations among things that get called by the same name, although I will suggest a refinement of it below. Function has continued to sometimes be taken as the "deeper" sort of property that constrains what things will be called by the same name and/or be part of the same concept (e.g., Asher & Kemler Nelson, 2008; Kemler Nelson, Herron, & Morris, 2002). As the earlier discussion indicates, though, the names actually applied to objects in everyday language use do not support this idea. The other chief contender for a possible constraint on artifact naming has been creator's intended category membership (Bloom, 1996; Gutheil, Bloom, Valderrama, & Freedman, 2004). According to this idea, an artifact's category membership, and therefore its name, will be whatever its creator intended it to be. Although creator's intention can indeed have an important influence on artifact naming patterns, as I will discuss later, it is unlikely that it fully constrains what things a person tends to call by a given name. Creator's intention is often unknown at the moment of using a name for an object (see Malt & Johnson, 1998). Differences between languages in which objects share the same name, which I will document later, would also pose a problem for trying to infer intention: Depending on who the creator was, the intended category may not be that which the current speaker would infer. And, empirically, current uses that differ from the original intention have some impact on name choice for artifacts (Malt & Sloman, 2007b; Siegel & Callanan, 2007).
Although often worded in terms of artifact naming, this proposal has, ultimately, been deemed a view of concepts rather than of naming (Bloom, 2007). In the end, then, it seems likely that there is no single type of feature or dimension that places a strong constraint on what things can be called by a given artifact name. Even if something along the lines of a family resemblance characterization is suitable, though, it is not very satisfying from an explanatory perspective. It does not reveal anything about how such naming patterns come about. If, to be called by a given name, an object needs only partial overlap with some other things called by that name, on no particular dimension and with no particular object, then how does an object come to have one name and not an alternative? For instance, why aren't brooms and dusters called brush? Why aren't scalpels and scythes called knife, or putty and frosting knives called spatula or spreader? It also does not reveal anything about how the names can usefully serve language production and comprehension purposes. When someone hears brush, how does she manage to interpret it appropriately if the intended referent can vary so greatly?
Naming Artifacts: Patterns and Processes
11
A more satisfying account of artifact naming patterns needs to say more than just that objects called by the same name have a family resemblance among them. Next, I suggest two key elements of a more complete perspective on how artifact naming patterns come about. These elements follow from taking seriously the distinction between naming as a linguistic process and conceptual, nonlinguistic aspects of how people understand artifacts.

2.2.1. Naming Patterns Are the Result of Diachronic, Not Just Synchronic, Processes

Some attention has been paid in the concepts literature to the types of sortings that people produce if they are asked to place sets of novel objects into groups. Participants tend to produce unidimensional sorts (e.g., Medin, Wattenmaker, & Hampson, 1987), although they will also produce family resemblance sorts under some conditions (Regehr & Brooks, 1995). Regardless, naming in real-world settings is not a result of seeing an array of objects all at once and trying to figure out how best to partition them to maximize resemblance within the groups. When naming an object, each speaker makes a choice about what name to apply from among those in her vocabulary, but those choices are constrained by the naming patterns she acquired from her elders, whose input shaped her lexical choices as a child to match their own conventions (Chouinard & Clark, 2003). Their choices in turn were constrained by the ones they were exposed to during their own language acquisition. Because naming patterns are passed down from one generation of speakers to another, they are the product of processes operating over historical time, just as are syntax, phonology, and other aspects of language (e.g., Hock, 1991; Traugott & Dasher, 2005). When a new object enters a culture and needs to be named, the name it receives must depend on what other objects existed at that time within the culture and what contrasts among them were distinguished by name at that time.
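The order- and time-dependent character of such a process can be illustrated with a toy sketch of incremental cluster formation, in which each new object either joins an existing name cluster (by comparison with that cluster's most recently added member, allowing chains to form) or founds a new one. This is only a hypothetical illustration: the function names, feature vectors, and threshold value below are invented for the example and are not drawn from any published model.

```python
# Hypothetical sketch of order-dependent cluster (name) formation.
# Each object, in order of entry, joins the cluster whose most recently
# added member it most resembles, provided similarity exceeds a threshold;
# otherwise it starts a new cluster.

def incremental_cluster(items, similarity, threshold=0.4):
    clusters = []          # each cluster is a list of items in order of entry
    for item in items:
        best, best_sim = None, threshold
        for cluster in clusters:
            sim = similarity(item, cluster[-1])   # compare to newest member
            if sim > best_sim:
                best, best_sim = cluster, sim
        if best is not None:
            best.append(item)                     # extend an existing chain
        else:
            clusters.append([item])               # found a new cluster
    return clusters

def overlap(a, b):
    """Proportion of matching features between two binary feature vectors."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

# Invented feature vectors for four communication devices.
objects = {
    "early_phone": (1, 1, 0, 0),
    "cell_phone":  (1, 0, 1, 0),
    "smart_phone": (0, 0, 1, 1),
    "computer":    (0, 1, 1, 1),
}

# Entered in historical order, each device chains onto its predecessor,
# so all four end up in one cluster even though the first and last share
# only one feature.
historical = [objects[n] for n in
              ["early_phone", "cell_phone", "smart_phone", "computer"]]
chained = incremental_cluster(historical, overlap)   # one chained cluster

# A different order of entry yields a different partition (two clusters).
other = [objects[n] for n in
         ["early_phone", "computer", "cell_phone", "smart_phone"]]
split = incremental_cluster(other, overlap)          # two separate clusters
```

Order-dependent categorization models in the literature are, of course, far richer than this sketch; the point is only that the resulting partition depends on the order in which objects enter the process, not on any simultaneous sorting of the full array.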
The latter would have been influenced by, among other things, contact with other languages, which can be a significant source of addition to a language's vocabulary and an impetus for restructuring of semantic space to avoid synonymy (e.g., Clark, 2007; Millar, 2007). In short, the objects that come to share a name will depend on the order of input of each to the naming process and the existing landscape of names within the language[2] at the time the object entered the culture. As a result of this order- and time-dependent process, chains may form where each link in the chain is motivated at the time it occurs, although the connection between distant elements of the chain may later be hard to
[2] Most languages encompass several or more dialects or variations used by subcommunities of language users, and naming patterns may differ across these. For brevity, I will refer to "languages," but this term should more accurately be interpreted as meaning some specified group's version of the language. See, for instance, Kempton's (1981) discussion of subcultural variation in use of Spanish pottery terms.
see. Alexander Graham Bell's early telephone evolved year by year into more advanced objects having new forms, new modes of voice transmission, and greater capabilities. The corded phone spawned cordless phones, and those spawned cell phones, and now there is the "smart" cell phone with astonishing capabilities unrelated to the original function of transmitting voices across distance. (See Lakoff, 1987, for examples of chaining for word classes other than nouns.) Were someone to sort pieces of communication technology from across history without knowing the evolutionary paths among them, they might tend to group the early phone with a telegraph machine and the modern smart phone with a computer, but that is not how the names used for each came to be. (See Petroski, 1993, for an interesting discussion of how knives and forks changed over time, shifting functions to the extent that the fork took over the knife's original function of bringing food to the mouth by spearing or heaping it on the blade surface.) These sorts of phenomena are better approximated by laboratory categorization models that permit order-dependent construction of clusters (e.g., Heit, 1992; Love, Medin, & Gureckis, 2004) than by the majority, which assume simultaneous consideration of all exemplars in creating groups. Because naming patterns are established over historical time, the patterns themselves, and not only their referents, evolve. Elements of the chain can drop from current knowledge, making naming choices nontransparent to current speakers. For instance, it is hard to imagine why milk, eggs, ice cream, and cigarettes (and, for some people, yogurt and cottage cheese) all come in containers called carton when the forms and specialized functions of these containers are so different. The answer seems to be that carton used to refer to containers made out of a certain material, pasteboard (Oxford English Dictionary online, http://dictionary.oed.com/entrance.dtl).
As time passed and the material shifted to plastic or foam for some products, the name nevertheless remained the same, making its use nontransparent to current speakers. Someone once suggested to me that carton refers to containers holding fixed quantities, such as a gallon or a dozen. There appears to be no truth to this conjecture, but it illustrates the extent to which current speakers have lost a sense of the original motivation. In sum, it is impossible to understand how artifacts are named without considering diachronic as well as synchronic processes.

2.2.2. Production and Comprehension Involve Social Processes, Not Just Individual Ones

The second key to understanding artifact naming patterns is to consider the social nature of language use. Individuals doing things alone don't need language. Language exists for social purposes; it helps accomplish activities such as sharing information, planning joint activities, teaching and learning, entertaining, and so on. Thus language use entails individual acts that are
performed as part of joint action (Clark, 1996). There are at least two aspects of the social nature of language that help explain how artifact naming patterns come about and how communication can be successful despite the complexity of name uses. First, naming is goal-driven, and second, speakers and addressees work together.

2.2.2.1. Naming Is Goal-Driven

Naming is goal-driven, both in terms of the enduring patterns of usage that are adhered to across many contexts and in terms of the choices that are made on the spot in a given context when more than one name is possible for an object. A central goal of naming in ordinary discourse is to refer (meaning, when writing, for the writer to get the addressee to understand what kind of thing she has in mind; when speaking, if referents are physically present, to get the addressee to successfully pick out the intended referent from among possible ones). But naming can have other goals as well, such as conveying affect or focusing attention on certain attributes of the object. Calling a building hut versus hovel, or house versus McMansion, highlights different properties of the objects and indicates something about the speaker's attitude toward the object (Malt & Sloman, 2007b). Within a particular interaction, the name chosen may depend on the immediate goals of the speaker as far as what attributes to highlight or attitude to convey (Malt & Sloman). Across the longer term, names for novel objects are often proposed by the manufacturer or marketer of the object and then adopted into general use by the public. The name the manufacturer or marketer puts forward is likely to be carefully designed to highlight certain attributes (and even encourage the adoption of certain attitudes) by either affiliating the object with those called by an existing name or by contrasting it with them.
For instance, plastic juice boxes, even when shaped like a bear (see Malt et al., 1999, for illustration), presumably were labeled box to emphasize their potential as a replacement for a disposable juice box. The basket-maker Longaberger has labeled a wide variety of products as basket that might otherwise be called tote, woven bag, handbag, bin, or hamper (see http://www.longaberger.com/ourProducts.aspx), presumably to highlight their affiliation with the Longaberger material and style. On the other hand, the new name spork was spawned (and is now used by a number of manufacturers; see, e.g., http://en.wikipedia.org/wiki/Spork) presumably to contrast with both spoons and forks and to highlight the ability of the new type of object to perform as both. The name was chosen even though the deviation of sporks from typical spoons is no greater than that of, say, a grapefruit spoon with a serrated edge for cutting, and their deviation from typical forks is no greater than that of, say, a fish-serving fork. Scork has followed, the only motivation for the contrasting name apparently being the desire to signal the incorporation of a can opener into the handle design (see http://www.gearforadventure.com/Vargo_Stainless_Steel_Scork_p/1480.htm). Such goals can result in object names that deviate from those that would be constructed
if an array of objects were simply sorted into groups according to some generic similarity metric.

2.2.2.2. Speakers and Addressees Work Together

Speakers and addressees work together to ensure that the intended message of an utterance is interpreted as it was meant to be (e.g., Clark, 1996, 2006). As Clark (1996) points out, language evolved in the context of face-to-face communication; conversation is its most basic and universal form. Comprehension is aided by the active engagement of parties in the interaction, occurring most readily in conversation but with similar processes carrying over to written discourse. One such process or device that has been proposed is the use of common ground (e.g., Clark, 1996, 1998; Clark & Marshall, 1981), whereby the parties to a discourse exploit what they know about each other's knowledge to choose their words and to help guide their interpretation of the other's words. This common ground can include precedents established between them in the course of their interactions (e.g., Clark & Brennan, 1991; Clark & Wilkes-Gibbs, 1986). So, for instance, if one person has referred to her Longaberger purchase as her new basket, her conversational partner is likely to call it the basket in return (and not the tote or the woven bag), knowing that this is the term that picks out for her friend the item they intend to discuss. There has been some debate over exactly how to describe the kinds of processes that underlie adherence to conversational precedents (e.g., Horton & Gerrig, 2005a,b; Shintel & Keysar, 2009). Regardless of the details, it is inevitable that common ground and conversational precedents are exploited in designing utterances.
Adults adjust their vocabulary in speech to children, as teachers do to learners; individuals take into account which of their language subcommunities (fellow surgeons, fellow dance enthusiasts, or fellow South Dakotans) they are addressing (Clark, 1998); and speakers can track what names have been used with which conversational partners within the bounds set by normal memory limitations (Horton & Gerrig). Other devices are also available to help in coordination. In conversation, the roles of speaker and addressee usually alternate frequently, which helps achieve mutual understanding, as each can monitor and correct the interpretations of the other. Addressees indicate their interpretation of information through questions or comments ("Yes, I see" or "This one, right?") as well as through nods, facial expressions (satisfied, surprised, quizzical, etc.), and movements such as picking up an object or gesturing or moving toward it (e.g., Barr, 2003; Clark, 1996; Clark & Brennan, 1991; Clark & Fox Tree, 2002; Clark & Krych, 2004; Kelly, Barr, Church, & Lynch, 1999). Speakers and addressees go through an iterative process in which the participants repair, expand on, or replace a referring expression until they reach a version they mutually accept (Clark & Brennan; Clark & Schaefer, 1989; Clark & Wilkes-Gibbs, 1986), and speakers may even
correct interpretations regardless of whether addressees request further information. For instance, when an instruction to pick up an object is ambiguous about the intended referent, there is often a further interaction to refine interpretation, with listeners either asking for clarification or else reaching for an object and then being corrected by the speaker (or both) (e.g., Barr & Keysar, 2005; Clark & Wilkes-Gibbs). Speakers also produce signals in their speech stream that help addressees differentiate among potential object referents. When speakers formulate descriptions of new referents, their utterances contain longer hesitations and are more likely to contain a filled pause (such as "ummm") (Barr, 2003), and other flags such as pronouncing the as THEE rather than THUH (Arnold, Tanenhaus, Altmann, & Fagnano, 2004). Addressees are able to make use of these speech patterns to help narrow in on intended referents; when they hear a description of a new referent preceded by such signals, they are better at understanding the description than when such signals are not available (Arnold et al.; Barr). Finally, when an addressee is faced with interpreting a bare noun such as basket or brush or box that can refer to objects having any of a range of forms and functions, it may be because there is only one potential referent present or the target referent is salient enough to be easily identified (Clark, 1996). When more than one possible referent is present (or could be imagined, in text), speakers (or writers) disambiguate by adding modifiers (Brennan & Clark, 1996). Although brush by itself can refer to many different sorts of objects, the longer expressions paint brush, hair brush, scrub brush, or basting brush narrow the relevant properties substantially. In subsequent utterances, the object can be referred to simply as brush once a mutual understanding has been achieved.
Children produce and accurately interpret such modifier-noun expressions by 2–3 years of age (Clark & Berman, 1987; Clark, Gelman, & Lane, 1985), demonstrating the ease of use of such expressions. Thus, through a variety of devices, an addressee's interpretation is heavily scaffolded by the speaker, who is actively engaged in helping her arrive at the understanding he intended.

2.2.3. Conclusion Regarding How to Explain Artifact Naming Patterns

At a general level, uses of artifact names may be described by the family resemblance idea of an object needing only some degree of overlap with other things called by the same name. Understanding why this family resemblance pattern comes about, though, requires digging deeper into how naming conventions arise. Historical linguistic processes are key to this understanding, and they can produce chains of linked uses. Despite the looseness of such a system, it affords communication without problems because the inherently social nature of communication provides the necessary support. Addressees are not left on their own to
figure out the intended meaning of ambiguous nouns; many social, linguistic, and paralinguistic forms of assistance are provided by speakers (and, in more limited ways, by writers).
3. Interpretation Issues

In this section, I will address several questions about the interpretation of the naming observations used as motivation for my arguments. These questions ask, from several different perspectives, whether artifact name use would look simpler, and less in need of appeal to diachronic and social processes, if certain constraints were adopted on the words or objects being considered. I will argue that the answer is no in each case.
3.1. Does Function Fully Constrain Some Artifact Names?

Many of the nouns used in the examples so far are nontransparent: chair, basket, and carton contain no morphemes that hint at a meaning or range of application. Some names for artifacts, though, contain meaningful units that might specify a particular function. For instance, given some understanding of tooth and brush, toothbrush seems to tell us that the word applies to things for brushing teeth, and given some understanding of tooth and pick, toothpick seems to tell us that the word applies to things for picking teeth. Some, perhaps many, nouns of this sort do seem fairly well constrained in their range of application by the function implied by their name. Doorstop may be used only for objects that stop doors, and headrest may be used only for things that heads are rested on. But the first two examples illustrate that such nouns need not always obey a strong functional constraint. Things named toothbrush are certainly for the most part made and bought for purposes of brushing teeth. Toothbrushes make good cleaning tools, though, and one company now makes and markets a set of objects they call toothbrushes specifically for use in house cleaning, even offering them in professional cleaner's grade and advertising them with the exclamation "Definitely not for your teeth!" (see http://www.thecleanteam.com/catalog_f.cfm). Even more compellingly, toothpick originally did label an object made and bought for picking teeth (Petroski, 2007), but today the dominant reason for the making and buying of objects called toothpick (at least in the U.S.) is a different one: to spear cheese cubes and other canapés and bring them to the mouth for eating. In many social circles, using them to actually pick the teeth would be considered poor manners.
That these objects, so named, are created (not just used) for the eating purpose is verified by the wide range of decorative toothpicks available for party platters, and by their positioning with kitchen wares and
party supplies in merchandising. In fact, some toothpicks are now sold purely as decorations; one manufacturer offers American flag toothpicks to place on cakes and other desserts for Independence Day. As these examples illustrate, artifact names of this sort most likely start out with a range of application that is well described by a particular function. However, it seems that even in these cases, they can break loose from their origins and acquire a wider range that is not limited by a single function. Perhaps this should not be entirely surprising, since it has been observed in other contexts that transparent elements of meaning can be combined with others that violate them. We talk about plastic silverware, jumbo shrimp, working vacations, loose tights, white chocolate, and small fortunes (Lederer, 1989) without any problem. Another candidate for being more fully constrained by function might be agentive nouns. In English, the suffixes "-er" and "-or" are used with some frequency to form nouns that denote the doer or performer of an action (e.g., Finegan, 1994). For instance, baker is composed of bake + -er and refers to a person who bakes, and runner is composed of run + -er and refers to one who runs. Although agentive nouns are usually discussed with reference to animate agents, as in the preceding examples, they also can be formed to name inanimate objects that are used to accomplish some sort of act. So, for instance, dryer, container, and hanger are artifact nouns of this sort that name, at least prototypically, objects used for drying, containing, and hanging, respectively. Given the nature of the names, one might wonder whether the usage of this sort of artifact noun is more fully constrained by function than that of the nouns we have been discussing to this point. That is, is the range of application of dryer, container, and hanger fully determined by a single, specifiable function (to dry, to contain, and to hang)?
As with the preceding case, there seem to be few counterexamples in which an agentive noun is routinely used to name an object not intended to fulfill the function suggested by its name. But even so, again, this strong tendency is not absolute. Consider the brightly colored objects called pipe cleaners, made and sold for use in children's crafts (e.g., http://www.discountschoolsupply.com/Product/ProductList.aspx?category=89&es=5530200000G&CMP=KNC-Google&s_kwcid=TC|10010|pipe%20cleaner%20crafts||S||3019930373&gclid=CKjnhfDl8ZwCFcFD5god8lfObQ). Their craft use has no resemblance to pipe cleaning (or any sort of cleaning), indicating that the name can break loose from the function it originally implied. Furthermore, note that even when uses are consistent with the name, the function suggested by the name only partially constrains the range of objects to which the name is applied. Duster and dustcloth are applied to different types of objects although both are for dusting (a duster usually having a handle with feathers or lambswool, etc., attached). Cleaner and cleanser are also applied to different types of objects although both are for cleaning (cleaners usually being nonabrasive, as in silver cleaner or glass cleaner, and
cleansers being gritty), and likewise mixer and blender are, although both are for thoroughly combining food ingredients (mixers usually having an open bowl with beater blades, and blenders having a more vertical container with a chopping blade). Even doorstop and door stopper, so minimally different, imply different forms although both function to hold doors open (a doorstop usually being a heavy form placed against the door, and a door stopper being plastic or rubber and wedged under the door). Thus these agentive names still integrally require reference to elements of form in order to describe the range of referents to which they apply. Having a particular function revealed by the name may usually (though not always) be a necessary condition for application of an agentive noun, but even then, it is not sufficient. One could argue that these agentive names are distinguished by the finer details of the function and so are still truly function-defined (even if the details are not transparent in their names). As with nonagentive nouns, though, the specifics of function can vary among things called by the same agentive name, making it impossible to describe function at a detailed level and still encompass all objects that do get labeled by the name. For instance, computer and calculator are distinguished by both form and the specifics of the kinds of calculations they can carry out, but earlier computers could only perform at the level of today's calculators (or, before that, abacuses), and tomorrow's computers will have new capabilities. In general, objects that change rapidly with advancing technology will encompass fairly wide ranges of functions as well as forms (e.g., scooter, from a child's two-wheeled, muscle-propelled toy to electric versions that transport disabled adults; cash register, from mechanical versions to digital ones that generate coupons and track product sales as well as make change).
Again, it seems that function may come closer to constraining the range of application of these names, but it is still only a partial constraint. Highly specific functions may appear more nearly sufficient for distinguishing between pairs such as computer and calculator, but then they are unlikely to be truly necessary for all uses of the words.
3.2. Is the Apparent Complexity of Artifact Name Use Only Because of Polysemy?

Even setting aside homonymy, most words have a network of related senses (e.g., Nerlich, Todd, Herman, & Clarke, 2003). In some cases, these senses can be quite disparate while still having transparent connections, as in the meanings of nose that include a facial organ, an olfactory attribute of wine, and an ability to detect (as in She has a good nose for this) (Nerlich & Clarke, 2003). These senses of nose are so different from each other that it seems meaningless, or at least foolhardy, to even ask if there is any simple account of the meanings that can be articulated in terms of a shared constraint on what the term nose applies to. The range of things that nose
applies to, and how one would describe that range, might become more tractable and meaningful if we limit the referents under consideration to concrete objects. Then it is a matter of explaining why nose applies to people, dogs, birds, airplanes, rockets, etc.—still not a simple task but presumably a more manageable one that might have a better chance of yielding some straightforward constraint. One might wonder if the case of artifact terms would look a great deal simpler if we divided off some examples as entailing distinct senses and tried to account for only one sense at a time. For instance, do box in reference to plastic boxes with snap lids and fork in reference to fish-serving forks reflect different senses than when the words refer to more typical examples? The difficulty with this strategy is finding a motivated way to restrict the set of referents that should count as falling under one sense. I have already limited the cases under consideration to concrete referents and uses that do not seem to involve metaphor, metonymy, or any other such extension device. If some of the examples discussed to this point constitute separate senses of the words because they seem to invoke different attributes, then all of them do. Alternatively, if we were to try to specify what will count as the separate major concrete-object senses of a word such as box by, say, taking a single form or function as the diagnostic criterion, then the argument becomes circular. Naturally, if the range of examples to be explained is restricted according to some a priori criteria for the properties they can have, then they will all share these properties. In the end, it is hard to provide any objective criterion for separating any collection of concrete-object uses from any other (see Nunberg, 2004, for a related argument).
3.3. Is the Apparent Complexity of Artifact Name Use Only Because of Compounding or Conceptual Combination?

As noted earlier, some of the artifact names I have discussed contain more than one morpheme. Some, such as toothbrush, are commonly written as a single word. Others are often (hair brush) or always (scrub brush) written as two words. All of these cases are potentially noun compounds, which are considered morphologically complex single words rather than noun phrases consisting of a head noun and a modifying noun or adjective. The classification is generally made based on the stress pattern. Compound nouns are said to resemble unanalyzable (monomorphemic) nouns (such as table or garbage) in having the primary stress on the first element of the compound, whereas noun phrases have primary stress on the last lexical element (e.g., Bybee, 1985; Pinker, 1994). For instance, this difference is illustrated by the stress patterns of Bluebird[3] and blackboard (compounds)
[3] Names of species are capitalized following the convention in ornithology.
compared to blue bird and black board (noun phrases) (e.g., Finegan, 1994). If things called toothbrush, hair brush, or scrub brush are not examples of the name brush by itself, but are separate words, then perhaps the range of referents of brush itself is not as variable as I have suggested. A related argument might be made from the perspective of psychologists who study conceptual combination (e.g., Hampton, 1997; Murphy, 1990; Wisniewski, 1997). If several concepts (labeled by nouns and adjectives) are put together to form new concepts, perhaps the terms that result should be considered to pick out some set of things that are distinct from those labeled by the head noun. For instance, chocolate bee, when used to refer to a piece of chocolate shaped like a bee, names something that lacks all the behavioral properties and almost all internal and external form properties of the majority of things called bee. Intuition suggests this sort of thing might not be considered an example of the noun bee by itself. Two points argue against adopting these positions to exclude some concrete-object examples from consideration. First, although some compounds may be coined to label things that don't comfortably fit among other things labeled by their head noun, others do label things that are moderately to highly typical of the head noun alone. For instance, shoe box, hair brush, paint brush, Coke bottle, tea cup, and baby shoe would all be counted as conventional combinations having the signature stress on the first word of the phrase. However, they name things that are perfectly reasonable examples of the bare nouns and can comfortably be referred to by box, brush, bottle, cup, or shoe alone. If this is true, one might wonder why the compound occurs with some frequency in reference to these objects rather than just labeling them with the bare noun.
It may be that the routine use of the modifier (resulting in the status of the phrases as familiar compounds) functions simply to identify distinctive properties of the objects against a field of potential referents that is highly variable. Second, despite the widespread appeal to stress patterns to distinguish between compounds and noun phrases, this diagnostic test is substantially less valid than is usually assumed (e.g., Plag, Kunter, & Lappe, 2007; Plag, Kunter, Lappe, & Braun, 2008). For instance, the stress is on the right-hand element in chocolate donut, apple pie, paper doll, silk tie, and aluminum foil. For names of street-like passageways, the stress pattern varies depending on the right-hand element; thus Green Street is similar to Bluebird, but Green Avenue, Green Boulevard, and Green Parkway all have stress on the second constituent. (I note that even for color terms in bird names, variability from the often-cited Bluebird pattern exists: Black Phoebe, Yellow Rail, Green Heron, and, among the blues, Blue Mockingbird, Blue Grosbeak, and Great Blue Heron.) There is even variation among native-speaker informants in stress assignment, and variation can be induced by the sentence context (Pennanen, 1980). A number of variables, including argument structure, semantic relation between the first and second constituent, frequency of the
Naming Artifacts: Patterns and Processes
combination, and analogy to other combinations sharing the same head noun all have predictive value for the stress on a given combination, with none providing an absolute rule (Plag et al.). Thus, the message from stress patterns about what should count as a compound versus as a “mere” modifier-noun phrase is unclear. If the stress pattern test does not hold up, from a practical perspective, it is hard to know how to decide what is a compound and what is not. From a psychological perspective, maybe there is simply a gradient of conventionality, with the more familiar, frequently used combinations feeling like compounds and less common ones feeling like modifier-plus-noun phrases. In that case, there is no principled distinction to appeal to in deciding whether multimorphemic names do or do not label objects that count as examples of a given bare noun. In fact, if the more lexicalized modifier-noun combinations are the more frequently used ones, they are also likely to include some naming the most common referents of the noun (as in shoe box and hairbrush, etc.), which argues against lexicalization as an indicator of names that should be treated as distinct from instances of the head noun alone.
3.4. Conclusion Regarding Interpretation Issues

Complexity of naming patterns is pervasive for nouns used to label artifacts, although it varies across noun types. Nouns that are transparently composed of several morphemes, including agentive nouns, may tend to stray less in their usage from that implied by the meaning of their constituents. Even in those cases, though, the constraints are not absolute, reinforcing the possibility that virtually any artifact noun has the potential to develop a range of uses that overlap with one another on different dimensions. This conclusion is not readily explained away by trying to limit the range of exemplars that should count in the analysis of a given name, because principled bases for limitations are lacking.
4. Implications of Artifact Naming Patterns for Other Aspects of Human Cognition

The points made so far have implications for understanding aspects of cognition beyond how people use English nouns. In this section, I will consider how the naming issues relate to views of artifact categorization, how English naming patterns relate to those of other languages, and how the cross-linguistic variability that exists impacts word learning by children and by those learning two or more languages (either from birth or as second-language learners later in life). I end this section by returning to
Barbara C. Malt
how naming and nonlinguistic thought are related, this time considering the questions raised by the documented cross-linguistic variation.
4.1. Implications for Views of Artifact Categorization

As alluded to earlier, a large number of studies over the past several decades have addressed questions about how adults and children categorize artifacts. One major line of inquiry has been about how adults make artifact category decisions—whether they are based on the form, original (intended) function, or current function of an object, the creator’s intended category membership, or some combination of these factors. A second major line has asked whether there is a developmental progression from one basis to another. The latter studies have focused on whether children move from form- to function-based categorization or whether they are oriented to function from early on. In both the adult and developmental literatures, original, intended function and creator’s intended category membership have generally been taken as use of “deep” properties over more superficial ones and sometimes cast in terms of psychological essentialism, the notion that people seek some underlying trait that determines an entity’s kind (Bloom, 1996; Medin & Ortony, 1989). The debates over the various possibilities have been extensive, but they have not resulted in convergence on final answers. Original, intended function and creator’s intended category membership are often found to have strong pull in the answers that both children and adults give to questions about what an object is (e.g., Bloom; Diesendruck, Markson, & Bloom, 2003; Kemler Nelson et al., 2002; Rips, 1989), but some studies have found contributions of (or domination by) current function or form (e.g., Estes, 2003; Hampton et al., in press; Landau, Smith, & Jones, 1988; Malt & Johnson, 1992; Siegel & Callanan, 2007). Methodological differences in the types of stimuli used and how the judgments are posed to participants may contribute to the varying results (e.g., Diesendruck et al.; Kemler Nelson et al., 2000; Malt & Sloman, 2007a).
But there is also a theoretical muddying of the issues that contributes to the lack of resolution. Defeyter, Hearing, and German (2009) remark that research often has not distinguished clearly between the question of how people categorize something and the question of to what extent they focus on original or current function of an artifact when trying to understand a novel object. Following from my earlier argument, I would suggest that the confusion goes deeper than this. The research overlooks the difference between naming and how people might understand or group objects conceptually (Malt & Sloman, 2007a). Measures of artifact categorization are most often measures of the name chosen for an object, usually in a forced-choice task. The observations I have described make clear that the question of how the name for a given artifact is determined does not have a simple one-factor answer, and so it is not surprising that results have been mixed.
In fact, it will not be possible to get an accurate picture of patterns of artifact naming in the real world through tasks that tap only synchronic variables, because such tasks eliminate many of the forces that actually influence naming that I described earlier (such as cultural history, the impact of word borrowings from other languages and subsequent reorganization of semantic space, and marketing goals). Once the distinction between naming and nonlinguistic understanding of objects is appreciated, it is easier to make sense of how the factors studied may play into these processes. Many researchers who use naming as their dependent measure are actually most concerned with how people think about and conceptually group objects (e.g., Bloom, 2007). Despite the relevance of form in establishing naming patterns, affording a use is the main reason that artifacts exist. It is natural that function is primarily what people seek to understand when encountering a novel artifact, that function may be a dominant basis for grouping artifacts conceptually, and that whether original, intended function or current use is more salient can vary depending on the context in which the object is encountered. It is also natural to want to know what use the original creator intended for the object, because knowing that often reveals what the best use of the object is. Conversely, despite the importance of creator’s original intention in understanding artifacts, it is natural that it would be only a partial determinant of naming. In communicative situations people often receive direct information about what the object has been named in the past. People will tend to respect this naming precedent for the reasons described earlier: Language use is a social process, and using the name offered is usually the best way to achieve mutual understanding and acknowledge the speaker’s intentions.
This name offered may be that intended by the creator, in which case creator’s intention is carried forward, but it may also be something else. Depending on the distance from the original creator and his or her intentions, and the importance of any contrasting current goals, the relative suitability of the original and possible alternative names may vary, and names other than those associated with original intention may be adopted (Malt & Sloman, 2007b; Siegel & Callanan, 2007; see also Chaigneau, Barsalou, & Sloman, 2004). The fluidity and flexibility of naming do not, by themselves, argue against the possibility that either original, intended function or creator’s intended category membership fully determines the boundaries of some sort of nonlinguistic categories. A problem with this line of reasoning, though, is that if the groupings picked out by names are considered distinct from nonlinguistic groups and therefore not revealing of them, it is hard to know exactly what would constitute the nonlinguistic categories (Sloman & Malt, 2003). When looking at an object that is plastic and has a snap lid, how would someone judge that its use, or what its creator intended, should group it with cardboard things with flaps and not with other plastic containers or with some new group of things?
An alternative approach to this issue is to suggest that although name use in conversational contexts reflects the impact of metaphor, metonymy, pragmatic constraints such as lack of a better name, and so on, there are neutral contexts in which names delineate more constrained groupings and are a useful measure of the nonlinguistic categories (Bloom, 2007). That is, maybe the plastic snap container is not really a box, nor is, say, a drummer’s brush really a brush, even though they may be referred to in conversation as box or brush. In on-going work we have been evaluating how people make judgments of what something really is by asking them to judge whether certain artifacts are really examples of a particular name. In one study (Malt & Paquet, 2008), one group of participants gave typicality ratings to objects with respect to a target name (e.g., a short, round seat with three legs was judged typical of things called stool, and a taller, plastic seat with a back was judged less typical of things called stool). A second group of participants were then either told that the name (e.g., stool, in these cases) was given to the object by the creator or else was just assigned to it by someone who had found the object at a yard sale. The participants judged the extent to which each object was really an example of the target name. These really judgments strongly correlated with the objects’ rated typicality, and the judgments showed no effect of whether the creator intended it to have that name or not. In another study, we had people read stories in which a pictured object started out with one intended use and associated name (e.g., decanter) but the story characters then adopted a different use and associated name (e.g., vase) for it. Participants rated the extent to which the object was really an example of the first name and of the second. 
Original intention had an impact on the ratings, but the effect was modulated by how typical the pictured object was of each of the two names and whether or not the story characters had ever actually used the object as it was intended or had bought it planning to use it for the second purpose. In a study in progress, we have been using recent and more traditional objects associated with artifact names (e.g., a corded phone and a cell phone) and have asked college-aged and older adults to judge whether each object is really an example of the target name (e.g., phone). We are finding that older adults rate the recent objects as less really examples of the name than the younger participants do. All of these outcomes point to the conclusion that judgments of what an artifact really is don’t pick out some bounded underlying category defined by original, intended use or creator’s intended category membership. Instead, they reflect gradations in how well the object properties match properties associated with the word in the participants’ mind—multiple properties that can shift with context and across generations as the range of objects experienced in connection with the name shifts. In light of these observations, one key implication for views of artifact categorization is that it is critical to distinguish whether the issue of interest in a given study is actually how people use names for artifacts or something
about how they understand them nonlinguistically, and to select methods that will reflect the target topic. Another implication is that if there are nonlinguistic “categories” that artifacts are put into, a noncircular way of identifying those categories needs to be found so that views of how this categorization is accomplished can be evaluated (Sloman & Malt, 2003). Alternatively, perhaps there are no such categories, apart from those given by the use of a name in linguistic context (Malt & Sloman, 2007c; Sloman & Malt). From the developmental perspective, these observations may actually turn part of the research focus on its head. If it is of interest to ask how children extend artifact names to objects (as opposed to how they understand the objects nonlinguistically), then the most pressing issue is not to decide whether they start with a shape-only strategy and shift to function later or use function from the start. It is to determine whether children are truly limited by either dimension in their early word use, and if so, how they break free of a single dimension to master the full range of uses that are linked by either one or both dimensions together. This perspective is compatible with that in other developmental arenas. Young children can be overly rigid as they begin to acquire a sense of adult conventions (e.g., in applying mutual exclusivity to their word use, Markman & Wachtel, 1988; or strict rules to moral behavior, Kohlberg, 1976; see also Casler, Terziyan, & Greene, 2009). Becoming more flexible, not more constrained, is the important developmental path they must follow. I will discuss developmental word learning issues further below.
4.2. Implications for Word Meanings Across Languages

Naming patterns for concrete objects have often been assumed to be more cross-linguistically similar than naming patterns for abstract and socially construed entities such as emotions or kin relations (e.g., De Groot, 1993). This assumption could be true for artifact naming if several conditions were met: if the artifacts fell into fairly unambiguous groups with gaps between them, if names were assigned to artifacts on the basis of the groupings perceived when considering the current set together as a whole, and if the objects and the resultant groups they fell into remained constant over time and across cultures. I have already argued, though, that the last two conditions don’t hold. Based on the examples that have been discussed to this point, it should also be apparent that artifacts don’t always fall into neat clusters separated cleanly from one another. Even if some clustering exists, there are many objects that have partial overlap with members of two or more clusters and no strong affiliation with any one. If patterns of artifact naming evolve over time and are subject to the varied influences that I have described, then it should be expected that they will vary across languages.
In several studies, my collaborators and I have found that this expectation is right. We first looked into this possibility by having largely monolingual speakers of American English, Argentinean Spanish, and Mandarin Chinese name a set of 60 photographs of common household objects (Malt et al., 1999). We found that the naming patterns of the three groups had similarities but also some notable differences. English speakers labeled most of the 60 objects with one of just three names—jar, bottle, and container—which they used in roughly equal proportions. Spanish speakers used 15 different names for the objects, with 28 objects being called frasco (or its diminutive, frascito), and each of the remaining names applying to no more than six objects. Chinese speakers used just five names for the objects, but one of these names (ping) accounted for 40 of the objects. These groupings of different sizes were not merely nested groupings reflecting finer and coarser differentiation; they were not all formed around the same centers and they partially cross-cut each other (Malt et al., 2003). We have now replicated these sorts of differences in naming patterns for Belgian speakers of Dutch and French using a different set of household containers plus a set of objects for preparing and serving food, and for speakers of English and Russian using a set of objects for holding and drinking liquids (Pavlenko & Malt, in press). Thus, the assumption that words for concrete objects in general will correspond closely across languages turns out not to be true. The variation we have found for artifacts suggests that words for virtually any domain may be susceptible to some cross-linguistic variability. The extent of variability will depend on the extent of variation in the factors listed above. 
For natural kinds, for instance, if there has been more consistency over time in what exemplars are present in a culture and across cultures, and stronger clustering of exemplars with fewer exemplars that fall between clusters, there may be greater consistency in naming patterns (see Malt, 1995).
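The kind of cross-language comparison just described can be illustrated with a small computational sketch. The Python fragment below uses wholly invented objects, names, and response counts (it does not reproduce the Malt et al. data): for every pair of objects, it checks whether two language groups agree on placing the pair in the same lexical category, where each object's category is taken to be its modal (most frequent) name.

```python
from collections import Counter

# Hypothetical naming data: for each object, the names produced by the
# speakers in a language group. All items and counts are invented for
# illustration only.
english = {
    "obj1": ["jar"] * 18 + ["bottle"] * 2,
    "obj2": ["bottle"] * 15 + ["container"] * 5,
    "obj3": ["container"] * 12 + ["jar"] * 8,
}
spanish = {
    "obj1": ["frasco"] * 19 + ["envase"] * 1,
    "obj2": ["frasco"] * 11 + ["botella"] * 9,
    "obj3": ["pote"] * 14 + ["frasco"] * 6,
}

def modal_name(responses):
    """Most frequent name produced for an object."""
    return Counter(responses).most_common(1)[0][0]

def same_modal_grouping(lang_a, lang_b):
    """Fraction of object pairs on which the two languages agree about
    whether the pair falls in the same lexical category (by modal name)."""
    objs = sorted(lang_a)
    agreements = 0
    pairs = 0
    for i in range(len(objs)):
        for j in range(i + 1, len(objs)):
            pairs += 1
            same_a = modal_name(lang_a[objs[i]]) == modal_name(lang_a[objs[j]])
            same_b = modal_name(lang_b[objs[i]]) == modal_name(lang_b[objs[j]])
            agreements += (same_a == same_b)
    return agreements / pairs

print(same_modal_grouping(english, spanish))
```

In this toy data, the "Spanish" group lumps together two objects that the "English" group splits between jar and bottle, so pairwise agreement falls below 1.0; that is the sense in which naming patterns can cross-cut rather than merely nest.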
4.3. Implications for Developmental Trajectory

Most research on childhood word acquisition has focused on the learning that takes place from infancy through toddlerhood. There has been a sense that the interesting developmental stages of word learning are largely completed during this time (e.g., Bloom, 2000), except, perhaps, in certain domains that may pose special problems for the child (e.g., Clark, 1980). However, if artifact naming patterns vary from language to language and cannot be predicted just by looking for an obvious cluster into which each object falls, then learning to name artifacts as adults do will not be a trivial task. We (Ameel, Malt, & Storms, 2008) evaluated the developmental trajectory by comparing the naming patterns of Dutch-speaking Belgian children (aged 5, 8, 10, 12, and 14) to adults for a large set of photos of household containers and objects for preparing and serving food (from Ameel, Storms, Malt, & Sloman, 2005, discussed later). We found that the children took up to
age 14 to converge their naming patterns onto those of the adults, even though the terms used by adults for most of the objects were present in their vocabulary by the age of 8. An extended reorganization of the lexical categories took place, with use of some names broadening (encompassing more objects) and others narrowing (encompassing fewer objects) over time. Regression analyses using features to predict naming choices at each age showed that this reorganization entailed learning both to attend to the same features the adults used and to assign adult-like weights to those features. These findings suggest that an extended word learning period to achieve full, adult-like use of words is not restricted to a small number of words or domains. It includes common, concrete terms such as names for familiar artifacts. Views of word learning will need to include an understanding of how word knowledge continues to develop throughout childhood. An important step toward a better understanding of the later stages of word learning for artifacts will be to know more about what it is that the child must master. How exactly do languages differ? Our previous work (Malt, Sloman, & Gennari, 2003) already demonstrated that different artifact naming patterns are not just a matter of the granularity of distinctions, but there is more to be understood about the differences. One way that the languages could produce the cross-cutting naming observed would be if they used different dimensions as the primary basis for grouping artifacts by name. For instance, one language might focus more heavily on shape, another on size, and a third on function. Under this scenario, the child’s task is one of parameter setting, as has been proposed for some aspects of grammar learning (e.g., in learning whether the language being acquired is one in which pronouns are routinely dropped; Chomsky & Lasnik, 1993). 
The child might have a range of possibilities ready, and by observing the adult naming patterns, she learns which values on the parameters create the artifact naming patterns of her language. But the discussion of English artifact naming above already suggests that this point of view is not likely to be right, since both function and form are implicated in English naming. The observations do not exclude the possibility that English weights certain dimensions more heavily than some other language does, but they do indicate that there will be no simple, single parameter setting that the child can select to produce mastery of English naming. What is needed, then, is to evaluate whether there are any systematic differences between languages that can be identified in the dimension weights or values used. If not, one can ask whether there are any systematic differences that are specific to certain parts of the domain. For instance, even if it is not true that English uses function more heavily than Spanish or Chinese (or vice versa) across the board, could it be true for naming within some subset such as drinking vessels or tools? And if there turn out not to be any generalizations that can be drawn about dimension values or weights even within some portion of artifacts, then it will be important to characterize the differences at a finer
grain. In a study of naming of drinking vessels in English and Russian that will be discussed below, we (Pavlenko & Malt, in press) have informally noted that, for instance, the English distinction between cup and glass is more heavily based on material than the Russian distinction between chashka and stakan, which is more based on size and shape. At the same time, English separates mug from cup based on size and shape and Russian further separates fuzher from stakan based on material and function (use for alcoholic drinks). So each language appears to make similar featural contrasts but applied to different sets of objects. Recently, we have been using feature-based regression models to more systematically explore the differences in naming patterns across English, Dutch, and French for household containers and objects for preparing and serving foods. As the Russian examples suggest, we have been finding that the three languages use the same dimensions and values on dimensions, but in different combinations for specific naming contrasts. For instance, one language may discriminate by name within bowl-shaped objects based on size whereas another does not, but the second language may discriminate between cardboard storage containers based on size. Even these statements do not fully take into account the family resemblance and chaining phenomena among objects that share the same name, where some can overlap on one dimension or set of dimensions and others will overlap on different ones. Interesting work remains to be done to fully characterize what it is that children must learn and how they are able to do so.
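As a rough illustration of the feature-based regression approach mentioned above, the sketch below fits a simple logistic regression, in plain Python, predicting whether a hypothetical name applies to an object from two coded features. The features, objects, and data are invented for illustration; the actual studies used much richer feature sets and stimuli.

```python
import math

# Each object is coded on two invented features, roundness and size
# (0-1 scales). The outcome is whether speakers apply the name "bowl"
# (1) or some other name (0). All values are toy data.
data = [
    ((0.90, 0.30), 1),  # small round object -> "bowl"
    ((0.80, 0.40), 1),
    ((0.90, 0.90), 0),  # large round object -> another name
    ((0.20, 0.50), 0),  # angular object -> another name
    ((0.30, 0.20), 0),
    ((0.85, 0.35), 1),
]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Plain logistic regression fitted by stochastic gradient descent.
w = [0.0, 0.0]
b = 0.0
lr = 0.5
for _ in range(5000):
    for (x1, x2), y in data:
        p = sigmoid(w[0] * x1 + w[1] * x2 + b)
        err = p - y
        w[0] -= lr * err * x1
        w[1] -= lr * err * x2
        b -= lr * err

# The sign and magnitude of the fitted weights indicate which features
# this hypothetical language weights in applying the name "bowl":
# roundness should come out positive and size negative for these data.
print("roundness weight:", round(w[0], 2), "size weight:", round(w[1], 2))
```

Fitting such a model separately for each language and name, and then comparing the fitted weights, is one way to make precise the claim that languages use the same dimensions but in different combinations for specific naming contrasts.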
4.4. Implications for Bilingualism

Traditionally, research on bilingualism has not taken much interest in the mastery of words for concrete objects because of the assumption that the meanings of these words map closely across languages. If this assumption were right, then mastery would only be a matter of learning what word in one language corresponds to each word in the other one. However, our data comparing naming patterns across languages imply that the task is not nearly so easy. We tested this possibility by studying people who came to the United States (mostly as students) with first languages other than English (Malt & Sloman, 2003). All participants were immersed in English at the time of testing but varied from recent arrivals to 18 years of residency. Participants named pictures of artifacts in English, judged the typicality of each with respect to several English names, and gave us their intuitions about how they selected names. For comparison, native speakers did the same naming and typicality tasks. Even those second-language learners with the shortest length of immersion (less than 1 year) produced most of the same basic vocabulary words that native speakers did, but they differed from native speakers in their application of the words to specific objects. Learners with less than 1 year of immersion showed the most divergence, and agreement with native speakers increased as a function of years of immersion. Similarly, those with
the fewest years of immersion did not have a good sense of what is most typical of names such as bottle or plate, etc., but typicality judgments corresponded better to native speakers’ over time. Strategy reports showed a shift from greater reliance on explicit use of specific features or translation equivalents to a more intuitive selection of words. Remarkably, despite the improvements, even the participants who had been in the U.S. the longest (10 or more years) still deviated significantly from the native speakers in both naming patterns and typicality judgments for some of the words. Mastering the subtleties of the artifact naming patterns of a second language is not at all quick and easy. To the contrary, it is a long, slow process, just as for the child native learner. Deviations from the language community’s norm may have subtle but real consequences for communication. For instance, a native Dutch speaker recently asked me, in an airport boarding line, if I had obtained a chair. His English was otherwise excellent, but it took several rounds of back-and-forth before I understood that he was asking if I had a confirmed seat on the overbooked flight. If second-language learners immersed in the second language do gradually converge on native speakers’ patterns of word use, this outcome raises the question of what becomes of the naming patterns in their native language. Reaction-time studies have demonstrated that a bilingual’s two lexicons are not isolated from each other and interact in some fashion. For instance, words of one language prime words in the other (e.g., Altarriba, 1992; Kroll & Curley, 1988). Given this interaction, second-language learners who become dominant in the second language may show an influence of the second on the first, shifting their native naming patterns in the direction of the second. Wolff and Ventura (2009) found evidence for this sort of effect in the learning of causal verbs.
We (Pavlenko & Malt, in press) studied artifact naming patterns in Russian for native speakers of Russian who came to the U.S. at various ages and became immersed in English. We compared their patterns to those of native largely monolingual speakers of English and Russian. Even those who came to the U.S. as adults and rated their Russian proficiency considerably higher than their English proficiency showed some modest signs of English influence on their Russian naming. Those who came to the U.S. in childhood (ages 8–15) showed slightly greater influence. A substantially larger impact was shown by those who came to the U.S. early in their lives (ages 1–6), even though all had begun to learn Russian before exposure to English, continued to speak Russian at home, and considered themselves moderately proficient in Russian. These data indicate that even for those becoming immersed in the second language in adulthood, there can be an influence of the second language on the first. It is noteworthy, though, that the largest impact was seen for those who had spent less time immersed in Russian and more time immersed in English. This outcome raises new questions about to what extent first-language shifts in the direction of a second language are
related to the initial strength of the memory traces of the first language, the completeness of learning, or the frequency of current use, and to what extent they depend on similar variables for the second language. The possibility of cross-talk between the two languages of second-language learners also raises the question of what learners do who are exposed to two languages from the start. One possibility is that these early learners, acquiring two native languages during the period in which language acquisition is thought to proceed most effortlessly, are able to do something late learners do not: learn and maintain two separate sets of naming patterns, each fully matching monolinguals in each language. Alternatively, these children might still not be able to accomplish this feat despite their early learning and might in some way create a compromise between the languages. We (Ameel et al., 2005) addressed this question in Belgium, where part of the population is Dutch-speaking and part is French-speaking, but it is fairly common for Dutch- and French-speakers to intermarry. We looked at the naming patterns of Belgian adults who had been raised with one parent whose native language was Dutch and one whose native language was French, each of whom consistently spoke their own native tongue to the child. We compared bilinguals’ performance in each of their two languages (tested on different days to avoid carryover effects) to that of largely monolingual Belgian speakers of Dutch and French. Stimuli for the study were again sets of photos of household containers and objects for preparing and serving food. Consistent with earlier data, we found that the monolingual speakers had some noteworthy differences in their naming patterns for these objects. Bilinguals, however, showed better correspondence between the naming patterns in their two languages than the monolinguals did for the same two languages.
In effect, bilinguals converged the patterns of the two languages toward each other so that they were less distinct. Since they did not merge the patterns to the extent of yielding a single, shared pattern for both, the data imply that the differences are to some extent observed and encoded, but cross-connections between the two lexicons may end up adjusting connection weights between objects and words so that convergence occurs. Our on-going research is examining the time-frame in which this takes place: Do children start off with two distinct patterns that converge over time as repeated uses cause adjustments of connection weights, or is the cross-language influence something that is at work from the start, producing convergence from the early stages of word learning?
4.5. Implications for the Whorfian Hypothesis

The Whorfian hypothesis that language shapes thought (Whorf, 1956) suggests that where languages differ from one another in their naming patterns, their speakers’ concepts of common objects should differ. The substantial differences we found in naming patterns for household
containers by speakers of English, Chinese, and Spanish suggest that these three groups should have quite noticeably different concepts in the domain. However, people learn a great deal about artifacts from direct interaction with them, not through language alone, and so the degree of linguistic differences may exceed that of conceptual differences (Malt, Gennari, & Imai, 2010; Wolff & Malt, 2010). Malt et al. (1999) evaluated similarity sorting by the three groups as well as naming and did find that groupings according to similarity were shared more strongly across the three groups than groupings according to name, suggesting that perception of the objects’ properties was at least partially independent of language. Even those most sympathetic to the Whorfian hypothesis would generally not argue that words completely fix concepts, though, and so this finding, while not necessarily predicted a priori by the Whorfian position, is not entirely incompatible with it. One would want to ask whether the smaller differences in similarity sorting that did exist among the groups could reflect linguistic differences. Many studies testing Whorfian predictions (e.g., Kay & Kempton, 1984; Winawer et al., 2007) have made the straightforward prediction that speakers of a language that labels a certain distinction will see a greater difference between the two sets of referents than speakers of a language that does not make the distinction. This sort of prediction cannot be easily applied to the household container domain, though. For instance, in our study, Spanish speakers used frasco for many of the objects named bottle in English as well as all of those named jar, but on the other hand, they gave distinctive names to some objects that English speakers included within bottle (e.g., mamadera for a baby bottle; talquera for a talc bottle; roceador for a spray bottle).
Given both facts, it is not clear whether Spanish speakers should pay more or less attention than English speakers to the form and/or function of objects in the bottle/jar range. In fact, we (Malt et al., 1999) found no evidence that what differences did exist in similarity sorting corresponded to where the languages differed in their naming patterns for a given pair of objects. The challenge for further testing a Whorfian perspective in the artifact domain, then, is to identify what specific effects of the linguistic differences one could expect, given the complexity of the naming patterns and the nonsystematic nature of the differences among the languages.
5. Summary and Conclusion

5.1. Summary

Artifact naming patterns are complex. A given artifact noun, such as key or razor, may be extended from one case (say, a metal key and a manual razor) to other objects that are unlike them in form but share the same function (say, an
electronic card-like key and an electric razor). Conversely, other artifact nouns, such as brush or knife, may be extended from one case (say, a hair brush and a dinner knife) to other objects that are unlike them in function but have similar forms (say, a scrub brush and a putty knife). Furthermore, some individual nouns have extensions based on shared form and others based on shared function, and some extensions may implicate form and function together. These patterns can be captured descriptively by the notion of a family resemblance among the exemplars of a noun, with each use needing only to overlap with some others on one or more dimensions.

To account for the pattern theoretically, it is important to recognize that naming patterns result from diachronic, not just synchronic, processes. Naming patterns evolve over the course of a language's history, with the pattern that emerges being influenced by cultural factors such as what objects are present in the culture at different times and by linguistic factors such as what names become available through language contact and borrowing.

Furthermore, it is important to recognize that naming patterns are influenced by social processes, not just individual ones. Naming is goal-driven, so that the selection of a name for an object may be influenced by the desire either to highlight similarities with certain other objects or to distance the object from them. And naming is cooperative, with speakers and addressees working together in conversation to ensure that artifact nouns are interpreted as intended despite the wide variation in what a noun could refer to. Certain nouns may tend to stray less in their usage than others, but even in those cases, the constraints are not absolute, suggesting that virtually any artifact noun may be able to develop a range of uses that overlap on different dimensions.
These points about the nature of artifact naming patterns and how to account for them cannot be readily dismissed by trying to limit the range of exemplars that should count in the analysis of a given name, because no principled bases for limitations are apparent.

The observations about artifact naming patterns and how they come about have implications for understanding other aspects of cognition. One is in reconciling conflicting data that have accumulated on the nature of artifact "categorization." Because of the impact of historical and social influences on naming, an account of artifact naming must differ from an account of how people conceptualize the objects nonlinguistically. Once the distinction between naming as a linguistic process and understanding artifacts as a conceptual process is recognized, the observations about naming are not incompatible with arguments that have been made about the nature of artifact conceptualization; the two accounts can be different but both correct.

Another area of implication is artifact naming across languages. Because languages have different cultural and linguistic histories, artifact naming patterns may differ from language to language, and this expectation has been confirmed. The cross-linguistic variability, in turn, poses special challenges for child language learners, whose task is not just to identify obvious clusters of objects and put a name onto each but to learn a less obvious grouping that the language of their
environment imposes on the objects. Child learners require many years to converge on adult naming patterns even for names of common household objects, and much more remains to be understood about what goes on during this extended learning period.

Speakers of two languages have the added challenge of trying to acquire and maintain two distinct sets of naming patterns. Recent data suggest that the naming patterns of the languages can exert mutual influences on each other, with bilingual patterns differing from those of monolinguals in each language. This influence takes place whether the two languages are learned in parallel from infancy or the second is acquired later in life.

Finally, the complexity of the naming patterns in any given language adds a wrinkle to understanding how language may influence thought, because the nonsystematic nature of the differences makes it hard to generate straightforward predictions about where linguistic influence may lie.
5.2. Conclusion: Not Amazing, Yet Still Amazing

I opened this chapter by pointing to a recent television ad in which a company CEO comments that it is amazing we call today's smart phone by the name phone. I argued that this usage is not amazing at all in the context of a broader consideration of how artifact nouns are used. But in closing, it may be appropriate to turn that judgment on its head. Many artifact names, such as the ones used in examples throughout this chapter, are common, familiar nouns that refer to objects frequently observed and talked about in everyday life. As with many other highly familiar phenomena, in daily life we may take their use for granted, assuming there is little of interest to discover in the distribution of the names or in the evolution or acquisition of the patterns. I have tried to show throughout the chapter that there is a rich and intriguing set of observations and issues tied to the use of artifact names. From this perspective, the CEO was right. It is amazing indeed that we still call it a phone.
ACKNOWLEDGMENTS

I thank Herb and Eve Clark for helpful discussion and Brian Ross for constructive comments on a previous draft.
REFERENCES

Ahn, W., Kim, N. S., Lassaline, M. E., & Dennis, M. J. (2000). Causal status as a determinant of feature centrality. Cognitive Psychology, 41, 361–416.
Altarriba, J. (1992). The representation of translation equivalents in bilingual memory. In R. Harris (Ed.), Cognitive processing in bilinguals (pp. 157–174). Amsterdam: Elsevier.
Ameel, E., Malt, B. C., & Storms, G. (2008). Object naming and later lexical development: From baby bottle to beer bottle. Journal of Memory and Language, 58, 262–285.
Ameel, E., Storms, G., Malt, B. C., & Sloman, S. A. (2005). How bilinguals solve the naming problem. Journal of Memory and Language, 53, 60–80.
Arnold, J. A., Tanenhaus, M. K., Altmann, R. J., & Fagnano, M. (2004). The old and thee, uh, new: Disfluency and reference resolution. Psychological Science, 9, 578–582.
Asher, Y. M., & Kemler Nelson, D. G. (2008). Was it designed to do that? Children's focus on intended function in their conceptualization of artifacts. Cognition, 106, 474–483.
Barr, D. J. (2003). Paralinguistic correlates of conceptual structure. Psychonomic Bulletin & Review, 10, 462–467.
Barr, D. J., & Keysar, B. (2005). Making sense of how we make sense: The paradox of egocentrism in language use. In H. L. Colston & A. N. Katz (Eds.), Figurative language comprehension: Social and cultural influences (pp. 21–41). Mahwah, NJ: Erlbaum.
Bloom, P. (1996). Intention, history, and artifact concepts. Cognition, 60, 1–29.
Bloom, P. (2000). How children learn the meanings of words. Cambridge, MA: MIT Press.
Bloom, P. (2007). More than words: A reply to Malt and Sloman. Cognition, 105, 649–655.
Brennan, S. E., & Clark, H. H. (1996). Conceptual pacts and lexical choice in conversation. Journal of Experimental Psychology: Learning, Memory, and Cognition, 22, 1482–1493.
Bybee, J. L. (1985). Morphology: A study of the relation between meaning and form. Amsterdam: John Benjamins.
Casler, K., Terziyan, T., & Greene, K. (2009). Toddlers view artifact function normatively. Cognitive Development, 24, 240–247.
Chaigneau, S. E., Barsalou, L. W., & Sloman, S. A. (2004). Assessing the causal structure of function. Journal of Experimental Psychology: General, 133, 601–625.
Chomsky, N., & Lasnik, H. (1993). Principles and parameters theory. In J. Jacobs, A. von Stechow, W. Sternfield & T. Vennemann (Eds.), Syntax: An international handbook of contemporary research. Berlin: Mouton de Gruyter.
Chouinard, M. M., & Clark, E. V. (2003). Adult reformulations of child errors as negative evidence. Journal of Child Language, 30, 637–669.
Clark, E. (2007). Conventionality and contrast in language and language acquisition. In M. Sabbagh & C. Kalish (Eds.), Right thinking: The development of conventionality. New directions in child and adolescent development (vol. 115, pp. 11–23). San Francisco, CA: Jossey-Bass.
Clark, E. V. (1980). Here's the "top": Nonlinguistic strategies in the acquisition of orientational terms. Child Development, 51, 329–338.
Clark, E. V., & Berman, R. A. (1987). Types of linguistic knowledge: Interpreting and producing compound nouns. Journal of Child Language, 14, 547–567.
Clark, E. V., Gelman, S. A., & Lane, N. M. (1985). Noun compounds and category structure in young children. Child Development, 56, 84–94.
Clark, H. H. (1996). Using language. Cambridge, England: Cambridge University Press.
Clark, H. H. (1998). Communal lexicons. In K. Malmkjær & J. Williams (Eds.), Context in language learning and language understanding (pp. 63–87). Cambridge, England: Cambridge University Press.
Clark, H. H. (2006). Social actions, social commitments. In S. C. Levinson & N. J. Enfield (Eds.), Roots of human sociality: Culture, cognition, and human interaction (pp. 126–150). Oxford: Berg Press.
Clark, H. H., & Brennan, S. E. (1991). Grounding in communication. In L. B. Resnick, J. M. Levine & S. D. Teasley (Eds.), Perspectives on socially shared cognition (pp. 127–149). Washington, DC: American Psychological Association.
Clark, H. H., & Fox Tree, J. E. (2002). Using uh and um in spontaneous speaking. Cognition, 84, 73–111.
Clark, H. H., & Krych, M. A. (2004). Speaking while monitoring addressees for understanding. Journal of Memory and Language, 50, 62–81.
Clark, H. H., & Marshall, C. R. (1981). Definite reference and mutual knowledge. In A. K. Joshi, B. L. Webber & I. A. Sag (Eds.), Elements of discourse understanding (pp. 10–63). Cambridge, England: Cambridge University Press.
Clark, H. H., & Schaefer, E. F. (1989). Contributing to discourse. Cognitive Science, 13, 259–294.
Clark, H. H., & Wilkes-Gibbs, D. (1986). Referring as a collaborative process. Cognition, 22, 1–39.
Defeyter, M. A., Hearing, J., & German, T. C. (2009). A developmental dissociation between category and function judgments about novel artifacts. Cognition, 110, 260–264.
De Groot, A. M. B. (1993). Word-type effects in bilingual processing tasks: Support for a mixed-representational system. In R. Schreuder & R. Weltens (Eds.), The bilingual lexicon (pp. 27–51). Amsterdam: John Benjamins.
Diesendruck, G., Hammer, R., & Catz, O. (2003a). Mapping the similarity space of children and adults' artifact categories. Cognitive Development, 18, 217–231.
Diesendruck, G., Markson, L., & Bloom, P. (2003b). Children's reliance on creator's intent in extending names for artifacts. Psychological Science, 14, 164–168.
Estes, Z. (2003). Domain differences in the structure of artifactual and natural categories. Memory & Cognition, 31, 199–214.
Finegan, E. (1994). Language: Its structure and use. Fort Worth: Harcourt Brace.
Gelman, S. A. (2003). The essential child: Origins of essentialism in everyday thought. New York: Oxford University Press.
Goodman, N. (1972). Seven strictures on similarity. In N. Goodman (Ed.), Problems and projects (pp. 437–447). New York: Bobbs-Merrill.
Greif, M. L., Kemler Nelson, D. G., Keil, F. C., & Gutierrez, F. (2006). What do children want to know about animals and artifacts? Domain-specific requests for information. Psychological Science, 17, 455–459.
Gutheil, G., Bloom, P., Valderrama, N., & Freedman, R. (2004). The role of historical intuitions in children's and adults' naming of artifacts. Cognition, 91, 23–42.
Hampton, J. A. (1997). Conceptual combination: Conjunction and negation of natural concepts. Memory & Cognition, 25, 888–909.
Hampton, J. A., Storms, G., Simmons, C. L., & Heussen, D. (in press). Feature integration in natural language concepts. Memory & Cognition.
Heit, E. (1992). Categorization using chains of examples. Cognitive Psychology, 24, 341–380.
Hock, H. H. (1991). Principles of historical linguistics. Berlin: Mouton de Gruyter.
Horton, W. S., & Gerrig, R. J. (2005a). The impact of memory demands on audience design during language production. Cognition, 96, 127–142.
Horton, W. S., & Gerrig, R. J. (2005b). Conversational common ground and memory processes in language production. Discourse Processes, 40, 1–35.
Kay, P., & Kempton, W. (1984). What is the Sapir-Whorf hypothesis? American Anthropologist, 86, 65–79.
Keil, F. C. (1989). Concepts, kinds, and cognitive development. Cambridge, MA: MIT Press.
Kelemen, D. (1999). Functions, goals and intentions: Children's teleological reasoning about objects. Trends in Cognitive Sciences, 12, 461–468.
Kelly, S. D., Barr, D. J., Church, R. B., & Lynch, K. (1999). Offering a hand to pragmatic understanding: The role of speech and gesture in comprehension and memory. Journal of Memory and Language, 40, 577–592.
Kemler Nelson, D. G., Herron, L., & Morris, M. (2002). How children and adults name broken objects: Inferences and reasoning about design intentions in the categorization of artifacts. Journal of Cognition and Development, 3, 301–332.
Kemler Nelson, D. G., Russell, R., Duke, N., & Jones, K. (2000). Two-year-olds will name artifacts by their function. Child Development, 71, 1271–1288.
Kempton, W. (1981). The folk classification of ceramics: A study of cognitive prototypes. New York: Academic Press.
Kohlberg, L. (1976). Moral stages and moralization: The cognitive developmental approach. In T. Lickona (Ed.), Moral development and behavior: Theory, research, and social issues (pp. 31–53). New York: Holt, Rinehart, & Winston.
Kroll, J., & Curley, J. (1988). Lexical memory in novice bilinguals: The role of concepts in retrieving second language words. In M. Gruneberg, P. Morris & R. Sykes (Eds.), Practical aspects of memory (vol. 2, pp. 389–395). London: Wiley.
Lakoff, G. (1987). Women, fire, and dangerous things: What categories reveal about the mind. Chicago: University of Chicago Press.
Landau, B., Smith, L. B., & Jones, S. S. (1988). The importance of shape in early lexical learning. Cognitive Development, 3, 299–321.
Lederer, R. (1989). Crazy English. New York, NY: Pocket Books.
Love, B., Medin, D. L., & Gureckis, T. M. (2004). SUSTAIN: A network model of category learning. Psychological Review, 111, 309–332.
Malt, B. C. (1995). Category coherence in cross-cultural perspective. Cognitive Psychology, 29, 85–148.
Malt, B. C., Gennari, S., & Imai, M. (2010). Lexicalization patterns and the world-to-words mapping. In B. C. Malt & P. Wolff (Eds.), Words and the mind: How words capture human experience. Oxford: Oxford University Press.
Malt, B. C., & Johnson, E. C. (1992). Do artifact concepts have cores? Journal of Memory and Language, 31, 195–217.
Malt, B. C., & Johnson, E. C. (1998). Artifact category membership and the intentional-historical theory. Cognition, 66, 79–85.
Malt, B. C., & Paquet, M. (2008). The real deal: What judgments of really reveal about how people think about artifacts. In B. C. Love, K. McRae & V. M. Sloutsky (Eds.), Proceedings of the 30th Annual Conference of the Cognitive Science Society (pp. 247–252). Austin, TX: Cognitive Science Society.
Malt, B. C., & Sloman, S. A. (2003). Linguistic diversity and object naming by non-native speakers of English. Bilingualism: Language and Cognition, 6, 47–67.
Malt, B. C., & Sloman, S. A. (2007a). Artifact categorization: The good, the bad, and the ugly. In E. Margolis & S. Laurence (Eds.), Creations of the mind: Theories of artifacts and their representation (pp. 85–123). Oxford: Oxford University Press.
Malt, B. C., & Sloman, S. A. (2007b). Category essence or essentially pragmatic? Creator's intention in naming and what's really what. Cognition, 105, 615–648.
Malt, B. C., & Sloman, S. A. (2007c). More than words, but still not categorization. Cognition, 105, 656–657.
Malt, B. C., Sloman, S. A., & Gennari, S. (2003). Universality and language specificity in object naming. Journal of Memory and Language, 49, 20–42.
Malt, B. C., Sloman, S. A., Gennari, S., Shi, M., & Wang, Y. (1999). Knowing versus naming: Similarity and the linguistic categorization of artifacts. Journal of Memory and Language, 40, 230–262.
Malt, B. C., & Smith, E. E. (1984). Correlated properties in natural categories. Journal of Verbal Learning and Verbal Behavior, 23, 250–269.
Markman, E. M., & Wachtel, G. F. (1988). Children's use of mutual exclusivity to constrain the meaning of words. Cognitive Psychology, 20, 121–157.
Medin, D., & Ortony, A. (1989). Psychological essentialism. In S. Vosniadou & A. Ortony (Eds.), Similarity and analogical reasoning (pp. 179–195). Cambridge: Cambridge University Press.
Medin, D. L., Wattenmaker, W. D., & Hampson, S. E. (1987). Family resemblance, conceptual cohesiveness, and category construction. Cognitive Psychology, 19, 242–279.
Millar, R. M. (2007). Trask's historical linguistics (2nd ed.). London: Hodder Education.
Miller, G. A., & Johnson-Laird, P. N. (1976). Language and perception. Cambridge, MA: Harvard University Press.
Murphy, G. L. (1990). Noun phrase interpretation and conceptual combination. Journal of Memory and Language, 29, 259–288.
Murphy, G. L. (2002). The big book of concepts. Cambridge, MA: MIT Press.
Murphy, G. L., & Medin, D. L. (1985). The role of theories in conceptual coherence. Psychological Review, 92, 289–316.
Nerlich, B., & Clarke, D. D. (2003). Polysemy and flexibility: An introduction. In B. Nerlich, Z. Todd, V. Herman & D. D. Clarke (Eds.), Polysemy: Flexible patterns of meaning in mind and language (pp. 3–30). Berlin: Mouton de Gruyter.
Nerlich, B., Todd, Z., Herman, V., & Clarke, D. D. (2003). Polysemy: Flexible patterns of meaning in mind and language. Berlin: Mouton de Gruyter.
Nunberg, G. (1979). The non-uniqueness of semantic solutions: Polysemy. Linguistics and Philosophy, 3, 143–184.
Nunberg, G. (2004). The pragmatics of deferred interpretation. In L. R. Horn & G. Ward (Eds.), The handbook of pragmatics (pp. 344–364). Malden, MA: Blackwell.
Pavlenko, A., & Malt, B. C. (in press). Kitchen Russian: Cross-linguistic differences and first-language object naming by Russian-English bilinguals. Bilingualism: Language and Cognition.
Pennanen, E. V. (1980). On the function and behavior of stress in English noun compounds. English Studies, 61, 252–263.
Petroski, H. (1993). The evolution of useful things: How everyday artifacts—from forks and pins to paper clips and zippers—came to be as they are. New York: Alfred A. Knopf.
Petroski, H. (2007). The toothpick: Technology and culture. New York: Alfred A. Knopf.
Pinker, S. (1994). The language instinct. New York, NY: William Morrow & Co.
Plag, I., Kunter, G., & Lappe, S. (2007). Testing hypotheses about compound stress assignment in English: A corpus-based investigation. Corpus Linguistics and Linguistic Theory, 3(2), 199–232.
Plag, I., Kunter, G., Lappe, S., & Braun, M. (2008). The role of semantics, argument structure, and lexicalization in compound stress assignment in English. Language, 84, 760–794.
Putnam, H. (1977). Meaning and reference. In S. P. Schwartz (Ed.), Naming, necessity, and natural kinds. Ithaca, NY: Cornell University Press.
Quine, W. V. (1969). Natural kinds. In W. V. Quine (Ed.), Ontological relativity and other essays (pp. 114–138). New York: Columbia University Press.
Regehr, G., & Brooks, L. R. (1995). Category organization in free classification: The organizing effect of an array of stimuli. Journal of Experimental Psychology: Learning, Memory, and Cognition, 21, 347–363.
Rips, L. J. (1989). Similarity, typicality, and categorization. In S. Vosniadou & A. Ortony (Eds.), Similarity and analogical reasoning (pp. 21–59). Cambridge: Cambridge University Press.
Rosch, E., & Mervis, C. B. (1975). Family resemblances: Studies in internal structure of categories. Cognitive Psychology, 7, 573–605.
Shintel, H., & Keysar, B. (2009). Less is more: A minimalist account of joint action in communication. Topics in Cognitive Science, 1, 260–273.
Siegel, D. R., & Callanan, M. A. (2007). Artifacts as conventional objects. Journal of Cognition and Development, 8, 183–203.
Sloman, S. A., & Malt, B. C. (2003). Artifacts are not ascribed essences, nor are they treated as belonging to kinds. Language and Cognitive Processes [Special Issue: Conceptual Representation], 18, 563–582.
Smith, E. E., & Medin, D. L. (1981). Categories and concepts. Cambridge, MA: Harvard University Press.
Traugott, E. C., & Dasher, R. B. (2005). Regularity in semantic change. Cambridge: Cambridge University Press.
Whorf, B. L. (1956). Language, thought and reality: Selected writings of Benjamin Lee Whorf. Cambridge, MA: MIT Press.
Winawer, J., Witthoft, N., Frank, M. C., Wu, L., Wade, L. R., & Boroditsky, L. (2007). Russian blues reveal effects of language on color discrimination. Proceedings of the National Academy of Sciences, 104, 7780–7785.
Wisniewski, E. J. (1997). When concepts combine. Psychonomic Bulletin & Review, 4, 167–183.
Wittgenstein, L. (1991). Philosophical investigations: The German text, with a revised English translation (G. E. M. Anscombe, Trans.). London: Wiley-Blackwell.
Wolff, P., & Malt, B. C. (2010). The language-thought interface: An introduction. In B. C. Malt & P. Wolff (Eds.), Words and the mind: How words capture human experience. Oxford: Oxford University Press.
Wolff, P., & Ventura, T. (2009). When Russians learn English: How the semantics of causation may change. Bilingualism: Language and Cognition, 12, 153–176.
Xu, F., & Rhemtulla, M. (2005). In defense of psychological essentialism. In B. G. Bara, L. Barsalou & M. Bucciarelli (Eds.), Proceedings of the 27th Annual Conference of the Cognitive Science Society (pp. 2377–2380). Mahwah, NJ: Erlbaum.
CHAPTER TWO
Causal-Based Categorization: A Review

Bob Rehder

Contents

1. Introduction
2. Assessing Causal-Based Classification Effects
   2.1. Assessing Causal-Based Effects in Natural Versus Novel Categories
   2.2. Interpreting Classification Tests
   2.3. Terminology
3. Computational Models
   3.1. The Dependency Model
   3.2. The Generative Model
4. The Causal Status Effect
   4.1. Causal Link Strength
   4.2. Background Causes
   4.3. Unobserved "Essential" Features
   4.4. Number of Dependents
   4.5. Other Factors
   4.6. Theoretical Implications: Discussion
5. Relational Centrality and Multiple-Cause Effects
   5.1. Evidence Against a Relational Centrality Effect and for a Multiple-Cause Effect
   5.2. Evidence for an Isolated Feature Effect
   5.3. Theoretical Implications: Discussion
6. The Coherence Effect
   6.1. Causal Link Strength
   6.2. Background Causes
   6.3. Higher Order Effects
   6.4. Other Factors
   6.5. Theoretical Implications: Discussion
7. Classification as Explicit Causal Reasoning
   7.1. Classification as Diagnostic (Backward) Reasoning
   7.2. Classification as Prospective (Forward) Reasoning
   7.3. Theoretical Implications: Discussion
Psychology of Learning and Motivation, Volume 52 ISSN 0079-7421, DOI: 10.1016/S0079-7421(10)52002-4
© 2010 Elsevier Inc. All rights reserved.
8. Developmental Studies
   8.1. Feature Weights and Interactions
   8.2. Explicit Causal Reasoning in Children's Categorization
9. Summary and Future Directions
   9.1. Alternative Causal Structures and Uncertain Causal Models
   9.2. Categories' Hidden Causal Structure
   9.3. Causal Reasoning and the Boundaries of Causal Models
   9.4. Additional Tests with Natural Categories
   9.5. Processing Issues
   9.6. Integrating Causal and Empirical/Statistical Information
   9.7. Developmental Questions
   9.8. Additional Dependent Variables
   9.9. Closing Words
References
Abstract

This chapter reviews the last decade's work on causal-based classification, the effect of interfeature causal relations on how objects are categorized. Evidence for and against the numerous effects discussed in the literature is evaluated: the causal status effect, the relational centrality effect, the multiple-cause effect, and the coherence effect. Evidence for explicit causal reasoning in classification and the work conducted on children's causal-based classification is also presented. The chapter evaluates the implications these findings have for two models of causal-based classification—the dependency model [Sloman, S. A., Love, B. C., & Ahn, W. (1998). Feature centrality and conceptual coherence. Cognitive Science, 22, 189–228] and the generative model [Rehder, B., & Kim, S. (2006). How causal knowledge affects classification: A generative theory of categorization. Journal of Experimental Psychology: Learning, Memory, and Cognition, 32, 659–683]—and discusses methodological issues such as the testing of natural versus novel (artificial) categories and the interpretation of classification tests. Directions for future research are identified.
1. Introduction

Since the beginning of investigations into the mind/brain, philosophers and psychologists have asked how people learn categories and classify objects. Interest in this question should be unsurprising given that categories are a central means by which old experiences guide our responses to new ones. Regardless of whether it is a new event or temporally extended object, a political development or a new social group, a new biological species or type of widget on a computer screen, every stimulus we experience is novel
in at least some regards, and so the types into which they are grouped become the repositories of new knowledge. Thus I learn that credit default swaps are risky, that elections have consequences, and that the funny symbol on my cell phone means I have voice mail.

That the stimuli we classify span such a wide range means that the act of categorization is surprisingly complex and varied. Some categories seem to have a (relatively) simple structure. As a child I learn to identify some parts of my toys as wheels and some letters in words as "t." Accordingly, much of the field has devoted itself to the categorization of stimuli with a small number of perceptual dimensions, testing in the lab subjects' ability to learn to classify stimuli such as Gabor patches or rectangles that vary in height and width (see Ashby & Maddox, 2005, for a review). This research has shown that the learning of even these supposedly simple categories can be quite involved, as subtle differences in learning procedures and materials result in large differences in what sorts of representations are formed, how attention is allocated to different stimulus dimensions, how feedback is processed, and which brain regions underlie learning.

Other categories, in contrast, have an internal structure that is much more integrated with other sorts of knowledge. For example, as compared to wheels or "t"s, a notion such as elections is related to many other concepts: that people organize themselves into large groups known as countries, that countries are led by governments, that in democratic countries governments are chosen by people voting, and so on.
Ever since this point was made in Murphy and Medin's (1985) seminal article, a substantial literature has emerged documenting how the knowledge structures in which categories are embedded have a large effect on how categories are learned, how objects are classified, how new properties are generalized to a category, and how missing features in an object are predicted on the basis of its category membership (see Murphy, 2002, for a review).

Because of its ubiquity in our conceptual structures (Ahn, Marsh, Luhmann, & Lee, 2002), one particular type of knowledge—the causal relations that obtain between features of categories—has received special attention. For example, even if you know little about cars, you probably have at least some vague notion not only that cars have gasoline, spark plugs, radiators, and fans and emit carbon monoxide but also that these features causally interact—that the spark plugs are somehow involved in the burning of the gasoline, that burning produces carbon monoxide and heat, and that the radiator and fan somehow work to dissipate the latter. Here I focus on the rich database of empirical results showing how this sort of knowledge affects the key category-based judgment, namely, classification itself.

This chapter has the following structure. Section 2 addresses important methodological issues regarding the measurement of various effects of causal knowledge on categorization. Section 3 presents models that have been proposed to account for the key empirical phenomena, and those phenomena
are then described in Sections 4–7. I close with a discussion of developmental issues (Section 8) and directions for future research (Section 9).
2. Assessing Causal-Based Classification Effects

The central question addressed by this literature is: What evidence do an object's features provide for membership in a category as a function of the category's network of interfeature causal relations? This section discusses two issues regarding the measurement of such effects, namely, the testing of natural versus novel categories and the type (and interpretation) of the classification tests administered.
2.1. Assessing Causal-Based Effects in Natural Versus Novel Categories

Studies have assessed causal-based effects on classification for both natural (real-world) categories and novel ones (made-up categories that are taught to subjects as part of the experimental session). When natural categories are tested, researchers first assess the theories that subjects hold for these categories and then test how those theories affect classification. For example, one common method is the theory-drawing task, in which subjects are presented with a category's features and asked to draw the causal relations between those features and to estimate the strengths of those relations. Using this method, Sloman, Love, and Ahn (1998) measured theories for everyday objects (e.g., apples and guitars); Kim and Ahn (2002a,b) and Ahn, Levin, and Marsh (2005) did so for psychiatric disorders such as depression and schizophrenia.

In contrast, for novel categories subjects are explicitly instructed on interfeature causal links. For example, in a seminal study by Ahn, Kim, Lassaline, and Dennis (2000a), participants were instructed on a novel type of bird named roobans with three features: eats fruit (X), has sticky feet (Y), and builds nests on trees (Z). In addition, participants were told that the features were related in a causal chain (Figure 1) in which X causes Y ("Eating fruit tends to cause roobans to have sticky feet because sugar in fruits is secreted through pores under their feet.") and Y causes Z ("Sticky feet tends to allow roobans to build nests on trees because they can climb up the trees easily with sticky feet."). Similarly, Rehder and Hastie (2001) instructed subjects on a novel type of star named myastars and how some features of myastars
Figure 1  A three-element causal chain (X → Y → Z).
Causal-Based Categorization: A Review
(e.g., high density) caused others (e.g., a large number of planets). Studies usually also provide some detail regarding the causal mechanism by which one feature produces another. There are a number of advantages to testing novel rather than natural categories. One is that novel categories provide greater control over the causal relations that are used. For example, when classifying into natural categories, it is possible that limited cognitive resources (e.g., working memory) prevent subjects from using the dozens of causal relations they usually identify in a theory-drawing task. In contrast, experiments using novel categories usually teach subjects 2–4 causal links, and the experimental context itself makes it clear that those causal links are the relevant ones (especially so when the causal links are presented on the computer screen as part of the classification test). Of course, this does not rule out the use of additional causal links that subjects might assume are associated with particular ontological kinds by default (see Section 4.3 for one possibility in this regard). Another advantage of novel categories is that they can control for the numerous other factors besides causal knowledge that are known to influence category membership. For example, features that are more salient will have greater influence than less salient ones (e.g., Lamberts, 1995, 1998). And feature importance is influenced by what I call empirical–statistical information, that is, how often features or exemplars are observed to occur in category members and nonmembers (Rosch & Mervis, 1975). Patterns of features observed within category members (e.g., a feature's category validity, the probability that it occurs given the category) may be especially problematic, because this information is likely to covary with features' causal roles.
For example, a feature with many causes is likely to appear in more category members than one with few causes; two features that are causally related are also likely to be correlated in observed category members. Thus, any purported effect of causal knowledge on classification in natural categories might be due to the statistical patterns of features that causal links generate (and classifiers then observe) rather than the links themselves.[1] In contrast, when novel categories are used, counterbalancing the assignment of features to causal role or use of multiple novel
[1] Moreover, there is evidence suggesting that causal knowledge and within-category empirical–statistical information are conflated in people's mental representation of natural categories. For example, Sloman et al. (1998) conducted a factor analysis showing that category features vary along three dimensions. The first two were identified as perceptual salience (assessed with questions like "How prominent in your conception of apples is that it grows on trees?") and diagnosticity (or cue validity, assessed with questions like "Of all things that grow on trees, what percentage are apples?"). Measures loading on a third factor included both category validity (i.e., "What percentage of apples grow on trees?") and those related to a construct they labeled conceptual centrality or mutability (assessed with questions like "How good an example of an apple would you consider an apple that does not ever grow on trees?"). Category validity and centrality were also highly correlated in a study testing a novel category that was designed to dissociate the two measures (Sloman et al., Study 5). Conceptual centrality corresponds to one of the questions addressed in this chapter, namely, the evidence that an individual feature provides for a particular category. Thus, it may be difficult to separate the effects of causal knowledge and observed category members on classification into natural categories.
Bob Rehder
categories averages over effects of feature salience and contrast categories. And the fact that subjects have not seen examples of these made-up categories eliminates effects of empirical–statistical information. For these reasons, this chapter focuses on studies testing novel experimental categories. Of course, this is not to say that studies testing natural categories have not furthered our understanding of causal-based classification in critical ways; such studies have reported numerous interesting and important findings (e.g., Ahn, 1998; Ahn, Flanagan, Marsh, & Sanislow, 2006; Kim & Ahn, 2002a,b; Sloman et al., 1998). As always, research is advanced most rapidly by an interplay between studies testing natural materials (which afford ecological validity) and novel ones (which afford experimental control). However, when rigorous testing of computational models is the goal (as it is here), the more tightly controlled studies are to be emphasized.
2.2. Interpreting Classification Tests

After subjects learn a novel category, they are presented with a series of objects and asked to render a category membership judgment. The question of how causal knowledge affects classification can be divided into two subquestions. The first is how causal knowledge affects the influence of individual features on classification. The second concerns how certain combinations of features make for better category members. In categorization research there is precedent for considering these two different types of effects. For example, Medin and Schaffer (1978) distinguished independent cue models, in which each feature provides an independent source of evidence for category membership (prototype models are an example), from interactive cue models, in which a feature's influence depends on what other features are present (exemplar models are an example). Of course, whereas most existing categorization models are concerned with how features directly observed in category members influence (independently or interactively) subsequent classification decisions, the current chapter is concerned with how classification is affected by interfeature causal relations. I will refer to one method for assessing the importance of individual features as the missing feature method. As mentioned, in the study by Ahn et al. (2000a), participants were instructed on novel categories (e.g., roobans) with features related in a causal chain (X → Y → Z). Participants were then presented with three items missing exactly one feature (one missing only X, one missing only Y, one missing only Z) and asked to rate how likely it was that each item was a category member. Differences among the ratings of these items were interpreted as indicating how the relative importance of features varies as a function of their causal role.
For example, that the missing-X item was rated a worse category member than the missing-Y item was taken to mean that X
was more important than Y for establishing category membership. (The result that features are more important than those they cause is referred to as the causal status effect and will be discussed in detail in Section 4.) The missing feature method has been used in numerous other studies (Ahn, 1998; Kim & Ahn, 2002a,b; Kim, Luhmann, Pierce, & Ryan, 2009; Luhmann, Ahn, & Palmeri, 2006; Sloman et al., 1998). A different method for assessing feature weights was used by Rehder and Hastie (2001). They also instructed subjects on novel categories (e.g., myastars), with four features that were causally related in various topologies. However, rather than presenting only test items missing one feature, Rehder and Hastie presented all 16 items that can be formed on four binary dimensions. Linear regression analyses were then performed on those ratings, with one predictor for each feature coding whether that feature was present or absent in a test item. The regression weight on each predictor was interpreted as the importance of that feature. Importantly, the regression equation also included two-way and higher order interaction terms to allow an assessment of how important certain combinations of features are for forming good category members. For example, a predictor representing the two-way interaction between features X and Y encodes whether features X and Y are both present or both absent versus one present and the other absent; the resulting regression weight on that predictor represents the importance, to participants' categorization ratings, of dimensions X and Y having the same value (present or absent) in a test item. In fact, Rehder and Hastie found that subjects exhibited sensitivity to two-way and higher order feature interactions, producing, for example, higher ratings when cause and effect features were both present or both absent and lower ratings when one was present and the other absent.
(This phenomenon, known as the coherence effect, will be discussed in detail in Section 6.) The regression method has been used in numerous studies (Rehder, 2003a,b, 2007; Rehder & Kim, 2006, 2009b). Which of these methods for assessing causal-based classification effects should be preferred? One obvious advantage of the regression method is that it, unlike the missing feature method, provides a measure of feature interactions, an important consideration given the presence of the large coherence effects described later. In addition, there are several reasons to prefer the regression method for assessing feature weights, which I now present in ascending order of importance. The first advantage is that regression is a generalization of a statistical analysis method that is already very familiar to empirical researchers, namely, analysis of variance (ANOVA) (Judd, McClelland, & Culhane, 1995). For example, imagine an experiment in which subjects are taught a category with three features X, Y, and Z and then rate the category membership of the eight distinct test items that can be formed on three binary dimensions. This experiment can be construed as a 2 × 2 × 2 within-subjects design in
which the three factors are whether the feature is present or absent on dimension X, on dimension Y, and on dimension Z. The question of whether the regression weight on, say, feature X is significantly different from zero is identical to asking whether there is a "main effect" of dimension X. The question of whether the two-way interaction weight between X and Y is different from zero is identical to asking whether there is an interaction between dimensions X and Y. That is, one can ask whether the "main effect" of feature X is "moderated" by the presence or absence of feature Y (as one might expect if X and Y are causally related). The second reason to prefer regression is that it provides a more powerful method of statistical analysis. The regression weight on, say, dimension X amounts to a difference score between the ratings of the test items that have feature X and those that do not.[2] Supposing again that the category has three features, the weight on X is the difference between the ratings on test items 111, 110, 101, and 100 versus 011, 010, 001, and 000 (where "1" means that a feature is present and "0" that it is absent; e.g., 101 means that X and Z are present and Y absent). As a consequence, use of regression means that an assessment of X's weight involves all eight test items, whereas the missing feature method bases it solely on the rating of one item (namely, 011). By averaging over the noise associated with multiple measures, the regression method produces a more statistically powerful assessment of the importance of features. Third, and most importantly, the missing feature method produces, as compared to regression, a potentially different and (as I shall show) incorrect assessment of a feature's importance.
This is the case because any single test item manifests both the independent and interactive effects of its features; in particular, the rating of a test item missing a single feature is not a pure measure of that feature's weight (i.e., it does not correspond to the "main effect" associated with the feature). For example, suppose that a category has three features in which X causes both Y and Z (i.e., X, Y, and Z form a common-cause network) and that subjects are asked to rate test items on a 1–10 scale. Assume that subjects produce a baseline classification rating of 5 and that ratings are 1 point higher for each feature present in a test item and 1 point lower for each feature that is absent (and unchanged if the presence or absence of the feature is unknown). That is, features X, Y, and Z are all weighed equally. In addition, assume there exist interactive effects such that the rating goes 1 point higher whenever cause and effect features are both present or both absent and 1 point lower whenever one is present and the other absent. The classification ratings for this hypothetical experiment are presented in Example 1 in Table 1 for the eight test items that
[2] Indeed, when the regression predictor for a feature is coded as +1 when the feature is present and −1 when it is absent, and all other predictors are orthogonal (which occurs when all possible test items are presented), the resulting regression weight is exactly half this difference. A concrete example of a regression equation follows.
Table 1  Hypothetical Classification Ratings for Four Example Categories with Features X, Y, and Z.

                              Common cause           Chain
                              (Y ← X → Z)            (X → Y → Z)
                              Example 1  Example 2   Example 3  Example 4
Parameters
  Weight (X)                      1         1            1         1
  Weight (Y)                      1         0.5          1         0.5
  Weight (Z)                      1         0.5          1         0.5
  Weight on interactions          1         1            1         1
Test items
  111                            10         9           10         9
  011 (missing only X)            4         3            6         5
  101 (missing only Y)            6         6            4         4
  110 (missing only Z)            6         6            6         6
  100 (missing all but X)         2         3            4         5
  010 (missing all but Y)         4         4            2         2
  001 (missing all but Z)         4         4            4         4
  000                             4         5            4         5
  1xx                             6         6            6         6
  x1x                             6         5.5          6         5.5
  xx1                             6         5.5          6         5.5

Note: Examples 1 and 2 assume features are related in a common-cause structure; Examples 3 and 4 assume they form a causal chain. Examples 1 and 3 assume all features have equal classification weights; Examples 2 and 4 assume X > Y = Z. 1 = feature present; 0 = feature absent; x = feature state unknown.
can be formed on the three dimensions and the three test items that have one feature present and two unknown (again, "1" = feature present and "0" = absent; "x" means the state of the feature is unknown). For instance, test item 110 (X and Y present, Z absent) has a rating of 6 because, as compared to a baseline rating of 5, it gains two points due to the presence of X and Y, loses one because of the absence of Z, gains one because the causally related X and Y are both present, and loses one because X is present but its effect Z is absent. It is informative to compare the different conclusions reached by the missing feature method and linear regression regarding feature weights in this example. Importantly, the rating of 4 received by the item missing only feature X (011) is lower than the rating of 6 given to the items missing only Y (101) or Z (110). This result seems to imply (according to the missing feature method) that X is more important than Y and Z. However, this conclusion is at odds with the conditions that were stipulated in the example,
namely, that all three features were weighed equally. In fact, item 011 is rated lower not because feature X is more important, but rather because it includes two violations of causal relations (X is absent even though both of its effects are present) whereas 101 and 110 have only one (in each, the cause X is present and one effect is absent). This example demonstrates how the missing feature method can mischaracterize an interactive effect of features as a main effect of a single feature. In contrast, a regression analysis applied to the eight test items in Example 1 correctly recovers the fact that all features are weighed equally and, moreover, the interactive effects between the causally related features X and Y, and X and Z. Specifically, for Example 1 the regression equation would be

rating_i = b0 + bX fX + bY fY + bZ fZ + bXY fXY + bXZ fXZ + bYZ fYZ + bXYZ fXYZ

where rating_i is the rating for test item i, fj = +1 when feature j is present in test item i and −1 when it is absent, fjk = fj × fk, and fXYZ = fX × fY × fZ. This regression analysis yields b0 = 5, bX = bY = bZ = 1, bXY = bXZ = 1, and bYZ = bXYZ = 0. These b weights are, of course, just those that were stipulated in the example.

It is important to recognize that the alternative conclusions reached by the two methods are not merely two different but equally plausible senses of what we mean by "feature weights" in classification. The critical test of whether the weight assigned to a feature is correct is whether it generalizes sensibly to other potential test items—the value of knowing the importance of individual features is that it allows one to estimate the category membership of any potential item, not just those presented on a classification test. For example, if for Example 1 you were to conclude (on the basis of the missing feature method) that X is the most important feature, you would predict that the item with only feature X (100) should be rated higher than those with only Y (010) or only Z (001).
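The arithmetic behind Example 1, and the recovery of its stipulated weights by regression, can be checked in a few lines. The following is a minimal Python sketch, not code from any of the original studies; the rating rule is the one stipulated above, all function and variable names are my own, and the hand-rolled least-squares step exploits the fact that the +1/−1-coded predictors are orthogonal when all eight items are rated:

```python
# A minimal sketch of the hypothetical Example 1 analysis: a
# common-cause category in which X causes both Y and Z. The rating
# rule follows the example's stipulations; all names here are my own.
import itertools

FEATURES = ["X", "Y", "Z"]
LINKS = [("X", "Y"), ("X", "Z")]  # the common-cause network

def rating(item):
    """Baseline 5; +1/-1 for each present/absent feature; +1 when a
    causal link's cause and effect match in state, -1 when they differ."""
    r = 5
    r += sum(1 if item[f] else -1 for f in FEATURES)
    for cause, effect in LINKS:
        r += 1 if item[cause] == item[effect] else -1
    return r

def predictors(item):
    """+1/-1 coding for each feature plus all interaction terms."""
    f = {k: (1 if v else -1) for k, v in item.items()}
    return [1, f["X"], f["Y"], f["Z"], f["X"] * f["Y"],
            f["X"] * f["Z"], f["Y"] * f["Z"], f["X"] * f["Y"] * f["Z"]]

# Ratings for all eight test items on three binary dimensions
items = [dict(zip(FEATURES, bits))
         for bits in itertools.product([1, 0], repeat=3)]
ratings = [rating(it) for it in items]

# With all 2^3 items and +1/-1 coding, the predictors are orthogonal,
# so each least-squares weight is simply dot(predictor, ratings) / 8.
columns = zip(*(predictors(it) for it in items))
weights = [sum(p * r for p, r in zip(col, ratings)) / len(items)
           for col in columns]
# weights = [b0, bX, bY, bZ, bXY, bXZ, bYZ, bXYZ]
#         = [5.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0]
```

Although rating() assigns the missing-X item (011) a 4 and the missing-Y and missing-Z items (101 and 110) a 6, the recovered weights for X, Y, and Z are identical; this is precisely the dissociation between the two methods described above.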
And, for items that have only one known feature, you would predict that 1xx should be rated higher than x1x or xx1. Table 1 reveals that these predictions would be incorrect, however: Item 100 is rated lower than 010 and 001 (because 100 violates two causal links whereas the others violate only one) and 1xx, x1x, and xx1 all have the same rating (6). This example illustrates how the missing feature method can yield a measure of feature importance that fails to generalize to other items. These conclusions do not depend on all features being equally weighed as they are in Example 1. Example 2 in Table 1 differs from Example 1 in that the weights on features Y and Z have been reduced to 0.5 (so that X now is the most heavily weighed feature). The missing feature method again assigns (correctly, in this case) a greater weight to feature X (because 011’s
rating of 3 is lower than the 6 given to 101 or 110). But whereas it then predicts that item 100 should be rated higher than 010 or 001, in fact that item is rated lower (3 vs. 4) because the two violations of causal relations in item 100 outweigh the presence of the more heavily weighed X. Nor are these sorts of effects limited to a common-cause network. In Examples 3 and 4, X, Y, and Z are arranged in a causal chain. Even though features are weighed either equally (Example 3) or X > Y = Z (Example 4), in both examples the item missing feature Y (101) is rated lower than 011 or 110, and thus the missing feature method would incorrectly identify Y as the most important feature. Together, Examples 1–4 demonstrate how the missing feature method can systematically mischaracterize the true effect of individual features on classification. In contrast, a regression analysis recovers the correct feature weights and the interactive effects in all four examples.

The three advantages associated with the regression method mean that it is a superior method for assessing the effect of causal knowledge on classification. Are there any potential drawbacks to use of this method? Three issues are worth mentioning. First, to allow an assessment of both feature weights and interactions, the regression method requires that subjects rate a larger number of test items, and thus it is important to consider what negative impact this might have. A longer test increases the probability that fatigue might set in and, in designs in which subjects must remember the causal links on which they are instructed, that subjects might start to forget those links. To test whether this in fact occurs, I reanalyzed data from a study reported by Rehder and Kim (2006) in which subjects were taught categories with five features and up to four causal links and then asked to rate a large number of test items, namely, 32.
Even though subjects had to remember the causal links (because they were not displayed during the classification test), they exhibited significant effects of causal knowledge on both feature weights and feature interactions, indicating that they made use of that knowledge. Moreover, the reanalysis revealed that the magnitude of these effects for the last 16 test items was the same as for the first 16. Thus, although fatigue and memory loss associated with a larger number of test items are valid concerns, at present there is no empirical evidence that they occur in the sort of studies reviewed here. Of course, fatigue and memory loss may become problems if an even larger number of causal links and test items than in Rehder and Kim (2006) are used.

Another potential consequence of the number of items presented on a classification test is that, because it is well known that an item's rating will change depending on the presence of other items (Poulton, 1989), the missing feature and regression methods may yield different ratings for exactly the same item. For instance, in Example 1 the ratings difference between the missing-X (011) and the missing-Y and -Z items (101 and 110) is likely to
be larger when those are the only items being rated (the missing feature method) as compared to when both very likely (111) and unlikely (100) category members are included (the regression method). This is so because the absence of these latter items will result in the response scale expanding as subjects attempt to make full use of the scale; their presence will result in the scale contracting because 111 and 100 "anchor" the ends of the scale. (An example of this sort of scale contraction and expansion is presented in Section 4.5.1.) For this reason, it is ill-advised to compare test item ratings across conditions and studies that differ in the number of different types of test items presented.

A third potential issue is whether the classification rating scale used in these experiments can be interpreted as an interval scale. As mentioned, the weights produced by the regression method are a series of difference scores. Because in general those differences involve different parts of the response scale, comparing different weights requires the assumption that the scale is being used uniformly. But of course issues such as the contraction that occurs at the ends of scales are well known.[3] Transformations (e.g., arc-sine) are available, of course; in addition, more recent studies in my lab have begun to use forced-choice judgments (and logistic regression) to avoid these issues (e.g., see Sections 7 and 8).

In summary, regression is superior to the missing feature method because it (a) assesses feature interactions, (b) is closely related to ANOVA, (c) yields greater statistical power, and (d) yields feature weights that generalize correctly to other items. At the same time, care concerning the number of test items and issues of scale usage must be exercised; other sorts of tests (e.g., forced-choice) might be appropriate in some circumstances.
Of course, although assessing "effects" properly is important, the central goal of research is not merely to catalog effects but also to propose theoretical explanations of those effects. Still, like other fields, this one rests on empirical studies that describe how experimental manipulations influence the presence and size of various effects, and false claims and controversies can arise in the absence of a sound method for assessing those effects. These
[3] For example, it is reasonable to ask whether the assumption of linearity that is part of linear regression is appropriate for a classification rating task, given research suggesting that the evidence that features provide for category membership combines multiplicatively rather than additively (Minda & Smith, 2002). Thus, either a logarithmic transformation of the classification ratings or an alternative method of analysis that assumes a multiplicative rule might be appropriate. For example, Rehder (2003b) analyzed classification rating data (of all possible test items that could be formed on four binary dimensions) by normalizing the ratings so they summed to 1 and then treating the results as representing a probability distribution. From this distribution, I derived the "probability" of each feature and the "probabilistic contrast" for each pair of features, measures that are analogous to the feature weights and two-way interactions derived from linear regression. Because it is based on probabilities, this method implicitly incorporates the assumption that evidence combines multiplicatively and thus may yield a more accurate measure of "feature weights." On the other hand, it requires the strong assumption that ratings map one-to-one onto probability estimates, an assumption that may have its own problems. In practice, this probabilistic method of analysis and linear regression have always yielded the same qualitative conclusions.
concerns are not merely hypothetical. Section 8 describes experimental results demonstrating that previous conclusions reached on the basis of a study using the missing feature method were likely due to an interactive effect of features rather than a main effect of feature weights.
2.3. Terminology

A final note on terminology is in order. Sloman et al. (1998) used the terms mutability and conceptual centrality, but these refer to properties of individual features. However, I have noted how causal knowledge may also affect how combinations of features influence classification decisions. Accordingly, I will simply use the term classification weight to refer to the weight that features, and certain combinations of features, have for membership in a particular category.
3. Computational Models

I now present two computational models that have been offered as accounts of the effects of causal knowledge on categorization. Both models specify a rule that assigns to an object a measure of its membership in a category on the basis of that category's network of interfeature causal relations. Nothing is assumed about the nature of those causal links other than their strength. Neither model denies the existence of other influences on classification, such as the presence of contrast categories, the salience of particular features, or the empirical–statistical information that people observe firsthand. Rather, the claim is that causal relations will have the predicted effects when these factors are controlled.
3.1. The Dependency Model

One model is Sloman et al.'s (1998) dependency model. The dependency model is based on the intuition that features are more important to category membership (i.e., are more conceptually central) to the extent that they have more dependents, that is, features that depend on them (directly or indirectly). A causal relation is an example of a dependency relation in which the effect depends on its cause. For example, DNA is more important than the color of an animal's fur because so much depends on DNA; hormones are more important than the size of its eyes for the same reason. According to the dependency model, feature i's weight or centrality, ci, can be computed from the iterative equation:
ci,t+1 = Σj dij cj,t        (1)
where ci,t is i's weight at iteration t and dij is the strength of the causal link between i and its dependent j. For example, if a category has three features X, Y, and Z, and X causes Y which causes Z (as in Figure 1), then when cZ,1 is initialized to 1 and each causal link has a strength of 2, after two iterations the centralities for X, Y, and Z are 4, 2, and 1. That is, feature X is more important to category membership than Y, which in turn is more important than Z. Stated qualitatively, the dependency model predicts a difference in feature weights because X, Y, and Z vary in the number of dependents they have: X has two (Y and Z), Y has one (Z), and Z has none. Table 2 also presents how the feature weights predicted by the dependency model vary as a function of the causal strength parameters (the ds). These predictions have been tested in experiments described in Section 4.

Although the dependency model was successfully applied to natural categories in Sloman et al., its original formulation makes it technically inapplicable to many causal networks of theoretical interest. However, Kim et al. (2008) have recently proposed new variants of the dependency model that address these issues, allowing it to be applied to any network topology. Still, these variants inherit the same qualitative properties as their predecessor, namely, that features grow in importance as a function of their number of dependents and the strengths of the causal links with those dependents.[4]

Note that while the dependency model and its variants specify how feature weights vary as a function of the causal network, they make no predictions regarding how feature combinations make for better or worse category members (i.e., they predict the absence of interactive effects). This is one important property distinguishing the dependency model from the next model.

Table 2  Feature "Centralities" Predicted by the Dependency Model After Two Iterations for Features X, Y, and Z in the Chain Network of Figure 1 for Different Values of the Causal Strength Parameters dXY and dYZ.

Model parameters
  dXY, dYZ          1    2    3    4
  cZ,1              1    1    1    1
Feature centralities
  cX,3              1    4    9   16
  cY,3              1    2    3    4
  cZ,3              1    1    1    1

[4] For example, one of the variants computes what is known as alpha centralities (Bonacich & Lloyd, 2001). When the ds = 2, alpha centralities for the chain network are 3, 2, and 1 for features X, Y, and Z, respectively, whereas they are 4.75, 2.50, and 1 when the ds = 3.
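To make Eq. (1) concrete, here is a minimal Python sketch of the iterative update for the chain network of Figure 1. The names are my own, and I assume (as the Table 2 values imply) that a feature with no dependents simply retains its current centrality across iterations:

```python
def dependency_centralities(strengths, init, iterations=2):
    """Iterate Eq. (1): c_i,t+1 = sum over dependents j of d_ij * c_j,t.

    strengths maps (cause, dependent) pairs to d_ij; init gives the
    starting centralities. Features with no dependents keep their
    current centrality, an assumption needed to reproduce Table 2.
    """
    c = dict(init)
    for _ in range(iterations):
        new = {}
        for i in c:
            deps = [(j, d) for (cause, j), d in strengths.items() if cause == i]
            new[i] = sum(d * c[j] for j, d in deps) if deps else c[i]
        c = new
    return c

# Chain X -> Y -> Z with both link strengths d = 2 (Table 2, second column)
chain = {("X", "Y"): 2.0, ("Y", "Z"): 2.0}
print(dependency_centralities(chain, {"X": 1.0, "Y": 1.0, "Z": 1.0}))
# -> {'X': 4.0, 'Y': 2.0, 'Z': 1.0}
```

With both strengths set to 3, the same code yields centralities 9, 3, and 1, matching Table 2's third column: the cause's advantage over its effects grows with link strength.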
3.2. The Generative Model

The second model is the generative model (Rehder, 2003a,b; Rehder & Kim, 2006). Building on causal-model theory (Sloman, 2005; Waldmann & Holyoak, 1992), the generative model assumes that interfeature causal relations are represented as probabilistic causal mechanisms and that classifiers consider whether an object is likely to have been produced or generated by those mechanisms. Objects that are likely to have been generated by a category's causal model are considered good category members; those unlikely to be generated are poor category members. Quantitative predictions for the generative model can be generated assuming a particular representation of causal relations first introduced by Cheng (1997) and later applied to a variety of category-based tasks (Rehder, 2003a,b, 2009a; Rehder & Burnett, 2005; Rehder & Hastie, 2001; Rehder & Kim, 2006, 2009a,b). Assume that category k's causal mechanism relating feature j and its parent i operates (i.e., produces j) with probability mij when i is present, and that any other potential background causes of j collectively operate with probability bj. Given other reasonable assumptions (e.g., the independence of causal mechanisms; see Cheng & Novick, 2005), j's parents and the background causes form a "fuzzy-or" network that together produce j in members of category k, conditional on the state of j's parents, with probability

pk(j | parents(j)) = 1 − (1 − bj) ∏_{i ∈ parents(j)} (1 − mij)^ind(i)        (2)
where ind(i) is an indicator variable that evaluates to 1 when i is present and 0 otherwise. The probability of a root cause r is a free parameter cr. For example, for the simple chain network in Figure 1, in which nodes have at most one parent, the probability of j when its parent i is present is

pk(j | i) = 1 − (1 − bj)(1 − mij) = mij + bj − mij bj        (3)
That is, the probability of j is the probability that it is brought about by its parent or by its background causes. When i is absent, the causal mechanism mij has no effect on j and thus the probability of j is simply pk ð jji Þ ¼ bj
ð4Þ
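The fuzzy-or likelihood in Eqs. (2)–(4) can be sketched in a few lines of Python (an illustration, not from the chapter; the function and variable names are mine):

```python
def p_feature_given_parents(parent_states, strengths, b):
    """Eq. (2): P(j present | parents) under a fuzzy-or (noisy-OR) mechanism.

    parent_states: list of booleans, i.e., ind(i) for each parent i
    strengths:     list of mechanism strengths m_ij, one per parent
    b:             strength b_j of j's collective background causes
    """
    p_all_fail = 1.0 - b                   # background causes fail to produce j
    for present, m in zip(parent_states, strengths):
        if present:                        # (1 - m)^ind(i): only present parents count
            p_all_fail *= 1.0 - m
    return 1.0 - p_all_fail

# Single-parent chain case: Eq. (3) when i is present, Eq. (4) when it is absent
p_given_i = p_feature_given_parents([True], [0.75], 0.10)       # m + b - m*b = 0.775
p_given_not_i = p_feature_given_parents([False], [0.75], 0.10)  # b = 0.10
```

With two parents the present parents' failure probabilities simply multiply, which is what makes the network "fuzzy-or": j occurs unless every operative cause fails.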
By applying Eq. (2) iteratively, one can derive the equations representing the likelihood of any possible combination of the presence or absence of features in any causal network. For example, Table 3 presents the likelihood
Table 3
Predictions of the Generative Model for the Chain Network X → Y → Z in Figure 1 for Different Parameter Values. Columns 1–4 vary the ms with the bs held constant; columns 5–7 vary the bs with the ms held constant.

Model parameters
  c_X:         0.750  0.750  0.750  0.750 | 0.750  0.750  0.750
  m_XY, m_YZ:  0.333  0.750  0.900  1.0   | 0.750  0.750  0.750
  b_Y, b_Z:    0.100  0.100  0.100  0.100 | 0      0.250  0.500

Item likelihoods
  p_k(X Y Z) = c_X (m_XY + b_Y − m_XY b_Y)(m_YZ + b_Z − m_YZ b_Z):
               0.120  0.450  0.621  0.750 | 0.422  0.495  0.574
  p_k(¬X Y Z) = (1 − c_X) b_Y (m_YZ + b_Z − m_YZ b_Z):
               0.010  0.019  0.023  0.025 | 0      0.051  0.109
  p_k(X ¬Y Z) = c_X [1 − (m_XY + b_Y − m_XY b_Y)] b_Z:
               0.045  0.017  0.007  0     | 0      0.035  0.047
  p_k(X Y ¬Z) = c_X (m_XY + b_Y − m_XY b_Y)[1 − (m_YZ + b_Z − m_YZ b_Z)]:
               0.180  0.131  0.061  0     | 0.141  0.114  0.082
  p_k(X ¬Y ¬Z) = c_X [1 − (m_XY + b_Y − m_XY b_Y)](1 − b_Z):
               0.405  0.152  0.061  0     | 0.188  0.105  0.047
  p_k(¬X Y ¬Z) = (1 − c_X) b_Y [1 − (m_YZ + b_Z − m_YZ b_Z)]:
               0.015  0.006  0.002  0     | 0      0.012  0.016
  p_k(¬X ¬Y Z) = (1 − c_X)(1 − b_Y) b_Z:
               0.023  0.023  0.023  0.023 | 0      0.047  0.063
  p_k(¬X ¬Y ¬Z) = (1 − c_X)(1 − b_Y)(1 − b_Z):
               0.203  0.203  0.203  0.203 | 0.250  0.141  0.063

Feature probabilities
  p_k(X):      0.750  0.750  0.750  0.750  | 0.750  0.750  0.750
  p_k(Y):      0.325  0.606  0.708  0.775  | 0.563  0.672  0.781
  p_k(Z):      0.198  0.509  0.673  0.798  | 0.422  0.628  0.793
  p_k(X) − p_k(Z) [causal status effect]:
               0.553  0.241  0.077  −0.048 | 0.328  0.122  −0.043

Interfeature contrasts
  Δp_k(X, Y)^a [Direct]:    0.300  0.675  0.810  0.900 | 0.750  0.563  0.375
  Δp_k(Y, Z)^a [Direct]:    0.300  0.675  0.810  0.900 | 0.750  0.563  0.375
  Δp_k(X, Z)^a [Indirect]:  0.090  0.456  0.656  0.810 | 0.563  0.316  0.141
  Direct − Indirect:        0.210  0.219  0.154  0.090 | 0.188  0.246  0.234
  Δp_k(Z, X, Y)^b:          0      0      0      0     | 0      0      0
  Δp_k(X, Y, Z)^b:          0      0      0      0     | 0      0      0

c_X, the probability that feature X appears in category members; m_ij, strength of the causal relation between i and j; b_j, strength of j's background causes. Direct, contrasts between features that are directly causally related (X and Y, and Y and Z); Indirect, contrasts between features that are indirectly related (X and Z).
^a Δp_k(i, j) = p_k(j | i) − p_k(j | ¬i).
^b Δp_k(h, i, j) = [p_k(j | i, h) − p_k(j | ¬i, h)] − [p_k(j | i, ¬h) − p_k(j | ¬i, ¬h)].
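The item likelihoods in Table 3 can be reproduced by multiplying Eq. (2) along the chain. Here is an illustrative Python sketch (not from the chapter; names are mine) for the m_XY = m_YZ = 0.75, b_Y = b_Z = 0.10 column:

```python
from itertools import product

cX, m, b = 0.75, 0.75, 0.10      # one parameter column of Table 3

def p_link(parent_present):
    # Eq. (3) when the parent is present, Eq. (4) when it is absent
    return m + b - m * b if parent_present else b

def item_likelihood(x, y, z):
    # p_k(x, y, z) = p(x) p(y | x) p(z | y) for the chain X -> Y -> Z
    pX = cX if x else 1 - cX
    pY = p_link(x) if y else 1 - p_link(x)
    pZ = p_link(y) if z else 1 - p_link(y)
    return pX * pY * pZ

joint = {(x, y, z): item_likelihood(x, y, z) for x, y, z in product((1, 0), repeat=3)}
# e.g., joint[(1, 1, 1)] is about 0.450 and joint[(0, 0, 0)] about 0.203, as in Table 3
```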
Bob Rehder
equations for the chain network in Figure 1 for any combination of features X, Y, and Z. Table 3 also presents the probability of each item for a number of different parameter values. The strengths of the causal links between X and Y (m_XY) and between Y and Z (m_YZ) are varied over the values 0.33, 0.75, 0.90, and 1.0 while b_Y and b_Z are held fixed at 0.10. In addition, b_Y and b_Z are varied over the values 0, 0.25, and 0.50 while m_XY and m_YZ are held fixed at 0.75. Parameter c_X (the probability that the root cause feature X appears in members of category k) is fixed at 0.75, consistent with the assumption that X is a typical feature of category k. Table 3 indicates how causal relations determine a category's statistical distribution of features among category members. The assumption, of course, is that these item probabilities are related to category membership: Items with a high probability of being generated are likely category members and those with a low probability of generation are unlikely ones. That is, according to the generative model, the effects of causal relations on classification are mediated by the statistical distribution of features that those relations are expected to produce. Although the generative model's main predictions concern whole items, from these probability distributions one can derive statistics corresponding to the two sorts of empirical effects I have described, namely, feature weights and feature interactions. For example, from Eqs. (3) and (4) the probability of j appearing in a category k member is

$$\begin{aligned} p_k(j) &= p_k(j \mid i)\, p_k(i) + p_k(j \mid \neg i)\, p_k(\neg i) \\ &= (m_{ij} + b_j - m_{ij} b_j)\, p_k(i) + b_j\, p_k(\neg i) \\ &= m_{ij}\, p_k(i) + b_j - m_{ij} b_j\, p_k(i) \end{aligned} \qquad (5)$$

where i is the parent of j. Table 3 presents the probability of each feature for each set of parameter values. For example, when c_X = 0.75, m_XY = m_YZ = 0.90, and b_Y = b_Z = 0.10, then p_k(X) = 0.750, p_k(Y) = 0.708, and p_k(Z) = 0.673. That is, feature X has a larger "weight" than Y, which in turn is larger than Z's. Table 3 presents how the feature weights predicted by the generative model vary as a function of parameters. In Section 4, I will show how the generative and dependency models make qualitatively different predictions regarding how feature weights vary across parameter settings and present results of experiments testing those predictions. Importantly, the generative model also predicts that causally related features should be correlated within category members. A quantity that reflects a dependency (and hence a correlation) between two variables is the probabilistic contrast. The probabilistic contrast between a cause i and an effect j is defined as the difference between the probability of j given the presence versus absence of i:

$$\Delta p_k(i, j) = p_k(j \mid i) - p_k(j \mid \neg i) \qquad (6)$$
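Eqs. (5) and (6) are easy to compute directly. A minimal sketch (illustrative, with my own variable names) using the parameters of the example above, m_XY = m_YZ = 0.90 and b_Y = b_Z = 0.10:

```python
cX, m, b = 0.75, 0.90, 0.10

def p_child(p_parent):
    # Eq. (5): a feature's marginal probability given its parent's marginal
    return (m + b - m * b) * p_parent + b * (1 - p_parent)

pX = cX
pY = p_child(pX)   # 0.91 * 0.75 + 0.10 * 0.25 = 0.7075 (Table 3 rounds to 0.708)
pZ = p_child(pY)   # approximately 0.673

# Eq. (6): probabilistic contrast between a cause and its direct effect
contrast_direct = (m + b - m * b) - b   # p(j|i) - p(j|~i) = 0.81
```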
Causal-Based Categorization: A Review
For the causal network in Figure 1, Table 3 shows that the contrasts between the directly causally related features, Δp_k(X, Y) and Δp_k(Y, Z), are greater than 0, indicating that those pairs of features should be correlated (a relation that holds so long as the ms > 0 and the bs < 1). Moreover, the contrast between the two indirectly related features, Δp_k(X, Z), is greater than zero but less than the direct contrasts, indicating that X and Z should also be correlated, albeit more weakly. In other words, the generative model predicts interactive effects: Objects will be considered good category members to the extent they maintain expected correlations and worse ones to the extent they break those correlations. That the generative model predicts interactive effects between features is a key property distinguishing it from the dependency model. Table 3 presents how the pairwise feature contrasts predicted by the generative model vary as a function of parameters, predictions that have been tested in experiments described in Section 6. The generative model also makes predictions regarding the patterns of higher order interactions one expects for a causal network. For example, a higher order contrast that measures how the contrast between i and j is itself moderated by h is given by Eq. (7):

$$\Delta p_k(h, i, j) = [p_k(j \mid i, h) - p_k(j \mid \neg i, h)] - [p_k(j \mid i, \neg h) - p_k(j \mid \neg i, \neg h)] \qquad (7)$$
Table 3 indicates that for a chain network Δp_k(X, Y, Z) = 0, indicating that the contrast between Y and Z is itself unaffected by the state of X. This corresponds to the well-known causal Markov condition, in which Y "screens off" Z from X; likewise, Δp_k(Z, X, Y) = 0 means that Y screens off X from Z (Pearl, 1988). In Section 6, I will demonstrate how these sorts of higher order contrasts manifest themselves in classification judgments.
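The screening-off prediction can be checked numerically. The sketch below (illustrative parameters, names mine) verifies that the Y → Z contrast is identical whether X is present or absent, so the higher order contrast Δp_k(X, Y, Z) of Eq. (7) is exactly zero:

```python
cX, m, b = 0.75, 0.75, 0.10
mp = m + b - m * b           # probability of a feature whose single parent is present

def joint(x, y, z):
    # chain factorization p(x) p(y | x) p(z | y)
    pX = cX if x else 1 - cX
    py = mp if x else b
    pz = mp if y else b
    return pX * (py if y else 1 - py) * (pz if z else 1 - pz)

def p_z_given(y, x):
    # p(Z | Y = y, X = x), computed from the joint distribution
    return joint(x, y, 1) / (joint(x, y, 1) + joint(x, y, 0))

contrast_when_x = p_z_given(1, 1) - p_z_given(0, 1)      # Y -> Z contrast, X present
contrast_when_not_x = p_z_given(1, 0) - p_z_given(0, 0)  # Y -> Z contrast, X absent
higher_order = contrast_when_x - contrast_when_not_x     # Eq. (7): equals 0
```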
4. The Causal Status Effect

This section begins the review of the major phenomena that have been discovered in the causal-based categorization literature. In each of the following four sections, I define the phenomenon, discuss the major theoretical variables that have been shown to influence that phenomenon, and consider the implications these results have for the two computational models just described. I also briefly discuss other variables (e.g., experimental details of secondary theoretical importance) that have been shown to have an influence. As mentioned, this review focuses on studies testing novel categories, that is, ones with which subjects have no prior experience because they are learned as part of the experiment. There are two reasons for this. The first is that there are already good reviews of work testing the effects
of causal knowledge on real-world categories (e.g., Ahn & Kim, 2001). The second is the presence of confounds associated with natural materials (e.g., the presence of contrast categories, the different salience of features, the effects of empirical–statistical information, etc.) already noted in Section 2. The first empirical phenomenon I discuss is the causal status effect, an effect on feature weights in which features that appear earlier in a category's causal network (and thus are "more causal") carry greater weight in categorization decisions. For example, in Figure 1, X is the most causal feature, Z is the least causal, and Y is intermediate. As a consequence, all else being equal, X should be weighed more heavily than Y, which should be weighed more heavily than Z. In fact, numerous studies have demonstrated situations in which features are more important than those they cause (Ahn, 1998; Ahn et al., 2000a; Kim et al., 2008; Luhmann et al., 2006; Rehder, 2003b; Rehder & Kim, 2006; Sloman et al., 1998). Nevertheless, that the size of the causal status effect can vary dramatically across studies (in many it is absent entirely) raises questions about the conditions under which it appears. For example, in the Ahn et al. (2000a) study, participants learned novel categories with three features X → Y → Z and then rated the category membership of items missing exactly one feature on a 0–100 scale. The item missing-X was rated lower (27) than the one missing-Y (40), which in turn was lower than the one missing-Z (62), suggesting that X is more important than Y, which is more important than Z. A large causal status effect was also found in Ahn (1998, Experiments 3 and 4) and Sloman et al. (1998, Study 3).5 In contrast, in Rehder and Kim (2006), participants learned categories in which three out of five features were connected in a causal chain and then rated the category membership of a number of test items.
To assess the importance of features, regression analyses were performed on those ratings. Unlike Ahn et al., Rehder and Kim found only a modest (albeit significant) difference in the regression weights of X and Y (7.6 and 6.4), a difference that reflected the nearly equal ratings on the missing-X and missing-Y items (43 and 47, respectively). In contrast, the regression weight on Z (6.2) and the rating of the missing-Z item (48) indicated no difference in importance between features Y and Z. Similarly, testing categories with four features, Rehder (2003b) found a partial causal status effect (a larger weight on the chain’s first feature and smaller but equal
5. Because this study used the missing feature method, there is uncertainty regarding whether feature Y was more important than Z, because the lower rating of the missing-Y item could reflect an effect of coherence instead (i.e., it violates two causal relations whereas the missing-Z item violates one; see Example 3 in Table 1, discussed in Section 2.2). Nevertheless, concluding that feature X is more important than Z on the basis of the difference in ratings between the missing-X and missing-Z items is sound, because those items are equated on the number of violated correlations (one each).
weights on the remaining ones) in one experiment and no causal status effect at all in another. What factors are responsible for these disparate results? Based on the contrasting predictions of the dependency and generative models presented earlier, I now review recent experiments testing a number of variables potentially responsible for the causal status effect.
4.1. Causal Link Strength

One factor that may influence the causal status effect is the strength of the causal links. For example, whereas Ahn (1998) and Ahn et al. (2000a) described the causal relationships in probabilistic terms by use of the phrase "tends to" (e.g., "Sticky feet tends to allow roobans to build nests on trees."), Rehder and Kim (2006) omitted any information about the strength of the causal links. This omission may have invited participants to interpret the causal links as deterministic (i.e., the cause always produces the effect), and this difference in the perceived strength of the causal links may be responsible for the different results.6 To test this hypothesis, Rehder and Kim (2009b, Experiment 1) directly manipulated the strength of the causal links. All participants were taught three category features and two causal relationships linking X, Y, and Z into a causal chain (as in Figure 1). For example, participants who learned myastars were told that the typical features of myastars were a hot temperature, high density, and a large number of planets, and that hot temperature causes high density, which in turn causes a large number of planets. Each feature included information about the other value on the same stimulus dimension (e.g., "Most myastars have high density whereas some have low density."). Each causal link was accompanied by information about the causal mechanism (e.g., "High density causes the star to have a large number of planets. Helium, which cannot be compressed into a small area, is spun off the star, and serves as the raw material for many planets."). In addition, participants were given explicit information about the strengths of those causal links.
For example, participants in the Chain-100 condition were told that each causal link had a strength of 100%: "Whenever a myastar has high density, it will cause that star to have a large number of planets with probability 100%." Participants in the Chain-75 condition were told that the causal links operated with probability 75% instead. Participants then rated the category membership of all eight items that could be formed on the three binary dimensions. A Control condition in which no causal links were presented was also tested.
6. Indeed, as part of another experiment testing Rehder and Kim's materials, we asked participants to judge how often a cause produced its effect. The average response was 91% (the modal response was 100%), supporting the conjecture that many subjects interpreted the causal links as nearly deterministic.
The dependency and generative models make distinct predictions for this experiment. Table 2 shows that the dependency model predicts that the size of the causal status effect is an increasing monotonic function of causal strength. For example, after two iterations feature weights are 4, 2, and 1 when c_Z,1 = 1 and d_XY = d_YZ = 2 (yielding a difference of 3 between the weights of X and Z) versus 9, 3, and 1 when d_XY = d_YZ = 3 (a difference of 8). Intuitively, it makes this prediction because stronger causal relations mean that Y is more dependent on X and Z is more dependent on Y. As a consequence, the dependency model predicts a stronger causal status effect in the Chain-100 condition versus the Chain-75 condition. In contrast, Table 3 shows that the generative model predicts that the size of the causal status effect should decrease as the strength of the causal links increases; indeed, the causal status effect can even reverse at high levels of causal strength. For example, when b_Y = b_Z = 0.10, Table 3 shows that the difference between p_k(X) and p_k(Z) (a measure of the causal status effect) is 0.553, 0.241, 0.077, and −0.048 for causal strengths of 0.33, 0.75, 0.90, and 1.0, respectively. Intuitively, a causal status effect is more likely for probabilistic links because X generates Y, and Y generates Z, with decreasing probability. For example, if c_X = 0.75, m_XY = m_YZ = 0.75, and there are no background causes (bs = 0), then p_k(X) = 0.750, p_k(Y) = 0.750² = 0.563, and p_k(Z) = 0.750³ = 0.422. Thus, so long as the b parameters (which work to increase the probability of Y and Z) are modest, the result will be that p_k(X) will be larger than p_k(Z). In contrast, a causal status effect is absent for deterministic links because X always generates Y and Y always generates Z. For example, if c_X = 0.75, ms = 1, and bs = 0, then p_k(X) = p_k(Y) = p_k(Z) = 0.750, and the causal status effect grows increasingly negative (i.e., p_k(Z) becomes greater than p_k(X)) as the bs increase. Note that because one also expects features to be weighed equally in the absence of any causal links between features, the generative model predicts that the causal status effect should vary nonmonotonically with causal strength: It should be zero when m_XY = m_YZ = 0, large when the ms are intermediate, and zero (or even negative) when the ms = 1. Thus, the generative model predicts a stronger causal status effect in the Chain-75 condition versus the Chain-100 condition and the Control conditions.
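The generative model's nonmonotonic prediction can be made concrete with a short computation (an illustration mirroring Table 3's parameters; names are mine) of the causal status effect p_k(X) − p_k(Z) as causal strength varies:

```python
cX, b = 0.75, 0.10   # c_X, and the background strengths b_Y = b_Z

def causal_status(m):
    # p_k(X) - p_k(Z) for the chain X -> Y -> Z, via Eq. (5)
    mp = m + b - m * b
    pY = mp * cX + b * (1 - cX)
    pZ = mp * pY + b * (1 - pY)
    return cX - pZ

effects = {m: causal_status(m) for m in (0.333, 0.75, 0.90, 1.0)}
# large at intermediate strengths, small at m = 0.90, negative at m = 1.0 (cf. Table 3)
```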
Note that because one also expects features to be weighed equally in the absence of any causal links between features, the generative model predicts that the causal status effect should vary nonmonotonically with causal strength: It should be zero when mXY ¼ mYZ ¼ 0, large when the ms are intermediate, and zero (or even negative) when the ms ¼ 1. Thus, the generative model predicts a stronger causal status effect in the Chain-75 condition versus the Chain-100 condition and the Control conditions. Following Rehder and Kim (2006), regression analyses were performed on each subjects’ classification ratings with predictors for each feature and each two-way interaction between features. The regression weights averaged over subjects for features X, Y, and Z are presented in the left panel of Figure 2A. (The right panel, which presents the two-way interactions weights, will be discussed in Section 6.) In fact, a larger causal status effect obtained in the Chain-75 condition in which the causal links were probabilistic as compared to the Chain-100 condition in which they were deterministic; indeed in the Chain-100 condition the causal status effect was absent entirely (the small quadratic effect of features suggested
[Figure 2 shows bar graphs of the mean regression weights for features X, Y, and Z (left panels) and for the direct and indirect two-way feature interactions (right panels) in each condition: Chain-100 versus Chain-75 (A), Background-0 versus Background-50 (B), and Essentialized-Chain-80 versus Unconnected-Chain-80 (C).]

Figure 2 Classification test results of three experiments from Rehder and Kim (2009b): (A) Experiment 1, (B) Experiment 2, (C) Experiment 3. p_lin is the significance of the linear trend in each condition.
by Figure 2A was not significant). Of course, a stronger causal status effect with weaker causal links is consistent with the predictions of the generative model and inconsistent with those of the dependency model. As expected, in the Control condition (not shown in Figure 2A), all feature weights were equal. As a further test of the generative model, after the classification test we also asked subjects to estimate how frequently each feature appeared in category members. For example, subjects who learned myastars were asked how many myastars out of 100 would have high density. Recall that the generative model predicts that causal knowledge changes people's beliefs regarding how often features appear in category members, and, if this is correct, the effects uncovered in the classification test should be reflected in the feature likelihood ratings. The results of the feature likelihood ratings, presented in Figure 3A, support this conjecture: Likelihood ratings decreased significantly from feature X to Y to Z in the Chain-75 condition whereas those ratings were flat in the Chain-100 condition, mirroring the classification results in Figure 2A. This finding supports the generative model's claim that causal relations change classifiers' subjective beliefs about the category's statistical distribution of features (also see Sloman et al., 1998). Clearly, causal link strength is one key variable influencing the causal status effect. Additional evidence for this conclusion is presented in Section 4.5.
4.2. Background Causes

Experiment 2 from Rehder and Kim (2009b) conducted another test of the generative and dependency models by manipulating the strength of alternative causes of the category features, that is, the generative model's b parameters. All participants were instructed on categories with three features related in a causal chain in which each causal link had a strength of 75%. However, in the Background-0 condition, they were also told that there were no other causes of the features. For example, participants who learned about myastars learned not only that high density causes a large number of planets with probability 75% but also that "There are no other causes of a large number of planets. Because of this, when its known cause (high density) is absent, a large number of planets occurs in 0% of all myastars." In contrast, in the Background-50 condition these participants were told that "There are also one or more other features of myastars that cause a large number of planets. Because of this, even when its known cause (high density) is absent, a large number of planets occurs in 50% of all myastars." Table 3 shows how the generative model's predictions vary with the b parameters and indicates that the causal status effect should become weaker as features' potential background causes get stronger; indeed, it should reverse as b grows much larger than 0.50. Specifically, when the
[Figure 3 shows the mean feature likelihood ratings for features X, Y, and Z in each condition of the three experiments.]

Figure 3 Subjects' feature likelihood estimates (i.e., out of 100, how many category members have that feature) from Rehder and Kim (2009b): (A) Experiment 1, (B) Experiment 2, (C) Experiment 3. p_lin is the significance of the linear trend in each condition.
ms = 0.75 the difference between p_k(X) and p_k(Z) is 0.328, 0.122, and −0.043 for values of b_Y and b_Z of 0, 0.25, and 0.50, respectively (Table 3). Intuitively, this occurs because as b_Y and b_Z increase they make the features Y and Z more likely; indeed, as the bs approach 1, Y and Z will be present in all category members. As a consequence, the generative model predicts a larger causal status effect in the Background-0 condition as compared to the Background-50 condition. The dependency model, in contrast, makes a different prediction for this experiment. Because it specifies that a feature's centrality is solely a function of its dependents, supplying a feature with additional causes (in the form of background causes) should have no effect on its centrality. Thus, because
64
Bob Rehder
centrality should be unaffected by the background cause manipulation, the dependency model predicts an identical causal status effect in the Background-0 and Background-50 conditions.7 Regression weights derived from subjects' classification ratings are shown in Figure 2B. The results were clear-cut: A larger causal status effect obtained in the Background-0 condition, in which background causes were absent, as compared to the Background-50 condition, in which they were present; indeed, in the Background-50 condition the causal status effect was absent entirely. Moreover, these regression weights were mirrored in subjects' explicit feature likelihood ratings (Figure 3B). These results confirm the predictions of the generative model and disconfirm those of the dependency model. The strength of background causes is a second key variable affecting the causal status effect.
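Under the chain parameters of these experiments (m = 0.75, c_X = 0.75), the predicted shrinkage and reversal of the causal status effect as background strength grows can be sketched as follows (illustrative code, names mine):

```python
cX, m = 0.75, 0.75

def causal_status(b):
    # p_k(X) - p_k(Z) for the chain X -> Y -> Z with background strength b
    mp = m + b - m * b
    pY = mp * cX + b * (1 - cX)
    pZ = mp * pY + b * (1 - pY)
    return cX - pZ

# b = 0 gives about 0.328; b = 0.25 about 0.122; b = 0.50 about -0.043 (cf. Table 3)
```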
4.3. Unobserved "Essential" Features

The preceding two experiments tested the effect of varying the m and b parameters on the causal status effect. However, there are reasons to expect that categorizers sometimes reason with a causal model that is more elaborate than one that includes only observable features. For example, numerous researchers have suggested that people view many kinds as being defined by underlying properties or characteristics (an essence) that are shared by all category members and by members of no other categories (Gelman, 2003; Keil, 1989; Medin & Ortony, 1989; Rehder & Kim, 2009a; Rips, 1989) and that are presumed to generate, or cause, perceptual features. Although many artifacts do not appear to have internal causal mechanisms (e.g., pencils and wastebaskets), it has been suggested that the essential properties of artifacts may be the intentions of their designers (Bloom, 1998; Keil, 1995; Matan & Carey, 2001; cf. Malt, 1994; Malt & Johnson, 1992; Rips, 2001). Thus, the causal model that people reason with during categorization
7. There may be some uncertainty regarding the dependency model's predictions for this experiment that stems from ambiguities regarding how its construct of "causal strength" should be interpreted. We interpret it to be a measure of the propensity of the cause to produce the effect, that is, as a causal power (corresponding to the generative model's m parameter). An alternative interpretation is that it corresponds to a measure of covariation between the cause and effect (e.g., the familiar ΔP rule of causal induction). Under this alternative interpretation, the dependency model would also predict a weaker causal status effect in the Background-50 condition (because the causal links themselves are weaker). Although Sloman et al. did not specify which interpretation was intended, we take the work of Cheng and colleagues as showing that when you ask people to judge "causal strength" they generally respond with an estimate of causal power rather than ΔP (Cheng, 1997; Buehner, Cheng, & Clifford, 2003), and so that is the assumption we make here. Of course, exactly what measure people induce in causal learning tasks is itself controversial (e.g., Lober & Shanks, 2000), and even Buehner et al. found that a substantial minority of subjects responded with causal strength estimates that mirrored ΔP. But even if one grants this alternative interpretation, it means that the dependency model predicts a weaker causal status effect in the Background-50 condition whereas the generative model predicts it should be absent entirely. In addition, of course, the generative model but not the dependency model predicts effects of this experiment's manipulation on feature frequency ratings and coherence effects (as described below).
may include the underlying causes they assume produce a category's observable features. Rehder and Kim (2009b, Experiment 3) tested the importance of the category being essentialized by comparing the causal structures shown in Figure 4. As in the two preceding experiments, each category consisted of three observable features related in a causal chain. However, the categories were now "essentialized" by endowing them with an additional feature that exhibits an important characteristic of an essence, namely, it appears in all members of the category and in members of no other category. For example, for myastars the essential property was "ionized helium," and participants were told that all myastars possess ionized helium and that no other kind of star does.8 In addition, in the Essentialized-Chain-80 condition (Figure 4A) but not the Unconnected-Chain-80 condition (Figure 4B), participants were also instructed on a third causal relationship linking feature X to the essential feature (e.g., in myastars, that ionized helium causes high temperature, where high temperature played the role of X). All causal links were presented as probabilistic by describing them as possessing a strength of 80%. A Control condition in which no causal links were provided was also tested. After learning the categories, participants performed a classification test that was identical to the previous two experiments. (In particular, the state of the essential property was not displayed in any test item.) Linking X, Y,
[Figure 4 shows the two causal structures: (A) E → X → Y → Z, with each of the three links having strength 80%; (B) an unconnected essential feature E alongside the chain X → Y → Z, with each of the two links having strength 80%.]

Figure 4 Causal structures tested in Rehder and Kim (2009b, Experiment 3): (A) Essentialized-Chain-80 condition, (B) Unconnected-Chain-80 condition.
8. Although explicitly defining essential features in this manner controls the knowledge brought to bear during classification, note that these experimentally defined "essences" may differ in various ways from (people's beliefs about) some real category essences. Although adults' beliefs about essences are sometimes concrete (e.g., DNA in the case of biological kinds), preschool children's knowledge about animals' essential properties is less specific, involving only a commitment to biological mechanisms that operate on their "insides" (Gelman, 2003; Gelman & Wellman, 1991; Johnson & Solomon, 1997). And an essential property is not just one that happens to be present in all category members (and absent in all nonmembers); it is one that would be present in all category members that could exist. But while the concreteness and noncontingency of people's essentialist beliefs is undoubtedly important under some circumstances, we suggest that a feature that is present in all category members is sufficient to induce a causal status effect under the conditions tested in this experiment.
and Z to an essential feature should have two effects on classification ratings. First, because the link between E and X has a strength of m_EX = 0.80, the probability of feature X within category members should be at least 0.80. This is greater than the value expected in the Unconnected-Chain-80 condition on the basis of the first two experiments (in which subjects estimated p_k(X) to be a little over 0.75; see Figure 3A and B). Second, the larger value of p_k(X) should produce an enhanced causal status effect, because the larger value of p_k(X) results in a greater drop between it and p_k(Y) (and the larger value of p_k(Y) results in a greater drop between it and p_k(Z)). These effects are apparent in Table 4, which presents the generative model's quantitative predictions for the case when the b parameters equal 0.10. (The table also includes predictions for a case, discussed below, where the m parameters are 1.0 instead of 0.80.) Table 4 confirms that the size of the causal status effect should be larger in the Essentialized-Chain-80 condition (a difference between p_k(X) and p_k(Z) of 0.223) than in the Unconnected-Chain-80 condition (0.189) when the bs = 0.10; this prediction holds for any value of the bs < 1. In contrast, the dependency model predicts no difference between the two conditions. Because that model claims that a feature's centrality is determined by its dependents rather than its causes, providing feature X with an additional cause in the Essentialized-Chain-80 condition should have no influence on its centrality. The feature weights derived from subjects' classification ratings are presented in Figure 2C. Consistent with the predictions of the generative model, a larger causal status effect obtained in the Essentialized-Chain-80 condition as compared to the Unconnected-Chain-80 condition; indeed, although feature weights decreased in the Unconnected-Chain-80 condition, this decrease did not reach significance.
This same pattern was also reflected in feature likelihood ratings (Figure 3C): decreasing feature likelihoods in the Essentialized-Chain-80 but not the Unconnected-Chain-80 condition.9 As expected, all feature weights and likelihood ratings were equal in the Control condition. Other studies have found that essentialized categories lead to an enhanced causal status effect. Using the same materials, Rehder (2003b, Experiment 3) found a larger causal status effect with essentialized categories even when the strength of the causal link was unspecified. And, Ahn and
9. The absence of a significant causal status effect in the Unconnected-Chain-80 condition was somewhat of a surprise given the results from Rehder and Kim's (2009b) Experiment 1 reviewed in Section 4.1. The Unconnected-Chain-80 condition was identical to that experiment's Chain-75 condition except for (a) causal strengths were 80% instead of 75% and (b) the presence of an explicit essence, albeit one that is not causally related to the other features. It is conceivable that the 5% increase in causal strengths may be responsible for reducing the causal status effect; indeed, the generative model predicts a slightly smaller causal status effect for m = 0.80 versus 0.75. In addition, the presence of an explicit essential feature to which the causal chain was not connected may have led participants to assume that the chain was unlikely to be related to any other essential property of the category (and of course the generative model claims that essential properties to which the causal chain is causally connected promote a causal status effect).
Table 4  Predictions for the Generative Model for the Causal Networks in Figure 4 for Different Parameter Values.

The exemplar likelihood equations for the essentialized model (Figure 4A), which assume cE = 1, are (¬X denotes the absence of feature X; equations for the unconnected model, Figure 4B, were presented earlier in Table 3):

  pk(XYZ)    = (mEX + bX − mEX·bX)(mXY + bY − mXY·bY)(mYZ + bZ − mYZ·bZ)
  pk(¬XYZ)   = [1 − (mEX + bX − mEX·bX)]·bY·(mYZ + bZ − mYZ·bZ)
  pk(X¬YZ)   = (mEX + bX − mEX·bX)·[1 − (mXY + bY − mXY·bY)]·bZ
  pk(XY¬Z)   = (mEX + bX − mEX·bX)(mXY + bY − mXY·bY)·[1 − (mYZ + bZ − mYZ·bZ)]
  pk(X¬Y¬Z)  = (mEX + bX − mEX·bX)·[1 − (mXY + bY − mXY·bY)]·(1 − bZ)
  pk(¬XY¬Z)  = [1 − (mEX + bX − mEX·bX)]·bY·[1 − (mYZ + bZ − mYZ·bZ)]
  pk(¬X¬YZ)  = [1 − (mEX + bX − mEX·bX)]·(1 − bY)·bZ
  pk(¬X¬Y¬Z) = [1 − (mEX + bX − mEX·bX)]·(1 − bY)·(1 − bZ)

                               Essentialized model       Unconnected model
                               m = 0.80    m = 1.0       m = 0.80    m = 1.0
  Model parameters
    cE                         1.0         1.0           —           —
    cX                         —           —             0.750       0.750
    mEX                        0.800       1.0           —           —
    mXY, mYZ                   0.800       1.0           0.800       1.0
    bX, bY, bZ                 0.100       0.100         0.100       0.100
  Exemplar likelihoods
    pk(XYZ)                    0.551       1.0           0.504       0.750
    pk(¬XYZ)                   0.015       0             0.021       0.025
    pk(X¬YZ)                   0.015       0             0.014       0
    pk(XY¬Z)                   0.121       0             0.111       0
    pk(X¬Y¬Z)                  0.133       0             0.122       0
    pk(¬XY¬Z)                  0.003       0             0.005       0
    pk(¬X¬YZ)                  0.016       0             0.023       0.023
    pk(¬X¬Y¬Z)                 0.146       0             0.203       0.203
  Feature probabilities
    pk(X)                      0.820       1.0           0.750       0.750
    pk(Y)                      0.690       1.0           0.640       0.775
    pk(Z)                      0.597       1.0           0.561       0.798
    pk(X) − pk(Z)
    [causal status effect]     0.223       0             0.189       −0.048

ci, the probability that feature i appears in category members; mij, strength of the causal relation between i and j; bj, strength of j's background causes.
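The feature probabilities in Table 4 can be reproduced with a short calculation. The sketch below is my own illustration (function names are not from the chapter); it propagates probability down the chain using p(effect) = p(cause)·(m + b − m·b) + (1 − p(cause))·b, the noisy-OR combination of the causal link and background causes:

```python
def descend(p_cause, m, b):
    # Probability of an effect: produced by its cause (strength m)
    # or by background causes (strength b), combined as a noisy-OR.
    return p_cause * (m + b - m * b) + (1 - p_cause) * b

def chain_probs(p_root, m, b, length=3):
    # Feature probabilities for a causal chain X -> Y -> Z.
    probs = [p_root]
    for _ in range(length - 1):
        probs.append(descend(probs[-1], m, b))
    return probs

# Essentialized-Chain-80: essence E always present (cE = 1), m_EX = 0.80
p_ess = chain_probs(descend(1.0, 0.80, 0.10), 0.80, 0.10)  # pk(X), pk(Y), pk(Z)
# Unconnected-Chain-80: root X appears with cX = 0.75
p_unc = chain_probs(0.75, 0.80, 0.10)
```

Rounded, these give 0.820/0.690/0.597 (a causal status effect of 0.223) for the essentialized chain and 0.750/0.640/0.561 (0.189) for the unconnected chain, matching the Table 4 columns with m = 0.80.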
colleagues have found that expert clinicians both view mental disorders as less essentialized than laypersons do (Ahn et al., 2006) and exhibit only a weak causal status effect (Ahn et al., 2005). These results show that essentialization is a third key variable determining the size of the causal status effect. However, note that the generative model's prediction regarding essentialized categories itself interacts with causal link strength: When links are deterministic, essentializing a category should yield feature weights that are larger but equal to one another (because each feature in the chain is produced with the same probability as its parent, namely, 1.0); that is, no causal status effect should obtain (Table 4). These predictions were confirmed by Rehder and Kim's (2009b) Experiment 4, which was identical to Experiment 3 except that the strengths of the causal links were 100% rather than 80%.
4.4. Number of Dependents

Yet another potential influence on the causal status effect is a feature's number of dependents. Rehder and Kim (2006, Experiment 3) assessed this variable by testing the two network topologies in Figure 5. Participants in both conditions were instructed on categories with five features, but whereas feature Y had three dependents in the 1-1-3 condition (1 root cause, 1 intermediate cause, 3 effects), it had only one in the 1-1-1 condition. In this experiment, no information about the causal strengths or background causes was provided. In the 1-1-1 condition, which feature played the role of Y's effect was balanced between Z1, Z2, and Z3. After learning these causal category structures, subjects were asked to rate the category membership of all 32 items that could be formed on the five binary dimensions.
Figure 5 Causal networks tested in Rehder and Kim (2006, Experiment 3): (A) 1-1-3 condition, (B) 1-1-1 condition.
The dependency and generative models again make distinct predictions for this experiment. According to the dependency model, its greater number of dependents in the 1-1-3 condition means that Y is more central than in the 1-1-1 condition. Likewise, its greater number of indirect dependents in the 1-1-3 condition means that X is relatively more central as well. As a result, the dependency model predicts a larger causal status effect in the 1-1-3 condition than in the 1-1-1 condition. For example, according to Eq. (1), after two iterations when cZ,1 = 1 and the ds = 2, feature centralities are 12, 6, and 1 for X, Y, and the Z's, respectively, in the 1-1-3 condition, versus 4, 2, and 1 in the 1-1-1 condition. The generative model, in contrast, predicts no difference between conditions, because Y having more effects does not change the chance that it will be generated by its category. The results of this experiment are presented in Figure 6. (In the figure, the weight for "Z (effect)" is averaged over Z1, Z2, and Z3 in the 1-1-3 condition and is for the single causally related Z in the 1-1-1 condition. The weights for the "Z (isolated)" features will be discussed later in Section 5.1.) The figure confirms the predictions of the generative model and disconfirms
Figure 6 Classification results from Rehder and Kim (2006, Experiment 3). Weight on the "effect" Z's is averaged over all three Z features in the 1-1-3 condition and is for the single Z feature that plays the role of an effect in the 1-1-1 condition. Weight on the "isolated" Z's is the average of the two causally unrelated Z features in the 1-1-1 condition (see Figure 5).
those of the dependency model. First, as described earlier, tests of a three-element causal chain in this study (the 1-1-1 condition) produced a relatively small and partial causal status effect (X was weighed more than Y, which was weighed the same as Z). But more importantly, the size of the causal status effect was not larger in the 1-1-3 condition than in the 1-1-1 condition. (Although weights were larger overall in the 1-1-1 condition, this difference was only marginally significant.) These results show that a feature's number of dependents does not increase the size of the causal status effect.
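The dependency-model centralities cited above can be reproduced with a short iteration of Eq. (1). This is my own sketch, assuming the iterative update ci,t+1 = Σj dij·cj,t over i's dependents, with features that have no dependents fixed at 1:

```python
def centralities(dependents, d=2.0, iterations=2):
    # Dependency model: a feature's centrality is d times the summed
    # centralities of its dependents; terminal features stay at 1.
    c = {f: 1.0 for f in dependents}
    for _ in range(iterations):
        c = {f: sum(d * c[dep] for dep in deps) if deps else 1.0
             for f, deps in dependents.items()}
    return c

# 1-1-3 network: X -> Y; Y -> Z1, Z2, Z3
net_113 = {"X": ["Y"], "Y": ["Z1", "Z2", "Z3"], "Z1": [], "Z2": [], "Z3": []}
# 1-1-1 network: X -> Y -> Z
net_111 = {"X": ["Y"], "Y": ["Z"], "Z": []}
```

Two iterations with d = 2 yield centralities of 12, 6, and 1 for X, Y, and the Z's in the 1-1-3 network, versus 4, 2, and 1 in the 1-1-1 network, as in the text.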
4.5. Other Factors

In this section, I present other factors that have been shown to influence the size of the causal status effect, factors not directly relevant to the predictions of either the dependency or generative models.

4.5.1. Number of Test Items: Rehder (2009b)
Recall that whereas Ahn et al. (2000a) observed a difference of 35 points in the rating of the item missing only X versus the one missing only Z, that difference was an order of magnitude smaller in Rehder and Kim (2006). Besides the difference in the implied strength of the causal links, these studies also differed in the number of classification test items presented (3 vs. 32, respectively). As argued in Section 2.2, it is likely that the presence of very likely and very unlikely category members that anchor the high and low ends of the response scale will decrease the differences between intermediate items such as those missing one feature (scale contraction), whereas the absence of extreme items will increase that difference (scale expansion) (Poulton, 1989). In addition, rating items that differed only on which feature was missing may have triggered a comparison of the relative importance of those features that would not have occurred otherwise. In other words, a large causal status effect may have arisen partly because of task demands. To test these conjectures, Rehder (2009b) replicated the original Ahn et al. (2000a) study but manipulated the total number of test items between 3 (those missing just one feature) and 8 (all test items that can be formed on three binary dimensions). In addition, as a further test of the role of causal strength, I compared a condition with the original wording implying a probabilistic relation (e.g., "Sticky feet tends to allow roobans to build nests on trees.") with one that implied a deterministic relation (e.g., "Sticky feet always allow roobans to build nests on trees."). The procedure was identical to that in Ahn et al.
except that subjects previewed all the test items before rating them. Figure 7 presents the size of the causal status effect measured by the difference between the missing-Z and missing-X test items. First note that
Figure 7 The size of the causal status effect (measured as the ratings difference between the missing-Z and missing-X test items) in Rehder (2009b) as a function of experimental condition.
the causal status effect was larger in the probabilistic than in the deterministic condition, replicating the findings described earlier in Section 4.1 in which a stronger causal status effect obtains with weaker causal links (Rehder & Kim, 2009b, Experiment 1). But the causal status effect was also influenced by the number of test items. For example, whereas the probabilistic condition replicated the large causal status effect found in Ahn et al. when subjects rated only three test items (a difference of 24 points between missing-Z and missing-X items), it was reduced when eight items were rated (11 points); in the deterministic condition it was reduced from 5.5 to 3.4. Overall, the causal status effect reached significance in the probabilistic/three test item condition only. These results confirm that scale expansion and/or task demands can magnify the causal status effect when only a small number of test items are rated.

4.5.2. "Functional" Features: Lombrozo (2009)
Lombrozo (2009) tested how a feature's importance varies depending on whether it is "functional," that is, whether for a biological species it is an adaptation that is useful for survival or whether for an artifact it affords some useful purpose. For example, participants were first told about a type of flower called a holing, and that holings have broom compounds in their stems (feature X) that cause them to bend over as they grow (feature Y). Moreover, they were told that the bending is useful because it allows pollen to brush onto the fur of field mice. In a Mechanistic condition, participants were then asked to explain why holings bend over, a question which invites
either a mechanistic response (because broom compounds cause them to) or a functional response (because bending over is useful for spreading pollen). In contrast, in the Functional condition participants were asked what purpose bending over served, a question that invites only a functional response. All subjects were then shown two items, one missing-X and one missing-Y, and asked which was more likely to be a holing. Whereas subjects chose the missing-Y item 71% of the time in the Mechanistic condition (i.e., they exhibited a causal status effect), this effect was eliminated (55%) in the Functional condition. Although the effect sizes in this study were small (reaching significance at the 0.05 level only by testing 192 subjects), the potential functions that features afford, a factor closely related to their place in a category's causal network, may be an important new variable determining their importance to classification.
4.6. Theoretical Implications: Discussion

Together, the reviewed studies paint a reasonably clear picture of the conditions under which a causal status effect does and does not occur. Generally, what appears to be going on is this. When confronted with a causal network of features, classifiers will often adopt a "generative" perspective, that is, they will think about the likelihood of each successive event in the chain. This process may be equivalent to a kind of mental simulation in which they repeatedly "run" (consciously or unconsciously) a causal chain. Feature probabilities are then estimated by averaging over runs. Of course, in a run the likelihood of each subsequent feature in the chain increases as a function of the strength of the chain's causal links. When links are deterministic, all features will be present whenever the chain's root cause is; in such cases, no causal status effect appears. However, when causal links are probabilistic, each feature is generated with less certainty at every step in the causal chain, and thus a causal status effect arises. But working against this effect are classifiers' beliefs about the strength of alternative "background" causes. Background causes will raise the probability of each feature in the causal chain in each simulation run and, if sufficiently strong, will cancel out (and possibly even reverse) the causal status effect. The dependency model, in contrast, is based on a competing intuition that features are important in people's conceptual representations to the extent that they are responsible for other features (e.g., DNA is more important than the color of an animal's fur because so much depends on DNA). But despite the plausibility of this intuition, it does not conform to subjects' category membership judgments. Whereas the dependency model predicts that the causal status effect should be stronger with stronger causal links and more dependents, it was either weaker or unchanged.
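The mental simulation described above can be sketched as a small Monte Carlo program. This is my own illustration under the stated assumptions (root probability c, causal strength m, background strength b, combined as a noisy-OR; the function name is hypothetical):

```python
import random

def simulate_chain(runs=100_000, c=1.0, m=0.8, b=0.1, length=3, seed=0):
    # Repeatedly "run" a causal chain X -> Y -> Z and tally how often
    # each feature is generated, either by its cause (strength m) or
    # by background causes (strength b).
    rng = random.Random(seed)
    counts = [0] * length
    for _ in range(runs):
        present = rng.random() < c          # root feature
        for i in range(length):
            if i > 0:
                caused = present and rng.random() < m
                background = rng.random() < b
                present = caused or background
            counts[i] += present
    return [n / runs for n in counts]

p_prob = simulate_chain(m=0.8, b=0.1)   # probabilistic links
p_det = simulate_chain(m=1.0, b=0.0)    # deterministic links, no background
```

With probabilistic links the estimated probabilities fall step by step along the chain (a causal status effect); with deterministic links and no background causes all three features occur whenever the root does, and the effect disappears.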
And, whereas the dependency model predicts that features’ weights should be unaffected by
the introduction of additional causes, we found instead a weaker causal status effect when background causes were present. It is interesting to consider how classifiers' default assumptions regarding causal strengths and background causes might influence the causal status effect. Recently, Lu, Yuille, Liljeholm, Cheng, and Holyoak (2008) proposed a model that explains certain causal learning results by assuming that people initially take causal relationships to be sufficient ("strong" in their terms, i.e., the cause always produces the effect) and necessary ("sparse," i.e., there are no other causes of the effect) (also see Lombrozo, 2007; Rehder & Milovanovic, 2007). On one hand, because we have seen that features are weighed equally when causal links are deterministic, a default assumption of strong causal relationships works against the causal status effect. On the other hand, that people apparently believe in many probabilistic causal relations (e.g., smoking only sometimes causes lung cancer) means they can override this default. When they do, Lu et al.'s second assumption (the presumed absence of background causes) will work to enhance the causal status effect. The generative perspective also explains why essentialized categories lead to an enhanced causal status effect: The presence of an essential feature means that observed features should be generated with greater probability and, so long as causal links are probabilistic, near features (X) should be generated with relatively greater certainty than far ones (Z). Although Rehder and Kim (2009b) tested the power of an essential feature to generate a larger causal status effect, note that an underlying feature would produce that effect even if it were only highly diagnostic of, but not truly essential to, category membership, so long as it was sufficient to increase the probability of the observed features.
This prediction is important, because the question of whether real-world categories are essentialized is a controversial one. Although good evidence exists for the importance of underlying properties to category membership (Gelman, 2003; Keil, 1989; Rips, 2001), Hampton (1995) has demonstrated that even when biological categories’ so-called essential properties are unambiguously present (or absent), characteristic features continue to exert an influence on judgments of category membership (also see Braisby, Franks, & Hampton, 1996; Kalish, 1995; Malt, 1994; Malt & Johnson, 1992). My own suspicion is that although the unobserved properties of many categories are distinctly important to category membership, few may be truly essential (see Rehder, 2007, for discussion). But according to the generative model, all that is required is that the unobserved property increases the probability of the observed features. The causal status effect may be related to essentialism in two other ways. First, whereas I have described the present results in terms of an essential feature increasing the probability of observed features, it may also be that subjects engaged in a more explicit form of causal inference in which they reasoned from the presence of observable features X, Y, and Z to the presence of the unobserved essential feature (and from the essential feature to category membership). I consider this possibility further in Section 7.
Second, Ahn (1998) and Ahn and Kim (2001) proposed that the causal status effect is itself a sort of incomplete or weakened version of essentialism. On this account, the root cause X in the causal chain in Figure 1 becomes more important because it is viewed as essence-like (a "proto-essence," if you will), although without an essence's more extreme properties (i.e., always unobservable, a defining feature that appears in all category members, etc.). Of course, standing as evidence against this principle are the numerous conditions reviewed above in which a causal status effect failed to obtain. Moreover, the need for the principle would seem to be obviated by the finding that the causal status effect is fully explicable in terms of the properties of the category's causal model, including (a) the strengths of the causal links and (b) the presence of unobserved (perhaps essential) features that are causally related to the observed ones. Another important empirical finding concerns how the changes to features' categorization importance brought about by causal knowledge are mediated by their subjective category validity (i.e., likelihood among category members). In every experimental condition in which classification ratings revealed a full causal status effect, participants also rated feature X as more likely than Y and Y as more likely than Z; whenever a causal status effect was absent, features' likelihood ratings were not significantly different. Apparently, causal knowledge changes the perceived likelihood with which a feature is generated by a category's causal model, and any feature that occurs with greater probability among category members (i.e., has greater category validity) should provide greater evidence in favor of category membership (Rosch & Mervis, 1975). Other studies have shown that a feature's influence on categorization judgments correlates with its subjective category validity. For example, although Study 5 of Sloman et al.
(1998) found that features' judged mutability dissociated from their objective category validity, those judgments tracked participants' subjective judgments of category validity.10 In summary, what should be concluded about the causal status effect? On one hand, the causal status effect does not rise to the level of an unconditional general law, that is, one that holds regardless of the causal facts involved. Even controlling for the effects of contrast categories, empirical/statistical information, and the salience of individual features, in the 56
Some studies have claimed to show just such a dissociation between feature importance and category validity, however. For example, in Ahn et al. (2000, Experiment 2), participants first observed exemplars with three features that appeared with equal frequency and then rated the frequency of each feature. They then learned causal relations forming a causal chain and rated the goodness of missing-X, missing-Y, and missing-Z test items. Whereas features’ likelihood ratings did not differ, the missing-X item was rated lower than the missing-Y item which was lower than the missing-Z item, a result the authors interpreted as demonstrating a dissociation between category validity and categorization importance. This conclusion is unwarranted, however, because the frequency ratings were gathered before the presentation of the causal relations. Clearly, one can only assess whether perceived category validity mediates the relationship between causal knowledge and features’ categorization importance by assessing category validity after the causal knowledge has been taught. Sloman et al. (1998, Study 5) and Rehder and Kim (2009b) gathered likelihood ratings after the causal relationships were learned and found no dissociation with feature weights.
experimental conditions in the 15 studies reviewed in this chapter testing novel categories, a full causal status effect obtained in 26 of them, a partial effect (a higher weight on the root cause only, e.g., X > Y = Z) obtained in 7, and there was no effect (or the effect was reversed) in 23.11 On the other hand, these experiments also suggest that a causal status effect occurs under conditions that are not especially exotic; specifically, it is promoted by (a) probabilistic causal links, (b) the absence of alternative causes, (c) essentialized categories, and (d) nonfunctional effect features. Because these conditions are likely to arise often in real-world categories, the causal status effect is one of the main phenomena in causal-based classification.
5. Relational Centrality and Multiple-Cause Effects

The causal status effect is one important way that causal knowledge changes the importance of individual features to category membership. However, there are many documented cases in which effect features are more important than their causes rather than the other way around. For example, Rehder and Hastie (2001) instructed subjects on the two networks in Figure 8. In the common-cause network, one category feature (X) is described as causing the three other features (Y1, Y2, and Y3). In the common-effect network, one feature (Y) is described as being caused by each of the three others (X1, X2, and X3).
Figure 8 Causal networks tested in Rehder and Hastie (2001): (A) common-cause network, (B) common-effect network. 11
For the following 15 studies, "[a/b/c]" represents the number of conditions in which a full (a), partial (b), or zero or negative (c) causal status effect obtained: Sloman et al. (1998) [2/0/0]; Ahn (1998) [2/0/0]; Ahn et al. (2000) [2/0/0]; Rehder and Hastie (2001) [3/0/3]; Kim and Ahn (2002b) [1/0/0]; Rehder (2003a) [1/0/1]; Rehder (2003b) [0/2/1]; Rehder and Kim (2006) [0/2/4]; Luhmann et al. (2006) [7/2/0]; Marsh and Ahn (2006) [2/0/4]; Rehder and Kim (2008) [0/1/1]; Rehder and Kim (2009b) [5/0/3]; Lombrozo (2009) [1/0/1]; Rehder (2009) [1/0/3]; and Hayes and Rehder (2009) [0/0/2]. Note that because Sloman et al. and Luhmann et al. either did not gather or did not report ratings for missing-Y test items in some conditions, the causal status effect is counted as "full" in those cases. Also note that the results from Luhmann et al.'s 300 ms deadline condition of their Experiment 2B are excluded.
(The causes were described as independent, noninteracting causes of Y.) No information about the strength of the causes or background causes was provided. After learning these causal structures, subjects were asked to rate the category membership of all 16 items that could be formed on the four binary dimensions. The regression weights on features averaged over Experiments 1–3 from Rehder and Hastie (2001) are presented in Figure 9. In the common-cause condition, the common-cause feature X was weighed much more heavily than the effects. That is, a strong causal status effect occurred. However, in the common-effect condition the effect feature Y was weighed much more heavily than any of its causes. That is, a reverse causal status effect occurred. This pattern of feature weights for common-cause and common-effect networks has been found in other experiments (Ahn, 1999; Ahn & Kim, 2001; Rehder, 2003a; Rehder & Kim, 2006).12 Two explanations of this effect have been offered. The first, which I refer to as the relational centrality effect, states that a feature's importance to category membership is a function of the number of causal relations in which it is involved (Rehder & Hastie, 2001). On this account, Y is most important in a common-effect network because it is involved in three causal relations whereas its causes are involved in only one. The second explanation, the multiple-cause effect, states that a feature's importance increases as a function of its number of causes (Ahn & Kim, 2001; Rehder & Kim, 2006). On this account, Y is most important because it has three causes whereas the
Figure 9 Classification results from Rehder and Hastie (2001): (A) common-cause condition, (B) common-effect condition. 12
Although Ahn and Kim (2001) did not themselves report the results of regression analyses, a regression analysis of the average classification results in their Table I yields weights of 0.47 on the causes and 1.12 on the common effect, confirming the greater importance of the common effect in their study.
causes themselves have zero. Note that because neither of these accounts alone explains the causal status effect (e.g., feature X in Figure 1 has neither the greatest number of causes nor the greatest number of relations), the claim is that these principles operate in addition to the causal status effect rather than serving as alternatives to it. I first review evidence for and against each of these effects and then discuss their implications for the dependency and generative models.
5.1. Evidence Against a Relational Centrality Effect and for a Multiple-Cause Effect

A study already reviewed in Section 4.4 provides evidence against the relational centrality effect. Recall that Rehder and Kim (2006, Experiment 3) tested the two causal networks shown in Figure 5. Feature Y has three effects in the 1-1-3 network but only one in the 1-1-1 network. The results showed that Y was not relatively more important in the 1-1-3 condition than in the 1-1-1 condition (Figure 6). These results were interpreted as evidence against the dependency model's claim that features' importance increases with their number of effects, but they also speak against the claim that importance increases with their number of relations: Feature Y is involved in four causal relations in the 1-1-3 network but only two in the 1-1-1 network. Feature importance does not appear to generally increase with the number of relations. This result implies that the elevated weight on the common-effect feature in Figure 9 must instead be due to its having multiple causes. Accordingly, Rehder and Kim (2006, Experiment 2) tested the multiple-cause effect by teaching subjects the two causal structures in Figure 10. Participants in both conditions were instructed on categories with five features, but whereas feature Y had three causes in the 3-1-1 condition (3 root causes, 1 intermediate cause, 1 effect), it had only one in the 1-1-1 condition. In the 1-1-1 condition, which feature played the role of Y's cause was balanced between X1, X2, and X3. After learning these causal category structures, subjects were asked to rate the category membership of all 32 items that could be formed on the five binary dimensions. The results of this experiment are presented in Figure 11. (In the figure, the weight for "X (cause)" is averaged over X1, X2, and X3 in the 3-1-1
Figure 10 Causal networks tested in Rehder and Kim (2006, Experiment 2): (A) 3-1-1 condition, (B) 1-1-1 condition.
condition and is for the single causally related X in the 1-1-1 condition. The weight for ‘‘X (isolated)’’ is for the isolated Xs in the 1-1-1 condition.) Figure 11 confirms the presence of a multiple-cause effect: Feature Y was weighed relatively more in the 3-1-1 condition when it had three causes versus the 1-1-1 condition when it only had one. These results show that a feature’s number of causes influences the weight it has on category membership judgments.13
5.2. Evidence for an Isolated Feature Effect

Although a feature's weight does not generally increase with its number of relations, there is substantial evidence showing that features involved in at least one causal relation are more important than those involved in zero (so-called isolated features). For example, in Rehder and Kim's (2006)
Figure 11 Classification results from Rehder and Kim (2006, Experiment 2). Weight on the "cause" X's is averaged over all three X features in the 3-1-1 condition and is for the single X feature that plays the role of a cause in the 1-1-1 condition. Weight on the "isolated" X's is the average of the two causally unrelated X features in the 1-1-1 condition (see Figure 10).
Note that some studies have failed to find a multiple-cause effect with a common-effect network. For example, using virtually the same materials as Rehder and Hastie (2001) and Rehder and Kim (2006) except for the use of the "normal" wording for atypical feature values (see Section 6.4), Marsh and Ahn (2006) failed to find an elevated weight on the common effect. Additional research will be required to determine whether this is a robust finding.
Experiment 2 just discussed, weights on the isolated features in the 1-1-1 condition (X1 and X3 in Figure 10) were lower than on any of the causally related features (Figure 11). Likewise, in Rehder and Kim’s Experiment 3, weights on the isolated features in the 1-1-1 condition (Z1 and Z3 in Figure 5) were lower than on any of the causally related features (Figure 6) (also see Kim & Ahn, 2002a). Of course, one might attribute this result to the fact that causally related features were mentioned more often during initial category learning and this repetition may have resulted in those features being treated as more important. However, even when Kim and Ahn (2002b) controlled for this by relating the isolated features to each other via noncausal relations, they were still less important than the causally linked features. That features involved in at least one causal relation are more important than isolated features will be referred to as the isolated feature effect.
5.3. Theoretical Implications: Discussion

What implications do these findings have for the dependency and generative models? First, the multiple-cause effect provides additional support for the generative model and against the dependency model. Because the dependency model predicts that feature importance varies with the number of dependents, it predicts no difference between the 3-1-1 and 1-1-1 conditions of Rehder and Kim (2006). In contrast, it is straightforward to show that the generative model generally predicts a multiple-cause effect. Because demonstrating this quantitatively for the networks in Figures 8 and 10 is cumbersome (it requires specifying likelihood equations for 16 and 32 separate items, respectively), I do so for a simpler three-feature common-effect network, one in which features X1 and X2 each independently cause Y. The likelihood equations for each item of this simplified network, generated by iteratively applying Eq. (2), are presented in Table 5. For comparison, the table also specifies the likelihood equations for a simplified three-feature common-cause structure (X causes Y1 and Y2). The table also presents the probability of each item for a sample set of parameter values, namely, cX = 0.75, mXY1 = mXY2 = 0.75, and bY1 = bY2 = 0.10 for the common-cause network and cX1 = cX2 = 0.75, mX1Y = mX2Y = 0.75, and bY = 0.10 for the common-effect network. From these item distributions, the probabilities of individual features can be computed. (The predicted feature interactions for these networks, also presented in the table, will be discussed in Section 6.) First note that, for the common-cause network, the generative model predicts a larger weight on the common cause than on the effect features. Second, for the common-effect network, it predicts a weight on the common-effect feature, pk(Y) = 0.828, which is greater than that of its causes (pk(Xi) = 0.750) or of the Y's in the common-cause network, which each have only one cause (pk(Yi) = 0.606).
Table 5  Predictions for the Generative Model for Three-Feature Common-Cause and Common-Effect Networks.

Common-cause network (X causes Y1 and Y2)
  Model parameters: cX = 0.750; mXY1 = mXY2 = 0.750; bY1 = bY2 = 0.100.
  Item likelihoods:
    pk(XY1Y2)    = cX·(mXY1 + bY1 − mXY1·bY1)(mXY2 + bY2 − mXY2·bY2)    = 0.450
    pk(XY1¬Y2)   = cX·(mXY1 + bY1 − mXY1·bY1)(1 − mXY2)(1 − bY2)        = 0.131
    pk(X¬Y1Y2)   = cX·(1 − mXY1)(1 − bY1)(mXY2 + bY2 − mXY2·bY2)        = 0.131
    pk(¬XY1Y2)   = (1 − cX)·bY1·bY2                                     = 0.003
    pk(¬X¬Y1Y2)  = (1 − cX)(1 − bY1)·bY2                                = 0.023
    pk(¬XY1¬Y2)  = (1 − cX)·bY1·(1 − bY2)                               = 0.023
    pk(X¬Y1¬Y2)  = cX·(1 − mXY1)(1 − bY1)(1 − mXY2)(1 − bY2)            = 0.038
    pk(¬X¬Y1¬Y2) = (1 − cX)(1 − bY1)(1 − bY2)                           = 0.203
  Feature probabilities: pk(X) = 0.750; pk(Y1) = pk(Y2) = 0.606.
  Interfeature contrasts:
    Δpk(X, Y1) [direct]     = 0.675
    Δpk(X, Y2) [direct]     = 0.675
    Δpk(Y1, Y2) [indirect]  = 0.358
    Δpk(Y1, X, Y2)          = 0
    Δpk(Y2, X, Y1)          = 0

Common-effect network (X1 and X2 independently cause Y)
  Model parameters: cX1 = cX2 = 0.750; mX1Y = mX2Y = 0.750; bY = 0.100.
  Item likelihoods:
    pk(X1X2Y)    = cX1·cX2·[1 − (1 − mX1Y)(1 − mX2Y)(1 − bY)]           = 0.531
    pk(X1X2¬Y)   = cX1·cX2·(1 − mX1Y)(1 − mX2Y)(1 − bY)                 = 0.032
    pk(X1¬X2Y)   = cX1·(1 − cX2)·[1 − (1 − mX1Y)(1 − bY)]               = 0.145
    pk(¬X1X2Y)   = (1 − cX1)·cX2·[1 − (1 − mX2Y)(1 − bY)]               = 0.145
    pk(¬X1¬X2Y)  = (1 − cX1)(1 − cX2)·bY                                = 0.006
    pk(¬X1X2¬Y)  = (1 − cX1)·cX2·(1 − mX2Y)(1 − bY)                     = 0.042
    pk(X1¬X2¬Y)  = cX1·(1 − cX2)·(1 − mX1Y)(1 − bY)                     = 0.042
    pk(¬X1¬X2¬Y) = (1 − cX1)(1 − cX2)(1 − bY)                           = 0.056
  Feature probabilities: pk(X1) = pk(X2) = 0.750; pk(Y) = 0.828.
  Interfeature contrasts:
    Δpk(X1, Y) [direct]     = 0.295
    Δpk(X2, Y) [direct]     = 0.295
    Δpk(X1, X2) [indirect]  = 0
    Δpk(X2, X1, Y)          = −0.506
    Δpk(X1, X2, Y)          = −0.506

ci, the probability that root feature i appears in category members; mij, strength of the causal relation between i and j; bj, strength of j's background causes; ¬i denotes the absence of feature i. Direct, contrasts between features that are directly causally related; indirect, contrasts between features that are indirectly related. Δpk(i, j) = pk(j|i) − pk(j|¬i); Δpk(h, i, j) = [pk(j|ih) − pk(j|¬ih)] − [pk(j|i¬h) − pk(j|¬i¬h)].

This prediction
Bob Rehder
of the generative model corresponds to the simple intuition that an event will be more likely to the extent that it has many versus few causes.14 These predictions of higher weights on a common cause and a common effect reproduce the empirical results in Figure 9. Other research has also shown that an intuitive judgment of an event’s probability will be greater when multiple potential causes of that event are known. For example, according to Tversky and Koehler’s (1994) support theory, the subjective probability of an event increases when supporting evidence is enumerated (death due to cancer, heart disease, or some other natural cause) rather than summarized (death due to natural causes) (also see Fischoff, Slovic, & Lichtenstein, 1978). And Rehder and Milovanovic (2007) found that an event was rated as more probable as its number of causes increased (from 1 to 2 to 3). Note that Ahn and Kim (2001) also proposed that the multiple-cause effect found with common-effect networks was due to the greater subjective category validity associated with common-effect features.15 However, whereas the multiple-cause effect provides additional support for the generative model, the isolated feature effect is problematic for both the generative and the dependency models. For example, for the 3-1-1 network in Figure 10, the generative model stipulates one free c parameter for each X and, in the absence of any other information, those cs should be equal. Thus, it predicts that X1 and X3 should have the same weight as X2. Because they have the same number of dependents (zero), the dependency model predicts that X1 and X3 should have the same weight as Z. Why should features be more important simply because they are involved in one causal relation?
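The Table 5 predictions can be checked by brute-force enumeration. The sketch below is illustrative rather than the original analysis code; it assumes, in the spirit of the generative model, that multiple causes of a feature combine as a noisy-OR (each cause's influence operates independently), with parameter values taken from Table 5:

```python
from itertools import product

def noisy_or(b, strengths):
    """Probability an effect is present given background strength b and the
    strengths of its present causes, assuming independent causal influences."""
    p_absent = 1 - b
    for m in strengths:
        p_absent *= 1 - m
    return 1 - p_absent

c, m, b = 0.75, 0.75, 0.10  # parameter values from Table 5

def p_common_cause(x, y1, y2):
    """Item likelihood for the network Y1 <- X -> Y2."""
    p = c if x else 1 - c
    py = noisy_or(b, [m] if x else [])
    for y in (y1, y2):
        p *= py if y else 1 - py
    return p

def p_common_effect(x1, x2, y):
    """Item likelihood for the network X1 -> Y <- X2."""
    py = noisy_or(b, [m] * (x1 + x2))
    return ((c if x1 else 1 - c) * (c if x2 else 1 - c)
            * (py if y else 1 - py))

# A feature's probability is the summed likelihood of all items containing it.
p_y1 = sum(p_common_cause(x, 1, y2) for x, y2 in product((0, 1), repeat=2))
p_y = sum(p_common_effect(x1, x2, 1) for x1, x2 in product((0, 1), repeat=2))
print(round(p_y1, 3), round(p_y, 3))  # 0.606 versus 0.828: the multiple-cause effect
```

With these parameters the effect with two causes (pK(Y) = 0.828) is indeed more probable than an effect with one cause (pK(Y1) = 0.606), reproducing the Table 5 values.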
Ahn and Kim (2001) have proposed that this effect is related to Gentner’s (1989) structure-mapping theory, in which statements that relate two or more concepts (e.g., the earth revolves around the sun, represented as revolves-around(earth, sun)) are more important in analogical mapping than

[14] It is important to note that these predictions depend on the particular model parameters chosen. First, just as for a chain network, the generative model predicts a causal status effect for a common-cause network only when causal links are probabilistic. Second, regarding the common-effect network, the claim is that the probability of an effect will increase with its number of causes. Whether the effect will be more probable than the causes themselves (as it is in the example in Table 5) also depends on the strength of the causal links. Third, whether the probability of a common effect in fact increases will interact with the subject’s existing statistical beliefs about the category. For example, if one is quite certain about the category validity of the effect (e.g., because it is based on many observations), then the introduction of additional causes might instead be accommodated by lowering one’s estimates of the strengths of the causal links (the m and b parameters). See Rehder and Milovanovic (2007) for evidence of this kind of mutual influence between data and theory. Studies that systematically manipulate the strength of causal links in common-cause and common-effect networks (as Rehder & Kim, 2009b, did with a chain network) have yet to be conducted.

[15] They also provided indirect evidence for this claim. They presented subjects with items in which the presence of the common-effect feature was unknown and asked them to rate the likelihood that it was present. They found that inference ratings increased as a function of the number of causes present in the item. This result is consistent with the view that people can use causal knowledge to infer category features (Rehder & Burnett, 2005). It also implies that a feature will have greater category validity when it has multiple causes. Note that this result is analogous to the findings in Section 3, in which a change in feature weights (a causal status effect) was always accompanied by a change in features’ category validity (likelihood of appearing in category members).
Causal-Based Categorization: A Review
statements involving a single argument (e.g., hot(sun)). Of course, the primary result to be explained is not the importance of predicates (e.g., revolves-around and hot) but rather the importance of features (that play the role of arguments in predicates, e.g., causes(X, Y)). But whatever the reason, the isolated feature effect joins the causal status and multiple-cause effects as an important way that causal knowledge influences classification judgments.
6. The Coherence Effect

The next phenomenon addressed is the coherence effect. Whereas the causal status and multiple-cause effects (and the isolated feature effect) involve the weights of individual features, the coherence effect reflects an interaction between features. The claim is that better category members are those whose features appear in sensible or coherent combinations. For example, if two features are causally related, then one expects the cause and effect feature to usually be both present or both absent. In fact, large coherence effects have been found in every study in which they have been assessed (Marsh & Ahn, 2006; Rehder, 2003a,b, 2007, 2009b; Rehder & Hastie, 2001; Rehder & Kim, 2006, 2009b). It is important to recognize that effects of causal knowledge on feature weights and feature interactions are not mutually exclusive. As reviewed in Section 2.2, weights and interactions represent two orthogonal effects (corresponding to ‘‘main effects’’ and ‘‘interactions’’ in ANOVA). Indeed, some of the studies reviewed below demonstrating coherence effects are the same ones reviewed in Sections 4 and 5 showing feature weight effects. In other words, causal knowledge changes the importance of both features and combinations of features to categorization. In the subsections that follow I review studies testing the generative model’s predictions regarding how coherence effects are influenced by changes in model parameters (e.g., the strengths of causal links) and the topology of the causal network. The first two studies demonstrate effects manifested in terms of two-way interactions between features; the third also demonstrates an effect on higher order interactions between features. Recall that the dependency model predicts an effect of causal knowledge on feature weights but not feature interactions, and so is unable to account for coherence effects.
6.1. Causal Link Strength

Recall that, according to the generative model, when features are causally related one expects those features to be correlated. For example, for the three-element chain network (Figure 1), one expects the two directly
causally related feature pairs (X and Y, and Y and Z) to be correlated for most parameter values, and the indirectly related pair (X and Z) to be more weakly correlated. Moreover, one expects these correlations to be influenced by the strength of the causal relations. Table 3 shows the generative model’s predictions for different causal strengths, holding the b parameters fixed at 0.10. Note two things. First, Table 3 indicates that the magnitude of the probabilistic contrasts between features increases as mXY and mYZ increase. The contrasts between directly related features are 0.300, 0.675, 0.810, and 0.900 for causal strengths of 0.33, 0.75, 0.90, and 1.0 (Table 3); the contrasts between the indirectly related features are 0.090, 0.456, 0.656, and 0.810. Intuitively, the model makes this prediction because features that are more strongly causally related should be more strongly correlated. Second, Table 3 indicates that the difference between the direct and indirect contrasts changes systematically with strength. The model makes this prediction because, although the correlation between directly related pairs should be stronger than that between the indirectly related pair for many parameter values, this difference will decrease as mXY and mYZ approach 1 or 0. For example, when the ms = 1 (and there are no background causes), features X, Y, and Z are all perfectly correlated (e.g., Y is present if and only if Z is) and thus there is no difference between direct and indirect contrasts. Likewise, when the ms = 0, features X, Y, and Z should be perfectly uncorrelated and thus there is again no difference between direct and indirect contrasts. In other words, the generative model predicts that the direct/indirect difference should be a nonmonotonic function of causal strength. These predictions were tested in Experiment 1 of Rehder and Kim (2009b), described earlier, in which the strength of the causal links was either 100% or 75%.
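These contrast predictions can be recovered by enumerating the eight items of the chain network. The sketch below is illustrative (not the published analysis code) and assumes noisy-OR integration of a feature's cause and its background cause; 1/3 is used for the "0.33" causal strength:

```python
from itertools import product

def chain_prob(x, y, z, m, c=0.75, b=0.10):
    """Item likelihood for the chain X -> Y -> Z with causal strength m and
    background-cause strength b (noisy-OR combination)."""
    p = c if x else 1 - c
    py = 1 - (1 - b) * ((1 - m) if x else 1)   # P(Y present | X)
    p *= py if y else 1 - py
    pz = 1 - (1 - b) * ((1 - m) if y else 1)   # P(Z present | Y)
    p *= pz if z else 1 - pz
    return p

def contrast(i, j, m):
    """Probabilistic contrast p(j | i) - p(j | ~i), by enumeration."""
    def p_j_given_i(val):
        items = [it for it in product((0, 1), repeat=3) if it[i] == val]
        num = sum(chain_prob(*it, m) for it in items if it[j] == 1)
        return num / sum(chain_prob(*it, m) for it in items)
    return p_j_given_i(1) - p_j_given_i(0)

for m in (1 / 3, 0.75, 0.90, 1.0):
    direct = contrast(0, 1, m)    # X with Y (Y with Z yields the same value)
    indirect = contrast(0, 2, m)  # X with Z
    print(round(m, 2), round(direct, 3), round(indirect, 3),
          round(direct - indirect, 3))
```

The loop reproduces the direct contrasts of 0.300, 0.675, 0.810, and 0.900 and the indirect contrasts of 0.090, 0.456, 0.656, and 0.810, and the direct/indirect difference in the last column is nonmonotonic in m, as the text describes.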
Subjects’ sensitivity to correlations between features was assessed via regression analyses that included two-way interaction terms for each pair of features. Regression weights on those interaction terms reflect the sensitivity of classification ratings to whether each pair of features is both present or both absent versus one present and the other absent. The two-way interaction terms are presented in the right panel of Figure 2A. In the figure, the weights on the two directly related pairs (X and Y, and Y and Z) have been collapsed together and are compared against the single indirectly related pair (X and Z). The first thing to note is that in both the Chain-100 and the Chain-75 conditions both sorts of interaction terms were significantly greater than zero. This reflects the fact that subjects granted items higher category membership ratings to the extent they were coherent in light of a category’s causal laws (e.g., whether cause and effect features were both present or both absent). As expected, in the Control condition both sorts of two-way interaction terms (not shown in Figure 2A) were close to zero.
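The logic of such a regression can be sketched as follows. The −1/+1 feature coding and the ratings below are hypothetical choices for illustration, not the study's actual coding scheme or data; the point is only that positive weights on the pairwise product terms signal sensitivity to coherence:

```python
import numpy as np
from itertools import combinations, product

# The eight test items on three binary dimensions, coded -1 (absent) / +1 (present).
items = np.array(list(product((-1.0, 1.0), repeat=3)))
pairs = list(combinations(range(3), 2))

# Design matrix: intercept, one term per feature, one product term per feature pair.
X = np.column_stack([np.ones(len(items))]
                    + [items[:, i] for i in range(3)]
                    + [items[:, i] * items[:, j] for i, j in pairs])

# Hypothetical ratings in which the coherent items 111 and 000 fare best.
ratings = np.array([75.0, 40, 35, 50, 45, 40, 55, 95])

coefs, *_ = np.linalg.lstsq(X, ratings, rcond=None)
feature_weights = coefs[1:4]      # "main effects" of the three features
interaction_weights = coefs[4:7]  # sensitivity to pairs matching in presence/absence
print(interaction_weights)        # all positive: matched pairs earn higher ratings
```

Because a product term equals +1 when a pair is both present or both absent and −1 otherwise, its regression weight directly measures the coherence sensitivity the text describes.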
Moreover, the generative model also correctly predicts how the magnitude of the interaction weights varied over condition (Chain-100 and Chain-75) and type (direct and indirect). First, it was predicted that the magnitude of the interactions terms should be greater in the Chain-100 versus the Chain-75 condition. Second, it was predicted that the difference between the direct and indirect terms should be small or absent in the Chain-100 condition and larger in the Chain-75 condition. In fact, this is exactly the pattern presented in Figure 2A.16 Causal link strength is one important factor that determines not only the size of coherence effects but also more subtle aspects of that effect (e.g., the difference between the direct and indirect terms). The effect of coherence in this experiment can also be observed directly in the classification ratings of individual test items. Figure 12A presents the test item classification ratings as a function of their number of characteristic features. As expected, in the Control condition ratings were a simple monotonic function of the number of features. In contrast, items with 2 or 1 features were rated lower than those with 3 or 0 (i.e., items 111
[Figure 12 appears here: classification ratings (0-100) plotted against the number of characteristic features (0-3); panel A shows the Chain-100, Chain-75, and Control conditions of Experiment 1; panel B shows the Background-50 and Background-0 conditions of Experiment 2 along with the Experiment 1 Control condition.]
Figure 12 Classification ratings from Rehder and Kim (2009b) for test items collapsed according to their number of typical features: (A) Experiment 1, (B) Experiment 2.
[16] Although the interaction between condition and interaction term did not reach significance (p > 0.15), separate analyses revealed a significant effect of direct versus indirect interaction terms in the Chain-75 condition (p < 0.01) but not in the Chain-100 condition (p = 0.17).
and 000) in both causal conditions; moreover, this effect was more pronounced in the Chain-100 condition than in the Chain-75 condition. Intuitively, the explanation for these differences is simple. When Control participants are told, for example, that ‘‘most’’ myastars are very hot, have high density, and have a large number of planets, they expect that most myastars will have most of those features and that the atypical values exhibited by ‘‘some’’ myastars (unusually cool temperature, low density, and small number of planets) will be spread randomly among category members. That is, they expect the category to exhibit a normal family resemblance structure in which features are independent (i.e., are uncorrelated within the category). But when those features are causally related, the prototype 111 and item 000 receive the highest ratings. Apparently, rather than expecting a family resemblance structure with uncorrelated features, participants expected the ‘‘most’’ dimension values to cluster together (111) and the ‘‘some’’ values to cluster together (000), because that distribution of features is most sensible in light of the causal relations that link them. As a result, the rating of test item 000 is an average of 30 points higher in the causal conditions than in the Control condition. In contrast, items that are incoherent because they have one or two characteristic features (and thus have a mixture of ‘‘most’’ and ‘‘some’’ values) are rated 29 points lower than in the Control condition.
6.2. Background Causes

Experiment 2 of Rehder and Kim (2009b), described earlier, also tested how coherence effects vary with the strength of background causes. Table 3 shows the generative model’s predictions for different background strengths, holding the causal strengths (mXY and mYZ) fixed at 0.75. Note that Table 3 indicates that the magnitude of the probabilistic contrasts between features decreases as bY and bZ increase. The contrasts between directly related features are 0.75, 0.56, and 0.38 for values of bY and bZ of 0, 0.25, and 0.50, respectively; the contrasts between the indirectly related features are 0.56, 0.32, and 0.14. Intuitively, the model makes this prediction because a cause becomes less strongly correlated with its effect to the extent that the effect has alternative causes. In addition, the generative model once again predicts that the direct correlations should be stronger than the indirect one. Recall that Rehder and Kim (2009b, Experiment 2) directly manipulated background causes by describing the strength of those causes as either 0% (Background-0 condition) or 50% (Background-50 condition). Once again, the weights on two-way interaction terms derived from regression analyses performed on the classification ratings represent the magnitude of the coherence effect in this experiment. The results, presented in the right panel of Figure 2B, confirm the predictions: The interaction terms were
larger in the Background-0 condition than the Background-50 condition and the direct terms were larger than the indirect terms. Once again, the strong effect of coherence is apparent in the pattern of test item ratings shown in Figure 12B. (The figure includes ratings from the Control condition from Rehder and Kim’s Experiment 1 for comparison.) Whereas in the Control condition ratings are a monotonic function of the number of characteristic features, in the causal conditions incoherent items with two or one features are rated lower relative to the Control condition and the coherent item 000 is rated higher. Apparently, participants expected category members to reflect the correlations that the causal relations generate: The causally linked characteristic features should be more likely to appear together in one category member and atypical features should be more likely to appear together in another. Moreover, Figure 12B shows that this effect was more pronounced in the Background-0 condition than the Background-50 condition. The strength of background causes is a second important factor that determines the size of coherence effects.
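Under the noisy-OR assumptions used throughout, the Table 3 contrasts for the chain have a simple closed form, which the following illustrative sketch (not the original analysis code) uses to reproduce the values just quoted:

```python
def chain_contrasts(m, b):
    """For the chain X -> Y -> Z (noisy-OR): P(Y|X) = m + b - m*b and
    P(Y|~X) = b, so the direct contrast is m*(1 - b); Z depends on X only
    through Y, so the indirect contrast is the square of the direct one."""
    direct = m * (1 - b)
    return direct, direct ** 2

for b in (0.0, 0.25, 0.50):  # background strengths from Table 3
    direct, indirect = chain_contrasts(0.75, b)
    print(b, round(direct, 2), round(indirect, 2))
```

The loop reproduces the direct contrasts of 0.75, 0.56, and 0.38 and the indirect contrasts of 0.56, 0.32, and 0.14: both shrink as the background strength b grows, with the direct contrast always exceeding the indirect one.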
6.3. Higher Order Effects

In this section, I demonstrate how the generative model predicts not only two-way but also higher order interactions among features. Consider again the common-cause and common-effect networks in Figure 8. Of course, both networks imply that those feature pairs that are directly related by a causal relation will be correlated. In addition, as in a chain network, the common-cause network implies that the indirectly related features (the effects Y1, Y2, and Y3) should be correlated for most parameter values (i.e., so long as cX < 1, the ms > 0, and the bs < 1), albeit not as strongly as the directly related features (so long as the ms < 1). The expected correlations between the effects are easy to understand given the inferential potency that exists among these features. For example, if in some object you know only about the presence of Y1, you can reason from Y1 to the presence of X and then from X to the presence of Y2. This pattern of correlations is exhibited in Table 5 by the simplified three-feature common-cause network: Contrasts of 0.675 between the directly related features and 0.358 between the indirectly related ones. In contrast, the common-effect network implies a different pattern of feature interactions. In a disanalogy with a common-cause network, the common-effect network does not predict any correlations among the independent causes of Y because they are just that (independent). If in some object you know only about the presence of X1, you can reason from X1 to Y, but then the (inferred) presence of Y does not license an inference to X2. However, unlike the common-cause network, the common-effect network implies higher order interactions among features, namely, between each pair of causes and the common effect Y. The following example
provides the intuition behind these interactions. When the common-effect feature Y is present in an object, that object will of course be a better category member if a cause feature (e.g., X1) is also present. However, the presence of X1 will be less important when another cause (e.g., X2) is already present to ‘‘explain’’ the presence of Y. In other words, a common-effect network’s higher order interactions reflect the diminishing marginal returns associated with explaining an effect that is already explained. This pattern is exhibited in Table 5 by the simplified three-feature common-effect network: Contrasts of 0.295 between the directly related features, 0 between the indirectly related causes, and higher order contrasts between X1, X2, and Y: Δpk(X2, X1, Y) = Δpk(X1, X2, Y) = −0.506. These higher order contrasts reflect the normative behavior of discounting in the case of multiple sufficient causation during causal attribution (Morris & Larrick, 1995). In contrast, Table 5 shows that the analogous higher order contrasts for the common-cause network, Δpk(Y1, X, Y2) and Δpk(Y2, X, Y1), are each 0, reflecting the absence of discounting for that network. To test whether these expected higher order interactions would have the predicted effect on classification judgments, participants in Rehder (2003a) learned categories with four features and three causal links arranged in either the common-cause or common-effect networks shown in Figure 8. No information about the strength of the causes or background causes was provided. To provide a more sensitive test of the effect of feature interactions, this study differed from Rehder and Hastie (2001) by omitting any information about which features were individually typical of the category. Subjects then rated the category membership of all 16 items that can be formed on four binary dimensions. A Control condition with no causal links was also tested.
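The full set of Table 5 contrasts, including the discounting pattern, can be recovered by enumeration. This sketch is illustrative only; it assumes the noisy-OR semantics and the Table 5 parameter values, with features indexed 0, 1, 2 in the order they appear in each network:

```python
from itertools import product

c, m, b = 0.75, 0.75, 0.10  # parameter values from Table 5

def p_cc(x, y1, y2):
    """Item likelihood for the common-cause network Y1 <- X -> Y2."""
    py = (m + b - m * b) if x else b
    return ((c if x else 1 - c)
            * (py if y1 else 1 - py) * (py if y2 else 1 - py))

def p_ce(x1, x2, y):
    """Item likelihood for the common-effect network X1 -> Y <- X2 (noisy-OR)."""
    py = 1 - (1 - b) * (1 - m) ** (x1 + x2)
    return ((c if x1 else 1 - c) * (c if x2 else 1 - c)
            * (py if y else 1 - py))

def cond(p_item, j, given):
    """P(feature j present | the assignments in `given`), by enumeration."""
    items = [it for it in product((0, 1), repeat=3)
             if all(it[i] == v for i, v in given.items())]
    return (sum(p_item(*it) for it in items if it[j] == 1)
            / sum(p_item(*it) for it in items))

def higher_order(p_item, h, i, j):
    """Delta-p(h, i, j): change in the i -> j contrast when h is present."""
    return ((cond(p_item, j, {h: 1, i: 1}) - cond(p_item, j, {h: 1, i: 0}))
            - (cond(p_item, j, {h: 0, i: 1}) - cond(p_item, j, {h: 0, i: 0})))

# Common cause (X=0, Y1=1, Y2=2): correlated effects, no discounting.
cc_direct = cond(p_cc, 1, {0: 1}) - cond(p_cc, 1, {0: 0})    # Delta-p(X, Y1)
cc_indirect = cond(p_cc, 2, {1: 1}) - cond(p_cc, 2, {1: 0})  # Delta-p(Y1, Y2)
cc_higher = higher_order(p_cc, 1, 0, 2)                      # Delta-p(Y1, X, Y2)

# Common effect (X1=0, X2=1, Y=2): uncorrelated causes, but discounting.
ce_direct = cond(p_ce, 2, {0: 1}) - cond(p_ce, 2, {0: 0})    # Delta-p(X1, Y)
ce_indirect = cond(p_ce, 1, {0: 1}) - cond(p_ce, 1, {0: 0})  # Delta-p(X1, X2)
ce_higher = higher_order(p_ce, 0, 1, 2)                      # Delta-p(X1, X2, Y)

print(round(cc_direct, 3), round(cc_indirect, 3), round(cc_higher, 3))
print(round(ce_direct, 3), round(ce_indirect, 3), round(ce_higher, 3))
```

The common-cause network yields contrasts of 0.675 (direct), 0.358 (indirect), and a zero higher order contrast, whereas the common-effect network yields 0.295, 0, and a negative higher order contrast of about −0.506: the signature of discounting.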
The regression weights from this experiment are presented in Figure 13. Note that the feature weights shown in Figure 13A replicate the feature weights found in Rehder and Hastie (2001): Higher weights on the common cause and the common effect (Figure 9). More importantly, the feature interactions shown in Figure 13B reflect the pattern of interfeature correlations just described. (In Figure 13B, for the common-cause network the ‘‘direct’’ two-way interactions are between X and its effects and the indirect ones are between the effects themselves; for the common-effect network, the ‘‘direct’’ interactions are between Y and its causes and the indirect ones are between the causes.) First, in both conditions the two-way interaction terms corresponding to directly causally related feature pairs had positive weights. Second, in the common-cause condition the indirect two-way interactions were greater than zero and smaller than the direct interactions, consistent with the expectation that the effects will be correlated with one another but not as strongly as with the cause. Third, in the common-effect network the indirect terms were not significantly greater than zero,
[Figure 13 appears here: results for the common-cause (left column) and common-effect (right column) conditions; panel A shows feature weights (X, Y1, Y2, Y3 and X1, X2, X3, Y), panel B shows interaction weights (direct, indirect, and three-way terms), and panel C shows classification ratings as a function of the number of effects (common cause) or causes (common effect) present.]
Figure 13 Classification ratings from Rehder (2003a): (A) feature weights, (B) interaction weights, (C) log classification ratings. Unlike previous figures depicting interaction weights, panel (B) presents the average regression weights on the three-way interactions involving X and two of its effects in the common-cause condition and Y and two of its causes in the common-effect condition.
consistent with the absence of correlations between the causes. Finally, the fact that the average of the three three-way interactions involving Y (i.e., the terms for X1X2Y, X1X3Y, and X2X3Y) was significantly negative in the common-effect condition reflects the higher order interactions that this structure is expected to generate (Table 5). This interaction is also depicted in the right panel of Figure 13C, which presents the logarithm of categorization ratings in the common-effect condition for those exemplars in which the common effect is present, as a function of the number of cause features. The figure shows the predicted nonlinear increase in category membership ratings as the number of cause features present to ‘‘explain’’ the common-effect feature increases. In contrast, for the common-cause network (Figure 13C, left panel), ratings increased linearly (in log units) with the number of additional effects present. (All two-way and higher order interactions were close to zero in the Control condition.) These results indicate that, consistent with the predictions of the generative model, subjects expect good category members to manifest the two-way and higher order correlations that causal laws generate.
6.4. Other Factors

Finally, just as was the case for the causal status effect, questions have been raised about the robustness of the coherence effect. Note that in early demonstrations of this effect, one value on each feature dimension was described as characteristic or typical of the category whereas the other, atypical value was often described as ‘‘normal’’ (Rehder, 2003a,b; Rehder & Hastie, 2001; Rehder & Kim, 2006). For example, in Rehder and Kim (2006), participants were told that ‘‘Most myastars have high temperature whereas some have a normal temperature,’’ ‘‘Most myastars have high density whereas some have a normal density,’’ and so on. Although the intent was to define myastars with respect to the superordinate category (all stars), Marsh and Ahn (2006) suggested that this use of ‘‘normal’’ might have inflated coherence effects, because participants might expect all the normal dimension values to appear together, and thereby reduced the causal status effect. To assess this hypothesis, Marsh and Ahn taught participants categories with four features connected in a number of different network topologies. For each, they compared an Unambiguous condition in which the uncharacteristic value on each binary dimension was the opposite of the characteristic value (e.g., low density vs. high density) with an Ambiguous condition (intended to be a replication of conditions from Rehder, 2003a,b) in which uncharacteristic values were described as ‘‘normal’’ (e.g., normal density). They found that the Unambiguous condition yielded a larger causal status effect and a smaller coherence effect, a result they interpreted as demonstrating that the ‘‘normal’’ wording exaggerates coherence effects.
However, this conclusion was unwarranted because the two conditions also differed on another dimension, namely, only the Unambiguous participants were given information about which features were typical of the category. In the absence of such information it is unsurprising that ratings in the Ambiguous condition were dominated by coherence. To determine whether use of ‘‘normal’’ affects classification, Rehder and Kim (2008) tested categories with four features arranged in a causal chain (W → X → Y → Z) and compared two conditions that were identical except that one used the ‘‘normal’’ wording and the other used bipolar dimensions (e.g., low vs. high density). The results, presented in Figure 14, show a pattern exactly the opposite of the Marsh and Ahn conjecture: The ‘‘normal’’ wording produced a smaller coherence effect and a larger causal status effect. Note that large coherence effects were also found in each of the four experiments from Rehder and Kim (2009b) reviewed above, which also avoided use of the ‘‘normal’’ wording. Why might bipolar dimensions lead to stronger coherence effects? One possibility is that participants infer the existence of additional causal links. For example, if you are told that myastars have either high or low temperature and either high or low density, and that high temperature causes high density, you might take this to mean that low temperature also causes low density. These results suggest that subtle differences in the wording of a causal relation can have large effects on how those links are encoded and then used in a subsequent reasoning task. But however one interprets them, these findings indicate that coherence effects do not depend on use of the ‘‘normal’’ wording.
[Figure 14 appears here: feature weights (W, X, Y, Z) and direct and indirect feature-interaction weights for the Normal and Bipolar wording conditions.]
Figure 14 Classification ratings from Rehder and Kim (2008).
6.5. Theoretical Implications: Discussion

Causal networks imply the existence of subtle patterns of correlations between variables: Directly related variables should be correlated, indirectly related variables should be correlated under specific conditions, and certain networks imply higher order interactions among variables. The studies just reviewed show that people’s classification judgments are exquisitely sensitive to those expected correlations. These results provide strong support for the generative model’s claim that good category members are those that manifest the expected pattern of interfeature correlations and poor category members are those that violate that pattern. As mentioned, the presence of coherence effects supports the generative model over the dependency model because only the former predicts feature interactions. Another model that predicts feature interactions, however, is Rehder and Murphy’s (2003) KRES recurrent connectionist model, which represents relations between category features as excitatory and inhibitory links (also see Harris & Rehder, 2006). KRES predicts interactions because features that are consistent with each other in light of knowledge will raise each other’s activation level (due to the excitatory links between them), which in turn will activate a category label more strongly; inconsistent features will inhibit one another (due to the inhibitory links between them), which will result in a less active category label. But although KRES accounts for a number of known effects of knowledge on category learning, its excitatory and inhibitory links are symmetric, and so it is fundamentally unable to account for the effects reviewed above demonstrating that subjects treat a causal link as an asymmetric relation.
For example, if one ignores causal direction, X and Z in the causal chain in Figure 1 are indistinguishable (and thus there is no basis for predicting a causal status effect), as are the common-cause and common-effect networks in Figure 8 (and thus there is no basis for predicting the different patterns of feature interactions for those networks). Indeed, the asymmetry between common-cause and common-effect networks has been the focus of considerable investigation in both the philosophical and psychological literatures (Reichenbach, 1956; Salmon, 1984; Waldmann & Holyoak, 1992; Waldmann, Holyoak, & Fratianne, 1995). The importance of coherence to classification has been documented by numerous other studies. For example, Wisniewski (1995) found that certain artifacts were better examples of the category ‘‘captures animals’’ when they possessed certain combinations of features (e.g., ‘‘contains peanuts’’ and ‘‘caught a squirrel’’) but not others (‘‘contains acorns’’ and ‘‘caught an elephant’’) (also see Murphy & Wisniewski, 1989). Similarly, Rehder and Ross (2001) showed that artifacts were considered better examples of a category of pollution-cleaning devices when their features cohered (e.g., ‘‘has a metal pole with a sharpened end’’ and ‘‘works to gather
discarded paper’’), and worse examples when their features were incoherent (‘‘has a magnet’’ and ‘‘removes mosquitoes’’). Malt and Smith (1984) found that judgments of typicality in natural categories were sensitive to whether items obeyed or violated theoretically expected correlations (also see Ahn et al., 2002). Coherence also affects other types of category-related judgments. Rehder and Hastie (2004) found that participants’ willingness to generalize a novel property displayed by an exemplar to an entire category varied as a function of the exemplar’s coherence. Patalano and Ross (2007) found that the generalization strength of a novel property from some category members to others varied as a function of the category’s overall coherence (and found the reverse pattern when the generalization was made to a noncategory member). Finally, it is illuminating to compare the relative importance of the effects of causal knowledge on feature weights (i.e., the causal status, multiple-cause, and relational centrality effects) and feature interactions (the coherence effect) by comparing the proportion of the variance in categorization ratings attributable to the two types of effects. In this calculation, the total variance induced by causal knowledge was taken to be the additional variance explained by a regression model with separate predictors for each feature and each two-way and higher order interaction, as compared to a model with only one predictor representing the total number of characteristic features in a test item. The variance attributable to changes in feature weights is the additional variance explained by the separate predictors for each feature in the full model, whereas that attributable to the coherence effect is the additional variance explained by the interaction terms.
In fact, coherence accounts for more variance in categorization judgments than feature weights in every study in which coherence has been assessed: 60% in Rehder and Hastie (2001, Experiment 2), 80% in Rehder (2003a), 82% in Rehder (2003b, Experiment 1), 70% in Rehder and Kim (2006), 64% in Marsh and Ahn (2006), and over 90% in Rehder and Kim (2009b). These analyses indicate that the most important factor that categorizers consider when using causal laws to classify is whether an object displays a configuration of features that make sense in light of those laws.
7. Classification as Explicit Causal Reasoning

The final phenomenon I discuss concerns evidence that categorization can sometimes be an act of causal reasoning. On this account, classifiers treat the features of an object as evidence for the presence of unobserved features, and these inferred features then contribute to a category membership decision.
Bob Rehder
Causal reasoning such as this may have contributed to instances of the causal status effect described in Section 4. For example, recall that Rehder and Kim (2009b, Experiment 3) found an enhanced causal status effect when subjects were instructed on categories with an explicit "essential" feature (Figure 4A). Although we interpreted those findings in terms of how the essential feature changed the likelihoods of the observed features (and provided evidence for this claim; see Figure 5C), subjects may also have reasoned backward from the observed features to the essential one, and, of course, features closer (in a causal sense) to the essence (e.g., X) were taken to be more diagnostic of the essence than far ones (e.g., Z). Reasoning of this sort may occur even when participants are not explicitly instructed on an essential feature. For example, one of the categories used in Ahn et al. (2000a) was a disease that was described as having three symptoms X, Y, and Z. Although participants were told that X → Y → Z, people understand that a disease (D) causes its symptoms, and so participants were likely to have assumed the more complex causal model D → X → Y → Z (and then reasoned backward from the symptoms to the disease). Given the prevalence of essentialist intuitions (Gelman, 2003), similar reasoning may have occurred for the natural kinds and artifacts tested in many studies. I now review recent studies that provide more direct evidence of causal reasoning during classification.
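The intuition that features causally nearer the essence are more diagnostic of it can be checked with a small probability calculation. The sketch below uses made-up link parameters (not values from any study discussed here): it enumerates the joint distribution of the chain D → X → Y → Z and compares P(D | X) with P(D | Z).

```python
from itertools import product

# Chain D -> X -> Y -> Z; each link: P(child=1 | parent=1) = 0.8,
# P(child=1 | parent=0) = 0.2. Prior P(D=1) = 0.5. All values illustrative.
def p_link(child, parent):
    p1 = 0.8 if parent else 0.2
    return p1 if child else 1 - p1

def joint(d, x, y, z):
    return 0.5 * p_link(x, d) * p_link(y, x) * p_link(z, y)

def p_d_given(**obs):
    # Posterior P(D=1 | observations) by brute-force enumeration of the joint.
    num = den = 0.0
    for d, x, y, z in product([0, 1], repeat=4):
        vals = dict(d=d, x=x, y=y, z=z)
        if all(vals[k] == v for k, v in obs.items()):
            p = joint(d, x, y, z)
            den += p
            if d:
                num += p
    return num / den
```

With these parameters, observing X yields P(D = 1 | X = 1) = 0.8, while observing the more distant Z yields only 0.608, mirroring the claim that symptoms causally nearer the disease are better evidence for it.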
7.1. Classification as Diagnostic (Backward) Reasoning

Rehder and Kim (2009a, Experiment 1) investigated the causal inferences people make in the service of classification by teaching subjects the causal structures in Figure 15A. Unlike the studies reviewed above, subjects were taught two novel categories (e.g., myastars and terrastars). Category A had three features: one underlying feature (UA) and two observable features (A1 and A2). The first observable feature (A1) was described as being caused by UA but the second (A2) was not. Likewise, category B had one underlying feature (UB) that caused the second observable feature (B2) but not the first (B1). Like the pseudoessential feature in Rehder and Kim (2009b, Experiment 3) in Figure 4A, UA and UB were defining because they were described as occurring in all members of their respective category and no nonmembers. Observable features were associated with their category by stating that they occurred in 75% of category members. After learning about the two categories, participants were presented with test items consisting of two features, one from each category, and asked which category the item belonged to. For example, a test item might have features A1 and B1, which we predicted would be classified as an A, because from A1 one can reason to UA via the causal link that connects them, but one cannot so reason from B1 to UB. For a similar reason, an item with features A2 and B2 should be classified as a B. Consistent with this prediction,
[Figure 15 omitted: Causal category structures from Rehder and Kim (2009a): (A) Experiment 1, (B) Experiment 3, (C) Experiment 4, (D) Experiment 5. Each panel depicts categories A and B, with underlying features UA and UB linked to observable features A1, A2, B1, and B2; in panels B–D the links are annotated with the causal strengths and base rates described in the text.]
subjects chose the category whose underlying feature was implicated by the observable ones 84% of the time. Moreover, when subjects were presented with items in which the presence of two features was negated (e.g., A1 and B1 both absent), they chose the category whose underlying feature could be inferred as absent (e.g., category A) only 32% of the time. That is, subjects appeared to reason from observable features to underlying ones and then to category membership.
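A minimal formal sketch of this diagnostic reasoning treats each observable feature as a noisy-OR combination of its causal link from the underlying feature U and a background cause, then applies Bayes' rule. The 0.5 prior on U and the 0.3 background strength below are my assumptions for illustration, not values from the experiments.

```python
def posterior_underlying(feature_present, link_strength, background=0.3, prior=0.5):
    # P(U = 1 | observable feature), where the feature is produced by a
    # generative link from U of the given strength and an independent
    # background cause, combined noisy-OR. link_strength=0 models a feature
    # with no causal link to U at all.
    p_f_given_u1 = link_strength + background - link_strength * background
    p_f_given_u0 = background
    if not feature_present:
        p_f_given_u1, p_f_given_u0 = 1 - p_f_given_u1, 1 - p_f_given_u0
    num = p_f_given_u1 * prior
    return num / (num + p_f_given_u0 * (1 - prior))

# Experiment 1 logic: A1 is causally linked to UA; B1 has no link to UB.
p_ua = posterior_underlying(True, link_strength=0.75)  # observing A1 raises P(UA)
p_ub = posterior_underlying(True, link_strength=0.0)   # B1 leaves P(UB) at its prior

# Experiment 3 logic: a 90% link supports a stronger inference than a 60% link.
p_strong = posterior_underlying(True, link_strength=0.9)
p_weak = posterior_underlying(True, link_strength=0.6)
```

Under these assumed parameters, observing A1 raises P(UA) to about 0.73 while B1 leaves P(UB) at its 0.5 prior, so an A1B1 item favors category A; the same computation with 90% versus 60% links reproduces the ordering exploited in Experiment 3 below.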
There are alternative interpretations of these results, however. In Figure 15A, feature A1 might have been viewed as more important because it was involved in one causal relation versus B1, which was involved in zero (i.e., an isolated features effect; see Section 5.2). To address this concern, Rehder and Kim (2009a) conducted a number of follow-up experiments. Our Experiment 3 tested the categories in Figure 15B, in which the strengths of the causal relations between UA and A1, and UA and A2, were described as 90% and 60%, respectively, whereas those between UB and B1, and UB and B2, were described as 60% and 90%. We predicted that test item A1B1 would be classified as an A, because the inference from A1 to UA is more certain than the inference from B1 to UB. Consistent with this prediction, subjects classified test item A1B1 as an A 88% of the time. Experiment 4 tested the categories in Figure 15C. Whereas UA was described as occurring in all category A members, just as in the previous Experiments 1–3, UB was described as occurring in only 75% of category B members. We predicted that whereas the observable features of both categories provide equal evidence for UA and UB, respectively, those of category A should be more diagnostic because UA itself is. Consistent with this prediction, test item A1B1 was classified as an A 68% of the time. Finally, Experiment 5 tested the category structures in Figure 15D. Unlike the previous experiments, participants were given explicit information about the possibility of alternative causes of the observable features; specifically, they were told that features A1 and B2 had alternative causes (that operated with probability 50%) whereas A2 and B1 had none. We predicted that test item A1B1 should be classified as a B, because B1 provides decisive evidence of UB (because it has no other causes). As predicted, test item A1B1 was classified as a B 73% of the time.
Importantly, these results obtained despite the fact that the more diagnostic feature was involved in either the same number (Experiments 3 and 4) or fewer (Experiment 5) causal relations. Recent evidence suggests that people also reason diagnostically to underlying properties for natural categories. In a replication of Rips’s (2001) well-known transformation experiments, Hampton, Estes, and Simmons (2007) found that whether a transformed animal (e.g., a bird that comes to look like an insect due to exposure to hazardous chemicals) was judged to have changed category membership often depended on what participants inferred about underlying causal processes and structures. As in Rips’s study, a (small) majority of subjects in Hampton et al. judged the transformed animal to still be a bird whereas a (large) minority judged that it was now an insect. But although the judgments of the latter group (dubbed the phenomenalists by Hampton et al.) would seem to be based on the animals’ appearance, the justifications they provided for their choices indicated instead that many used the animals’ new properties to infer deeper changes. For example, subjects assumed that a giraffe that lost its long neck also exhibited new behaviors that were driven by internal changes (e.g., to
its nervous system) which in turn signaled a change in category membership (to a camel). Conversely, those subjects who judged that the transformed animal’s category was unchanged (the essentialists) often appealed to the fact that it produced offspring from its original category, from which they inferred the absence of important internal changes (e.g., to the animal’s DNA). In other words, rather than the (so-called) phenomenalists using only observable features, and rather than the essentialists just relying on the presence of previously inferred underlying properties, both groups used observable features to infer the state of internal causal structures and processes, and decided category membership on that basis. Finally, also recall Murphy and Medin’s (1985) well-known example of classifying a party-goer who jumps into a pool as drunk—one reasons from aberrant behavior to its underlying cause even if one has never before observed a swimming drunk.
7.2. Classification as Prospective (Forward) Reasoning

The notion of explicit causal reasoning in the service of classification allows for not only backward, or diagnostic, reasoning to underlying features but also forward, or prospective, reasoning. For example, a physician may suspect the presence of HIV given the presence of the forms of sarcoma, lymphoma, and pneumonia that HIV is known to produce (diagnostic reasoning). But the case for HIV is made stronger still by the presence of one or more of its known causes, such as blood transfusions, sharing of intravenous needles, or unsafe sex (prospective reasoning). I now review evidence of prospective reasoning in classification.

7.2.1. Rehder (2007)

Subjects were taught the common-cause and common-effect networks in Figure 8, but now the common cause and common effect were pseudoessential underlying features; that is, they occurred in all category members and no nonmembers. The classification test included only items with three observable features (the effect features in the common-cause network or the cause features in the common-effect network). Rehder (2007) showed that an object's degree of category membership increased nonlinearly with its number of observable features when those features were effects, as compared to the linear increase that obtained when those features were causes, results consistent with a normative account of causal reasoning (also see Oppenheimer & Tenenbaum, 2009).

7.2.2. Follow-up to Rehder and Kim (2009a)

In a follow-up experiment to Rehder and Kim (2009a), our lab taught subjects the two category structures in Figure 16A. Participants were again presented with test items consisting of one feature from each category
[Figure 16 omitted: Causal category structures: (A) follow-up to Rehder and Kim (2009a); (B) from Chaigneau et al. (2004), in which an artifact's historically intended role causes its physical structure, an agent's goal causes the agent's action, and structure and action jointly cause the functional outcome.]
(e.g., A1B1). Item A1B1 was classified as an A, suggesting that the evidence that A1 provided for category A via forward causal reasoning was stronger than the evidence that B1 provided for category B. This result is consistent with a well-known property of causal reasoning, namely, that people reason more confidently from causes to effects than vice versa (Tversky & Kahneman, 1980).

7.2.3. Chaigneau, Barsalou, and Sloman (2004)

Chaigneau et al. provided particularly compelling evidence of prospective causal reasoning in categorization. Figure 16B presents the causal structures they hypothesized constitute the mental representation of artifact categories. The function historically intended by the artifact's designer results in its physical structure. In addition, the goal of a particular agent leads to the agent acting toward the artifact in a way to achieve those goals. Together, the artifact's physical structure and the agent's action yield a particular outcome. The authors presented subjects with a number of vignettes that each specified the state of the four causes, in which three of the causes were present and the fourth was absent. For example, a vignette might include (a) an object that was created with the intention of being a mop but (b) was made out of plastic bags, and (c) an agent that wanted to clean up a spill and that (d) used the object in the appropriate way for mopping (i.e., all causes normal for a mop were present except physical
structure). Subjects were asked how appropriate it was to call the object a mop. Classification ratings were much lower for vignettes in which the appropriate physical structure was missing than for those in which the appropriate historical intention was missing. This result is consistent with subjects reasoning from an object's physical structure to its potential function and then to category membership: so long as the structure of the artifact is appropriate, the intention of its designer becomes irrelevant.[17] However, when physical structure is unspecified, one can use the designer's intentions to infer physical structure (and from structure infer potential function). This is what Chaigneau et al. found: the effect of a missing intention on classification was much larger when the physical structure was unspecified.

[17] Nevertheless, Chaigneau et al. found that classification ratings for vignettes with inappropriate historical intentions were lower relative to a baseline condition in which all four causes were present. The authors argue that this is a case of causal updating in which information about intentions influenced how subjects represented the artifact's physical structure (even when information about physical structure was provided as part of the vignette). For example, if the designer intended to create a mop, the subject might be more sure that the object had a physical structure appropriate to mopping.

7.3. Theoretical Implications: Discussion

What implications do these findings have for the generative and dependency models? First, there are two ways that the generative model can account for the observed results. The first approach corresponds to the explicit causal reasoning account we have just described. As a type of causal graphical model, a category's network of interfeature causal links supports the elementary causal inferences required to account for the results in Experiments 1–5. Indeed, Rehder and Burnett (2005) confirmed that people are more likely to infer the presence of a cause feature when its effect was present (and vice versa). Although Rehder and Burnett also observed some discrepancies from normative reasoning, current evidence indicates that people can readily engage in the causal reasoning from observed to unobserved features suggested by these experiments (also see Ahn et al., 2000a, Experiment 5; Sloman & Lagnado, 2005; Waldmann & Hagmayer, 2005). In addition, recall that the generative model also predicts that observed features caused by underlying properties are likely to be perceived as more prevalent among category members. Not only does this multiple-cause effect help explain the enhanced causal status effect found with Rehder and Kim's (2009b, Experiment 3) essentialized categories, Rehder and Kim (2009a) have shown how it explains the results from all five of their experiments described above. However, it does not explain the cases of prospective causal reasoning. For example, in Figure 16A, feature B1 should have greater category validity in category B than A1 has in category A, but A1 was the more diagnostic feature. Thus, demonstrations of prospective reasoning are important insofar as they establish the presence of classification
effects not mediated by changes in feature likelihoods brought about by the multiple-cause effect, implying a more explicit form of causal reasoning. Second, the dependency model in turn can explain some aspects of the prospective reasoning results, as features should be weighed more heavily when they have an extra effect. Thus, it explains why feature A1 is more diagnostic than B1 in Figure 16A. However, the dependency model fails to explain the results from Chaigneau et al. (2004) in which the importance of a distal cause (the intentions of an artifact’s designer) itself interacts with whether information about the artifact’s physical structure is available. Of course, the dependency model is also unable to account for the cases of diagnostic reasoning in which features become more important to the extent they have additional causes rather than effects.
8. Developmental Studies

Given the robust causal-based classification effects in adults just reviewed, it is unsurprising that researchers have asked when these effects develop in children. I review evidence of how causal knowledge changes the importance of features and feature interactions, as well as evidence of explicit causal reasoning in children.
8.1. Feature Weights and Interactions

Initial studies of how causal knowledge affects children's classification were designed to test whether children exhibit a causal status effect. For example, Ahn, Gelman, Amsterdam, Hohenstein, and Kalish (2000b) taught 7- to 9-year-olds a novel category with three features in which one feature was the cause of the other two. They told children, for example, that fictitious animals called taliboos had promicin in their nerves, thick bones, and large eyes, and that the thick bones and large eyes were caused by the promicin. Ahn et al. found that an animal missing only the cause feature (promicin) was chosen as a more likely category member than one missing only one of the effect features (thick bones or large eyes). Using a related design, Meunier and Cordier (2009) found a similar effect with 5-year-olds (albeit only when the cause was an internal rather than a surface feature). However, although the authors of these studies interpreted their findings as indicating a causal status effect, I have shown in Section 2.2 how these results can be interpreted as reflecting a coherence effect instead (see Example 1 in Table 1). An item missing only the cause feature (promicin) may have been rated a poor category member because it violated two expected correlations (one with thick bones, the other with large eyes), whereas an item missing only one effect feature violated only one expected
correlation (the one with promicin). Thus, the Ahn et al. and Meunier and Cordier results are ambiguous regarding whether children exhibit a causal status effect or a coherence effect (or both). To test for the independent presence of these effects, Hayes and Rehder (2009) taught 5- to 6-year-olds a novel category with four features, two of which were causally related. For example, children were told about a novel animal named rogos that have big lungs, can stay underwater for a long time, have long ears, and sleep during the day. They were also told that having big lungs was the cause of staying underwater for a long time (the other two features were isolated, i.e., involved in no causal links). After category learning, subjects were presented with a series of trials, each presenting two animals, and asked which was more likely to be a rogo. The seven test pairs are presented in Table 6. For each alternative, dimension 1 is the cause, dimension 2 is the effect, and dimensions 3 and 4 are the neutral features; "1" means a feature is present, "0" means it is absent, and "x" means there was no information about the feature (e.g., item 10xx is an animal with big lungs that cannot stay underwater very long, with no information about the two neutral features). A group of adults was also tested on this task. Table 6 presents the proportion of times alternative X was chosen in each test pair for the two groups of subjects. To analyze these data, we performed logistic regression according to the equation

$$\mathrm{choice}_k(X, Y) = \frac{1}{1 + \exp[-\mathrm{diff}_k(X, Y)]} \qquad (8)$$
Table 6  Test Pairs Presented by Hayes and Rehder (2009).

                                  Empirical responses (preference for X)
Test pair   Choice X   Choice Y   Adults     5- to 6-year-olds
TA          11xx       00xx       0.99**     0.79**
TB          xx11       xx00       1.0**      0.73**
TC          10xx       01xx       0.52       0.51
TD          10xx       xx10       0.30**     0.48
TE          01xx       xx01       0.32**     0.43
TF          11xx       xx11       0.70**     0.67**
TG          00xx       xx00       0.62*      0.55

Note: For each test pair Ti, subjects choose whether item X or Y is a better category member. Choice probabilities in the final two columns are tested against 0.50 ( p < 0.10; *p < 0.05; **p < 0.01). 1, feature present; 0, feature absent; x, feature state unknown. Dimension 1, cause feature; dimension 2, effect feature; dimensions 3 and 4, neutral features.

where diff_k is defined as the difference in the evidence that alternatives X and Y provide for category k:
102
Bob Rehder
$$
\begin{aligned}
\mathrm{diff}_k(X, Y) &= \mathrm{evidence}_k(X) - \mathrm{evidence}_k(Y) \\
&= (w_c f_{X,1} + w_e f_{X,2} + w_n f_{X,3} + w_n f_{X,4} + w_h h_X) \\
&\quad - (w_c f_{Y,1} + w_e f_{Y,2} + w_n f_{Y,3} + w_n f_{Y,4} + w_h h_Y) \\
&= w_c (f_{X,1} - f_{Y,1}) + w_e (f_{X,2} - f_{Y,2}) + w_n (f_{X,3} - f_{Y,3}) \\
&\quad + w_n (f_{X,4} - f_{Y,4}) + w_h (h_X - h_Y) \\
&= w_c m_{XY,1} + w_e m_{XY,2} + w_n m_{XY,3} + w_n m_{XY,4} + w_h m_{XY,h} \qquad (9)
\end{aligned}
$$
In Eq. (9), f_{i,j} is an indicator variable reflecting whether the feature on dimension j in alternative i is present (+1), absent (−1), or unknown (0), and h_i indicates whether i is coherent (+1), incoherent (−1), or neither (0). Thus, the m_{XY,j} = f_{X,j} − f_{Y,j} are match variables indicating whether alternatives X and Y match on dimension j. In addition, wc, we, and wn are the evidentiary weights provided by the cause feature, the effect feature, and the neutral features, respectively. That is, a single item's degree of category membership is increased by wi if a feature on dimension i is present and decreased by wi if it is absent. Finally, wh is the weight associated with whether the object exhibits coherence: an object's degree of category membership is increased by wh if the cause and effect features are both present or both absent, and decreased by wh if one is present and the other absent. Note that Eq. (8) predicts a choice probability in favor of X close to 1 when diff_k(X, Y) ≫ 0, close to 0 when diff_k(X, Y) ≪ 0, and close to 0.5 when diff_k(X, Y) ≈ 0. The values of wc, we, wn, and wh yielded by the logistic regression analysis, averaged over subjects, are presented in Table 7. First, note that a causal status effect (the difference between the importance of the cause and effect features) is reflected in the difference between parameters wc and we. In fact, this difference was not significantly different from zero for either children or adults (Table 7), indicating that neither group exhibited a causal status effect. The absence of a causal status effect is also reflected in test pair TC, in which alternative 10xx (which has the cause but not the effect) was not considered a more likely category member than alternative 01xx (which has the effect but not the cause). Second, an isolated features effect (measured by the difference between the average weight of the cause and effect features, wc and we, and that of the neutral features, wn) was significantly greater than zero for both groups.
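The choice model in Eqs. (8) and (9) is easy to implement directly. The sketch below codes each test item as +1/−1/0 on the four dimensions, forms the evidence difference from the match variables, and passes it through the logistic function. Plugging in the adult parameter estimates from Table 7 gives predicted preferences close to the empirical adult proportions in Table 6 (roughly 0.90 for pair TA, 0.48 for TC, and 0.70 for TF).

```python
import math

def coherence(item):
    # h_i: +1 if cause (dim 1) and effect (dim 2) are both present or both
    # absent, -1 if one is present and the other absent, 0 if either is unknown.
    c, e = item[0], item[1]
    if c == 0 or e == 0:
        return 0
    return 1 if c == e else -1

def choice_prob(x, y, wc, we, wn, wh):
    # Eq. (9): evidence difference from match variables; Eq. (8): logistic.
    diff = (wc * (x[0] - y[0]) + we * (x[1] - y[1])
            + wn * (x[2] - y[2]) + wn * (x[3] - y[3])
            + wh * (coherence(x) - coherence(y)))
    return 1 / (1 + math.exp(-diff))

def parse(s):
    # '1' present -> +1, '0' absent -> -1, 'x' unknown -> 0.
    code = {'1': 1, '0': -1, 'x': 0}
    return [code[ch] for ch in s]

# Adult parameter estimates from Table 7.
wc, we, wn, wh = 0.54, 0.58, 0.47, 0.65
p_TA = choice_prob(parse('11xx'), parse('00xx'), wc, we, wn, wh)  # ~0.90
p_TC = choice_prob(parse('10xx'), parse('01xx'), wc, we, wn, wh)  # ~0.48
p_TF = choice_prob(parse('11xx'), parse('xx11'), wc, we, wn, wh)  # ~0.70
```

Note how pair TF isolates coherence: the two alternatives match in their total number of present and unknown features, so the preference for 11xx over xx11 is carried largely by the wh term.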
Finally, both groups exhibited a coherence effect, as indicated by values of wh that were significantly greater than zero. In addition, the effect of coherence was larger in adults than in children (wh = 0.65 vs. 0.22). These results have several implications. First, this study is the first to document a coherence effect in 5- to 6-year-old children (albeit one smaller in magnitude than in adults). Second, the isolated feature effect similarly replicates adult findings reviewed earlier (Section 5) and extends those findings to children. Third, this study replicates the numerous adult studies
Table 7  Average Parameter Estimates for Adults and Children in Hayes and Rehder (2009).

                                       Adults         5- to 6-year-olds
Parameter
  wc                                   0.54 (0.07)    0.32 (0.05)
  we                                   0.58 (0.06)    0.28 (0.06)
  wn                                   0.47 (0.04)    0.19 (0.04)
  wh                                   0.65 (0.09)    0.22 (0.08)
Effect
  Causal status [wc − we]              −0.03 (0.08)   0.04 (0.07)
  Isolated [average(wc, we) − wn]      0.09** (0.07)  0.11* (0.05)
  Coherence [wh]                       0.65** (0.09)  0.22** (0.08)

Note: wc, weight given to cause feature; we, weight given to effect feature; wn, weight given to neutral features; wh, weight given to agreement between cause and effect features (coherence). Standard errors are presented in parentheses. Causal status, isolated feature, and coherence effects are tested against 0 ( p < 0.10; *p < 0.05; **p < 0.01).
reported in Section 4 in which a causal status effect failed to obtain, and shows that this effect is also not inevitable in children. Of course, the finding of a coherence effect but not a causal status effect in children supports the possibility that previous reports of a causal status effect in children (e.g., Ahn et al., 2000b; Meunier & Cordier, 2009) reflected an effect of coherence instead.[18]

This preliminary study leaves several questions unanswered. One concerns whether a causal status effect fails to obtain in children for the same reasons it does in adults. For example, adult studies described above showed no causal status effect with deterministic links. Thus, because Hayes and Rehder did not specify the strength of the causal relation between the cause and effect feature, the absence of a causal status effect may have been due to adults and children interpreting that link as deterministic (e.g., big lungs always allow rogos to stay underwater a long time). Accordingly, new studies are being planned that use the same materials and procedure but test probabilistic causal links.

This study also makes an important methodological point, namely, how the separate effects of causal status, isolated features, and coherence can be evaluated in a forced-choice paradigm using logistic regression. Just as linear regression does for rating scale data, logistic regression provides the means to separate the multiple effects of causal knowledge on categorization, including the importance of both features and feature combinations.

[18] It should be acknowledged that whereas Hayes and Rehder taught their subjects a single link between two features, Ahn et al. and Meunier and Cordier taught theirs a common-cause structure in which one feature caused two others, and perhaps a cause with two effects is sufficient to induce a causal status effect in children. Arguing against this possibility, however, are findings above indicating that, at least for adults, a feature's importance does not generally increase with its number of dependents (see Section 5.1).
8.2. Explicit Causal Reasoning in Children's Categorization

There is considerable evidence that the explicit causal reasoning observed in adult categorization is also exhibited by children. For example, in a series of studies, Gopnik, Sobel, and colleagues have shown that children can reason causally from observed evidence to an unobserved feature that determines category membership (Gopnik, Glymour, Sobel, Schulz, & Kushnir, 2004; Gopnik & Sobel, 2000; Sobel & Kirkham, 2006; Sobel, Tenenbaum, & Gopnik, 2004; Sobel, Yoachim, Gopnik, Meltzoff, & Blumenthal, 2007). In these studies, children are shown a device called a blicket detector and told that it activates (i.e., plays music) whenever blickets are placed on it. They then observe a series of trials in which (usually two) objects are placed on the blicket detector either individually or together, after which the machine either does or does not activate. For example, in a backward blocking paradigm tested in Sobel et al. (2004, Experiment 1), two blocks (A and B) were twice placed on the machine, causing it to activate, followed by a third trial in which A alone caused activation. On a subsequent classification test, 3- and 4-year-olds affirmed that B was a blicket with probability 0.50 and 0.13, respectively, despite the fact that the machine activated on every trial in which B was present; these probabilities were 1.0 in an indirect screening-off condition that was identical except that the machine did not activate on the final A trial. Apparently, children were able to integrate information from the three trials to infer whether B had the defining property of blickets (the propensity to activate the blicket detector). In particular, in the backward blocking condition they engaged in a form of discounting in which the trial in which A alone activated the machine was sufficient to discount evidence that B was a blicket.
Sobel and Kirkham (2006) reached similar conclusions for 24-month-olds (and 8-month-olds using anticipatory eye movements as a dependent measure). The full pattern of results from these studies has no interpretation under alternative associative learning theories. Moreover, Kushnir and Gopnik (2005) have shown that children can infer category membership on the basis of another type of causal reasoning, namely, on the basis of interventions (in which the subject rather than the experimenter places blocks on the machine). Notably, Booth (2007) has shown how these sorts of causal inferences in the service of classification result in children learning more about a category’s noncausal features.
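The backward blocking result has a transparent Bayesian rendering. The sketch below assumes a deterministic detector and an illustrative 1/3 prior that any block is a blicket (the prior is my assumption, not a value from the studies); it scores the four hypotheses about blocks A and B against the observed trials.

```python
from itertools import product

def posterior_blicket(trials, prior=1/3):
    # Hypotheses: (a, b) flags for "A is a blicket" and "B is a blicket".
    # Deterministic detector: activates iff at least one blicket is on it,
    # so inconsistent hypotheses get zero posterior mass.
    post = {}
    for a, b in product([0, 1], repeat=2):
        p = (prior if a else 1 - prior) * (prior if b else 1 - prior)
        for blocks, activated in trials:
            pred = (a and 'A' in blocks) or (b and 'B' in blocks)
            if bool(pred) != activated:
                p = 0.0
        post[(a, b)] = p
    z = sum(post.values())
    return {h: p / z for h, p in post.items()}

backward_blocking = [('AB', True), ('AB', True), ('A', True)]
screening_off     = [('AB', True), ('AB', True), ('A', False)]

p_bb = posterior_blicket(backward_blocking)
p_b_bb = p_bb[(0, 1)] + p_bb[(1, 1)]   # P(B is a blicket) after backward blocking
p_so = posterior_blicket(screening_off)
p_b_so = p_so[(0, 1)] + p_so[(1, 1)]   # P(B is a blicket) after screening off
```

After the backward blocking sequence, the posterior that B is a blicket falls back to its 1/3 prior (A alone explains every activation), whereas in the indirect screening-off sequence, where A alone fails to activate the machine, it rises to 1.0, matching the qualitative pattern in children's judgments.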
9. Summary and Future Directions

This chapter has demonstrated three basic types of effects of causal knowledge on classification. First, causal knowledge changes the importance of individual features. A feature's importance to category membership can increase to the extent that it is "more causal" (a causal status effect), has many causes (a multiple-cause effect), and is involved in at least one causal relation (an isolated feature effect). Second, causal knowledge affects which combinations of features make for good category members, namely, those that manifest the interfeature correlations expected to be generated by causal laws (a coherence effect). These expected combinations include both pairwise feature correlations and higher order interactions among features. Finally, causal knowledge supports inferences from the features of an object one can observe to those one cannot, which in turn influence a category membership decision. Evidence was reviewed indicating how these inferences can occur in both the backward (diagnostic) direction and the forward (prospective) direction. Several of these effects have been demonstrated in young children. This chapter has also discussed the implications these results have for current models of causal-based classification. Briefly put, the generative model accounts for vastly more of the results obtained with experimental categories than the alternative dependency model. As shown, the generative model correctly predicts how the magnitude of the causal status effect varies with (a) causal strength, (b) the strengths of background causes, and (c) the presence of unobserved "essential" features. It also accounts for the multiple-cause effect.
It accounts for the coherence effect, including the observed higher order interactions, and how the magnitudes of the two-way interactions vary with experimental conditions (e.g., causal strength) and type (directly related feature pairs versus indirectly related ones). Finally, by assuming that causal category knowledge is represented as a graphical model, it supports the diagnostic and prospective causal reasoning from observed features to unobserved ones. The dependency model, in contrast, is unable to account for any of these phenomena. Nevertheless, note that there was one failed prediction of the generative model, namely, the presence of the isolated feature effect. Of course, the dependency model is also unable to account for this effect. In the final subsections below, I briefly present other issues and directions for future research.
9.1. Alternative Causal Structures and Uncertain Causal Models

The experimental studies reviewed here have involved only one particular sort of causal link, namely, a generative cause between two binary features. However, one's database of causal knowledge includes many other sorts of
106
Bob Rehder
relations. Some causal relations are inhibitory, in that the presence of one variable decreases the probability of another. Conjunctive causes obtain when multiple variables are each required to produce an effect (as fuel, oxygen, and a spark are all needed for fire). Binary features involved in causal relations can be additive (present or absent) or substitutive (e.g., male vs. female) (Tversky & Gati, 1982); in addition, variables can be ordinal or continuous (Waldmann et al., 1995). There are straightforward extensions of the generative model to address these possibilities, but as yet few empirical studies have assessed their effect on classification. Another important possibility is cases in which variables are related in causal cycles; indeed, many cycles were revealed by the theory-drawing tasks used in Sloman et al.'s (1998) and Kim and Ahn's (2002a,b) studies of natural categories. Kim et al. (2009) tested the effect of causal cycles in novel categories and proposed a variant of the dependency model that accounts for their results by assuming that causal cycles are unraveled over time (e.g., the causal model {X ↔ Y} is replaced by {Xt → Yt+1, Yt → Xt+1}, where t represents time). They also note how a similar technique can allow the generative model to be applied to causal cycles. Still, Kim et al.'s use of the missing feature method prohibited an assessment of coherence effects, or of the weight of individual features in a manner that is independent of feature interactions. Thus, there is still much to learn about how causal cycles affect classification. Finally, an important property of causal models is the representation of uncertainty.
You may believe that 100% of roobans have sticky feet and that 75% of myastars have high density, but your confidence in these beliefs may be either low or high depending on their source (e.g., they may be based on a small and biased sample or a large number of observations; they may come from a reliable or unreliable individual, etc.). Your confidence in interfeature causal relations (e.g., the belief that sticky feet are caused by roobans eating fruit) can similarly vary on a continuum. There are known ways to represent causal model parameters as subjective probability density functions and to learn and reason with such models (Griffiths & Tenenbaum, 2005; Lu et al., 2008); these methods have obvious extensions to classification, but again there have been few empirical studies that have examined these issues. One exception is Ahn et al. (2000), who taught subjects causal relations that were implausible (because they contrasted with prior knowledge) and found, not surprisingly, no causal status effect.
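The idea of representing parameter uncertainty as a subjective probability density can be made concrete with a small sketch. This is my own illustration (not a model from the chapter), using the standard Beta-Bernoulli scheme in the spirit of Griffiths and Tenenbaum (2005): the same observed proportion yields a much tighter density when it comes from a large sample than from a small one.

```python
# Hedged illustration: a belief about a causal strength or feature base rate is
# held as a Beta density rather than a point value. Starting from a flat
# Beta(1, 1) prior, observing the outcome k times in n trials yields a
# Beta(1 + k, 1 + n - k) posterior; its variance indexes (un)certainty.

def strength_belief(k, n, prior=(1.0, 1.0)):
    """Posterior mean and variance of a probability after observing k/n."""
    a = prior[0] + k
    b = prior[1] + (n - k)
    mean = a / (a + b)
    var = (a * b) / ((a + b) ** 2 * (a + b + 1))
    return mean, var

# Same observed proportion (75%), very different confidence:
small = strength_belief(3, 4)      # small, possibly biased sample
large = strength_belief(300, 400)  # a large number of observations
print(small, large)                # the second has far lower variance
```

The point of the sketch is only that a classifier holding such densities can weight a causal belief by its reliability, not just its strength.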
9.2. Categories' Hidden Causal Structure

As mentioned, the purpose of testing novel rather than real-world categories is that it affords greater experimental control over the theoretical/causal knowledge that people bring to the classification task. Nevertheless, even for a novel category people may assume additional causal structure on
Causal-Based Categorization: A Review
107
the basis of its ontological kind (e.g., whether it is an artifact or a biological kind). Research reviewed above has shown how underlying "essential" features can affect classification even when they are not observed in test items (Hampton et al., 2007; Rehder & Kim, 2009a). And the results of Chaigneau et al. (2004) suggest that classifiers can engage in prospective causal reasoning to infer an unobserved feature (an artifact's potential function) in order to decide category membership. Ahn and Kim (2001) have referred to systematic differences between domains as "content effects," and I would expand this notion to include the sort of default causal models that people assume in each domain. Thus, continuing to elucidate the sorts of default hidden causal structures that are associated with the various ontological kinds (and which of those causal structures are treated as decisive for category membership), and investigating how those structures influence real-world categorization, remain key aims of future research.
9.3. Causal Reasoning and the Boundaries of Causal Models

That classifiers can infer hidden causal structure raises questions about the sorts of variables that can contribute to those inferences. I have reviewed studies showing that people can causally infer unobserved features from observed ones, but nothing prevents inferences involving variables not normally considered "features" of the category (Oppenheimer et al., 2009). For example, you may be unable to identify the insects in your basement until you see your house's damaged wooden beams and realize you have termites. But is the chewed wood a "feature" of termites? Or, to take a more fanciful example, we all know that bird wings cause flying, which causes birds to build nests in trees, but perhaps the nests also cause broken tree branches, which cause broken windshields, which cause higher car insurance rates, and so on. Although a neighborhood's high insurance rates might imply a large population of large birds, they are not a feature of birds. Causal relations involving bird features also extend backwards indefinitely (birds have wings because they have bird DNA, bird DNA produces wings because of a complicated evolutionary past, etc.). These examples raise two questions. The first concerns the boundaries of categories' causal models. If a causal model includes all variables directly or indirectly related to a category's features, then (because everything is indirectly connected to everything else) all variables are included, which means that all categories have the same causal model (the model of everything). Clearly, the causal model approach needs to specify the principles that determine which variables are part of a category's causal model. The second question concerns how the evidence that a variable provides for category membership differs depending on whether it is part of the causal model or not. My own view is that a full model of causal-based classification will
involve two steps. In the first step, classifiers use all relevant variables (those both inside and outside the causal model) to infer the presence of unobserved features via the causal relations that link them (Rehder, 2007; Rehder & Kim, 2009a). In the second step, the classifier evaluates the likelihood that the (observed and inferred) features that are part of the category's causal model were generated by that model.
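The second step can be given a concrete, if simplified, form. Under the noisy-OR parameterization commonly used for the generative model (cf. Rehder, 2003b; Cheng, 1997), the likelihood of a feature configuration under a three-feature chain X → Y → Z follows from a root-cause probability c, a causal strength m for each link, and a background strength b. The sketch below is my own minimal illustration with invented parameter values, not the chapter's implementation:

```python
# Minimal sketch (my own, assuming a standard noisy-OR parameterization) of a
# generative-model likelihood for the causal chain X -> Y -> Z:
#   c = P(root cause X present), m = strength of each causal link,
#   b = strength of background causes of Y and Z.

def chain_likelihood(x, y, z, c=0.75, m=0.8, b=0.1):
    """Joint probability of the feature configuration (x, y, z) in {0, 1}^3."""
    def p_effect(cause_present):
        # Noisy-OR combination of the parent cause and the background cause.
        return 1 - (1 - b) * (1 - m) ** cause_present

    px = c if x else 1 - c
    py = p_effect(x) if y else 1 - p_effect(x)
    pz = p_effect(y) if z else 1 - p_effect(y)
    return px * py * pz

# Coherent items outscore incoherent ones that break an expected correlation:
print(chain_likelihood(1, 1, 1))  # all features present
print(chain_likelihood(1, 0, 1))  # intermediate feature missing
```

Classifying an item then amounts to asking whether its (observed and inferred) features are a likely product of this model; note how the configuration that violates the X–Y correlation receives a far lower likelihood.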
9.4. Additional Tests with Natural Categories

As mentioned, although a large number of studies have tested novel categories, others have assessed causal-based effects in real-world categories. Using the missing feature method, these studies have generally reported a causal status effect (Ahn, 1998; Kim & Ahn, 2002a,b; Sloman et al., 1998). Moreover, in contrast to the studies of novel categories reviewed above, which favored the generative model, research with natural categories has provided evidence supporting the dependency model, as classification judgments were shown to exhibit substantial correlations with the dependency model's predictions derived from subjects' category theories (measured, e.g., via the theory-drawing task). There are a number of uncertainties regarding the interpretation of these studies, some of which I have already mentioned. The numerous confounds associated with natural categories mean that the apparently greater importance of more causal features may be due to other factors (e.g., they may also have higher category validity, i.e., have been observed more often in category members). In addition, because these studies used the missing feature method to assess feature weights, the causal status effect they report could instead reflect coherence effects if the more causal features were also those involved in a greater number of causal relations. Finally, only the dependency model was fit to the data, and so it is unknown whether the generative model would have achieved a better fit. One notable property of the causal relations measured in these studies is that they correspond to those for which the generative model also predicts a strong causal status effect. For example, Kim and Ahn (2002a, Experiment 2) had subjects rate the strength of causal links on a three-point scale (1, weak; 2, moderate; 3, strong) and found that the vast majority of links had an average strength between 1 and 1.5.
As I have shown, the generative model predicts a strong causal status effect when weak causal links produce each feature in a causal chain with decreasing probability. Sorely needed, therefore, are new studies of real-world categories that take into account what has been learned from studies testing novel materials. First, subjects must be presented with additional test items, namely, those that are missing more than just one feature. This technique will allow coherence effects to be assessed and will provide a more accurate measure
of feature weights (see Section 2.2).19 Second, both the dependency and generative models must be fit to the resulting data. Results showing that the causal status effect increases with causal strength will favor the dependency model; results showing that it decreases with causal strength, and the presence of multiple-cause and coherence effects, will favor the generative model. Finally, the confound between a category's causal and empirical/statistical information can be addressed by seeking objective measures of the latter, perhaps by using corpus-based techniques (e.g., Landauer & Dumais, 1997). Statistical methods like multiple regression can then be used to determine whether causal theories provide any additional predictive power above and beyond features' objective category validity.
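The regression logic can be illustrated with a small sketch (my own; the features, weights, and ratings are invented): classification ratings are regressed on feature indicators and their pairwise products, so that the main effects estimate feature weights and the interaction terms estimate two-way coherence effects.

```python
import itertools
import numpy as np

# Hypothetical sketch: recover feature weights and two-way interaction
# (coherence) terms from classification ratings via multiple regression,
# here over the full 2^3 factorial set of test items for three binary features.

items = np.array(list(itertools.product([0, 1], repeat=3)))  # 8 test items

def design(items):
    f1, f2, f3 = items.T
    return np.column_stack([np.ones(len(items)),         # intercept
                            f1, f2, f3,                  # feature weights
                            f1 * f2, f1 * f3, f2 * f3])  # coherence terms

true_coefs = np.array([10.0, 8.0, 4.0, 2.0, 5.0, 0.0, 3.0])
ratings = design(items) @ true_coefs                     # noiseless ratings

est, *_ = np.linalg.lstsq(design(items), ratings, rcond=None)
print(np.round(est, 6))  # recovers true_coefs exactly in this noiseless case
```

With real ratings the same fit yields noisy estimates, and a corpus-based measure of category validity could be entered as an additional predictor to test whether the causal terms add predictive power.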
9.5. Processing Issues

The studies reviewed above all used unspeeded judgments in which subjects were given as long as they wanted to make a category membership decision. It is reasonable to ask whether these judgments would be sensitive to causal knowledge if they were made in the same way that categorization decisions are made hundreds of times each day, namely, in a few seconds or less. One might speculate that the use of causal knowledge is an example of slower, deliberate, "analytical" reasoning that is unlikely to appear under speeded conditions (Sloman, 1996; Smith & Sloman, 1994). But studies have generally found instead that effects of theoretical knowledge on classification obtain even under speeded conditions (Lin & Murphy, 1997; Palmeri & Blalock, 2000). For example, Luhmann et al. (2006) found that classification judgments exhibited a causal status effect even when they were made in 500 ms. Apart from the causal status effect, however, no studies have examined how the other sorts of causal-based effects respond to manipulations of response deadline. Luhmann et al. proposed that in their study subjects "prestored" feature weights during the initial learning of the category's features and causal relations and were then able to quickly access those weights during the classification test. If this is correct, then one might expect to also see a multiple-cause effect and an isolated feature effect at short deadlines. In contrast, because the coherence effect and explicit causal reasoning involve processing the values on multiple stimulus dimensions, these effects may only emerge at longer response deadlines.
Footnote 19: One challenge to applying the regression method to natural categories concerns the number of features n involved. Subjects usually know dozens of features of natural categories, implying the need for 2^n test items (assuming n binary features) to run a complete regression that assesses main effects (i.e., feature weights), two-way interactions, and all higher order interactions. One compromise is to present only those test items missing either one or two features, allowing an assessment of feature weights and the two-way interactions. Furthermore, the missing-two-feature items could be restricted to those missing two features that are identified (e.g., on a theory-drawing task) to be causally related.
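The savings from this compromise are easy to quantify (a quick check of the footnote's arithmetic; the particular feature counts below are illustrative): the full design needs 2^n items, whereas presenting only items missing one or two features needs n + C(n, 2).

```python
from math import comb

# Illustrative check of the footnote's arithmetic: test items needed for a
# full factorial design (2^n) versus the compromise of presenting only items
# missing one feature (n of them) or two features (C(n, 2) of them).

def full_design(n):
    return 2 ** n

def compromise_design(n):
    return n + comb(n, 2)

for n in (10, 20, 30):  # "dozens of features" territory
    print(n, full_design(n), compromise_design(n))
# e.g., for n = 20: 1,048,576 items versus 210
```

Restricting the missing-two-feature items to causally related pairs shrinks the design further, since only the identified pairs contribute interaction items.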
9.6. Integrating Causal and Empirical/Statistical Information

Another outstanding question concerns how people's beliefs about a category's causal structure are integrated with the information gathered through first-hand observation of category members. On the one hand, numerous studies have demonstrated how general semantic knowledge that relates category features alters how categories are learned (e.g., Murphy & Allopenna, 1994; Rehder & Ross, 2001; Wattenmaker, Dewey, Murphy, & Medin, 1986). However, there are relatively few studies examining how a category's empirical/statistical information is integrated with specifically causal knowledge. One exception is Rehder and Hastie (2001), who presented subjects with both causal laws and examples of category members, thereby orthogonally manipulating categories' causal and empirical structure (providing either no data, or data with or without correlations that were consistent with the causal links). Although subjects' subsequent classification judgments reflected the features' objective category validity, they were generally insensitive to the correlations that inhered in the observed data. On the other hand, Waldmann et al. (1995) found that the presence of interfeature correlations affected subjects' interpretation of the causal links they were taught. More research is needed to determine how and to what extent the correlational structure of observed data is integrated into a category's causal model. Note that because the representation of causal relations assumed by the generative model emerged out of the causal learning literature (Cheng, 1997), it generates clear hypotheses regarding how the strengths of causal links (the m and b parameters) should be updated in light of observed data.
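One such hypothesis can be made concrete with Cheng's (1997) causal power formula, from which the generative model's strength parameter derives: for a generative cause, power is the contrast ΔP = P(e|c) − P(e|¬c) scaled by how much room the background leaves for the cause to act. The sketch below (my own, with invented contingency counts) shows how observed data could update estimates of m and b:

```python
# Sketch (my own, with invented counts) of estimating the generative model's
# link parameters from observation, following Cheng's (1997) power PC theory:
# b is estimated by P(e | cause absent), and the causal power estimate of m is
# deltaP / (1 - P(e | cause absent)) for a generative cause.

def causal_power(p_e_given_c, p_e_given_not_c):
    delta_p = p_e_given_c - p_e_given_not_c
    return delta_p / (1 - p_e_given_not_c)

# Hypothetical contingency data: the effect occurs in 30/40 category members
# with the cause feature present and 8/40 with it absent.
p_e_c = 30 / 40      # 0.75
p_e_not_c = 8 / 40   # 0.20 -> estimate of the background strength b
m_hat = causal_power(p_e_c, p_e_not_c)
print(round(m_hat, 4))  # 0.6875
```

A point estimate like this could in turn serve as the mean of the kind of subjective density over m discussed in Section 9.1.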
9.7. Developmental Questions

Given the small number of relevant studies (Ahn et al., 2000b; Hayes & Rehder, 2009; Meunier & Cordier, 2009), it is not surprising that there are many outstanding questions regarding the effect of causal knowledge on classification feature weights and interactions. Because I attributed the absence of a causal status effect in Hayes and Rehder (2009) to 5-year-olds interpreting the causal link as deterministic, an obvious possibility to test is whether this effect would be observed in children when links are probabilistic instead. Another question is whether the size of the coherence effect in children varies with causal link strength in the way predicted by the generative model. Finally, it is currently unknown whether children exhibit a multiple-cause effect.
9.8. Additional Dependent Variables

Finally, whereas all studies reviewed here asked for some sort of category membership judgment, it is important to expand the types of dependent variables that are used to assess causal-based effects. For example, several studies have examined the effect of theoretical knowledge on category construction (the way in which people spontaneously sort items together; Ahn & Medin, 1992; Kaplan & Murphy, 1999; Medin, Wattenmaker, & Hampson, 1987), but only a few have examined the effect of causal knowledge. One exception is Ahn and Kim (2000, Experiment 4), who presented subjects with match-to-sample trials consisting of a target with one feature that caused another (X → Y) and two cases that shared with the target either the cause (X → Z) or the effect (W → Y). Subjects spontaneously sorted together items on the basis of shared causes rather than shared effects; that is, they exhibited a causal status effect. On the other hand, Ahn (1999) found that sorting failed to be dominated by any feature (including the cause) for items with four features arranged in a causal chain. But subjects did sort on the basis of the cause for common-cause structures and on the basis of the effect for common-effect structures (mirroring the results with explicit classification judgments found by Rehder & Hastie, 2001, Figure 9). Additional research is needed to determine whether the other effects documented here (e.g., sensitivity of the causal status effect to causal strength, coherence effects, etc.) also obtain in category construction tasks.
9.9. Closing Words

Twenty-five years have passed since Murphy and Medin (1985) observed how concepts of categories are embedded in the rich knowledge structures that make up our conceptual systems. What has changed in the last 10–15 years is that insights regarding how such knowledge affects learning, induction, and classification have now been cashed out as explicit computational models. This chapter has presented models of how interfeature causal relations affect classification and reviewed the key empirical phenomena for and against those models. That so many important outstanding questions remain means that this field can be expected to progress as rapidly in the next decade as it has in the past one.
REFERENCES

Ahn, W. (1998). Why are different features central for natural kinds and artifacts? The role of causal status in determining feature centrality. Cognition, 69, 135–178.
Ahn, W. (1999). Effect of causal structure on category construction. Memory & Cognition, 27, 1008–1023.
Ahn, W., Flanagan, E., Marsh, J. K., & Sanislow, C. (2006). Beliefs about essences and the reality of mental disorders. Psychological Science, 17, 759–766.
Ahn, W., Gelman, S. A., Amsterdam, A., Hohenstein, J., & Kalish, C. W. (2000b). Causal status effect in children's categorization. Cognition, 76, B35–B43.
Ahn, W., Kim, N. S., Lassaline, M. E., & Dennis, M. J. (2000a). Causal status as a determinant of feature centrality. Cognitive Psychology, 41, 361–416.
Ahn, W., & Kim, N. S. (2001). The causal status effect in categorization: An overview. In D. L. Medin (Ed.), The psychology of learning and motivation (Vol. 40, pp. 23–65). San Diego, CA: Academic Press.
Ahn, W., Levin, S., & Marsh, J. K. (2005). Determinants of feature centrality in clinicians' concepts of mental disorders. In B. Bara, L. Barsalou, & M. Bucciarelli (Eds.), Proceedings of the 27th Annual Conference of the Cognitive Science Society. Mahwah, NJ: Lawrence Erlbaum Associates.
Ahn, W., Marsh, J. K., Luhmann, C. C., & Lee, K. (2002). Effect of theory-based correlations on typicality judgments. Memory & Cognition, 30, 107–118.
Ahn, W., & Medin, D. L. (1992). A two-stage model of category construction. Cognitive Science, 16, 81–121.
Ashby, F. G., & Maddox, W. T. (2005). Human category learning. Annual Review of Psychology, 56, 149–178.
Bloom, P. (1998). Theories of artifact categorization. Cognition, 66, 87–93.
Bonacich, P., & Lloyd, P. (2001). Eigenvector-like measures of centrality for asymmetric relations. Social Networks, 23, 191–201.
Booth, A. (2007). The cause of infant categorization. Cognition, 106, 984–993.
Braisby, N., Franks, B., & Hampton, J. (1996). Essentialism, word use, and concepts. Cognition, 59, 247–274.
Buehner, M. J., Cheng, P. W., & Clifford, D. (2003). From covariation to causation: A test of the assumption of causal power. Journal of Experimental Psychology: Learning, Memory, and Cognition, 29, 1119–1140.
Chaigneau, S. E., Barsalou, L. W., & Sloman, S. A. (2004). Assessing the causal structure of function. Journal of Experimental Psychology: General, 133, 601–625.
Cheng, P. (1997). From covariation to causation: A causal power theory. Psychological Review, 104, 367–405.
Cheng, P. W., & Novick, L. R. (2005). Constraints and nonconstraints in causal learning: Reply to White (2005) and to Luhmann and Ahn (2005). Psychological Review, 112, 694–707.
Fischhoff, B., Slovic, P., & Lichtenstein, S. (1978). Fault trees: Sensitivity of estimated failure probabilities to problem representation. Journal of Experimental Psychology: Human Perception and Performance, 4, 330–344.
Gelman, S. A. (2003). The essential child: The origins of essentialism in everyday thought. New York: Oxford University Press.
Gelman, S. A., & Wellman, H. M. (1991). Insides and essences: Early understandings of the nonobvious. Cognition, 38, 213–244.
Gentner, D. (1989). The mechanisms of analogical learning. In S. Vosniadou & A. Ortony (Eds.), Similarity and analogical reasoning (pp. 199–241). New York: Cambridge University Press.
Gopnik, A., Glymour, C., Sobel, D. M., Schulz, L. E., & Kushnir, T. (2004). A theory of causal learning in children: Causal maps and Bayes nets. Psychological Review, 111, 3–23.
Gopnik, A., & Sobel, D. M. (2000). Detecting blickets: How young children use information about novel causal powers in categorization and induction. Child Development, 71, 1205–1222.
Griffiths, T. L., & Tenenbaum, J. B. (2005). Structure and strength in causal induction. Cognitive Psychology, 51, 334–384.
Hampton, J. A. (1995). Testing the prototype theory of concepts. Journal of Memory and Language, 34, 686–708.
Hampton, J. A., Estes, Z., & Simmons, S. (2007). Metamorphosis: Essence, appearance, and behavior in the categorization of natural kinds. Memory & Cognition, 35, 1785–1800.
Harris, H. D., & Rehder, B. (2006). Modeling category learning with exemplars and prior knowledge. In R. Sun & N. Miyake (Eds.), Proceedings of the 28th Annual Conference of the Cognitive Science Society (pp. 1440–1445). Mahwah, NJ: Erlbaum.
Hayes, B. K., & Rehder, B. (2009). Children's causal categorization. In preparation.
Johnson, S. C., & Solomon, G. E. A. (1997). Why dogs have puppies and cats have kittens: The role of birth in young children's understanding of biological origins. Child Development, 68, 404–419.
Judd, C. M., McClelland, G. H., & Culhane, S. E. (1995). Data analysis: Continuing issues in the everyday analysis of psychological data. Annual Review of Psychology, 46, 433–465.
Kalish, C. W. (1995). Essentialism and graded category membership in animal and artifact categories. Memory & Cognition, 23, 335–349.
Kaplan, A. S., & Murphy, G. L. (1999). The acquisition of category structure in unsupervised learning. Memory & Cognition, 27, 699–712.
Keil, F. C. (1989). Concepts, kinds, and cognitive development. Cambridge, MA: MIT Press.
Keil, F. C. (1995). The growth of causal understandings of natural kinds. In D. Sperber, D. Premack, & A. J. Premack (Eds.), Causal cognition: A multidisciplinary approach (pp. 234–262). Oxford: Clarendon Press.
Kim, N. S., & Ahn, W. (2002a). Clinical psychologists' theory-based representations of mental disorders: Their effects on diagnostic reasoning and memory. Journal of Experimental Psychology: General, 131, 451–476.
Kim, N. S., & Ahn, W. (2002b). The influence of naive causal theories on lay concepts of mental illness. American Journal of Psychology, 115, 33–65.
Kim, N. S., Luhmann, C. C., Pierce, M. L., & Ryan, M. M. (2009). Causal cycles in categorization. Memory & Cognition, 37, 744–758.
Kushnir, T., & Gopnik, A. (2005). Young children infer causal strength from probabilities and interventions. Psychological Science, 16, 678–683.
Lamberts, K. (1995). Categorization under time pressure. Journal of Experimental Psychology: General, 124, 161–180.
Lamberts, K. (1998). The time course of categorization. Journal of Experimental Psychology: Learning, Memory, and Cognition, 24, 695–711.
Landauer, T. K., & Dumais, S. T. (1997). A solution to Plato's problem: The latent semantic analysis theory of knowledge acquisition, induction, and representation. Psychological Review, 104, 211–240.
Lin, E. L., & Murphy, G. L. (1997). The effects of background knowledge on object categorization and part detection. Journal of Experimental Psychology: Human Perception and Performance, 23, 1153–1163.
Lober, K., & Shanks, D. R. (2000). Is causal induction based on causal power? Critique of Cheng (1997). Psychological Review, 107, 195–212.
Lombrozo, T. (2007). Simplicity and probability in causal explanation. Cognitive Psychology, 55, 232–257.
Lombrozo, T. (2009). Explanation and categorization: How "why?" informs "what?" Cognition, 110, 248–253.
Lu, H., Yuille, A. L., Liljeholm, M., Cheng, P. W., & Holyoak, K. J. (2008). Bayesian generic priors for causal learning. Psychological Review, 115, 955–984.
Luhmann, C. C., Ahn, W., & Palmeri, T. J. (2006). Theory-based categorization under speeded conditions. Memory & Cognition, 34(5), 1102–1111.
Malt, B. C. (1994). Water is not H2O. Cognitive Psychology, 27, 41–70.
Malt, B. C., & Smith, E. E. (1984). Correlated properties in natural categories. Journal of Verbal Learning and Verbal Behavior, 23, 250–269.
Malt, B. C., & Johnson, E. C. (1992). Do artifacts have cores? Journal of Memory and Language, 31, 195–217.
Marsh, J., & Ahn, W. (2006). The role of causal status versus inter-feature links in feature weighting. In R. Sun & N. Miyake (Eds.), Proceedings of the 28th Annual Conference of the Cognitive Science Society (pp. 561–566). Mahwah, NJ: Erlbaum.
Matan, A., & Carey, S. (2001). Developmental changes within the core of artifact concepts. Cognition, 78, 1–26.
Medin, D. L., & Schaffer, M. M. (1978). Context theory of classification learning. Psychological Review, 85, 207–238.
Medin, D. L., & Ortony, A. (1989). Psychological essentialism. In S. Vosniadou & A. Ortony (Eds.), Similarity and analogical reasoning (pp. 179–196). Cambridge, MA: Cambridge University Press.
Medin, D. L., Wattenmaker, W. D., & Hampson, S. E. (1987). Family resemblance, conceptual cohesiveness, and category construction. Cognitive Psychology, 19, 242–279.
Meunier, B., & Cordier, F. (2009). The biological categorizations made by 4 and 5-year-olds: The role of feature type versus their causal status. Cognitive Development, 24, 34–48.
Minda, J. P., & Smith, J. D. (2002). Comparing prototype-based and exemplar-based accounts of category learning and attentional allocation. Journal of Experimental Psychology: Learning, Memory, and Cognition, 28, 275–292.
Morris, M. W., & Larrick, R. P. (1995). When one cause casts doubt on another: A normative analysis of discounting in causal attribution. Psychological Review, 102, 331–355.
Murphy, G. L., & Medin, D. L. (1985). The role of theories in conceptual coherence. Psychological Review, 92, 289–316.
Murphy, G. L. (2002). The big book of concepts. Cambridge, MA: MIT Press.
Murphy, G. L., & Allopenna, P. D. (1994). The locus of knowledge effects in concept learning. Journal of Experimental Psychology: Learning, Memory, and Cognition, 20, 904–919.
Murphy, G. L., & Wisniewski, E. J. (1989). Feature correlations in conceptual representations. In G. Tiberghien (Ed.), Advances in cognitive science: Vol. 2. Theory and applications (pp. 23–45). Chichester: Ellis Horwood.
Oppenheimer, D. M., & Tenenbaum, J. B. (2009). Categorization as causal explanation: Discounting and augmenting of concept-irrelevant features in categorization. Submitted for publication.
Palmeri, T. J., & Blalock, C. (2000). The role of background knowledge in speeded perceptual categorization. Cognition, 77, B45–B57.
Patalano, A. L., & Ross, B. H. (2007). The role of category coherence in experience-based prediction. Psychonomic Bulletin & Review, 14, 629–634.
Pearl, J. (1988). Probabilistic reasoning in intelligent systems: Networks of plausible inference. San Mateo, CA: Morgan Kaufmann.
Poulton, E. C. (1989). Bias in quantifying judgments. Hillsdale, NJ: Erlbaum.
Rehder, B., & Burnett, R. C. (2005). Feature inference and the causal structure of object categories. Cognitive Psychology, 50, 264–314.
Rehder, B., & Murphy, G. L. (2003). A Knowledge-Resonance (KRES) model of category learning. Psychonomic Bulletin & Review, 10, 759–784.
Rehder, B. (2003a). Categorization as causal reasoning. Cognitive Science, 27, 709–748.
Rehder, B. (2003b). A causal-model theory of conceptual representation and categorization. Journal of Experimental Psychology: Learning, Memory, and Cognition, 29, 1141–1159.
Rehder, B. (2007). Essentialism as a generative theory of classification. In A. Gopnik & L. Schultz (Eds.), Causal learning: Psychology, philosophy, and computation (pp. 190–207). Oxford: Oxford University Press.
Rehder, B. (2009a). Causal-based property generalization. Cognitive Science, 33, 301–343.
Rehder, B. (2009b). The when and why of the causal status effect. Submitted for publication.
Rehder, B., & Hastie, R. (2001). Causal knowledge and categories: The effects of causal beliefs on categorization, induction, and similarity. Journal of Experimental Psychology: General, 130, 323–360.
Rehder, B., & Hastie, R. (2004). Category coherence and category-based property induction. Cognition, 91, 113–153.
Rehder, B., & Kim, S. (2006). How causal knowledge affects classification: A generative theory of categorization. Journal of Experimental Psychology: Learning, Memory, and Cognition, 32, 659–683.
Rehder, B., & Kim, S. (2008). The role of coherence in causal-based categorization. In V. Sloutsky, B. Love, & K. McRae (Eds.), Proceedings of the 30th Annual Conference of the Cognitive Science Society (pp. 285–290). Mahwah, NJ: Erlbaum.
Rehder, B., & Kim, S. (2009a). Classification as diagnostic reasoning. Memory & Cognition, 37, 715–729.
Rehder, B., & Kim, S. (2009b). Causal status and coherence in causal-based categorization. Journal of Experimental Psychology: Learning, Memory, and Cognition. Accepted pending minor revisions.
Rehder, B., & Milovanovic, G. (2007). Bias toward sufficiency and completeness in causal explanations. In D. MacNamara & G. Trafton (Eds.), Proceedings of the 29th Annual Conference of the Cognitive Science Society (p. 1843).
Rehder, B., & Ross, B. H. (2001). Abstract coherent concepts. Journal of Experimental Psychology: Learning, Memory, and Cognition, 27, 1261–1275.
Reichenbach, H. (1956). The direction of time. Berkeley: University of California Press.
Rips, L. J. (1989). Similarity, typicality, and categorization. In S. Vosniadou & A. Ortony (Eds.), Similarity and analogical reasoning (pp. 21–59). New York: Cambridge University Press.
Rips, L. J. (2001). Necessity and natural categories. Psychological Bulletin, 127, 827–852.
Rosch, E. H., & Mervis, C. B. (1975). Family resemblance: Studies in the internal structure of categories. Cognitive Psychology, 7, 573–605.
Salmon, W. C. (1984). Scientific explanation and the causal structure of the world. Princeton, NJ: Princeton University Press.
Sloman, S. A. (1996). The empirical case for two systems of reasoning. Psychological Bulletin, 119, 3–23.
Sloman, S. A. (2005). Causal models: How people think about the world and its alternatives. Oxford, UK: Oxford University Press.
Sloman, S. A., & Lagnado, D. A. (2005). Do we "do"? Cognitive Science, 29, 5–39.
Sloman, S. A., Love, B. C., & Ahn, W. (1998). Feature centrality and conceptual coherence. Cognitive Science, 22, 189–228.
Smith, E. E., & Sloman, S. A. (1994). Similarity- versus rule-based categorization. Memory & Cognition, 22, 377–386.
Sobel, D. M., & Kirkham, N. Z. (2006). Blickets and babies: The development of causal reasoning in toddlers and infants. Developmental Psychology, 42, 1103–1115.
Sobel, D. M., Tenenbaum, J. B., & Gopnik, A. (2004). Children's causal inferences from indirect evidence: Backwards blocking and Bayesian reasoning in preschoolers. Cognitive Science, 28, 303–333.
Sobel, D. M., Yoachim, C. M., Gopnik, A., Meltzoff, A. N., & Blumenthal, E. J. (2007). The blicket within: Preschoolers' inferences about insides and causes. Journal of Cognition and Development, 8, 159–182.
Tversky, A., & Kahneman, D. (1980). Causal schemas in judgments under uncertainty. In M. Fishbein (Ed.), Progress in social psychology. Hillsdale, NJ: Erlbaum.
Tversky, A., & Gati, I. (1982). Similarity, separability, and the triangular inequality. Psychological Review, 89, 123–154.
Tversky, A., & Koehler, D. J. (1994). Support theory: A nonextensional representation of subjective probability. Psychological Review, 101, 547–567.
Waldmann, M. R., & Hagmayer, Y. (2005). Seeing versus doing: Two modes of accessing causal knowledge. Journal of Experimental Psychology: Learning, Memory, and Cognition, 31, 216–227.
Waldmann, M. R., & Holyoak, K. J. (1992). Predictive and diagnostic learning within causal models: Asymmetries in cue competition. Journal of Experimental Psychology: General, 121, 222–236.
Waldmann, M. R., Holyoak, K. J., & Fratianne, A. (1995). Causal models and the acquisition of category structure. Journal of Experimental Psychology: General, 124, 181–206.
Wattenmaker, W. D., Dewey, G. I., Murphy, T. D., & Medin, D. L. (1986). Linear separability and concept learning: Context, relational properties, and concept naturalness. Cognitive Psychology, 18, 158–194.
Wisniewski, E. J. (1995). Prior knowledge and functionally relevant features in concept learning. Journal of Experimental Psychology: Learning, Memory, and Cognition, 21, 449–468.
CHAPTER THREE

The Influence of Verbal and Nonverbal Processing on Category Learning

John Paul Minda and Sarah J. Miles

Contents
1. Introduction 118
2. Multiple Processes and Systems 121
   2.1. Earlier Research 121
   2.2. Rules and Similarity 122
   2.3. Analytic and Holistic Processing 123
   2.4. Multiple-Systems Theory 125
   2.5. Other Models 127
   2.6. Neuroimaging Data 128
   2.7. Summary 130
3. A Theory of Verbal and Nonverbal Category Learning 130
   3.1. Description and Main Assumptions of the Theory 131
   3.2. Additional Assumptions 133
   3.3. Summary 136
4. Experimental Tests of the Theory 136
   4.1. Comparisons Across Species 137
   4.2. Developmental Effects 141
   4.3. Interference Effects 148
   4.4. Indirect Category Learning 150
   4.5. Other Predictions 152
5. Relationship to Other Theories 154
   5.1. Verbal and Nonverbal Learning and COVIS 154
   5.2. Single-System Models 155
6. Conclusions 157
References 157
Psychology of Learning and Motivation, Volume 52, ISSN 0079-7421, DOI: 10.1016/S0079-7421(10)52003-6
© 2010 Elsevier Inc. All rights reserved.

Abstract

Categories are learned in a variety of ways, and one important distinction concerns the effects of verbal and nonverbal processing on category learning. This chapter reviews the research from behavioral studies, computational modeling, and imaging studies that supports this distinction. Although there is
some consensus that subjects will often learn new categories by searching for verbal rules, there is less agreement in the literature about how categories are learned when a rule is not usable or when the subject has restricted access to verbal abilities. Accordingly, we outline a general theory of verbal and nonverbal category learning. We assume that verbal category learning relies on working memory and is primarily involved in rule-based categorization. Nonverbal category learning may rely on visual working memory and is primarily involved in similarity-based categorization. We present the results of several studies from our lab that test many of the predictions from this theory. Although we do not argue for two completely independent learning systems, we argue that the available evidence strongly supports the existence of these two approaches to learning categories.
1. Introduction

Categories are fundamental to cognition, and the ability to learn and use categories is present in all humans and animals. For example, when a physician offers a diagnosis to a sick patient, he or she is classifying that patient into a known disease category. In making the diagnosis, the physician can use the category to make other decisions, like how to treat the patient and how to help the patient manage his or her disease. The diagnosis is likely made on the basis of specific symptoms, and possibly by applying a set of diagnostic rules (e.g., blood-glucose level above a certain range, swelling in the ankles, etc.). The diagnosis may also be made via similarity of the patient to previously seen patients (Norman & Brooks, 1997). Choosing symptoms and applying rules is a verbally mediated process whereas calculating or assessing the similarity of the patient to memories of previous patients is a nonverbal process. Many factors likely influence the relative balance of these processes, and different categories probably rely on a different balance of these two processes. The investigation of verbal and nonverbal category learning is a primary focus of our research. In this chapter, we outline a general theory that assumes that humans learn categories in a variety of ways, and that one of the most salient divisions occurs between verbal and nonverbal processing. Beyond the example discussed above, there are several reasons to support this verbal/nonverbal distinction. First and foremost, there is considerable behavioral evidence that some categories are primarily learnable by verbal means such as learning rules and hypothesis testing (Allen & Brooks, 1991; Ashby, Alfonso-Reese, Turken, & Waldron, 1998; Bruner, Goodnow, & Austin, 1956; Minda, Desroches, & Church, 2008; Minda & Ross, 2004; Zeithamova & Maddox, 2006). That is, people utilize and rely on verbal abilities to assist in learning new categories.
Any compromise in verbal processing could interfere with how these categories are learned.
Second, there is also a long tradition of research that focuses on the nonverbal learning of categories by implicit or indirect learning (Brooks, 1978; Jacoby & Brooks, 1984; Kemler Nelson, 1984, 1988; Smith & Shapiro, 1989; Smith, Tracy, & Murray, 1993; Ward, 1988; Ward & Scott, 1987). Category learning in these cases is thought to be nonverbal to the extent that learners are not actively verbalizing rules and testing hypotheses. Third, there is considerable support from neuroscience research that has examined the separate contributions of verbal and nonverbal (i.e., visual) brain regions to learning categories (Ashby & Ell, 2001; Ashby, Ell, & Waldron, 2003; Maddox, Aparicio, Marchant, & Ivry, 2005; Maddox & Ashby, 2004; Patalano, Smith, Jonides, & Koeppe, 2001; Reber, Stark, & Squire, 1998b; Smith, Patalano, & Jonides, 1998). Finally, there is a literature examining the category learning abilities of humans and nonhuman species that has noted similarities between humans and primates on categories that can be learned via nonverbal means but an advantage for humans on categories that can be learned via verbal means (Smith, Minda, & Washburn, 2004; Smith, Redford, & Haas, 2008). This verbal/nonverbal division in category learning is intuitive, as many objects can be described and classified verbally but also contain perceptual features that correspond to these verbal rules (Brooks & Hannah, 2006). Consider this example. A young angler might learn to classify two species of trout by describing important features verbally. In fact, these features might be explicitly learned and committed to memory. As an example, Figure 1 shows the distinguishing features of two common species of trout.
This guide, adapted from the Province of Ontario's "Fish Identification Guide" (Fish and Wildlife Branch, 2009), mentions several key common features (e.g., the black dots on the body) and highlights a key distinguishing feature for the brown trout ("the only salmon or trout with orange on adipose fin"). This is a verbal rule or verbal feature list that can be consulted when a classification is being made. If an angler memorized this feature list, he or she would be able to make a correct classification. But in practice, there are many fish for the angler to distinguish and the rules are complicated. It is not hard to imagine that these explicit rules become less and less important as other processes and strategies take over. The angler may catch a fish and classify it because it has a "brown trout fin," which would be a simplification of the original rule and one that is more like specific feature selection than a verbal rule. Or a more experienced angler may quickly categorize the fish on sight. The rule may no longer be consulted and the classification may be performed solely on the basis of a quick comparison of the perceptual input to stored category representations (prototypes or instances). Still, however, the features that demand the most attention are likely to be those same features that were named in the original rule (Brooks & Hannah, 2006). Although the classification may no longer be rule based, the verbal process used during the initial rule learning may
Rainbow trout/inland. L: 15–40 cm (6–16 in.). D: South of a line from Kenora to Kesagami Lake. S: Brook and brown trout; juvenile Atlantic salmon. K: Many small black spots on body; spots over tail in radiating rows; pink lateral stripe; leading anal fin ray extends the length of the fin; long, stocky caudal peduncle. Favorite baits: Spinners, spoons, roe, worms, flies.

Brown trout/inland. L: 20–40 cm (8–16 in.). D: Occasional south of the French River, mostly in Great Lakes tributaries. S: Rainbow and brook trout; juvenile Atlantic salmon. K: Large black, blue or red spots on body, often surrounded by lighter ring; tail with few spots; only salmon or trout with orange on adipose fin; leading anal fin ray extends the length of the fin; short, stocky caudal peduncle. Favorite baits: Spinners, spoons, worms, flies.
Figure 1 An excerpt from Ontario's sport fish identification guide showing the key differences between two species of trout. The categories are defined by verbal rules, but there is strong visual similarity among category members. Used with permission. Note: L = length, D = distribution/habitat, S = similar fish, K = key identifying characteristics.
result in a category representation that was shaped by the initial rule. The point is that for some (the novice), the fish might be classified by the verbal rule. For a more seasoned angler, however, the classification might be based on a prototype or a collection of instance memories. These representations probably cannot be described verbally and may even be accessed via implicit memory (Smith & Grossman, 2008). As this example illustrates, there is reason to believe that people are able to base categorizations on information that can be described verbally as well as information that cannot. Accordingly, we consider a wide range of research that investigates this issue. This chapter is structured as follows. In the first section, we review research from behavioral studies, computational modeling, and cognitive neuroscience that supports the verbal/nonverbal distinction. In the second section, we provide a detailed discussion of a general theory of verbal and nonverbal category learning. This is followed by an examination of empirical work from our lab that tests several predictions of this theory. Finally, we consider the relationship of our approach to other theories of concept learning.
2. Multiple Processes and Systems

2.1. Earlier Research

There has been a long tradition in cognitive psychology of comparing and contrasting two or more systems with each other. Research on category learning has often made a distinction regarding the learning of rules, inherently a verbal process, versus learning about overall similarity. For example, some of the earliest ideas of category and concept learning emphasized the learning of definitions (Bruner et al., 1956). These views were collectively called the classical view by Smith and Medin (1981) and remained dominant in the literature until several influential programs of research emerged in the 1960s and 1970s. The first of these was the dot pattern research of Posner and Keele, and of Homa and colleagues (Homa, Cross, Cornell, & Shwartz, 1973; Homa & Cultice, 1984; Posner & Keele, 1968). In these experiments, subjects were shown various patterns of dots or polygons that were distortions of an original pattern (i.e., the prototype). These distortions were patterns that were similar to, but not exactly like, the original prototype. Small adjustments of the location of each dot resulted in items that were "low distortions" of the original prototype, and larger adjustments resulted in "high distortions." Subjects were generally trained on high-distortion items. Crucially, subjects were never shown the prototype during the training session. Later, during a test phase, subjects were usually shown the old patterns, some new distortions of varying levels of typicality, and the original prototype. Studies using these dot patterns have generally found a consistent pattern of results. First, subjects often performed as well on the prototype as they did on the old patterns, even though the prototype was originally unseen. Second, if the test was delayed by several hours or days, performance on the training items declined whereas performance on the prototype remained strong.
Finally, the endorsement of new items showed a typicality effect, such that items that were closer to the prototype were endorsed more strongly as category members than items that were physically more distant (Homa et al.; Homa & Cultice; Posner & Keele; Smith & Minda, 2001; Smith et al., 2008). The most striking aspect of this research is how well subjects can learn these categories, given how difficult (or impossible) these stimuli are to describe verbally. This suggests that learning dot pattern stimuli may not require much verbal ability or verbal processing. Clearly, this would pose a difficulty for the assumptions of the classical view. A second key development in cognition was Eleanor Rosch's influential work in the 1970s (Rosch & Mervis, 1975; Rosch, Mervis, Gray, Johnson, & Boyes-Braem, 1976). Rosch introduced the idea of "family resemblance" (FR) as an alternative to the classical rule-based models that were dominant
at the time. In an FR category, the exemplars of a category share many features, just as members of a family might, but there is no one feature that can be used as a rule. Rosch argued that for many categories, the prototype was an abstract representation with the highest FR to other category members. Although a person might be able to describe the category verbally, this verbal description might not correspond exactly to the prototype. Furthermore, although the prototype might determine classification, a verbal description of the prototype need not enter into the classification decision. In other words, a decision could be made without reference to a verbal rule, but by nonverbal reference to a prototype. Rosch's work, in conjunction with Posner's and Homa's research, provided the groundwork for prototype theory's dominance in the 1970s and 1980s, and for the tendency to assume that similarity, rather than rules, was the key factor in categorization.
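The family-resemblance idea can be made concrete with a small sketch. Assuming hypothetical binary feature vectors (the four exemplars below are invented for illustration), an item's FR score can be computed as its summed featural overlap with the other members of the category; a prototype built from the modal feature values then scores highest even though no single feature is shared by every member.

```python
def overlap(a, b):
    """Count the features on which two items match."""
    return sum(x == y for x, y in zip(a, b))

def family_resemblance(item, category):
    """Summed featural overlap of an item with the other category members."""
    return sum(overlap(item, member) for member in category if member != item)

# Hypothetical FR category: every exemplar shares most features with the
# others, but no single feature is present in all four exemplars.
category = [
    (1, 1, 1, 0),
    (1, 1, 0, 1),
    (1, 0, 1, 1),
    (0, 1, 1, 1),
]

# The prototype takes the modal (most frequent) value on each feature.
prototype = tuple(round(sum(values) / len(category)) for values in zip(*category))

exemplar_scores = [family_resemblance(m, category) for m in category]
prototype_score = family_resemblance(prototype, category)
```

Here the prototype (1, 1, 1, 1), which was never itself a category member, has a higher FR score than any trained exemplar, mirroring the finding that unseen prototypes are endorsed as strongly as trained items.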
2.2. Rules and Similarity

In contrast to Posner's and Rosch's emphasis on similarity, other research argued that similarity is insufficient to explain certain categorization phenomena (Rips, 1989; Smith & Sloman, 1994). For example, Rips asked subjects to consider a set of objects (e.g., a pizza, a quarter, and a 3-in. round object). One group of subjects indicated that the 3-in. round object was more similar to the quarter than to the pizza. Thus, their similarity judgments were correlated with perceptual and featural overlap. However, another group of subjects judged that the 3-in. round object was more likely to be a member of the pizza category. That is, categorization decisions did not track similarity. Rips suggested that similarity is insufficient as the sole driving mechanism for categorization and suggested that other factors can influence classification. In this case, he pointed to category variability and the possibility of a rule. Whereas quarters have very low variability (most are nearly exactly the same), pizzas come in a range of sizes and shapes (e.g., personal size, 18-in. round, square, on a bagel, etc.). The 3-in. round object, while perhaps more similar to the quarter category, was a more likely member of the pizza category. The greater variability of the pizza category allowed more extreme members to be accepted. The lower variability of the quarter category undermined similarity-based categorization and encouraged rule application. This effect has been examined and demonstrated with other stimuli as well (Cohen, Nosofsky, & Zaki, 2001; Stewart & Chater, 2002). Other research supported the distinction between rule-based category learning and exemplar-based category learning (Allen & Brooks, 1991). Subjects in this experiment learned to categorize artificial animals into two categories, BUILDERs and DIGGERs. The animals were cartoon-like and were composed of five binary attributes.
During the training phase, subjects were asked to learn the categories, and one group was taught
a rule (e.g., "If an animal has at least two of the following attribute values—long legs, angular body, spotted covering—it is a BUILDER; otherwise it is a DIGGER"). A second group, the exemplar-similarity group, was shown the same animals but not the rule. This group was instructed that the first time they saw an animal they would have to guess its category, but on subsequent trials they would be able to remember what it was. Later, test stimuli were presented to examine any differences between rule and exemplar-similarity learning. For example, one kind of test item (the "positive match") followed the BUILDER rule and was also similar to an old BUILDER exemplar. A "negative match" was also a BUILDER according to the rule, but it was similar to an old DIGGER exemplar. Allen and Brooks reasoned that if rule subjects were really just following the rule, they should categorize both the positive and negative match items as members of the BUILDER category because both followed the same rule. That is, the rule should trump similarity. On the other hand, if exemplar-similarity subjects categorize a novel item by retrieving the stored exemplar most similar to it and selecting the category associated with that old exemplar, they should categorize positive matches as BUILDERs and negative matches as DIGGERs. Interestingly, Allen and Brooks found evidence for both rule use and exemplar use. For negative matches, rule subjects tended to follow the rule but still showed evidence of exemplar use, and their data suggested that the rule and exemplar similarity were in conflict. For the exemplar-similarity group, categorization tended to follow old-item similarity. These results provide strong evidence for the existence of two categorization processes, and additional work—described later in this chapter—has explored the neural underpinnings of the same task (Patalano et al., 2001).
Furthermore, these data have been taken by some to suggest the reliance on working memory (rules) and explicit, long-term memory (exemplars) in category learning (Smith & Grossman, 2008). So it is clear that people sometimes rely on rules and may also rely on similarity when learning categories. Accordingly, researchers have examined the factors that might mediate between the nonverbal, FR category learning that subjects sometimes show, and the tendency to look for rules on many tasks that subjects also show. We examine some of this research below.
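The rule versus exemplar-similarity contrast in the Allen and Brooks task can be sketched in a few lines. The rule follows their "2 of 3 criterial attributes" form, but the specific training items and the negative-match item below are hypothetical, chosen only so that the two processes disagree:

```python
# Stimuli are five binary attributes; following Allen and Brooks (1991),
# the first three are criterial (e.g., long legs, angular body, spotted covering).
def rule_classify(item):
    """BUILDER if at least two of the three criterial attributes are present."""
    return "BUILDER" if sum(item[:3]) >= 2 else "DIGGER"

def exemplar_classify(item, training_items):
    """Return the category of the most similar stored exemplar."""
    def similarity(a, b):
        # Featural overlap: count of matching attribute values.
        return sum(x == y for x, y in zip(a, b))
    _, label = max(training_items, key=lambda t: similarity(item, t[0]))
    return label

# Hypothetical training exemplars, each labeled consistently with the rule.
training = [
    ((1, 1, 0, 0, 0), "BUILDER"),
    ((0, 1, 1, 1, 1), "BUILDER"),
    ((1, 0, 0, 1, 1), "DIGGER"),
    ((0, 0, 1, 0, 0), "DIGGER"),
]

# A "negative match": it satisfies the BUILDER rule, but its nearest
# stored exemplar is a DIGGER, so the two processes conflict.
negative_match = (1, 1, 0, 1, 1)
```

A rule follower calls the negative match a BUILDER, whereas a nearest-exemplar strategy calls it a DIGGER, which is exactly the conflict Allen and Brooks observed in their rule subjects' responses.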
2.3. Analytic and Holistic Processing

Research that distinguishes between analytic and holistic styles of categorization offers one account of subjects' use of FR and rule-based categorization (Brooks, 1978; Jacoby & Brooks, 1984; Kemler Nelson, 1984, 1988; Smith & Shapiro, 1989; Smith et al., 1993; Ward, 1988; Ward & Scott, 1987). For example, Brooks provided an early account of analytic and
nonanalytic concept identification. When analytic concept identification is used, the goal of the task is to discover a sweeping generalization (i.e., a rule) that can be applied to all new instances. In order to discover and apply this generalization, separate aspects or features of the stimulus are evaluated for their ability to predict category membership. The hypothesis testing process described by Brooks is likely to rely on verbal processing to carry out the testing and summarization. Brooks also described a nonanalytic mode of concept identification, in which an item's category membership is based on overall similarity. That is, an item is placed into the category containing the item or cluster of items to which it is most similar. This is much like the FR categorization described earlier. While the nonanalytic/holistic mode does not preclude verbal processing, it may not require it either. Research by Kemler Nelson and others (Kemler Nelson, 1984, 1988; Smith & Shapiro, 1989; Smith et al., 1993) linked analytic categorization to intentional learning (but see Ward, 1988; Ward & Scott, 1987), which occurs when subjects are explicitly trying to learn a new category. Intentional learning involves strategic, goal-directed categorization, often resulting in deliberate hypothesis testing. Since hypothesis testing is encouraged by intentional learning, analytic processing is the result, and rule-based categories are learned easily. However, incidental learning occurs when subjects are not explicitly told the goal of the task. Rather, subjects learn to do an unrelated task, such as stimulus rating. Incidental learning often results in nonanalytic (holistic) learning since no deliberate hypothesis testing is necessary during the learning phase. Another explanation is that the subject's verbal abilities are simply not being engaged to learn categories because the subject is occupied with the stimulus rating.
Recently, Davis, Love, and Maddox (2009) have applied a similar analytic/holistic distinction to stimulus encoding rather than stimulus categorization. Just like during categorization, the holistic, image-based pathway encodes an object as a whole rather than breaking it down into its constituent parts. This process is rapid and automatic but only occurs with experience. The analytic, part-based pathway encodes an object by breaking it down and labeling important features. This type of encoding requires a sufficiently rich symbolic vocabulary for feature labeling and requires time and cognitive effort. According to this theory, the pathway that is used for stimulus encoding depends on the characteristics of the to-be-encoded object, the observer’s level of expertise with the objects and the availability of cognitive resources. This theory makes many predictions that are similar to the analytic/holistic categorization theories discussed above, but also makes some predictions that are unique. For example, object features can affect the encoding pathway used, ultimately affecting categorization performance. When an object has features that are easily labeled, part-based encoding is favored and exception items can be learned quickly. When features are not easily labeled, Davis et al. argued that image-based encoding
is favored and exception items are learned slowly. It is clear, then, that many factors, such as task and stimulus structure, can influence whether categories are learned using analytic or holistic processing.
2.4. Multiple-Systems Theory

The research described above suggests that a reliance on higher-order verbal functioning (working memory, for example) might result in better learning of rules. If the reliance on verbal learning is downplayed or compromised, one might expect category learning to proceed in a more holistic fashion. The idea that working memory and executive functioning play a role in the learning of some categories but not in others is one of the central predictions of a multiple-systems theory of category learning called the Competition Between Verbal and Implicit Systems, or COVIS (Ashby & Ell, 2001; Ashby et al., 1998). This model specifies that at least two broadly defined brain systems are fundamentally involved in category learning. The explicit, verbal system is assumed to learn rule-described categories. These are categories for which the optimal rule is relatively easy to describe verbally (Ashby & Ell). For example, consider a category set in which round objects belong to one group and angular objects belong to another group. These categories could be quickly mastered by the explicit system because a rule is easy to verbalize ("Category 1 items are round"). According to COVIS, the explicit system is mediated by the prefrontal cortex, and it requires sufficient cognitive resources (e.g., working memory and executive functioning) to search for, store, and apply a rule (Zeithamova & Maddox, 2006). Furthermore, this system is assumed to be the default approach for normally functioning adults learning new categories (Ashby et al.; Minda et al., 2008). COVIS also assumes that an implicit system learns non-rule-described categories. These are categories for which no easily verbalizable rule exists or for which two or more aspects of the stimulus must be integrated at a predecisional stage (Ashby & Ell, 2001). According to COVIS, the neurobiology of the implicit system constrains the type of learning that can be done by this system.
Once a to-be-categorized stimulus is viewed, the visual information is sent from the visual cortex to the tail of the caudate nucleus where a motor program is chosen to carry out the categorization. When an item is categorized correctly, the feedback acts as an unexpected reward and causes dopamine to be released, strengthening the association between the stimulus and the correct categorization response. When an item is categorized incorrectly, the release of dopamine is depressed and the association between the stimulus and categorization response is not strengthened (Ashby et al., 1998; Spiering & Ashby, 2008). The reliance of the implicit system on this type of dopamine-mediated learning has two implications. First, because dopamine plays a role in motor activation, the implicit system
is well suited for procedural learning. Second, feedback causes the release of dopamine that is used for learning, so proper feedback is necessary for the implicit system to learn (Wickens, 1990).1 The procedural learning system described by COVIS makes several predictions. First, a consistent association between a stimulus and a response location facilitates learning. In a study by Ashby et al. (2003), subjects learned to categorize using one hand/button configuration and were later tested using another hand/button configuration. Similar to the results found in procedural memory studies (Willingham, Nissen, & Bullemer, 1989; Willingham, Wells, Farrell, & Stemwedel, 2000), learning by the implicit system was facilitated when each stimulus was associated with a consistent response location and hindered when no consistent response location existed. Other studies compared category learning with an "A" or "B" response to learning with a "Yes" or "No" response and found similar results (Maddox, Bohil, & Ing, 2004). As each exemplar was presented, the question "Is this an A?" or "Is this a B?" was also presented, and the subject was instructed to indicate yes or no. Subjects received feedback on this "Yes"/"No" response. In this way, each exemplar was equally associated with the "Yes" response and the "No" response, and this interfered with the learning of consistent response locations. Second, the implicit system is also compromised when feedback is delayed. For the implicit system, it is imperative that feedback occurs soon after a categorization response so that the stimulus-response connections are still active when feedback causes dopamine to be released. Feedback timing is less important for the verbal system, which is able to store the categorization rule in working memory until feedback is provided and the effectiveness of the rule is evaluated.
Indeed, delaying feedback by as little as 2.5 s is detrimental to performance on non-rule-described categories but not rule-described categories (Maddox, Ashby, & Bohil, 2003). In contrast, once feedback is provided, the implicit system of COVIS does not require time and cognitive resources to process the feedback. Instead, feedback processing occurs through the automatic strengthening of synapses. The verbal system, on the other hand, relies on working memory, attention, and time to process feedback. Therefore, when working memory and attentional resources are made unavailable immediately following feedback, many subjects fail to learn using the verbal system but are able to learn using the implicit system (Maddox, Ashby, Ing, & Pickering, 2004; Zeithamova & Maddox, 2006). Both the verbal and implicit systems are assumed to operate in normally functioning adults, and both can contribute to performance, even after learning has progressed. In general, COVIS assumes that the system with the more successful responding will eventually dominate performance. For instance, although the verbal system is considered to be the default system for adults, some categories may not be easily learned by a verbal rule. In this case, the implicit system would produce more accurate responses and would take over. Also, if rule-based categories are learned under conditions in which the learner is distracted and working memory is being used for another task, the implicit system would have to take over for the struggling explicit system.

1 COVIS emphasizes the release of dopamine that accompanies a reward signal. Although error signals may also affect category learning, COVIS is relatively silent on the issue of how errors may differ from rewards beyond simply not strengthening the connection.
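The dopamine-mediated learning scheme attributed to the implicit system can be caricatured in a few lines. This toy sketch is not the COVIS model itself; the stimuli, learning rate, and update rule are illustrative assumptions. Correct feedback strengthens the winning stimulus-response association, and incorrect feedback simply fails to strengthen it:

```python
import random

def learn_implicit(trials, lr=0.2, n_trials=200, seed=0):
    """Toy procedural learner: reward-like feedback strengthens the chosen
    stimulus-response association; errors leave it unchanged."""
    rng = random.Random(seed)
    # Association strengths: weights[stimulus][response], initially neutral.
    weights = {stimulus: {"A": 0.5, "B": 0.5} for stimulus, _ in trials}
    for _ in range(n_trials):
        stimulus, correct = rng.choice(trials)
        w = weights[stimulus]
        # Choose the response with the stronger association (random tie-break).
        response = max(w, key=lambda r: (w[r], rng.random()))
        if response == correct:
            # Correct feedback: a dopamine-like signal strengthens the link.
            w[response] += lr * (1.0 - w[response])
        # Incorrect feedback: the association is simply not strengthened.
    return weights

weights = learn_implicit([("round", "A"), ("angular", "B")])
```

After training, each stimulus is most strongly associated with its correct response. Note that, as with the implicit system described above, learning here depends entirely on trial-by-trial feedback: with no reward signal, the associations never change.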
2.5. Other Models

Beyond the analytic/holistic distinction and COVIS, several other models have been proposed that make an assumption regarding the multiple cognitive processes involved in category learning. For example, Nosofsky and Palmeri proposed the RULEX model, a model of rule and exception learning (Nosofsky & Palmeri, 1998; Nosofsky, Palmeri, & McKinley, 1994). RULEX assumes that subjects will begin learning categories by finding a simple rule that works reasonably well, and will fine-tune performance by learning exceptions to the rule at a later stage. In this case, rule learning is the default, and there is a premium on simplicity (i.e., single-dimensional rules). RULEX was not specifically designed as a "multiple systems" model, but the assumptions in RULEX are consistent with the other theories and models we have been discussing. Like COVIS, RULEX assumes that people first try to learn rules. Like the Allen and Brooks (1991) work, RULEX also places an importance on exemplars, though in this case, they are learned only as exceptions to the rule. Another model that is closely related to RULEX, but assumes a larger role for exemplar similarity in category learning, is the ATRIUM model, which combines rules and exemplars (Erickson & Kruschke, 1998). As with RULEX, ATRIUM assumes an initial reliance on simple rules (typically single-dimensional rules), and it stores exemplars that can also produce classifications. In ATRIUM, one pathway is designed to learn rules while the other pathway activates stored exemplars according to their similarity to the to-be-categorized item. ATRIUM is notable for having a gating mechanism that allows both pathways (rules and exemplars) to operate simultaneously. The model determines category membership based on the mixture of evidence provided by both pathways, and the gating mechanism can adjust the relative importance of these two sources of information.
In other words, unlike COVIS, which makes a decision based on evidence from either the verbal or implicit system, ATRIUM can assume that a mixture is used. Like RULEX and COVIS, this model has been successful at accounting for a variety of categorization phenomena.
2.6. Neuroimaging Data

The preceding section makes clear that there is behavioral and theoretical precedent for considering category learning as involving verbal rules and nonverbal similarity. There is also considerable evidence from cognitive neuroscience for the existence of separate verbal and nonverbal contributions to category learning. An early example was provided by Patalano and colleagues (Patalano et al., 2001), who asked subjects to learn a set of categories by using either a rule-based strategy or an exemplar-based strategy (Allen & Brooks, 1991). During the categorization, they tracked cerebral blood flow with a PET scan and found distinct patterns of neural activation for each task. Rule-based classification showed increased activation of the occipital cortex, the posterior parietal cortex, and the prefrontal cortex, consistent with the cognitive functions of visual processing, selective attention, and working memory, respectively. In contrast, exemplar-similarity learning showed activation in the occipital cortex, consistent with the primary role of visual memory when subjects were not using a verbal rule. Other recent evidence has indicated a strong role for the occipital cortex in the learning of dot pattern categories (Reber et al., 1998b; Reber, Stark, & Squire, 1998a), again suggesting a heavy contribution of visual areas when category learning does not depend on a rule. Research has also examined the neural correlates of the learning of rule-defined and information-integration categories, which are commonly used by researchers who work on multiple-systems models. Figure 2A illustrates a rule-defined category set for Gaussian blur stimuli that are defined by the frequency and orientation of the dark and light bands. Points (exemplars) on the left of the line are members of Category A and points to the right of the line are members of Category B.
The vertical line separating Category A and Category B corresponds to a strategy that maximizes categorization accuracy. A single-dimensional rule, emphasizing frequency, can be verbalized and employed to correctly classify the exemplars. Figure 2B illustrates a type of non-rule-defined category set that is sometimes known as an “information-integration” category set. Because both frequency and orientation contribute to category membership, the decision boundary is not parallel to either axis/feature, and these categories are not easily described by a verbal rule. Instead, for successful categorization, information from multiple dimensions must be combined before a decision can be made. In recent work, Nomura and colleagues (Nomura et al., 2007) asked subjects to learn rule-defined or information-integration categories while tracking the BOLD signal in an fMRI scanner. They found increased activation in the medial temporal lobe during correct categorization of rule-based stimuli, suggesting a role for declarative knowledge and underscoring the explicit, verbal nature of rule-based category learning. Subjects who correctly categorized information-integration stimuli showed increased activation in the caudate, implicating a procedural learning process.

Figure 2 Panel A shows an example of a rule-defined category set of Gaussian blur stimuli that vary in the spatial frequency and orientation of the light and dark bands. In the scatter plot, the open circles represent Category A stimuli and the filled circles represent Category B stimuli. Panel B shows an example of a non-rule-defined (sometimes called information-integration) category set.
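The contrast between the two structures can be made concrete with a short simulation sketch. This is our own illustration, not code from any of the studies reviewed: the category means, spreads, and decision boundaries below are arbitrary choices. A single-dimension frequency rule separates the rule-defined set almost perfectly, whereas the diagonal information-integration set is better handled by a strategy that combines both dimensions before deciding:

```python
import random

random.seed(1)

def make_category(mean_freq, mean_orient, n=100, spread=60.0):
    """Sample n (frequency, orientation) exemplars around a category mean."""
    return [(random.gauss(mean_freq, spread), random.gauss(mean_orient, spread))
            for _ in range(n)]

# Rule-defined set (cf. Figure 2A): the categories differ only in frequency.
rd_a = make_category(100, 250, spread=40.0)
rd_b = make_category(300, 250, spread=40.0)

# Information-integration set (cf. Figure 2B): the categories differ along a
# diagonal, so neither dimension alone separates them well.
ii_a = make_category(180, 280)
ii_b = make_category(280, 180)

def accuracy(strategy, cat_a, cat_b):
    """Proportion of exemplars the strategy labels correctly."""
    hits = sum(strategy(x) == 'A' for x in cat_a) + \
           sum(strategy(x) == 'B' for x in cat_b)
    return hits / (len(cat_a) + len(cat_b))

# A verbalizable single-dimension rule: "if the frequency is low, respond A."
freq_rule = lambda s: 'A' if s[0] < 200 else 'B'

# An integration strategy: combine both dimensions before responding
# (respond A when orientation exceeds frequency), matching the diagonal boundary.
integration = lambda s: 'A' if s[1] - s[0] > 0 else 'B'

print(accuracy(freq_rule, rd_a, rd_b))     # near-perfect on the rule-defined set
print(accuracy(integration, ii_a, ii_b))   # typically beats a 1D rule on the II set
```

On the information-integration set, the best achievable single-dimension rule is worse in expectation than the diagonal strategy, which is the structural point of the Figure 2B design.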
2.7. Summary

The research that we reviewed suggests that a multiple-systems or multiple-processes theory of category learning is not exactly new. In fact, the distinction between rules and similarity is one of the central themes of the literature on concepts and category learning. A number of studies have pointed to the effects of different learning styles (the analytic/holistic distinction) or of different cognitive processes (verbal ability and procedural learning mechanisms). Other research has suggested that some categories lend themselves well to verbal analysis and/or rules whereas others do not (categories with discernible features and low variability vs. dot patterns). Recent work with neuroimaging has shed light on the different brain systems that underlie the learning of new categories. Some of this work argues for a fairly strong distinction between the various systems (e.g., COVIS or ATRIUM) whereas other accounts have taken a more interactive approach (the rule and exemplar learning envisioned by Brooks and colleagues; Allen & Brooks, 1991; Brooks & Hannah, 2006). In short, it seems an unavoidable conclusion that there is more than one way to learn categories and represent them as concepts. Accordingly, in the section that follows, we make a proposal that centers on the roles of verbal and nonverbal processes in category learning.
3. A Theory of Verbal and Nonverbal Category Learning

The research reviewed above suggests that there are multiple categorization systems (or multiple processes, dual pathways, etc.). Given this broad agreement on the existence of multiple processes, systems, or modes, how should one draw the dividing line? Our survey of the literature suggests a clear distinction between categorization that is mediated by verbal processes (verbal descriptions of the stimuli, a reliance on verbal working memory, hypothesis testing, etc.) and category learning that is primarily mediated by nonverbal processes (associations between stimuli and responses, visual pattern completion, visual working memory, and imagery). One of the central problems that we work on in our lab is discovering and delineating how this verbal/nonverbal distinction works, and identifying the fundamental cognitive processes involved in category learning. By fundamental cognitive processes, we mean constructs like selective attention, associative learning, and working memory. These are functions and processes available for many tasks and learning environments, including category learning.
3.1. Description and Main Assumptions of the Theory

We propose that there are two broadly defined systems or pathways by which new categories are learned and items are classified. A key distinction between these systems is that one relies heavily on verbal abilities and the other does not. That is, learners can make use of verbal descriptions and rules when learning categories but can also make use of nonverbal aspects of the stimulus or category. A possible conceptualization of the verbal/nonverbal distinction is shown in Figure 3.

Figure 3 This figure shows a basic description of a verbal/nonverbal theory of category learning. The verbal system relies on verbal working memory as well as hypothesis-testing abilities; its uses and functions include intentional learning, rule-based categories, analytic processing, hypothesis testing, and executive function. The nonverbal system relies on visual-spatial memory, associative learning, and mental imagery; its uses and functions include incidental learning, perceptual categories, prototype learning, and similarity-based learning. Both systems receive the same stimulus input, can share attentional allocation, and feed a common decision process.

We refer to one of the pathways as the “verbal system” and the other as the “nonverbal system,” and we define a cognitive system as a collection of cognitive processes and functions, possibly mediated by distinct cortical structures, that work together to carry out an information-processing task. The verbal system learns categories by trying to find a good verbal rule that will classify most of the stimuli. For rule-based categories, reliance on this system will result in good category learning; for non-rule-based categories, like FR categories, this system may be less successful. The verbal system carries out this task by relying on working memory and hypothesis-testing ability. Executive functioning assists in testing hypotheses, directing selective attention, inhibiting responses to features and cues that are
not part of the rule, and inhibiting responses to rules that have been tried but are no longer being used. Because these are verbal rules, some degree of verbal working memory is required to state the rule, consider feedback, etc.

A good rule in this context has several characteristics. First, the rule should not exceed the limits of working memory capacity: shorter rules are better than longer ones. Second, the rule should be related to a feature that is readily discernible. That is, a rule that emphasizes the shape of a body part is better than a rule that emphasizes the size of an internal organ (even if the latter is very reliable). Third, a good rule should work: it should have relatively few exceptions and should produce reliably good performance. Consider again the example with which we began the chapter: the fish shown in Figure 1. The rules that are given are good in that they are reliable and related to readily discernible features, but not good in the sense that they would surely exceed the capacity of working memory during a fishing trip. In other words, there are rules for these categories but they may not be very usable.

The other pathway is labeled the nonverbal system. This system operates alongside the verbal system and would play a dominant role in the incidental learning of categories, in learning perceptual categories, and possibly in abstracting visual prototypes (as in the dot-pattern research discussed earlier). This system encompasses a broader range of abilities and functions relative to the verbal system. For example, there is evidence from animal learning work that selective attention plays a role in deciding which perceptual features matter most. But as we discuss below, in a full description of both systems, the attentional allocation that is part of the rule system can influence the attentional policy in the nonverbal system (Harris & Minda, 2006). The nonverbal system can also learn stimulus-response associations.
Like the implicit system in COVIS (Ashby et al., 1998), our nonverbal system may rely on a close connection between the response and the reward to drive learning. A strong stimulus/response/reward association facilitates learning by strengthening the neural connections between visual neurons and response-selection neurons. However, there is evidence that many non-rule-defined categories can be learned without feedback or with minimal feedback. First, dot-pattern categories can be acquired at test, without any training (Palmeri & Flanery, 1999). Second, there is evidence of non-rule-based category learning with indirect feedback (Minda & Ross, 2004) and in unsupervised conditions with no feedback at all (Love, 2002, 2003). Finally, the research reviewed above on holistic category learning suggests that FRs can be learned even without feedback (Kemler Nelson, 1984). In other words, a strong stimulus/response/reward association can be beneficial to the nonverbal learning of categories, but it may not be a requirement. And as we describe below, we think that other nonverbal processes, like visual working memory and possibly visual imagery, might allow for flexibility in this system.
3.2. Additional Assumptions

3.2.1. The Verbal System
The basic theory described above has a number of key assumptions beyond the basic verbal and nonverbal qualities of each system. In this section, we describe the core cognitive properties associated with each system.

First and foremost, the verbal system of category learning makes use of two components of the working memory system: the phonological loop (what we also refer to as verbal working memory) and the central executive. The phonological loop is described as a buffer for the temporary activation and storage of verbal information (Baddeley, 2003; Baddeley & Hitch, 1974; Baddeley, Lewis, & Vallar, 1984). Verbal information stored in the phonological loop fades quickly, so rehearsal is used to keep information active and available for use. When rehearsal takes longer than the time capacity of the loop, some information will fade from memory. One implication of this limited capacity is that information stored in the phonological loop, such as categorization rules, must be simple enough to be kept active. Furthermore, if the phonological loop is involved in other tasks during category learning, rule-based learning should suffer. Therefore, there is a limit to the complexity of the verbal rules that can be learned. Another function of verbal working memory may be to store categorization responses and the corresponding feedback. In short, if aspects of the category-learning task can be described verbally, it is likely that verbal working memory will be occupied with the business of learning these categories.

The verbal system of category learning also makes use of the central executive, which operates as a control system for working memory (Baddeley, 2003; Baddeley & Hitch, 1974; Baddeley et al., 1984). Among other activities, the central executive is thought to be involved in the selection and inhibition of information (like rules and responses) and has limited resources.
The verbal system that we are describing involves some degree of hypothesis testing during category learning. We assume that the central executive works with verbal working memory to interpret feedback and to generate new categorization rules. In short, if aspects of the learning task involve deliberation, considering alternatives, or inhibiting responses to features or rules, then the central executive will be involved.

3.2.2. The Nonverbal System
The nonverbal system relies on several cognitive components. First, it relies on associative learning mechanisms, like strengthening the association between paired stimuli and responses, to learn categories. Learning by the nonverbal system will be especially enhanced when cues are maximally predictive. For example, in an FR category, no one feature can be used as a rule, but many features are generally predictive of category membership, and attention to multiple cues may result in forming
a strong association between a set of cues and a category label. As we discussed earlier, the role of associative learning between a category and its response (i.e., procedural learning) has been investigated under the COVIS framework. In general, procedural learning has been found to be important for the nonverbal system. However, a series of recent studies has shown that the nonverbal system may not be as reliant on procedural learning mechanisms as Ashby and colleagues originally thought. For example, category and task difficulty can explain nonverbal-system impairments that were originally attributed to a reliance on procedural learning mechanisms (Nosofsky & Stanton, 2005; Spiering & Ashby, 2008). In addition, the claim that consistent response locations are important for the nonverbal system may not be as strong as once thought. Although consistent response locations are favored by the nonverbal system, any consistent category cue can be used for learning nonverbal categories (Spiering & Ashby, 2008). In reality, the nonverbal system is not entirely tied to the learning of motor responses and can also rely on other types of associative learning. More to the point, procedural learning is only one kind of nonverbal category learning, and a more complete account must explore additional mechanisms and processes.

Second, just as for the verbal system, we assume that working memory plays a role. In this case, however, we assume that only the visuo-spatial component is involved. The visuo-spatial sketchpad (Baddeley, 2003; Baddeley & Hitch, 1974; Baddeley et al., 1984) is a buffer in which visual and spatial information are stored and are thought to undergo rehearsal in much the same way as information within the phonological loop. Visuo-spatial working memory may be used during the initial processing of an object or to compare an object with a category representation during a categorization decision.
The central executive does not play a role (or plays a minimal role) in the nonverbal system. Considerable evidence from comparative work, developmental work, and neuroimaging shows that non-rule-based categories can be learned with little or no contribution from the areas of the brain that mediate executive control, specifically the prefrontal cortex (Ashby et al., 1998; Minda et al., 2008; Reber et al., 1998b; Smith et al., 2004).

We suspect that mental imagery also plays a role in learning categories, although it is unclear how this might differ from the visuo-spatial working memory already described. Perhaps the visuo-spatial working memory system and the imagery system work in conjunction, so that the former is used for the manipulation and short-term storage of stimulus information whereas the latter is used over longer periods of time. One clear advantage of verbal learning is that the stimuli can be redescribed verbally and can be categorized even if the perceptual information is lost. We suspect that there is a comparable role for mental imagery in this system, and we have begun to evaluate this claim in our lab. For example, we suspect that visual
interference will disrupt learning by the nonverbal system more than by the verbal system. Whereas the verbal system can recode aspects of the stimuli into a verbal code and insulate them against visual interference, the nonverbal system might rely on an image-based code that would be more susceptible to visual interference.

3.2.3. Parallel Operation
One key additional assumption is that the two systems operate together, in parallel. There is no claim that categorization must proceed via one or the other pathway. At any given time, a stimulus might be encoded verbally (“a fish with large black fins”) and nonverbally (visual memory, similarity to stored memory traces, or even mental imagery). This claim is reminiscent of Paivio's dual-coding hypothesis (Paivio, 1986), which argues that visual and verbal information are processed differently, along distinct channels, with the human mind creating dual representations for each encoding. To be clear, though, we are not arguing for the necessity of dual representations, but rather for the inclusion of at least two kinds of input encoding. COVIS makes a similar prediction: the verbal and implicit systems both operate and are in competition with each other (Ashby et al., 1998). A prediction that follows from this assumption is that, because both kinds of information are encoded and represented, when one source is missing, categorization should proceed via the other system. In fact, several studies have shown these effects.

Although both pathways operate during category learning, a decision is made from the evidence of only one pathway. In practice, this implies that the pathway that arrives at an answer first would produce the response. Another possibility is that the pathway with the strongest source of evidence would drive the decision. In this sense, we make the same assumption as COVIS that the decision may involve a competition.
Our theory differs from ATRIUM, which assumes a true mixture of responses. One prediction that follows is that when similarity information conflicts with a rule (Allen & Brooks, 1991; Minda & Ross, 2004), the decision may take longer. In reality, rules often correlate with similarity, but conflict and the need for disambiguation still arise. For example, consider two species of mushrooms (one poisonous and one edible) that appear very similar but can be distinguished on the basis of a single feature or set of features. In fact, the highly toxic and appropriately named “death cap” mushroom (Amanita phalloides) is extremely similar to the commonly consumed “straw mushroom” (Volvariella volvacea). In this case, the color of the spores can be used to distinguish the two kinds of mushrooms (pink for the straw mushroom, white for the death cap). In other words, the rule, rather than overall similarity, is used to differentiate the categories.
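The kind of competitive decision we have in mind can be illustrated with a toy sketch. This is our own simplification, not a formal model from COVIS, ATRIUM, or this chapter: the evidence functions and mushroom feature codings below are hypothetical. Each pathway returns signed evidence for Category A, and the pathway with the stronger evidence drives the response; for a conflict item whose spore color says straw mushroom but whose overall appearance resembles the stored death caps, the rule settles the decision:

```python
def verbal_evidence(stimulus, rule_feature, rule_value):
    """All-or-none evidence for Category A from a single-feature verbal rule."""
    return 1.0 if stimulus[rule_feature] == rule_value else -1.0

def similarity_evidence(stimulus, exemplars_a, exemplars_b):
    """Graded evidence for Category A from feature overlap with stored exemplars."""
    def mean_overlap(x, exemplars):
        return sum(sum(f == g for f, g in zip(x, e)) for e in exemplars) \
               / (len(exemplars) * len(x))
    return mean_overlap(stimulus, exemplars_a) - mean_overlap(stimulus, exemplars_b)

def decide(stimulus, rule_feature, rule_value, exemplars_a, exemplars_b):
    v = verbal_evidence(stimulus, rule_feature, rule_value)
    s = similarity_evidence(stimulus, exemplars_a, exemplars_b)
    winner = v if abs(v) >= abs(s) else s   # the stronger pathway drives the response
    return 'A' if winner > 0 else 'B'

# Hypothetical feature tuples: (spore color, cap size, cap texture).
straw = [('pink', 'small', 'smooth'), ('pink', 'small', 'ribbed')]        # Category A
death_cap = [('white', 'large', 'scaly'), ('white', 'large', 'smooth')]   # Category B

# A conflict item: pink spores (rule says A) but it otherwise looks like a death cap.
item = ('pink', 'large', 'scaly')
print(decide(item, 0, 'pink', straw, death_cap))   # 'A': the rule overrides similarity
```

Here similarity evidence for the conflict item actually favors Category B, but the all-or-none rule evidence is stronger, mirroring the claim that the rule, rather than overall similarity, differentiates the two mushrooms.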
We assume that the two pathways can share an attentional allocation. This means that the features that are necessarily part of the rule or rule-selection process (in the verbal system) will also be heavily weighted in the nonverbal system. There is already precedent in the literature for this idea. For example, Brooks and Hannah (2006) argued that verbal rules are essential for directing attention to the features that are relevant to categorization. Harris and Minda (2006) demonstrated that explicit classification encourages rule use and that the same features that are important for the rule may be shared by other, non-rule-based processes and functions.

We assume that under most circumstances, the verbal system has an initial bias. This is in line with other research suggesting that explicit learning systems are the default (Ashby et al., 1998) or that subjects often start learning via analytic means (Jacoby & Brooks, 1984). Furthermore, explicit reasoning follows naturally from the expectations in a standard category learning experiment. We also assume that, because the verbal system relies heavily on working memory and executive functioning, the bias will be absent (or at least less strong) in young children. This is because working memory and the prefrontal cortex (which mediates key executive functions like selection and inhibition) are not fully developed.
3.3. Summary

Our theory is designed to account for the apparent division between the verbal and nonverbal processes that mediate category learning. The verbal system is characterized by the explicit search for and application of verbally described rules. We do not mean that rules will always be present or will always work, and we do not mean that this system works to the exclusion of the other, nonverbal system. Learning by the verbal system means that people are putting their verbal ability (and reasoning capacity) to the service of learning categories. The nonverbal system is characterized by associative learning, similarity, and visual memory, and it operates in conjunction with the verbal system. The allocation of attention for this system can be directed by the verbal system.
4. Experimental Tests of the Theory

We have already discussed a variety of evidence for the verbal/nonverbal distinction. We now concentrate on experiments from our lab and from our collaborators' labs that test specific predictions of our theory. First, we consider a variety of subject effects, because verbal ability differs between humans and other primates and among various developmental groups. Second, we consider cognitive effects. Several different methods
can interfere with the verbal system while leaving the nonverbal system intact, and some tasks interfere with visual processing, and thus with the nonverbal system, while leaving the verbal system intact. Third, we consider the possibility that some modes of category learning that do not explicitly require a classification decision may divert resources from the verbal system to the nonverbal system. Finally, we consider other, as yet untested, predictions about the differential roles of verbal and nonverbal processes.

One key thing to keep in mind is that the verbal/nonverbal distinction may not be the only explanation for these data. However, all of the studies we are about to discuss show how access to verbal processing abilities can shape the learning of categories. Another key thing to keep in mind is that we are not arguing for a dichotomy in which categories are learned via rules or similarity (though there may be cases when that is possible). Rather, we are arguing that in many learning scenarios, people can use verbal ability to assist in learning categories. The category learning process will ultimately involve an interaction between verbal and nonverbal processes, in which verbal rules shape attention to features and perceptual similarity can affect, and sometimes override, the rules.
4.1. Comparisons Across Species

One of the strongest sources of evidence for the role that verbal processing plays in category learning comes from the examination of category learning behavior in nonhuman primates. Nonhuman primates (in this case, rhesus macaques) share many cortical structures with humans (e.g., V1, V2, and the middle temporal area; Preuss, 1999; Sereno & Tootell, 2005). But of course, monkeys do not have the ability to use verbal labels to help solve a category learning problem. They do not have the same ability to recode a visual stimulus into a verbal description, essentially employing a symbolic stand-in for the original stimulus. On the other hand, visual discrimination learning, visual classification learning, and stimulus-response association should be equivalent between the two species. As a result, macaques should learn categories such that their performance can be described on the basis of the perceptual coherence of the categories to be learned. A category set with high within-category similarity and low between-category similarity will be easy for macaques to learn, whereas a category set that has overlapping members or a nonlinear boundary should be more difficult. Humans have many of the same constraints, but should also be able to put their verbal ability to work and should show a distinct advantage for categories that have an optimal verbal rule.

Smith et al. (2004) investigated this prospect by comparing the abilities of monkeys and humans on a set of categorization tasks. They used the six category sets originally used by Shepard, Hovland, and Jenkins (1961) and
created six types of categories from a set of eight stimulus objects (see Figure 4). Each item was defined by three dimensions (size, color, and shape) and each category contained four objects. We will describe each of these six category sets in moderate detail, because they feature prominently in several of the studies described in the next few sections. Under typical learning conditions, the relative ease with which subjects learn these categories follows the pattern (least difficult to most difficult): I < II < III = IV = V < VI (Shepard et al., 1961). Each category set presents specific information-processing demands, and the use of verbal processing affects each category set differently.

Type I is a single-dimensional set, and perfect performance can be attained by forming a straightforward verbal rule using a single proposition (e.g., “if black, then category 1”). As such, a verbal/nonverbal theory predicts easy learning of this category by the verbal system. The nonverbal system could also learn this category without a verbal rule, by learning to associate a cue (black) with a response (category 1), but learning might proceed more gradually.

Figure 4 This figure shows an example of the kinds of stimuli originally used by Shepard et al. (1961), and used in many other studies since. The actual features that are used differ across studies, but the conceptual structure remains the same.
The Type II set is best described by a verbal, disjunctive rule that puts black triangles and white squares in the same category. This should still be relatively easy to learn via the verbal system, since the two-predicate rule is readily verbalized. The nonverbal system should have difficulty learning these categories, because it would be undermined by the category structure. Specifically, the structure of this category set is such that items in a category are as perceptually distant from each other (e.g., black triangles and white squares) as they are similar to members of the opposite category (e.g., white triangles and black squares). Furthermore, each of the relevant cues is nonpredictive on its own and each is equally associated with both categories. As a result, these categories are difficult or impossible for the nonverbal system to learn, because it relies in part on high within-category perceptual similarity, benefits from greater perceptual distance between categories, and is helped by a consistent mapping between cue and response. If the verbal system is not present, not fully developed, or not accessible, learners should have difficulty with this type of category, since they would then have to rely on nonverbal learning.

Type III is a nonlinearly separable category set that can be described as having a rule and some exceptions. The verbal system learns this category accurately by finding the verbalizable rule and memorizing the exceptions. For example, one could learn “black objects and the small white triangle” as the rule for category 1. These categories should place a heavier demand on the verbal system than the Type I or Type II categories because the rule is more complex (Feldman, 2000, 2003). This heavier demand arises from the extra cognitive resources required to learn the exceptions and because attention to all three dimensions is needed in order to learn them.
These categories require verbalizing multiple propositions to be learned exclusively via the verbal system. This category set would be difficult for the nonverbal system to learn to perfection because of the nonlinear boundary and because there is no consistent association of cues and responses that would correctly classify the exceptions; the similarity-based learning of the nonverbal system would therefore be compromised. As with the Type II set, if the verbal system is not fully developed or not fully accessible, the nonverbal system would take over and the learner would have difficulty with this type of category.

Type IV is an FR category set: all category members share the majority of their features with the other category members, but no one feature is perfectly diagnostic. Although this type of category might be describable by a complex rule with multiple propositions (possibly “any two of the following three features” or “black objects and the large white triangle”), the rule is difficult to verbalize and learn. Unlike the nonlinearly separable Type III categories, which can also be learned by a rule-and-exception strategy, the Type IV categories have an FR structure that permits perfect performance by nonverbal, similarity-based mechanisms. Strengthening the association between the three cues (“large size,” “black color,”
and “triangle-shaped”) and a response (“category 1”) will result in the correct response for all the items in the category. As a consequence, the nonverbal system can operate successfully and should dominate in the acquisition of these categories by learning the FR structure, even in cases when the verbal system would otherwise be compromised (Waldron & Ashby, 2001).

The Type V categories are also rule-plus-exception tasks, with the verbal rule leaving an exception item that requires additional cognitive processing to master (e.g., exemplar memorization). In this case, the exception is more difficult because it is less similar to the other category members. As with the Type III categories, these will be difficult for the verbal system because of the extra steps required to find the suboptimal rule and memorize the exception. These categories also pose a difficulty for the nonverbal system, since the nonlinear boundary will defeat a similarity-based system unless individual exemplars are learned.

The Type VI category set is a very ill-defined set because its category members bear no family resemblance to each other. Each category member shares only one feature with members of its own category but two features with several members of the other category. Neither verbal rules nor similarity-based categorization strategies will help performance; the only viable strategy is individual stimulus-response pairing and/or exemplar memorization.

With respect to the availability of verbal resources, the Type II set is likely to benefit the most. It is poorly structured in terms of overall similarity, but because it has an optimal, verbalizable rule, it should be learned more readily than a category set with poor structure and no rule. Accordingly, Smith et al. (2004) expected humans to use their verbal resources (working memory and executive functioning) when learning that category set and to perform relatively well.
They expected monkeys, who have no access to verbal resources, to perform poorly on Type II categories. Smith et al. (2004) trained four macaques (over the course of a month) on each of these category sets. They also trained a group of human subjects as a comparison group. We will highlight two of the most relevant comparisons. First, Smith et al. found that the humans performed as expected, showing a rank-order difficulty of I < II < III = IV = V < VI. That is, unlike the similarity/generalization hypothesis (i.e., the idea that category difficulty should track perceptual coherence), which predicts difficult learning for Type II, humans performed well on Type II, and only Type I was easier. As we suggested earlier, Type II would be easy if one relied on a verbal description of the stimuli and the disjunctive rule. The four monkeys showed a different pattern, with a rank-order difficulty of I < III = IV = V < II < VI, which is exactly what is predicted if these categories were being learned via stimulus generalization. In other words, whereas the monkeys learned in a way that suggested associative learning, the humans learned in a way that suggested verbal processes may have come into play.
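The structural claims about these category sets can be checked directly. The following sketch is our own coding of the standard Shepard et al. (1961) structures (the assignment of features to dimensions is arbitrary). It enumerates the eight three-feature stimuli and shows that in Type I a single cue is perfectly predictive, whereas in Type II every cue taken alone is at chance, which is why a similarity- or association-driven learner is expected to struggle with Type II:

```python
from itertools import product

# The eight Shepard-Hovland-Jenkins stimuli: three binary features
# (e.g., size, color, shape), each coded 0/1.
stimuli = list(product([0, 1], repeat=3))

# Category A membership for three of the six types:
type_I  = {s for s in stimuli if s[0] == 0}        # single-dimension rule
type_II = {s for s in stimuli if s[0] == s[1]}     # disjunctive (XOR-style) rule
type_VI = {s for s in stimuli if sum(s) % 2 == 0}  # no rule, no family resemblance

def cue_validity(category_a, feature, value=0):
    """P(Category A | feature == value): how predictive one cue is on its own."""
    matching = [s for s in stimuli if s[feature] == value]
    return sum(s in category_a for s in matching) / len(matching)

print(cue_validity(type_I, 0))                       # 1.0: one cue is perfect
print([cue_validity(type_II, f) for f in range(3)])  # [0.5, 0.5, 0.5]: no cue helps
```

The Type VI coding above also reproduces the property described in the text: every member of a Type VI category shares exactly one feature with each other member of its own category.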
Verbal and Nonverbal Category Learning
141
However, another analysis made this point more clearly. Smith et al. looked for evidence of rule discovery in the Type I and Type II categories. Recall that Type I could be learned by a single-dimensional rule and Type II could be learned by a disjunctive rule. For each subject, they found the point at which that subject reached a criterion of one perfect block (eight correct stimuli in a row). This criterion block was set as block zero so that regardless of when an individual subject reached the criterion (because of individual performance differences), that criterion block was the starting point. Smith et al. then averaged across subjects for the blocks leading up to the criterion and the blocks following the criterion block and plotted these values with the pre- and postcriterion blocks on the X axis and proportion correct on the Y axis. Figure 5A and C shows the results of the monkeys on Types I and II. Their performance is indicative of similarity generalization. In both cases, the learning curve suggests gradual acquisition, and the criterion block is probably due to chance (a few good ‘‘guesses’’). The data from humans, shown in Figure 5B and D, reveal a different pattern. Humans show gradually increasing performance with a spike to the criterion, and then near-perfect performance after that. Smith et al. suggested that this pattern was clear evidence of rule discovery by the humans. Once subjects learned this optimal rule, they continued to use it, and their performance stayed nearly perfect. This result strongly suggests that there are two ways to learn the same kind of category. Monkeys learned these categories, but without discovering the rule, and clearly without reliance on any kind of verbal process. Humans showed a pattern that suggested rule use, and we argue that this rule came about because the humans recruited verbal ability to find it. This difference is not present for categories that do not have a verbal rule.
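The alignment procedure behind this analysis can be sketched as follows. This is a minimal Python illustration of the idea; expressing the criterion as a proportion-correct threshold per block, the function names, and the synthetic data are our assumptions, not Smith et al.'s code.

```python
def align_at_criterion(block_scores, criterion=1.0):
    """Re-index one subject's learning curve so that the first block meeting
    the criterion sits at offset 0. Earlier blocks get negative offsets;
    later blocks get positive offsets."""
    crit = next(i for i, p in enumerate(block_scores) if p >= criterion)
    return {i - crit: p for i, p in enumerate(block_scores)}

def averaged_backward_curve(all_subjects):
    """Average proportion correct at each pre-/postcriterion offset,
    pooling subjects who reached criterion at different points."""
    pooled = {}
    for scores in all_subjects:
        for offset, p in align_at_criterion(scores).items():
            pooled.setdefault(offset, []).append(p)
    return {off: sum(ps) / len(ps) for off, ps in sorted(pooled.items())}

# Two hypothetical subjects reaching criterion at different blocks:
curve = averaged_backward_curve([[0.50, 0.60, 1.00, 0.95],
                                 [0.50, 1.00, 0.90]])
```

On a curve built this way, a discontinuous jump at offset 0 followed by sustained near-perfect performance is the rule-discovery signature, whereas a gradual rise with only a transient peak at the criterion block is the similarity-based signature.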
Both humans and monkeys found the Type IV categories to be moderately difficult because they have a moderate FR structure and no easily verbalizable rule. So both species resorted to the similarity-based visual system. Furthermore, both humans and monkeys displayed similar performance on dot-pattern categories (Smith & Minda, 2001; Smith et al., 2008), suggesting again that the fundamental difference between the two species is humans’ access to verbal and executive processing.
4.2. Developmental Effects

The comparative research tests a core prediction about the existence of two category learning systems. That is, humans can use verbal ability to help in learning certain categories—those with an optimal verbal rule. But given that humans have access to verbal ability, working memory, and executive processing, how and when might these abilities reveal themselves developmentally? Surely infant categorization is not verbal but is similarity-based instead (Quinn, Palmer, & Slater, 1999; Sloutsky, 2003;
142
John Paul Minda and Sarah J. Miles
[Figure 5: four panels plotting percent correct (50–100) against pre- and postcriterion trial block. Panel A: Monkey Type I; Panel B: Human Type I; Panel C: Monkey Type II; Panel D: Human Type II.]
Figure 5 An example of rule discovery by humans (panels B and D) but not monkeys (panels A and C). This figure is adapted from the figure shown in Smith et al. (2004).
Sloutsky & Fisher, 2004). But with age comes a greater reliance on rules and probably a greater recruitment of verbal ability to learn categories. We examined this idea in our lab (Minda et al., 2008) by comparing the abilities of children (3, 5, and 8 years old) and adults to learn a subset (Types I, II, III, and IV) of the same categories described above (Shepard et al., 1961; Smith et al., 2004), although the stimuli were now presented with faces. See Figure 6 for an example of the stimuli and the task design. The children were seated at the computer along with an experimenter and were told that they would be playing a game in which they would see pictures of different creatures on the screen. They were told that some of these creatures lived in the mountains and some lived in the forest. Their job was to help these creatures find their homes by pointing to the correct place on the screen. On each trial, the stimulus appeared in the center of the screen and the two category icons (mountains and trees) were shown to the
left and the right of the stimulus. When the child pointed to a location on the screen, the experimenter made the selection with a mouse, and the stimulus moved to where the child had pointed. The stimulus was animated to show a smile for 2 s as feedback for a correct choice. For an incorrect classification, the stimulus frowned for 1 s and then moved to the correct location and smiled for 2 s as feedback.2
[Figure 6: side-by-side examples of a correct trial and an incorrect trial.]
Figure 6 An example of a CORRECT trial and an INCORRECT trial in the Minda et al. (2008) experiments. Correct classifications were always indicated with a smiling stimulus and incorrect classifications were indicated by a frowning stimulus, after which the stimulus moved to the correct location and smiled.
2 Note that although there were consistent mappings of stimulus to response, the feedback was not presented immediately after the response, since the experimenter required a second or two to make the selection. This would undermine the strictly procedural account of learning proposed by COVIS (Maddox et al., 2003).
[Figure 7: four panels (A: Experiment 1 Type I; B: Type II; C: Type III; D: Type IV) plotting proportion correct (0–1.0) against learning block (1–6) for each age group.]
Figure 7 Average performance at each block (across subjects) for each category set and each age group. Note that 3YO = 3-year-old children, 5YO = 5-year-old children, 8YO = 8-year-old children, Ad. = adult subjects. This figure is adapted from Minda et al. (2008).
The results from one experiment are shown in Figure 7. Adults and children differed in how well they learned the Type II categories, which required the formation of a disjunctive rule, and the Type III categories, which required a rule-plus-exception strategy. Adults performed relatively well on these categories whereas children performed very poorly. However, children and adults displayed similar levels of performance on the Type I categories, which were defined by a rule that was simple, easy to describe, and directly related to a perceptual cue. Children and adults also displayed similar levels of performance on the Type IV FR categories, because these categories could be learned without verbal processing, by the nonverbal similarity-based system instead. Consistent with the predictions of the verbal/nonverbal distinction, children generally lagged behind adults when learning categories that depended on complicated verbal rules but not when learning categories that required a simple rule, or when the categories did not depend on verbal rules.
More recently (Minda & Miles, 2009), we asked children (age 5) and adults to learn a set of categories that could be acquired by finding a single-feature rule or by learning the overall FR structure. Subjects learned to classify drawings of bugs that varied along five binary dimensions: antenna (forward-facing or backward-facing), head (circle or square), wings (rounded or pointy), legs (bent or straight), and tail (bent or straight). The category set was made up of 10 objects, with 5 objects belonging to each of two categories. The binary structure for Category A and Category B is shown in Table 1. The values 1 and 0 indicate the assigned feature values for each of the five dimensions. For example, round head, forward-facing antenna, rounded wings, straight legs, and a straight tail were each assigned a value of 1, and the complementary set of features was assigned a value of 0. The item 1 1 1 1 1 represents the prototype for Category A and the item 0 0 0 0 0 represents the prototype for Category B, while the remaining category members share four features with their own category’s prototype and one feature with the opposite category’s prototype. Note that the
Table 1 Stimuli Used by Minda and Miles (2009).

Stimulus     CA   d2   d3   d4   d5
Category A
  1           1    1    1    1    1
  2           1    0    1    1    1
  3           1    1    0    1    1
  4           1    1    1    0    1
  5           1    1    1    1    0
Category B
  6           0    0    0    0    0
  7           0    1    0    0    0
  8           0    0    1    0    0
  9           0    0    0    1    0
  10          0    0    0    0    1
Transfer
  11          0    1    1    1    1
  12          0    0    1    1    1
  13          0    1    0    1    1
  14          0    1    1    0    1
  15          0    1    1    1    0
  16          1    0    0    0    0
  17          1    1    0    0    0
  18          1    0    1    0    0
  19          1    0    0    1    0
  20          1    0    0    0    1
first dimension is the criterial attribute (CA), upon which the optimal rule is based. The feature that corresponded to the CA was counterbalanced across participants. Perfect categorization performance could be attained by learning the CA (e.g., ‘‘round heads in Category A, otherwise Category B’’) or by learning the FR structure. Transfer stimuli were used to distinguish between CA and FR categorization strategies. That is, the feature corresponding to the CA indicated membership in one category but the overall FR indicated membership in the opposite category. As shown in Table 1, the first dimension of the first transfer stimulus (0 1 1 1 1) provided CA evidence for Category B, but the overall FR evidence favored Category A. When we examined their learning data, we found that the children and adults did not differ from each other in terms of how well they had learned the categories. However, children and adults did differ in their classifications of the test stimuli: children were significantly less likely than adults to classify the test stimuli according to the CA rule (Figure 8). These results echo earlier developmental work on the holistic/analytic distinction in category learning, which found that children tended to prefer overall similarity and adults tended to prefer rules (Kemler Nelson, 1984). In addition, research using a similar category set found that adults who were asked to learn the categories in the presence of indirect feedback, or via incidental means, were also less likely to find the CA, possibly because they were not relying on their verbal systems (Kemler Nelson, 1984; Minda et al., 2008). The results of both of the studies we’ve just discussed are consistent with a verbal/nonverbal distinction.
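The opposing predictions of the two strategies for the transfer stimuli follow directly from the Table 1 structure. A small Python sketch (the function names are ours; the stimulus codes come from Table 1):

```python
PROTO_A = (1, 1, 1, 1, 1)  # Category A prototype from Table 1
PROTO_B = (0, 0, 0, 0, 0)  # Category B prototype

def ca_rule(item):
    """Single-feature rule on the criterial attribute (dimension 0)."""
    return 'A' if item[0] == 1 else 'B'

def fr_choice(item):
    """Family-resemblance choice: pick the prototype sharing more features.
    Because the two prototypes are complementary, matches to B are simply
    the features that fail to match A."""
    matches_a = sum(f == p for f, p in zip(item, PROTO_A))
    return 'A' if matches_a > len(item) - matches_a else 'B'

training_item = (1, 0, 1, 1, 1)  # stimulus 2: both strategies agree on A
transfer_item = (0, 1, 1, 1, 1)  # stimulus 11: the CA says B, but four of
                                 # five features match the A prototype
```

For every training item the two strategies give the same answer, which is why learning data alone cannot separate them; only the transfer items, where `ca_rule` and `fr_choice` disagree, reveal which strategy a subject is using.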
The verbal system should learn these categories by facilitating the testing of various rules and eventually allowing the subject to apply a verbal description for the correct single-dimensional rule. We assume that all of this testing, and considering, and applying happens within working memory. It is an active process. Subjects are explicitly aware and are trying to find a rule. Adults default to this verbal system under most classification learning conditions (Ashby et al., 1998; Minda et al., 2008; Zeithamova & Maddox, 2006), and so they apply the rule to classify the test stimuli. However, the nonverbal system could also learn these categories by relying on the good FR structure. The FR structure is difficult to verbalize because of the number of propositions in the verbal rules, but less difficult to learn nonverbally because of the straightforward relationship between features and responses. Children, unlike adults, have more difficulty relying on the verbal system in part because the prefrontal cortex has not sufficiently developed to allow the executive processing ability needed to search for rules (Bunge & Zelazo, 2006; Casey et al., 2004). The verbal system also relies on working memory and executive functioning to test and store hypotheses and rules. As such, the category learning differences between children and adults are consistent with other observed differences in working memory ability between children and
[Figure 8: Panel A plots block-by-block learning (proportion correct against block, 1–5) for children and adults; Panel B plots the proportion of criterial attribute (CA) responding for adults and children (n = 17 per group).]
Figure 8 Panel A shows category learning performance for children and adults. Panel B shows the proportion of criterial attribute (CA) responding by children and adults in the transfer stage, with individual subject data shown as points. This figure was adapted from Minda and Miles (2009). Note: error bars denote SEM.
adults (Gathercole, 1999; Swanson, 1999). Working memory plays a large role in the verbal system and is required to learn categories for which the optimal rule is verbalizable (Waldron & Ashby, 2001; Zeithamova & Maddox, 2006). Adult subjects (but children less so) rely on verbal working
memory to help learn these categories with the verbal system. Without the efficient use of the verbal system, the child is less able to efficiently engage in hypothesis testing. As a result, children could still learn these categories, but many of the children may have relied instead on the nonverbal system, and their subsequent classifications of the transfer stimuli were not likely to be based on a rule. Although we claim that the verbal system is less effective in children, the results of our experiment suggest that it can (and does) operate. Some of the children in this study were able to learn rules, and many continued to make rule-based responses in the transfer phase. It is possible that some children did learn the rule, but were unable to resolve the conflict between the rule and the FR during the transfer phase. This would be expected to happen in children, since their prefrontal cortex areas are less well developed (compared with adults) and they would have difficulty inhibiting the response to the FR structure. Furthermore, Figure 8B reveals some subjects who relied on other non-rule-based strategies. These other strategies could be a mix of responses from the two systems (some rule-based, some similarity-based) or may be imperfect exemplar-based strategies. At this point our data do not allow a strong conclusion about this subset of subjects, and additional research is needed to understand the interaction of these two learning systems in general and at different stages of development.
4.3. Interference Effects

In order to test the hypothesis that the explicit system and verbal working memory play a crucial role in learning rule-defined categories but not non-rule-defined categories, we turned to a dual-task methodology. The rationale is that as subjects are engaged in the category learning task, they are also asked to engage in a secondary task. This task can be designed such that it will interfere with either verbal or visual resources, and so will interfere with learning by one system and not the other. We describe here another experiment from Minda et al. (2008) in which three groups of adults learned four category sets (originally presented to children and adults in the earlier study we discussed). Participants were assigned to one of the three concurrent-task conditions and were assigned to learn one of the four category sets (Types I–IV from Figure 4). In the no concurrent-task condition, subjects saw a stimulus on the screen and were instructed to press the ‘‘1’’ or the ‘‘2’’ key to indicate category 1 or category 2, respectively. After responding, subjects were given feedback indicating a correct or an incorrect response. A verbal concurrent-task condition was similar to the no-task condition except that as subjects were learning to classify the stimuli, they performed a coarticulation task in which random letters appeared at the rate of one per second in the center of the screen, right below the stimulus. Subjects read these letters aloud as they
were viewing the stimuli and making responses. A nonverbal concurrent-task condition was similar to the no-task condition except that as subjects were learning to classify the stimuli, subjects tapped their finger to match an asterisk that flashed on the screen at the rate of one per second. The key finding, shown in Figure 9, was that subjects in the verbal concurrent-task group were impaired relative to both the nonverbal concurrent-task and the no-task groups on the Type II categories but not on the Type I, III, or IV categories. That is, verbal interference seriously interrupted the learning of categories that depended most strongly on access to verbal resources. These were also the same categories that were difficult for monkeys and difficult for children. However, the nonverbal concurrent task did not appear to disrupt performance at all. This suggests that learning the Type II categories well depends on having access to verbal working memory. Learning the other categories does not seem to depend on verbal working memory as strongly. These results are consistent with other
[Figure 9: four panels (A: Experiment 2 Type I; B: Type II; C: Type III; D: Type IV) plotting proportion correct (0–1.0) against learning block (5–20) for the no-task (NT), verbal-task (VT), and nonverbal-task (NVT) groups.]
Figure 9 Average performance at each block (across subjects) for each category set and each experimental group. This figure is adapted from Minda et al. (2008). Note that NT = no task, VT = verbal task, and NVT = nonverbal task.
findings in the literature that demonstrate a role for verbal working memory in the learning of rule-described categories but not in the learning of non-rule-described categories (Waldron & Ashby, 2001; Zeithamova, Maddox, & Schnyer, 2008).3
4.4. Indirect Category Learning

A key prediction of our theory is that diverting verbal resources from the main task of learning categories will impair rule learning because it will knock out the verbal system. Instead, learning should proceed via the nonverbal system. Other multiple-systems theories make similar assumptions. In COVIS, for example, a procedural learning system takes over when the verbal system is not operating, in which case the response and feedback must be closely associated (Ashby et al., 1998). We do not make the same assumption and suggest that there are other nonverbal, similarity-based learning mechanisms in addition to the procedural system envisioned by COVIS. If true, categories may be learned by the nonverbal system even if the feedback is not directly connected to the stimulus and response. Minda and Ross tested this prediction by devising an indirect learning paradigm (Minda & Ross, 2004). Unlike direct learning, in which a subject is explicitly instructed to learn a category and may be able to use verbal processing to do so, indirect learning occurs when the subjects are trying to learn something else about the stimuli. In this case, the subject may not be aware of the categories, but learning them will still be beneficial to succeeding in the task. Category learning occurs as a matter of course. For example, doctors may learn to categorize patients into a number of useful, but nondiagnostic, categories—such as patients who are not compliant or who have no prescription insurance. They may use the categories when making management decisions about the patient, but may never receive direct feedback on the classification per se (Devantier, Minda, Goldszmidt, & Haddara, 2009). Minda and Ross carried out an experiment in which some subjects learned first to classify a series of imaginary creatures into two groups and then to predict how much food each creature would eat.
The creatures appeared in three different sizes, and larger animals always ate more than smaller animals. But animals in one category also ate more than the same-sized animals in the other category. Think of them as two species in which one has a higher metabolism (subjects just saw the labels A and B; they were
3 One might wonder why the nonverbal task did not seem to affect Type IV learning by the nonverbal system. We think this is because the secondary task was a purely motor task. This suggests that the hypothesized procedural learning system of COVIS is incomplete and hints at something about the basic cognitive processes used by the nonverbal system. A visual task might affect FR learning, though. And that is something we’re working on in our lab now.
not told about the connection between category and eating). Furthermore, the light-eater/heavy-eater categories were defined on the basis of good FR (4 out of 5 features) as well as a perfectly predictive single-feature rule. Another group of subjects did not perform the classification task but only made the food prediction. But since the correct amount of food depended on category, these subjects would have to learn the categories in order to perform well on the prediction task. This prediction-only condition tested the idea that indirect feedback (the correct food amount was indirectly related to the category) would encourage more similarity-based learning than in the classification-and-prediction group, because subjects’ verbal abilities are occupied with the prediction task and not with learning to classify. In the classification-and-prediction group, subjects’ verbal abilities were free to test hypotheses and search for the rule. The test of these competing strategies (rule or overall similarity) was carried out with transfer stimuli that presented a rule feature associated with one category but the remaining features associated with the opposite category. This is the same idea used in other research we have already described (Allen & Brooks, 1991; Kemler Nelson, 1984; Minda & Miles, 2009). In other words, a creature might have a light-eater tail, but a heavy-eater head, eyes, antennae, etc. Minda and Ross found that subjects in both groups were able to learn the categories well (i.e., performance did not differ significantly between groups). However, subjects in the prediction-only group were less likely to find the rule and more likely to learn the FR structure, which was not easily verbalized. Furthermore, computational modeling suggested a broader distribution of attention by subjects in the prediction-only group.
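The structure of the prediction task can be illustrated with a toy payoff function. The specific amounts below are hypothetical; only the ordinal relations (larger creatures eat more, and heavy eaters out-eat light eaters at every size) reflect the design described above.

```python
def correct_food_amount(size, species):
    """Hypothetical payoff: the base amount grows with size (1-3), and the
    heavy-eating species gets a fixed bonus at every size. The numbers are
    made up; only their ordering reflects the Minda and Ross design."""
    base = {1: 3, 2: 6, 3: 9}[size]
    return base + (2 if species == 'heavy' else 0)

# Two same-sized creatures differ only in their (unlabeled) category, so
# accurate food prediction is impossible without learning the categories.
```

Because size alone never fully determines the correct amount, a subject who only ever makes food predictions must pick up the category distinction as a matter of course, which is what makes the feedback indirect.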
Subjects who learned to classify first and then made predictions tended to find the rule and, as a result, tended to have a narrow attentional distribution. In short, diverting resources from the main task of categorization resulted in less rule learning and more FR learning. This is similar to research by Brooks, Squire-Graydon, and Wood (2007), who used a different indirect learning paradigm to show that subjects who indirectly learned to categorize did not explicitly consider the category’s structure. In their experiment, some subjects were explicitly instructed to categorize creatures and some subjects learned to determine the number of moves a creature needed to make to reach a goal. Critically, the type of move that a creature could make depended on its category membership, so that categorization was necessary to solve the problem. Brooks et al. reasoned that subjects in the indirect condition never explicitly considered a creature’s category membership and so would be less knowledgeable of the category’s non-rule-defined structure. Although both groups of subjects categorized the creatures equally well, subjects in the direct condition were aware that no feature was perfectly predictive of category membership because they had tried, and failed, to find the rule. Subjects in the indirect condition were not
aware of this. This finding confirmed that when resources are diverted to another task during indirect category learning, the explicit testing of categorization rules does not take place, resulting in reduced knowledge of the category structure. Not only does an indirect task decrease rule learning by the verbal system, it also decreases explicit consideration of the category structure so that categories are learned in a less explicit manner using the nonverbal system.
4.5. Other Predictions

4.5.1. Mood Effects
We’ve described the results of testing a number of predictions that follow from the verbal/nonverbal approach to category learning. But there are several other predictions that remain to be tested and that we are working on in our lab. As an example, consider the effects of depression and mood. What people commonly refer to as ‘‘depression’’ is referred to as ‘‘major depression’’ in the DSM-IV. According to the DSM-IV, depression is a psychiatric syndrome comprising multiple symptoms, including sad mood and/or anhedonia, appetite and weight changes, sleep changes, decreased energy, psychomotor agitation, and a decreased ability to concentrate or think (American Psychiatric Association, 1994). A number of these symptoms are likely to have an effect on basic category learning, especially learning by the verbal system, since any reduction in executive functioning should impair the verbal system. Indeed, Ashby et al. (1998) predicted that depressed subjects should be impaired on rule-based, explicit category sets relative to controls. Earlier research has found some support for this idea. Smith, Tracy, and Murray (1993) compared the category learning performance of depressed subjects (with mean BDI scores ranging from 17.25 to 36.6) and a control group in two experiments. In both experiments, subjects learned a CA (verbal) category set and an FR (nonverbal) category set. As predicted, depressed subjects were impaired at rule-based categorization but unimpaired at FR categorization, relative to controls. These results confirm the importance of executive functioning for the verbal system, and show that the nonverbal system still functions well when executive functioning is depleted. While the results of Smith et al. (1993) support the prediction that depressed subjects should be impaired on verbal, rule-based category learning, it is an open question whether mood, rather than depression, will affect categorization performance.
Specifically, we predict that negative affect will impair performance on rule-based tasks because we expect rule selection and hypothesis-testing abilities to be diminished relative to controls. At the same time, we speculate that positive affect may actually improve performance on rule-described categories, because of the enhanced processing capacity that may come from positive affect
(Ashby, Isen, & Turken, 1999). We are currently evaluating this set of predictions in our lab by inducing a positive (or negative) mood in subjects and then asking them to learn either a rule-defined category or a non-rule-defined category (as in Figure 2). In this case, we predict that positive mood will enhance learning for the rule-defined categories but not for the non-rule-defined categories.

4.5.2. Dot-Pattern Categories
A second prediction that follows from our verbal/nonverbal distinction concerns prototype learning by children. Because dot-pattern learning seems to require very little verbal processing or even executive processing, we predict that young children should be as good as adults on this task. This is a straightforward prediction, since good dot-pattern categorization has been observed in monkeys (Smith et al., 2008) and in amnesics (Knowlton & Squire, 1993). Furthermore, in the same way that the dual-task methodology has been used to interfere with rule-based categories but not with information-integration categories, we expect that dot-pattern prototype abstraction will not be hindered by a dual verbal task, but may be hindered by a dual visual task. This type of finding would support our suspicion that some types of nonverbal categorization are particularly reliant on visual processing.

4.5.3. Language Effects
As another example, consider the condition known as specific language impairment (SLI). SLI is a diagnosis describing problems in the acquisition and use of language, typically in the context of otherwise normal development (Leonard & Deevy, 2006). These problems might reflect difficulty in combining and selecting the speech sounds of language into meaningful units and might manifest as the use of short sentences and problems producing and understanding syntactically complex sentences. SLI has been linked with working memory problems as well (Gathercole & Baddeley, 1990).
Since these children have reduced verbal capacity, they should be impaired relative to control subjects in learning categories that are rule-defined as opposed to categories that are non-rule-defined. In fact, it is possible that these subjects might be better than age-matched controls at learning non-rule-defined categories, like FR categories and information-integration categories, because the nonverbal system would not have to compete with and overcome the verbal system. Indeed, very recent research has examined individual working memory capacities and has found that subjects with lower working memory capacity actually perform better than other subjects on non-rule-described categories (DeCaro, Thomas, & Beilock, 2008).
5. Relationship to Other Theories

5.1. Verbal and Nonverbal Learning and COVIS
We have tried to highlight the relative importance of verbal and nonverbal processes for category learning. We’ve described a two-system model, and we suggest that these two systems operate simultaneously during category learning. Obviously, this description shares many assumptions with COVIS. Both assume a verbal system that relies on working memory and executive functioning. The overlap between COVIS and our verbal/nonverbal account is unavoidable. After all, there is converging evidence, discussed in the first section of this chapter, that one way to learn categories is to engage in hypothesis testing and to rely on verbal rules (Patalano et al., 2001; Smith et al., 1998). In other words, there are a number of models and theories that posit a verbal system. But these two theories differ in how they describe the other, nonverbal category learning system. COVIS assumes that categories can also be learned by an implicit system. The implicit system is mediated by structures in the tail of the caudate nucleus, and it seems to require a close connection between the stimulus, the response, and the reward. Whereas COVIS describes an implicit (procedural) learning system that learns to associate stimuli with various regions of perceptual space, we assume a much larger role for the nonverbal system. That is, we assume that categories can be learned by this system without feedback, or without a direct connection between stimulus and response. These are all viable ways of learning categories, and they all end up producing performance that is similarity-based. That is, these modes of category learning tend to result in performance that shows less of an emphasis on single-feature rules and more emphasis on overall FR. Why do we suggest this expansion for nonverbal learning? Some of the evidence comes from our research on indirect learning.
For example, the indirect learning paradigm employed by Minda and Ross (2004) did not have a direct connection between stimuli and response, as should be required by the implicit system in COVIS. Feedback was delayed, and the response and feedback were only indirectly related. And yet the subjects learned the categories as well as a direct classification group did. Interestingly, the indirect learning subjects took a little longer and were more likely to learn FRs. In other words, they learned in much the same way as the implicit system in COVIS predicts, but without the direct response and feedback connection. Consider also the learning of the Shepard et al. (1961) stimuli by children and adults (Minda et al., 2008). The children were taught categories without the direct connection between response and feedback. Although they still received feedback, the experimenter, rather than the
Verbal and Nonverbal Category Learning
child, carried out the categorization response. Furthermore, the feedback took several seconds to be displayed (the stimulus smiled for a correct classification and frowned for an incorrect one). Yet the children still learned the FR categories as well as adults did (and via a nonverbal system), even though the adults made their own responses with a key press. We think that the basic version of COVIS may have difficulty explaining these results, and that a more broadly construed nonverbal system explains them better. In the case of indirect learning, for example, the verbal system was dealing with the predictions, and so the categories were learned in a nonverbal way, despite the disconnect between the stimulus and response.
5.2. Single-System Models

A multiple-systems or dual-process account of category learning, like the one we advocate in this chapter, has traditionally been contrasted with a single-system account of category learning (Nosofsky & Johansen, 2002; Zaki & Nosofsky, 2001). In general, when two models or theories predict learning equally well, the model or theory that uses the simplest set of representational assumptions (i.e., a single system) is preferable. A common version of a single-system theory is exemplar theory—formalized as the Generalized Context Model or GCM—which assumes that people learn categories by storing exemplar traces and making classifications on the basis of similarity to these stored exemplars (Nosofsky, 1987, 1988, 1991). With respect to learning the Shepard et al. (1961) categories used by Smith et al. (2004) and Minda et al. (2008), the exemplar model can predict the basic ordering effect observed on the Type I–VI stimuli. That is, Type I is learned the most quickly, followed by Type II, and so on (Kruschke, 1992; Nosofsky, Gluck, Palmeri, McKinley, & Glauthier, 1994). For Type II categories, the exemplar model learns that only two dimensions are relevant, which reduces the amount of information to be learned, and so the GCM predicts good learning. In its basic form, an exemplar model has no way to account for poor learning of Type II categories by monkeys and young children, nor can it predict the rule discovery observed in humans and shown in Figure 5. The GCM can make additional assumptions in order to predict poor learning of the Type II categories. For example, if the stimuli are created from integral dimensions (e.g., hue, saturation, and brightness) rather than separable dimensions (e.g., size, color, and shape), the GCM can adjust one of its parameters (the exponent in the distance equation), and the result is that Type II learning is slowed. Nosofsky and Palmeri (1996) found that when subjects learned the Shepard et al.
(1961) categories with integral-dimension stimuli, learning on Type II was affected more than the learning of the other category types. So the GCM can account for the effects
discussed earlier by treating the dimensions as integral, because separable dimensions can be described individually by verbal processing whereas integral dimensions cannot. Although the GCM is a single-system model, it can capture many of the same effects that we have discussed, albeit at the expense of simplicity. So in the end the GCM solves this problem in a manner consistent with earlier work suggesting that children tend to perceive objects as integral wholes (Offenbach, 1990; Smith, 1989; Smith & Shapiro, 1989). However, an exemplar model does not make an a priori assumption about whether children or adults should perceive the stimulus dimensions as integral or separable, and it has no explanation for why an individual’s ability to treat dimensions as integral versus separable should be affected by working memory demands. On the other hand, a multiple systems approach, like the verbal/nonverbal distinction that we are proposing or like COVIS, makes clear predictions about both developmental and working memory load differences because of the role it assigns to prefrontal cortical areas in the use of the verbal system. As a result, we prefer a multiple systems account of category learning. Another single-system model that can account for a variety of phenomena is the SUSTAIN model, a clustering model of category learning (Love, Medin, & Gureckis, 2004). This model does not assume that categories are learned via different brain systems or even different processes. Instead, it assumes that categories can be learned as clusters of similar stimuli. A single cluster can represent one or many exemplars. As such, SUSTAIN can represent categories with a single prototype, several prototypes, or many single exemplars. Furthermore, SUSTAIN has mechanisms for both supervised learning (e.g., explicit, feedback-driven classification) and unsupervised learning.
SUSTAIN has been successfully applied to a broad range of developmental and patient data (Love & Gureckis, 2007). In SUSTAIN, reduced memory capacity is modeled by reducing the number of clusters that the model forms (i.e., less memory = fewer possible clusters). In general, the mechanisms for forming new clusters are thought to be mediated in part by the prefrontal cortex as well as the hippocampus (Love & Gureckis, 2007). This means that tasks that impair or interfere with functions carried out by these areas (i.e., explicit memory and executive functions) should result in greater FR learning. For example, for Type II categories, a reduced number of clusters would result in impaired learning, similar to the impaired learning observed in the young children in Figure 7 (Minda et al., 2008). However, a reduced number of clusters would not be expected to have as much effect on the FR categories, like the Type IV categories. In short, although SUSTAIN does not posit separate verbal and nonverbal systems, it accounts for the dissociations by appealing to many of the same cortical areas that most multiple systems approaches do. The obvious shortcoming is that
SUSTAIN, as with the GCM, has difficulty making a priori predictions about working memory capacity, developmental differences, and cross-species comparisons. In addition, it is not clear how any single-system model would predict the rule-discovery behavior that subjects seem to show.
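To make the exemplar account discussed in this section concrete, the GCM's core computation (summed similarity to stored exemplars with an exponential-decay similarity function, and a distance exponent that shifts between city-block and Euclidean metrics for separable versus integral dimensions) can be sketched as follows. This is a minimal illustrative sketch, not Nosofsky's full model: parameter names, values, and the Type I example structure are chosen for illustration only.

```python
import math

def similarity(x, y, c=1.0, r=1):
    # Similarity decays exponentially with distance: exp(-c * d).
    # r=1 (city-block metric) is typical for separable dimensions;
    # r=2 (Euclidean metric) for integral dimensions.
    dist = sum(abs(a - b) ** r for a, b in zip(x, y)) ** (1.0 / r)
    return math.exp(-c * dist)

def gcm_probabilities(probe, exemplars, c=1.0, r=1):
    # P(category | probe): summed similarity of the probe to that
    # category's stored exemplars, normalized over all categories
    # (a Luce choice rule over summed similarities).
    summed = {cat: sum(similarity(probe, ex, c, r) for ex in exs)
              for cat, exs in exemplars.items()}
    total = sum(summed.values())
    return {cat: s / total for cat, s in summed.items()}

# Shepard et al. (1961) Type I structure over three binary dimensions:
# category membership is determined by the first dimension alone.
exemplars = {
    "A": [(0, 0, 0), (0, 0, 1), (0, 1, 0), (0, 1, 1)],
    "B": [(1, 0, 0), (1, 0, 1), (1, 1, 0), (1, 1, 1)],
}
probs = gcm_probabilities((0, 1, 1), exemplars)
```

Because the probe matches every A exemplar on the decisive first dimension, summed similarity favors category A; raising c (specificity) or changing r shifts how sharply the model distinguishes the categories, which is the kind of parameter adjustment described above for integral-dimension stimuli.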
6. Conclusions

Throughout this chapter, we have been making the case that people learn categories by relying on verbal abilities but also by relying on nonverbal processes. As is clear from our review of the literature, this is not a new argument. It is an issue that has been central to many developments in the psychological study of category learning. We feel that our proposal—concentrating on how and why subjects rely on verbal processing and nonverbal processing—occupies a middle ground between strongly neurobiologically motivated models like COVIS and single-system approaches like the GCM or SUSTAIN. Any complete model of category learning has to deal with the reality that people do recruit explicit reasoning abilities and verbal processes when they are learning new categories and when they are making classifications. In other words, subjects really do try to find rules and may use them if they can. We are not claiming that this indicates the existence of a separate and abstract rule system. But we are claiming that this is one clear approach that people take when learning categories. Any complete model will also have to deal with the reality that some categories have no rule, that subjects sometimes ignore the rules, or that subjects learn categories when they cannot or do not verbalize anything about them. In some cases, this nonverbal, similarity-based learning may be influenced by attempts to learn verbal rules, and in other cases it may proceed implicitly. The review of the literature presented here, and our own work, suggests that there is much exciting work to be done on how all of these cognitive processes come together in the behaviors of category learning and categorization.
REFERENCES

Allen, S. W., & Brooks, L. R. (1991). Specializing the operation of an explicit rule. Journal of Experimental Psychology: General, 120, 3–19.
American Psychiatric Association. (1994). Diagnostic and statistical manual of mental disorders (4th ed.). Washington, DC: American Psychiatric Association.
Ashby, F. G., Alfonso-Reese, L. A., Turken, A. U., & Waldron, E. M. (1998). A neuropsychological theory of multiple systems in category learning. Psychological Review, 105, 442–481.
Ashby, F. G., & Ell, S. W. (2001). The neurobiology of human category learning. Trends in Cognitive Sciences, 5, 204–210.
Ashby, F. G., Ell, S. W., & Waldron, E. M. (2003). Procedural learning in perceptual categorization. Memory & Cognition, 31, 1114–1125.
Ashby, F. G., Isen, A. M., & Turken, A. U. (1999). A neuropsychological theory of positive affect and its influence on cognition. Psychological Review, 106, 529–550.
Baddeley, A. (2003). Working memory: Looking back and looking forward. Nature Reviews Neuroscience, 4, 828–839.
Baddeley, A., & Hitch, G. (1974). Working memory. In G. Bower (Ed.), The psychology of learning and motivation (Vol. 8, pp. 47–89). New York, NY: Academic Press.
Baddeley, A., Lewis, V., & Vallar, G. (1984). Exploring the articulatory loop. Quarterly Journal of Experimental Psychology A: Human Experimental Psychology, 36, 233–252.
Brooks, L. R. (1978). Nonanalytic concept formation and memory for instances. In E. Rosch & B. Lloyd (Eds.), Cognition and categorization (pp. 169–211). Hillsdale, NJ: Erlbaum.
Brooks, L. R., & Hannah, S. D. (2006). Instantiated features and the use of "rules." Journal of Experimental Psychology: General, 135, 133–151.
Brooks, L. R., Squire-Graydon, R., & Wood, T. (2007). Diversion of attention in everyday concept learning: Identification in the service of use. Memory & Cognition, 35(1), 1–14.
Bruner, J. S., Goodnow, J. J., & Austin, G. A. (1956). A study of thinking. New York, NY: Wiley.
Bunge, S. A., & Zelazo, P. D. (2006). A brain-based account of the development of rule use in childhood. Current Directions in Psychological Science, 15, 118–121.
Casey, B. J., Davidson, M. C., Hara, Y., Thomas, K. M., Martinez, A., Galvan, A., et al. (2004). Early development of subcortical regions involved in non-cued attention switching. Developmental Science, 7, 534–542.
Cohen, A. L., Nosofsky, R. M., & Zaki, S. R. (2001). Category variability, exemplar similarity, and perceptual classification. Memory & Cognition, 29, 1165–1175.
Davis, T., Love, B. C., & Maddox, W. T. (2009). Two pathways to stimulus encoding in category learning? Memory & Cognition, 37, 394–413.
DeCaro, M., Thomas, R., & Beilock, S. (2008). Individual differences in category learning: Sometimes less working memory capacity is better than more. Cognition, 107, 284–294.
Devantier, S. L., Minda, J. P., Goldszmidt, M., & Haddara, W. (2009). Categorizing patients in a forced-choice triad task: The integration of context in patient management. PLoS ONE, 4(6), e5881.
Erickson, M. A., & Kruschke, J. K. (1998). Rules and exemplars in category learning. Journal of Experimental Psychology: General, 127, 107–140.
Feldman, J. (2000). Minimization of Boolean complexity in human concept learning. Nature, 407, 630–632.
Feldman, J. (2003). The simplicity principle in human concept learning. Current Directions in Psychological Science, 12, 227–232.
Fish and Wildlife Branch. (2009). Fishing regulations summary 2008–2009. Ottawa, ON: Queen’s Printer for Ontario.
Gathercole, S. E. (1999). Cognitive approaches to the development of short-term memory. Trends in Cognitive Sciences, 3, 410–419.
Gathercole, S. E., & Baddeley, A. D. (1990). Phonological memory deficits in language disordered children: Is there a causal connection? Journal of Memory and Language, 29(3), 336–360.
Harris, H. D., & Minda, J. P. (2006). An attention-based model of learning a function and a category in parallel. In R. Sun & N. Miyake (Eds.), The Proceedings of the 28th Annual Meeting of the Cognitive Science Society (pp. 321–326). Hillsdale, NJ: Lawrence Erlbaum Associates.
Homa, D., Cross, J., Cornell, D., & Shwartz, S. (1973). Prototype abstraction and classification of new instances as a function of number of instances defining the prototype. Journal of Experimental Psychology, 101, 116–122.
Homa, D., & Cultice, J. C. (1984). Role of feedback, category size, and stimulus distortion on the acquisition and utilization of ill-defined categories. Journal of Experimental Psychology: Learning, Memory, & Cognition, 10, 83–94.
Jacoby, L. L., & Brooks, L. R. (1984). Nonanalytic cognition: Memory, perception and concept learning. In G. H. Bower (Ed.), The psychology of learning and motivation (Vol. 18, pp. 1–43). New York, NY: Academic Press.
Kemler Nelson, D. G. (1984). The effect of intention on what concepts are acquired. Journal of Verbal Learning & Verbal Behavior, 23, 734–759.
Kemler Nelson, D. G. (1988). When category learning is holistic: A reply to Ward and Scott. Memory & Cognition, 16, 79–84.
Knowlton, B., & Squire, L. (1993). The learning of categories: Parallel brain systems for item memory and category knowledge. Science, 262(5140), 1747–1749.
Kruschke, J. K. (1992). ALCOVE: An exemplar-based connectionist model of category learning. Psychological Review, 99, 22–44.
Leonard, L. B., & Deevy, P. (2006). Cognitive and linguistic issues in the study of children with specific language impairment. In M. Traxler & M. Gernsbacher (Eds.), Handbook of psycholinguistics (2nd ed., pp. 1143–1171). Boston, MA: Elsevier.
Love, B. C. (2002). Comparing supervised and unsupervised category learning. Psychonomic Bulletin & Review, 9, 829–835.
Love, B. C. (2003). The multifaceted nature of unsupervised category learning. Psychonomic Bulletin & Review, 10, 190–197.
Love, B. C., & Gureckis, T. M. (2007). Models in search of a brain. Cognitive, Affective, & Behavioral Neuroscience, 7, 90–108.
Love, B. C., Medin, D. L., & Gureckis, T. M. (2004). SUSTAIN: A network model of category learning. Psychological Review, 111, 309–332.
Maddox, W. T., Aparicio, P., Marchant, N. L., & Ivry, R. B. (2005). Rule-based category learning is impaired in patients with Parkinson’s disease but not in patients with cerebellar disorders. Journal of Cognitive Neuroscience, 17, 707–723.
Maddox, W. T., & Ashby, F. G. (2004). Dissociating explicit and procedural-learning based systems of perceptual category learning. Behavioural Processes, 66, 309–332.
Maddox, W. T., Ashby, F. G., & Bohil, C. J. (2003). Delayed feedback effects on rule-based and information-integration category learning. Journal of Experimental Psychology: Learning, Memory, and Cognition, 29, 650–662.
Maddox, W. T., Ashby, F. G., Ing, A. D., & Pickering, A. D. (2004a). Disrupting feedback processing interferes with rule-based but not information-integration category learning. Memory & Cognition, 32, 582–591.
Maddox, W. T., Bohil, C. J., & Ing, A. D. (2004b). Evidence for a procedural-learning-based system in perceptual category learning. Psychonomic Bulletin & Review, 11, 945–952.
Minda, J. P., Desroches, A. S., & Church, B. A. (2008). Learning rule-described and non-rule-described categories: A comparison of children and adults. Journal of Experimental Psychology: Learning, Memory, & Cognition, 34, 1518–1533.
Minda, J. P., & Miles, S. J. (2009). Learning new categories: Adults tend to use rules while children sometimes rely on family resemblance. In N. Taatgen, H. van Rijn & L. Schomaker (Eds.), Proceedings of the 31st Annual Conference of the Cognitive Science Society. Hillsdale, NJ: Erlbaum.
Minda, J. P., & Ross, B. H. (2004). Learning categories by making predictions: An investigation of indirect category learning. Memory & Cognition, 32, 1355–1368.
Nomura, E. M., Maddox, W. T., Filoteo, J. V., Ing, A. D., Gitelman, D. R., Parrish, T. B., et al. (2007). Neural correlates of rule-based and information-integration visual category learning. Cerebral Cortex, 17, 37–43.
Norman, G. R., & Brooks, L. R. (1997). The non-analytical basis of clinical reasoning. Advances in Health Sciences Education, 2, 173–184.
Nosofsky, R. M. (1987). Attention and learning processes in the identification and categorization of integral stimuli. Journal of Experimental Psychology: Learning, Memory, & Cognition, 13, 87–108.
Nosofsky, R. M. (1988). Exemplar-based accounts of relations between classification, recognition, and typicality. Journal of Experimental Psychology: Learning, Memory, & Cognition, 14, 700–708.
Nosofsky, R. M. (1991). Tests of an exemplar model for relating perceptual classification and recognition memory. Journal of Experimental Psychology: Human Perception & Performance, 17, 3–27.
Nosofsky, R. M., Gluck, M. A., Palmeri, T. J., McKinley, S. C., & Glauthier, P. (1994a). Comparing models of rule-based classification learning: A replication and extension of Shepard, Hovland, and Jenkins (1961). Memory & Cognition, 22, 352–369.
Nosofsky, R. M., & Johansen, M. K. (2002). Exemplar-based accounts of "multiple-system" phenomena in perceptual categorization. Psychonomic Bulletin & Review, 7(3), 375–402.
Nosofsky, R. M., & Palmeri, T. J. (1996). Learning to classify integral-dimension stimuli. Psychonomic Bulletin & Review, 3, 222–226.
Nosofsky, R. M., & Palmeri, T. J. (1998). A rule-plus-exception model for classifying objects in continuous-dimension spaces. Psychonomic Bulletin & Review, 5, 345–369.
Nosofsky, R. M., Palmeri, T. J., & McKinley, S. C. (1994b). Rule-plus-exception model of classification learning. Psychological Review, 101, 53–79.
Nosofsky, R. M., & Stanton, R. D. (2005). Speeded classification in a probabilistic category structure: Contrasting exemplar-retrieval, decision-boundary, and prototype models. Journal of Experimental Psychology: Human Perception and Performance, 31, 608–629.
Offenbach, S. I. (1990). Integral and separable dimensions of shape. Bulletin of the Psychonomic Society, 28, 30–32.
Palmeri, T. J., & Flanery, M. A. (1999). Learning about categories in the absence of training: Profound amnesia and the relationship between perceptual categorization and recognition memory. Psychological Science, 10, 526–530.
Patalano, A. L., Smith, E. E., Jonides, J., & Koeppe, R. A. (2001). PET evidence for multiple strategies of categorization. Cognitive, Affective, & Behavioral Neuroscience, 1, 360–370.
Paivio, A. (1986). Mental representations: A dual coding approach. Oxford, England: Oxford University Press.
Posner, M. I., & Keele, S. W. (1968). On the genesis of abstract ideas. Journal of Experimental Psychology, 77, 353–363.
Preuss, T. M. (1999). The argument from animals to humans in cognitive neuroscience. In M. S. Gazzaniga (Ed.), Cognitive neuroscience: A reader (pp. 483–501). Oxford, England: Wiley-Blackwell.
Quinn, P. C., Palmer, V., & Slater, A. M. (1999). Identification of gender in domestic-cat faces with and without training: Perceptual learning of a natural categorization task. Perception, 28, 749–763.
Reber, P., Stark, C., & Squire, L. (1998a). Contrasting cortical activity associated with category memory and recognition memory. Learning & Memory, 5, 420–428.
Reber, P., Stark, C., & Squire, L. (1998b). Cortical areas supporting category learning identified using functional MRI. Proceedings of the National Academy of Sciences, 95, 747–750.
Rips, L. J. (1989). Similarity, typicality, and categorization. In S. Vosniadou & A. Ortony (Eds.), Similarity and analogical reasoning (pp. 21–59). Cambridge, England: Cambridge University Press.
Rosch, E., & Mervis, C. B. (1975). Family resemblances: Studies in the internal structure of categories. Cognitive Psychology, 7, 573–605.
Rosch, E., Mervis, C. B., Gray, W., Johnson, D., & Boyes-Braem, P. (1976). Basic objects in natural categories. Cognitive Psychology, 8, 382–439.
Sereno, M. I., & Tootell, R. B. H. (2005). From monkeys to humans: What do we now know about brain homologies? Current Opinion in Neurobiology, 15(2), 135–144.
Shepard, R. N., Hovland, C. I., & Jenkins, H. M. (1961). Learning and memorization of classifications. Psychological Monographs, 75(13, Whole No. 517).
Sloutsky, V. M. (2003). The role of similarity in the development of categorization. Trends in Cognitive Sciences, 7, 246–251.
Sloutsky, V. M., & Fisher, A. V. (2004). Induction and categorization in young children: A similarity-based model. Journal of Experimental Psychology: General, 133, 166–188.
Smith, E. E., & Grossman, M. (2008). Multiple systems of category learning. Neuroscience & Biobehavioral Reviews, 32, 249–264.
Smith, E. E., & Medin, D. L. (1981). Categories and concepts. Cambridge, MA: Harvard University Press.
Smith, E. E., Patalano, A. L., & Jonides, J. (1998). Alternative strategies of categorization. Cognition, 65, 167–196.
Smith, E. E., & Sloman, S. A. (1994). Similarity- versus rule-based categorization. Memory & Cognition, 22, 377–386.
Smith, J. D., & Minda, J. P. (2001). Journey to the center of the category: The dissociation in amnesia between categorization and recognition. Journal of Experimental Psychology: Learning, Memory, and Cognition, 4, 501–516.
Smith, J. D., Minda, J. P., & Washburn, D. A. (2004). Category learning in rhesus monkeys: A study of the Shepard, Hovland, and Jenkins (1961) tasks. Journal of Experimental Psychology: General, 133, 398–414.
Smith, J. D., Redford, J. S., & Haas, S. M. (2008). Prototype abstraction by monkeys (Macaca mulatta). Journal of Experimental Psychology: General, 137, 390–401.
Smith, J. D., & Shapiro, J. H. (1989). The occurrence of holistic categorization. Journal of Memory & Language, 28, 386–399.
Smith, J. D., Tracy, J. I., & Murray, M. J. (1993). Depression and category learning. Journal of Experimental Psychology: General, 122, 331–346.
Smith, L. B. (1989). A model of perceptual classification in children and adults. Psychological Review, 96, 125–144.
Spiering, B., & Ashby, F. (2008). Response processes in information-integration category learning. Neurobiology of Learning and Memory, 90(2), 330–338.
Stewart, N., & Chater, N. (2002). The effect of category variability in perceptual categorization. Journal of Experimental Psychology: Learning, Memory, & Cognition, 28, 893–907.
Swanson, H. L. (1999). What develops in working memory? A life span perspective. Developmental Psychology, 35, 986–1000.
Waldron, E. M., & Ashby, F. G. (2001). The effects of concurrent task interference on category learning: Evidence for multiple category learning systems. Psychonomic Bulletin & Review, 8, 168–176.
Ward, T. B. (1988). When is category learning holistic? A reply to Kemler Nelson. Memory & Cognition, 16, 85–89.
Ward, T. B., & Scott, J. (1987). Analytic and holistic modes of learning family-resemblance concepts. Memory & Cognition, 15, 42–54.
Wickens, J. (1990). Striatal dopamine in motor activation and reward-mediated learning: Steps towards a unifying model. Journal of Neural Transmission, 80, 9–31.
Willingham, D., Nissen, M., & Bullemer, P. (1989). On the development of procedural knowledge. Journal of Experimental Psychology: Learning, Memory, & Cognition, 15, 1047–1060.
Willingham, D., Wells, L., Farrell, J., & Stemwedel, M. (2000). Implicit motor sequence learning is represented in response locations. Memory & Cognition, 28, 366–375.
Zaki, S. R., & Nosofsky, R. M. (2001). A single-system interpretation of dissociations between recognition and categorization in a task involving object-like stimuli. Cognitive, Affective & Behavioral Neuroscience, 1, 344–359.
Zeithamova, D., & Maddox, W. T. (2006). Dual-task interference in perceptual category learning. Memory & Cognition, 34, 387–398.
Zeithamova, D., Maddox, W. T., & Schnyer, D. M. (2008). Dissociable prototype learning systems: Evidence from brain imaging and behavior. Journal of Neuroscience, 28, 13194–13201.
CHAPTER FOUR

The Many Roads to Prominence: Understanding Emphasis in Conversation

Duane G. Watson

Contents
1. Introduction
2. Continuous Representations of Prominence
3. Acoustic Correlates of Prominence in Production
4. Acoustic Correlates of Prominence in Comprehension
5. Conclusions
References
Abstract

Traditionally, it has been assumed that emphasis is used to signal that information in a conversation is new, focused, or important. In this chapter, evidence from three sets of experiments suggests that emphasis is the product of a number of different factors that can affect the acoustic prominence of a word in different ways. I present evidence that (1) emphasis can vary continuously, not categorically; (2) differing factors like difficulty of production and informational importance have different effects on how emphasis is acoustically realized; and (3) listeners do not treat all acoustic correlates of emphasis equally when processing speech, suggesting that they are sensitive to the fact that emphasis is the product of multiple sources. These findings suggest that rather than being a unitary linguistic or psychological construct, emphasis is the product of an array of different cognitive and linguistic factors.
Psychology of Learning and Motivation, Volume 52, ISSN 0079-7421, DOI: 10.1016/S0079-7421(10)52004-8
© 2010 Elsevier Inc. All rights reserved.

1. Introduction

Most speakers have the intuition that how something is said can be almost as important as what is said in successful communication. This can become painfully apparent when a wry joke in an instant message can sound like cruel sarcasm in the absence of appropriate intonation, or when a quick email sent by a busy professor unintentionally sounds like curtness to an undergraduate. Both email and instant messages lack the acoustic cues that we rely on to communicate our message, intent, and attitude. This aspect of language is called prosody. Roughly defined, prosody includes the properties of the speech signal that are independent of the words that are actually produced. Prosody can include emphasizing words, producing breaks in speech, rhythm, and a speaker's intonation or tune. Prosody is of particular interest to psycholinguists because it can tell us something about the architecture of the language production and comprehension systems. One of the roles of prosody is to package information in ways that are useful to a listener. Breaks in the speech signal tend to correspond with syntactic breaks (Gee & Grosjean, 1983; Selkirk, 1984; Truckenbrodt, 1999; Watson & Gibson, 2004, to name a few), and listeners use information about prosodic breaks to make inferences about syntactic structure in parsing (e.g., see Wagner & Watson, submitted, for a review). Emphasis tends to occur on words that are new or important in the conversation (e.g., Bolinger, 1972; Halliday, 1967; Selkirk, 1996), and listeners are sensitive to this link (e.g., Dahan, Tanenhaus, & Chambers, 2002; Terken & Nooteboom, 1987). By understanding how prosody facilitates communication, we can start to make guesses about how listeners prefer to have linguistic information packaged and how speakers make this possible. Knowing the answer to this question might provide clues to the mechanisms that underlie comprehension and production. In this chapter, I discuss one aspect of prosody: emphasis or acoustic prominence. In conversation, some words stand out to a greater degree than other words.
This foregrounding of information tends to correlate with a change in the fundamental frequency of the sound wave (F0), increased intensity, lengthening of the prominent word, and stronger articulation of the phonemes, or segments, that compose the word (Shattuck-Hufnagel & Turk, 1996). Foregrounding a word can play a critical role in signaling information about its role in the conversation. For example, consider (1) and (2), where capital letters convey prominence:

(1a) Who angered Cheri?
(1b) BRIAN angered Cheri.
(2a) Who did Brian anger?
(2b) Brian angered CHERI.
Although the words in (1b) and (2b) are identical, the prominence of the words in these sentences conveys very different meanings. In a context in which (1b) is the answer to (1a), speakers tend to produce the word Brian more prominently than the other words in the utterance. As an answer to Who did Brian anger?, speakers tend to produce the word Cheri with more prominence (2b).
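The acoustic correlates of prominence listed above (duration, intensity, and F0) can each be estimated from a recorded word token. The sketch below, assuming a mono signal stored as a numpy array, is illustrative only: the function names are our own, and the autocorrelation pitch estimate is deliberately naive compared to the tools used in actual prosody research.

```python
import numpy as np

def acoustic_measures(samples, sr=16000):
    """Rough estimates of three correlates of prominence for one token:
    duration (seconds), RMS intensity, and mean F0 (Hz) via a naive
    autocorrelation peak search."""
    duration = len(samples) / sr
    rms = float(np.sqrt(np.mean(samples ** 2)))
    # Naive F0: find the autocorrelation peak within a 75-400 Hz window
    ac = np.correlate(samples, samples, mode="full")[len(samples) - 1:]
    lo, hi = sr // 400, sr // 75
    lag = lo + int(np.argmax(ac[lo:hi]))
    f0 = sr / lag
    return duration, rms, f0

# Synthetic 150 Hz "voiced" token, 0.3 s long, amplitude 0.5
sr = 16000
t = np.arange(int(0.3 * sr)) / sr
tone = 0.5 * np.sin(2 * np.pi * 150 * t)
dur, rms, f0 = acoustic_measures(tone, sr)
```

Comparing these measures for a word produced in a prominent versus a nonprominent context is, in essence, what the production studies discussed later in the chapter do, albeit with far more careful measurement.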
There are a number of explanations for these differences in prominence. In what I will call the Focus Tradition, linguists have argued that prominence is a linguistic construct that signals that information is new or important (e.g., Gussenhoven, 1983; Schwarzschild, 1999; Selkirk, 1996). This is called focusing a referent. Although focus can be conveyed in a variety of ways, including through syntactic and word choices, focus in English is thought to primarily be accomplished by acoustic prominence. Thus, in examples (1) and (2), the words that serve as the answer to a question are focused because they are new and important. Other researchers have taken an information theoretic approach. Researchers in the Information Theoretic Tradition have argued that the primary role of prominence is to convey information about predictability (Aylett & Turk, 2004; Levy & Jaeger, 2007). It is generally agreed that predictable words as measured by frequency or transitional probability tend to be less prominent than unpredictable words (e.g., Bell, Brenier, Gregory, Girand, & Jurafsky, 2009; Gregory, 2001; Lieberman, 1963; Pluymaekers, Ernestus, & Baayen, 2005a,b). Lieberman (1963) pointed out that the word ‘‘nine’’ in ‘‘a stitch in time saves nine’’ is shorter than in ‘‘the word you are about to hear is nine.’’ Gregory (2001) found that in corpus data, words that have a low transitional probability are more likely to be accented. Gahl and Garnsey (2004) found that a verb that is followed by a dispreferred syntactic structure is longer than when the same verb is followed by a preferred structure. In corpus work, Aylett and Turk (2004) found that words that are new, have a low transitional probability, or are infrequent, are more likely to be lengthened. Thus, the words Brian and Cheri are prominent in (1) and (2), respectively, because as answers to the posed questions, their predictability is low compared to that of the other words in the sentence. 
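Transitional probability, the predictability measure invoked in several of the studies cited above, is simply a conditional bigram probability estimated from corpus counts. A toy sketch (the mini-corpus below is invented for illustration and is far too small for real estimates):

```python
from collections import Counter

def transitional_prob(corpus, prev, word):
    """P(word | prev) estimated from bigram counts in a tokenized corpus."""
    bigrams = Counter(zip(corpus, corpus[1:]))
    prev_count = sum(n for (w1, _), n in bigrams.items() if w1 == prev)
    return bigrams[(prev, word)] / prev_count if prev_count else 0.0

# Toy corpus: "nine" is fairly predictable after "saves"
corpus = ("a stitch in time saves nine and a stitch in line saves time "
          "a stitch in time saves nine").split()
p = transitional_prob(corpus, "saves", "nine")
```

On the information theoretic account, words with high values of this kind of measure (like "nine" in Lieberman's proverb example) should be produced with shorter durations and less prominence than low-probability words.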
One of the challenges in understanding the role of prominence and its acoustic correlates is the differing perspectives that have been taken in studying it. Researchers in the Focus Tradition assume that the acoustic features underlying prominence and the information that it conveys are both categorical (e.g., Schwarzschild, 1999; Selkirk, 1996). A word is either pitch accented or not, and it is either focused or not. In contrast, researchers in the Information Theoretic Tradition assume that changes in duration associated with predictability vary continuously, as does the potential information load conveyed by the accented word (Aylett & Turk, 2004). Both of these approaches assume that acoustic prominence's primary role is to signal information about discourse, focus, or information structure to the listener. More recently, some researchers have argued that some aspects of prominence are linked to processes related to the mechanics of speech production, and that these may fall outside the scope of accenting discourse-new words for the listener (Bard & Aylett, 1999; Bard et al., 2000; Pluymaekers et al., 2005a).
166
Duane G. Watson
In this chapter, I will argue that rather than being a unitary phenomenon, acoustic prominence is the product of a number of different cognitive processes, and that these processes give rise to different, but related, acoustic realizations of a word. I call this the Multiple Source view of prominence. The work reviewed here will suggest that the assumptions that prominence is categorical, that it is linked to a single cognitive process, and that it is produced primarily for the listener must be rethought. In many ways, the idea that there are different "flavors" of prominence is not a new one. Linguists have long distinguished between prominence at the word level that signals whether a syllable is accented, prominence that may be due to the rhythmic structure of an utterance, and prominence that provides cues about information structure (see Shattuck-Hufnagel & Turk, 1996 for a review). It has also been proposed that different types of prominence can signal different types of information about discourse structure (Pierrehumbert & Hirschberg, 1990). What is more controversial is the notion that prominence varies continuously rather than categorically, and that multiple factors, some linked to production-related processes rather than grammatical structure, can influence whether a given word is prominent. I will present work in the three sections below suggesting that prominence is not a unitary phenomenon. In Section 2, I discuss an experiment suggesting that the degree of prominence varies continuously with the amount of discourse information it conveys. In Section 3, I discuss a set of experiments from my lab suggesting that acoustic correlates of prominence (i.e., duration and intensity) can be independently influenced by the difficulty of production and the importance of the information in the discourse.
Finally, in Section 4, I discuss data suggesting that listeners do not use all of the acoustic cues at their disposal when interpreting prominence, which further supports the idea that the set of acoustic features that comprise prominence may be generated by a number of different factors.
2. Continuous Representations of Prominence

Most psycholinguists and linguists agree that words can be produced with varying degrees of prominence. Prominence that occurs because of the natural rhythm of a sentence tends to be greater than prominence within a word that signals which syllable is stressed. Prominence that is associated with signaling which information in a conversation is important and new tends to be even greater. However, within these levels, prominence is treated as a phenomenon that is categorical in the linguistics literature:
within a linguistic level such as rhythm, word stress, and discourse structure, words are either prominent or they are not. However, there are reasons to think that the relationship between discourse structure and prominence might be more complex. Work from the literature suggests that speakers' decisions about which word to use to refer to a given referent vary with subtle changes in discourse (Ariel, 1990; Chafe, 1987; Gundel, Hedberg, & Zacharski, 1993; see Arnold, 2008 for a review). As speakers engage in a conversation, information and referents shift in how activated or accessible they are. Referents that are in the speaker's focus of attention, either because they are topics of discussion or because they have been repeated, are more accessible than information that is not in the focus of attention. Referents that are highly accessible are referred to with a pronoun like he, she, or it. Referents that are completely new, and not highly activated, are produced with a full referring expression like a dog. Referents that have been mentioned before such that they are accessible, but not as accessible as referents that are referred to with a pronoun, are referred to with definite determiners like the dog or that dog. Researchers have proposed a continuum of accessibility that maps onto different types of referring expressions. There is good reason to think that acoustic prominence might play a role similar to word form in coordinating the discourse. For example, different syntactic roles are linked to different levels of accessibility (Gordon, Grosz, & Gilliom, 1993), with syntactic subjects tending to be more accessible than objects, and this is reflected in word form choice. Thus, if prominence is like word form choice, prominence should be linked to shifting syntactic positions over the course of a conversation. Consistent with this, Terken and Hirschberg (1994) found that given words are accented if they change syntactic roles across a connected discourse.
Dahan et al. (2002) found that listeners find the accenting of given information acceptable if the accented word has shifted its syntactic position in the discourse. Accessibility varies along a gradient, and this is reflected in the numerous word form types that correspond with varying levels of accessibility. Thus, if prominence is linked to accessibility in some way, we might expect gradient changes in the level of acoustic prominence that reflect a referent's accessibility. In collaboration with Jennifer Arnold and Michael Tanenhaus, I tested this possibility in a controlled discourse using a referential communication task (Watson, Arnold, & Tanenhaus, 2005). Two naïve subjects participated: one was the director and the other was the matcher. The director was presented with a display containing two rows of three objects as in Figure 1. The objects moved to different locations on the screen, and the director's task was to describe this movement to the matcher, whose task was to copy the movement on her own computer screen. The critical production was the target word in the third movement. We manipulated its discourse status by manipulating its previous movements.
(A) 2-Theme condition: Put the bed above the flag. Put the bed above the house. Put the bed above the pineapple.
(B) 1-Theme condition: Put the piano above the flag. Put the bed above the house. Put the bed above the pineapple.
(C) 1-Goal condition: Put the piano above the flag. Put the house above the bed. Put the bed above the pineapple.
(D) New condition: Put the piano above the flag. Put the house above the bell. Put the bed above the pineapple.
Figure 1 The conditions and display from Watson et al. (2005).
It had either been moved twice before (2-Theme condition), once before (1-Theme condition), been the landmark next to which another object moved (1-Goal condition), or was completely new to the discourse (New condition). The conditions were designed such that the accessibility of the target word in the discourse decreased across the 2-Theme, 1-Theme, 1-Goal, and New conditions. In the 2-Theme condition, the target referent is highly accessible because, as the object that has moved twice before, it is the topic of the discourse. In the 1-Theme condition, it is slightly less accessible because it has only been the topic once before. In the 1-Goal condition, the referent is even less accessible because the referent shifts from being a landmark to being a moved object. The target in the New condition is the least accessible because the referent has not yet been introduced to the conversation. First, the director's word form choices suggested that participants treated these mini-discourses as connected conversations. They produced more pronouns in the 2-Theme and 1-Theme conditions than in the 1-Goal and New conditions, as predicted. In the 2-Theme and 1-Theme conditions, the referents are highly accessible by the time the target utterance is produced, so one would expect more pronoun usage than in the 1-Goal and New conditions. The critical measures were F0, intensity, and duration, along with prominence ratings by a coder naïve to the purposes of the experiment. The target sentence was spliced out of its context for coding so that the discourse would not influence the coder's judgments. We found that duration, F0, and intensity all increased as the amount of accessibility decreased. This was also
true of the prominence ratings, as shown in Figure 2 (prominence ratings from Watson et al., 2005). Critically, the data suggest that discourse-related prominence varies continuously with the degree of accessibility. Of course, because we examined means, which are an aggregate measure, it is difficult to know whether the productions varied continuously in their prominence or whether participants produced prominent and nonprominent words in differing proportions across conditions. To examine this, we investigated the distribution of the acoustic measures and the prominence ratings. The distributions of the prominence ratings and the acoustic data were not bimodal, suggesting that speakers were not producing prominent and nonprominent words in varying proportions across conditions. I have discussed these data in terms of acoustic prominence mapping continuously onto cognitive accessibility, but this is not the only explanation for the data pattern here. It could be that any change in discourse structure signals prominence. The above experiment only investigated prominence in sentences in which the referent shifted to a position of high accessibility. However, in a recent follow-up experiment, the same gradient pattern appears in similar conditions in which the target word occupies a low accessibility position, suggesting that the amount of discourse change, rather than the accessibility of the referent, plays a role in signaling prominence (Watson, 2008). In either case, the data suggest that
prominence is best understood as a continuous property of the speech signal and, critically, it maps onto a continuous discourse representation. In sum, the data here suggest that prominence at the discourse level may not be categorical. The level of prominence associated with a word can vary continuously and this maps onto the degree of change in the discourse. Of course, this raises an interesting question: is the variance we see in prominence due to signaling the relative degree of discourse change for the listener or is it the result of a speaker-centered production process? These possibilities are discussed in the next sections.
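The continuous-versus-mixture question above hinges on whether the rating distributions are unimodal or bimodal. The chapter does not specify which statistic was used, so the sketch below illustrates one common heuristic, Sarle's bimodality coefficient, on invented rating data: a single cluster stays below the conventional ~5/9 threshold, while a mixture of clearly "prominent" and "nonprominent" productions exceeds it.

```python
def bimodality_coefficient(xs):
    """Sarle's bimodality coefficient from simple moment estimates.
    Values above ~5/9 (0.555) hint at a bimodal distribution."""
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n
    m3 = sum((x - mean) ** 3 for x in xs) / n
    m4 = sum((x - mean) ** 4 for x in xs) / n
    skew = m3 / m2 ** 1.5
    excess_kurtosis = m4 / m2 ** 2 - 3
    correction = 3 * (n - 1) ** 2 / ((n - 2) * (n - 3))
    return (skew ** 2 + 1) / (excess_kurtosis + correction)

# One cluster of ratings around a single mode (continuous variation)...
unimodal = [3.5] * 10 + [3.0] * 5 + [4.0] * 5 + [2.5] * 2 + [4.5] * 2
# ...versus a mixture of "nonprominent" and "prominent" productions.
bimodal = [2.0] * 20 + [5.0] * 20

THRESHOLD = 5 / 9
print(bimodality_coefficient(unimodal) < THRESHOLD)  # True: one mode
print(bimodality_coefficient(bimodal) > THRESHOLD)   # True: two modes
```

A non-bimodal distribution, as found by Watson et al. (2005), is what a genuinely gradient mapping between accessibility and prominence predicts.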
3. Acoustic Correlates of Prominence in Production

As discussed in Section 1, researchers have found that increased duration, greater intensity, and changes in fundamental frequency correlate with the foregrounding of a word. It has generally been assumed that any of these acoustic factors is sufficient for foregrounding a word (Ladd, 1996). This assumption goes hand in hand with the notion that acoustic prominence is a unitary linguistic phenomenon. However, researchers with different theoretical viewpoints have focused on different aspects of the acoustic signal. For example, researchers in the Information Theoretic Tradition, who are interested in the effects of repetition and predictability on acoustic prominence, have primarily measured word duration. Early work by Bolinger (1963) and Lieberman (1963) suggests that words that are unpredictable given their context tend to be lengthened compared to those that are not. Bolinger (1963) proposed that this lengthening occurs in order to support comprehension: listeners will have more difficulty processing words in unusual contexts, so speakers lengthen them to help listeners decode the signal. This idea has been proposed more recently in information theoretic approaches to word perception (Aylett & Turk, 2004). Aylett and Turk (2004) argue that in communication, the speaker's goal is to communicate effectively while conserving articulatory effort. These two constraints lead speakers to try to maintain a constant rate of information transmission over time. Words that are high in information content (and unpredictable) are lengthened while words that are low in information content (and predictable) are shortened. Stretching and reducing words as a function of their relative predictability ensures that the rate of information transmission remains relatively constant over the course of an utterance.
Aylett and Turk (2004) call this smoothing the information signal and Levy and Jaeger (2007) call this maintaining uniform information density. The temporal nature of the assumptions underlying information theoretic approaches makes duration a particularly important component of acoustic
prominence. Aylett and Turk (2004) argue that prosody, specifically acoustic prominence, is the primary means by which speakers smooth the information signal in English. Duration is also the primary measure for researchers investigating the effects of repetition on prominence. In a classic experiment on word repetition, Fowler and Housum (1987) examined a corpus of the radio show Prairie Home Companion and found that repeated words had lower intensity and shorter duration than nonrepeated words, but there was no difference in fundamental frequency. They also found that listeners were sensitive to these acoustic differences: repeated words were less intelligible than nonrepeated words. More recent work by Bard et al. (2000) suggests that this reduction is related to production processes. They found that repeated words were less intelligible than nonrepeated words even when the listener of the repeated word had not heard the first production. They argue that the reduction is related to priming processes in language production. In contrast, researchers in the Focus Tradition have typically focused on the role of F0 in signaling prominence. In autosegmental approaches to prosody (see Ladd, 1996 for a review), researchers claim that fundamental frequency is the primary cue to acoustic prominence in English, and different F0 patterns have different consequences for information structure (e.g., Pierrehumbert, 1980; Pierrehumbert & Hirschberg, 1990). For example, the most popular system for transcribing prosody in North America is the tone and break indices (ToBI) labeling system, which is based on the theoretical model put forth in Pierrehumbert (1980). Within this system, discourse-related prominence is marked by a linguistic construct called a pitch accent. As the name implies, the fundamental frequency on a word (i.e., pitch) serves as a cue to prominence. Different types of pitch accents convey different meanings.
Consider the conversations in (3):

(3a) Who babysat Otto?
(3b) BRIAN babysat Otto.
(3c) Did Cheri babysit Otto?
(3d) No, BRIAN babysat Otto.
In (3b), the emphasis on BRIAN signals that Brian is new information: there is typically a rise in F0 on the main syllable of the word. This prominence is called an H* accent in ToBI notation (where "*" denotes prominence and "H" signals a rise in F0). In contrast, the prominence associated with Brian in (3d) is contrastive: it signals that Brian, and not someone else, babysat Otto. This type of prominence is associated with a steeper rise in F0 that is often preceded by a sharp dip. This prominence is called an L+H* in ToBI notation, where the "L" corresponds to the preceding dip and the "H" corresponds to the rise. In ToBI, there is a wide
range of pitch accent types, which correspond with different types of discourse meaning. Other approaches to cataloging prominence use different terms and rely on different theoretical assumptions (e.g., Bolinger, 1986; Cruttenden, 1997; Halliday, 1967; 't Hart, Collier, & Cohen, 1990), but F0 also plays a special role in these models. Although these researchers would argue that intensity and duration also correlate with prominence, F0 is critical for providing information about discourse structure. Thus, there are two claims that have been made independently in the literature: (1) repetition and predictability are linked to duration and intelligibility and (2) discourse structure is cued by changes in F0. Researchers in both traditions argue that these acoustic changes are markers of prominence, which raises the question: are the prominences that are described in these two traditions really different types of prominence or are they simply two aspects of the same phenomenon? Certainly, researchers in the two traditions argue for the latter, but of course, they differ in which source of prominence is central. Researchers in the information theoretic approach argue that discourse-related prominence falls out of smoothing the information signal for the listener. Prosodic marking of given, focused, and new information occurs in order to maintain constant information density. Aylett and Turk (2004) explicitly argue that discourse marking with prosody is the result of smoothing the information signal. In contrast, researchers in the Focus Tradition argue that the central role of prominence is to signal information status: whether a word is given or new. Rules for using acoustic prominence are encoded in the grammar and are a part of what every speaker knows when they learn their language. Prominence may play a role in smoothing the information signal, but this is a byproduct of the grammatical rules that govern prominence.
Part of the difficulty in testing whether these two approaches to prominence describe different phenomena or not is the correlation between predictability and discourse structure in natural language. Words that are marked as discourse new or focused tend to be less predictable than given words (Arnold, 1998). In a series of experiments, we have tried to understand the sources of prominence by breaking the correlation between predictability and informational importance that one finds in natural conversation. Independent effects of predictability and importance would support a framework that I will call the Multiple Source view of prominence. Under this view, factors such as ease of production, predictability, and importance influence whether and how a word is emphasized. Although these factors are often correlated, they have independent effects on how a word is produced. In one study, we used the game of Tic Tac Toe to investigate this question (Watson, Arnold, & Tanenhaus, 2008). The game of Tic Tac Toe is played on a 3 × 3 grid. Players take turns placing their mark in the grid with the goal of placing three marks in a horizontal, vertical, or diagonal line.
Figure 3 (A) A Tic-Tac-Toe game state in which the "O" player's next move is both predictable and important for blocking a win by "X." (B) A move to the same square is less predictable and less important.
The advantage of using Tic Tac Toe to explore the effects of predictability and importance on prominence is the fact that game moves that are important tend to be highly predictable, which is the exact opposite of the relationship between predictability and importance in natural conversation. Consider the game states illustrated in Figure 3. In Figure 3A, the "X" player has two pieces in a row and will win the game if the "O" player does not place their figure in the upper right-hand cell. The "O" player's move is highly important given the constraints of the game, so one would expect prominence. However, this move is also highly predictable given the game state, so if predictability drives prominence, we would expect this move to be less prominent. Contrast this move with the same move in the game state shown in Figure 3B. The same move is not very important, so importance-based explanations of prominence would predict very little acoustic prominence in this move. However, the move is not very predictable, so a predictability account predicts that this move would be produced with more prominence. Of course, because Tic Tac Toe does not require any speaking, the game had to be altered so that players produced measurable responses. Players faced away from each other so that they had to communicate verbally to convey each move. Each player was given a separate game board along with game pieces that corresponded to pictures of objects, colored blue or red to indicate each player's pieces. Locations on the game board were numbered. A typical move was Put the blue flower in five. Their speech was recorded while playing the game, and we measured the duration, F0, and intensity of the entire move as well as the cell number. Measuring the cell number was important because it conveyed the critical location information, and it was this information that varied in its predictability and its importance.
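The logic of the Tic-Tac-Toe manipulation can be made explicit in code. The sketch below uses illustrative boards and a deliberately crude predictability proxy, not the actual stimuli or any model from the study: a move is scored as important when it blocks an imminent win, and as predictable when it is forced rather than one of many open options.

```python
# Each entry is a triple of (row, col) coordinates that wins the game.
LINES = [[(0, 0), (0, 1), (0, 2)], [(1, 0), (1, 1), (1, 2)],
         [(2, 0), (2, 1), (2, 2)], [(0, 0), (1, 0), (2, 0)],
         [(0, 1), (1, 1), (2, 1)], [(0, 2), (1, 2), (2, 2)],
         [(0, 0), (1, 1), (2, 2)], [(0, 2), (1, 1), (2, 0)]]

def legal_moves(board):
    return [(r, c) for r in range(3) for c in range(3) if board[r][c] is None]

def blocks_win(board, move, opponent):
    """'Important' here: the opponent would otherwise complete a line
    through this cell on their next turn."""
    for line in LINES:
        if move in line:
            others = [board[r][c] for (r, c) in line if (r, c) != move]
            if others == [opponent, opponent]:
                return True
    return False

def move_predictability(board, move, opponent):
    """Crude proxy: a forced block is near-certain; otherwise treat the
    move as uniform over the open cells."""
    if blocks_win(board, move, opponent):
        return 1.0
    return 1 / len(legal_moves(board))

# Figure-3A-like state: "X" threatens the top row, so "O" moving to the
# upper-right cell (0, 2) is both important and predictable.
board_a = [["X", "X", None],
           [None, "O", None],
           [None, None, None]]
# Figure-3B-like state: the same move blocks nothing and is one of many
# equally plausible options.
board_b = [["X", None, None],
           [None, "O", None],
           [None, None, "X"]]

move = (0, 2)
print(blocks_win(board_a, move, "X"), move_predictability(board_a, move, "X"))
print(blocks_win(board_b, move, "X"), move_predictability(board_b, move, "X"))
```

The same physical move thus lands in opposite corners of the importance-by-predictability space across the two boards, which is exactly the decorrelation the design exploits.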
Measurements of productions of the cell number revealed a sensitivity to predictability, but not importance: the name of the cell was shorter when the move was predictable/important than when it was unpredictable/unimportant. There were no differences in F0 or intensity. When the entire
utterance was analyzed, moves that were unpredictable/unimportant had longer duration than moves that were predictable/important, just as in the cell production data. Interestingly, moves that were predictable/important had greater intensity. These data suggest two things: first, predictability, not importance, seems to drive acoustic prominence of the cell number in this task; second, increases across acoustic measures are not always positively correlated. Moves that were important were louder, but moves that were unpredictable were longer. These data show that a break between acoustic correlates of predictability and importance is possible over an utterance, but is the same break possible at the word level? Much of the work on acoustic prominence has argued that prominence and its accompanying discourse information is signaled at the word level, so finding a split between acoustic correlates of prominence at the word level would provide stronger support for the Multiple Source view. The effects of predictability also raise a second question: are effects of predictability the result of smoothing the information signal for the listener, or are they the result of production-centered processes? Speakers may lengthen unpredictable material to help listeners process difficult words, but speakers might also lengthen these words because they are difficult to produce. Bell et al. (2009) argue that low frequency content words are lengthened because lexical access of these words is more difficult for the speaker. As discussed above, Bard et al.'s (2000) work is consistent with this. They found that speakers reduce repeated words independent of whether the listener heard the first production or not. In the context of Tic Tac Toe, both the speaker-centered and the listener-centered explanations can account for the data.
To understand whether different sources of prominence correlate with different aspects of the acoustic signal, and to determine whether effects of predictability are speaker- or listener-centric, Tuan Lam, Jennifer Arnold, and I conducted a second set of experiments. Using a referential communication task, we again attempted to break the correlation between the sources of prominence. In this study, we manipulated the givenness of a word and its predictability. As mentioned above, repeated words tend to be more predictable. However, in a context in which repetition of a word is unpredictable, is the word still reduced? A listener-centered account of predictability, particularly one associated with information theoretic approaches like the Smooth Signal Hypothesis and the Uniform Information Density hypothesis, predicts that the unpredictable word should not be reduced, even if it has been repeated. Because speakers are smoothing the information signal for the listener, the predictability of the information for the listener should govern the duration of the word. A speaker-centered account predicts that a word that has been mentioned should be reduced, even if it is unexpected. Activation associated with the recent production of the word should reduce the difficulty of producing the word.
To this end, a director and a matcher engaged in a picture description task (Lam, Watson, & Arnold, 2008). Two pictures appeared on the participants' screens. One of the objects shrank and the other object flashed, and the director's task was to describe these events to the matcher (a confederate), who copied them on her own computer. To manipulate the predictability of the referent, the director's session was divided into two blocks: a training block and a test block. In the training block, 94% of the time, one of the objects shrank and then the other object flashed. The other 6% of the time, the same object shrank and then flashed. The goal of this imbalance was to create a context in which repetition was not predicted. In the test block, we measured the acoustic properties of the referent in the second event to see how repetition and predictability influenced speakers' productions. We compared these productions to those of participants who completed a training block in which the ratio of repeated to nonrepeated referents was 50:50. We found that words that were repeated were reduced independent of their predictability, suggesting that in this task, at least, repetition played the strongest role in determining the acoustic realization of the referent. Interestingly, we also found that when the training block had an equal number of repeated and nonrepeated words, repeated words had lower intensity than nonrepeated words in the test block. This was not true in the test block that followed the biased training block, suggesting that predictability may have played some role in signaling intensity. Given the design of the experiment, this is difficult to know, since predictability and repetition were not independently varied. Both predictability and repetition might have had effects on intensity, but these were not detectable when the two factors were placed at odds with one another, as they were in the biased training block.
To test this question, we conducted a follow-up experiment in which the predictability of a referent and repetition were independently manipulated (Lam, Watson, & Arnold, 2009). The task was much the same as before, except that the director was presented with an array of 12 pictures on a computer display. One of the 12 pictures shrank and one of the pictures flashed. Predictability was manipulated by including a probabilistic cue to which object was going to flash. After the first object shrank, a circle appeared around one of the objects. The circled object flashed on 11 out of 12 trials. Participants were explicitly told that the circled objects would flash on most, but not all, trials. Repetition was manipulated by varying whether or not the same object shrank and flashed, just as in the previous experiment. The advantage of this design was that it allowed us to independently manipulate both predictability and repetition. As before, we found effects of repetition: nouns referring to objects that shrank and then flashed were produced with shorter duration and with less intensity. However, there was also an effect of predictability. When a picture flashed that had not been circled, speakers produced it with greater intensity.
We can draw two conclusions from these findings. One is that repetition-related reduction appears to be at least partly speaker-centered. A repeated word is more activated by virtue of having just been uttered, so it is reduced independent of whether the word is predictable or not. Effects of duration were only related to whether a word was repeated, and were not linked to predictability, though predictability did influence intensity. This is inconsistent with information theoretic accounts of reduction. In light of these two experiments, we can return to the data from the Tic-Tac-Toe study and reconceptualize that data pattern based on these results. Unpredictable conditions in the Tic-Tac-Toe study may have actually been conditions that were more difficult to produce, and thus resulted in lengthening of words. The important conditions in the Tic-Tac-Toe study, like the unpredictable conditions in the last two studies, were linked to greater intensity. The critical feature underlying both may have been the speakers' desire to signal important information to the listener. The second conclusion we can draw from these studies is that acoustic correlates of prominence like lengthening and heightened intensity can fractionate under the right circumstances, and that different cognitive sources of prominence can be realized in different ways acoustically, suggesting that the Multiple Source view may be a useful way of thinking about prominence. The use of a single notion of prominence may obscure the multiple phenomena that interact to create the constellation of acoustic features that comprise prominence.
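The two acoustic measures that recur throughout this section, duration and intensity, can be computed directly from a raw waveform. The sketch below derives both from synthetic sinusoidal "words"; real measurements would first segment recorded speech, and F0 tracking is omitted because it requires considerably more machinery.

```python
import math

def duration_seconds(samples, sample_rate):
    """Duration is simply sample count over sampling rate."""
    return len(samples) / sample_rate

def rms_intensity_db(samples, reference=1.0):
    """Root-mean-square amplitude in dB relative to `reference`;
    doubling the amplitude adds about 6 dB."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(rms / reference)

SR = 16_000  # Hz, a common speech-recording rate

# Two synthetic "words": a quiet 200 ms token and a loud 300 ms token,
# both 200 Hz sinusoids standing in for voiced speech.
quiet = [0.1 * math.sin(2 * math.pi * 200 * t / SR) for t in range(3200)]
loud = [0.2 * math.sin(2 * math.pi * 200 * t / SR) for t in range(4800)]

print(duration_seconds(quiet, SR), duration_seconds(loud, SR))  # 0.2 0.3
# Amplitude doubled, so intensity rises by ~6 dB.
print(round(rms_intensity_db(loud) - rms_intensity_db(quiet), 1))  # 6.0
```

Because duration and intensity are computed from different properties of the same signal (extent in time versus mean amplitude), they are free to dissociate in exactly the way the Tic-Tac-Toe and picture-description data show.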
4. Acoustic Correlates of Prominence in Comprehension

The work in the previous section suggests that prominence is potentially influenced by a number of different factors that are realized in different ways in production. One of the claims that I have made above is that intensity and F0 may be linked to marking important or focused information for the listener. Duration, on the other hand, is related to speaker-centric production processes. If the Multiple Source view is correct, then listeners should be sensitive to acoustic factors that correlate with marking information for the listener, like F0 and intensity. In contrast, one would expect duration to be a relatively weak cue to prominence for listeners. A wide range of studies have focused on the cues listeners use to compute acoustic prominence (Beckman, 1986; Cole, Mo, & Hasegawa-Johnson, 2008; Fry, 1955; Gussenhoven, Repp, Rietveld, Rump, & Terken, 1997; Kochanski, Grabe, Coleman, & Rosner, 2005; Lieberman, 1960, to
name a few). This body of work has yielded a set of conflicting results. In general, researchers have found that F0, intensity, and duration contribute to the percept of prominence, but differ in the combination and the extent to which these factors contribute (see Kochanski et al., 2005 for discussion). Part of the confusion lies in conflating different types of prominence. For example, some researchers have focused primarily on prominence at the syllable level in lab speech where single words are produced in isolation (e.g., Fry, 1955; Lieberman, 1960). Other researchers have used corpora of natural speech to investigate acoustic correlates of discourse level prominence, eliciting prominence judgments from naïve listeners (e.g., Cole et al., 2008; Kochanski et al., 2005). If it is the case that prominence is not a unitary phenomenon and different acoustic correlates of prominence arise from different sources, the differences in results across studies may be the result of differences in methods and controls. For example, Fry (1955) found that intensity was the least important acoustic factor in determining syllabic prominence. However, if intensity is used to mark unexpectedness or importance in a discourse context, one might not expect it to be used in a word produced in isolation. In contrast, Cole et al. (2008) found that duration and intensity were reliable predictors of naïve listeners' judgments about acoustic prominence in corpora of natural speech. However, if duration and intensity are linked to different, but correlated, factors (like production difficulty and importance), it is difficult to know how prominence was treated by listeners and whether it was treated as a single kind. To complicate matters, other factors like prosodic phrasing and lexical frequency can influence prominence in natural conversation. Words that lie at the boundary of a prosodic phrase tend to be lengthened and sound more prominent.
High frequency words also tend to be reduced. Without controlling for these types of factors, it is unclear which cues are linked to discourse-related prominence. Thus, thinking of prominence as a percept that has multiple sources and multiple possible acoustic correlates raises the possibility that the confusion that exists in the literature may be due to treating prominence as a single phenomenon. One prediction made by the Multiple Source view of prominence is that different acoustic factors will influence prominence only insofar as they mark relevant information for the listener. The experiments from the previous section suggest that speakers use intensity and F0 to mark important or unexpected information for listeners while increased duration appears to be the result of a speaker-centered production process. If this is true, then listeners may treat duration as a relatively weak cue to prominence. In a study with Angela Isaacs, we explored this possibility by examining which acoustic factors speakers produced in different discourse contexts and which of these factors predicted listeners’ ratings of prominence. Critically, we controlled for other factors that are known to influence
178
Duane G. Watson
prominence like prosodic phrasing and lexical frequency by ensuring that the target word always appeared in the same location in the sentence and by using a referential communication task in which items (and their lexical frequency) were controlled across conditions. In this study, a participant played the role of a director who described the movements of objects on a video display to an experimenter, who had to copy these movements on their own screen. Each display contained an array of six objects arranged in two rows of three as in the experiment in Section 2. Similar to the experiment described above, we manipulated the discourse status of the word describing the second object to move:

(4a) First Utterance
    Given: The camel moved to the left of the helmet.
    Shift: The helmet moved to the left of the camel.
    New: The helmet moved to the left of the toothbrush.

(4b) Second Utterance
    Target: The camel moved above the penguin.

The target object (i.e., camel in (4b)) had either moved before (Given condition), been the landmark next to which another object had moved (Shift condition), or was completely new to the discourse structure (New condition). Almost all theories predict that when the target word is new, it will be produced with more prominence than when the target word is given (Bolinger, 1972; Halliday, 1967; Schwarzschild, 1999; Selkirk, 1996). Words that shift in thematic or discourse roles also tend to be prominent (Dahan et al., 2002; Terken & Hirschberg, 1994; Watson et al., 2005). We found that the target word in the New condition was produced with greater intensity and longer duration than the identical word in the Given condition. The target word in the Shift condition fell in between, replicating Watson et al. (2005). Thus, speakers clearly differentiate words in these different discourse conditions acoustically, and this includes altering duration. However, the critical question is whether or not duration provides a cue to prominence for listeners.
To determine this, we excised the target utterance from each trial and asked raters to rate how much the target word ‘‘stood out’’ on a scale of 1–7. Naïve participants’ judgments of prominence matched discourse conditions. They rated the New condition as being more prominent than the Given condition. Using linear mixed effects models (Baayen, Davidson, & Bates, 2008) to predict ratings, we found that models that included intensity as a predictor performed the best. Duration did not reliably predict prominence nor did it improve model fits for models that included intensity. Thus, even though speakers reliably produce new words with increased duration and intensity, only intensity plays a role in the perception of prominence for listeners. This is consistent with the Multiple Source view
of prominence: duration is linked to difficulty in production processes, so listeners discount it as a source of information about discourse, or at least as a reliable source. Of course, one concern is the metalinguistic nature of Isaacs and Watson’s (2009) task: by asking listeners to rate how prominent a word is, they may have been biased toward weighting a cue like loudness more strongly than a cue like duration. The conclusions of this study rely on what listeners take prominence to mean. In addition, metalinguistic judgments of prominence do not necessarily map onto the cognitive processes that are engaged when a listener interprets an accented word in real-time language processing. To determine whether listeners use duration to detect prominence as they hear sentences in real-time processing, Isaacs and Watson (2008) used eye tracking in a visual world paradigm. Our experiment was a variation of a study by Dahan et al. (2002) that investigated whether listeners interpret accents in real-time language processing. In Dahan et al.’s (2002) study, participants’ task was to move objects around a computer display. On each trial, participants heard instructions like (5) to move two objects:

(5a) Move the camel/candle above the necklace.
(5b) Now, move the CAMEL/camel above the necklace.

Dahan et al. (2002) manipulated whether the second moved object was accented or not and whether it was new or given. Critically, the moved object was one of two objects that shared the same phonetic onset (e.g., camel and candle), making the word temporarily ambiguous. Previous work has shown that when listeners encounter this type of ambiguity, they fixate both potential referents until disambiguating information arrives in the signal (Allopenna, Magnuson, & Tanenhaus, 1998). By using temporarily ambiguous stimuli, Dahan et al. (2002) were able to test whether acoustic prominence could disambiguate the referent.
They found that listeners fixated the new referent when the ambiguous syllable was accented, and fixated the given referent when it was deaccented. Isaacs and Watson (2008) followed up Dahan et al.’s (2002) work with the goal of determining whether listeners are able to use duration in the perception of prominence. To this end, they used the exact same task as Dahan et al. (2002) but independently manipulated the duration and F0 of the target word by resynthesizing it. The process of resynthesis allows for the manipulation of natural speech by artificially deleting or adding pulses to the acoustic signal. This has the advantage of controlling the acoustic properties of natural speech while avoiding the artificiality of synthesized speech. We manipulated F0 and duration in a 2 × 2 experiment. F0 was either high or low and the duration of the word was either short or long. Because the amount of F0 change and the location in the word of the F0 peak (or valley) remained constant across long and short conditions,
F0 changed more rapidly in the short conditions than in the long conditions. Thus, we were also able to investigate whether the slope of the F0 inflection played a role in signaling prominence. We found that F0 and duration acted in concert to signal discourse information: when the inflection of the F0 rose sharply, listeners fixated the new cohort. When the F0 dropped sharply, listeners fixated the given cohort. Critically, there was not a main effect of duration. Thus, even in an online task, the data suggest that duration does not play a critical role in its own right, contributing only insofar as it is linked with the rate of F0 change on a word. Longer durations are not interpreted as cues to prominence. This replicates the findings in the offline study. These data make perfect sense if we think of prominence as a complex amalgam of acoustic features that are linked to different underlying sources. If it is the case that duration is not linked to marking importance for listeners, but rather is a product of speaker difficulty, one would not expect it to play a role in identifying prominent words in comprehension. Of course, this raises the question of why listeners might ignore duration if speakers consistently produce it when saying informationally important words. It could be the case that duration’s link to general speaker difficulty makes it an unreliable source of information about discourse. Many factors that are unrelated to discourse status can create speaker difficulty, such as speaker distraction or planning a complex syntactic structure. Duration also fulfills a wide variety of roles in English. Speakers lengthen words at the edges of prosodic boundaries (Shattuck-Hufnagel & Turk, 1996), they lengthen words when they need to slow their speech rate, and duration plays a role in signaling vowel differences.
Although words that are prominent because of their role in the discourse are often lengthened, lengthening does not necessarily mean that a word is prominent in the discourse. Listeners may be sensitive to these acoustic facts and direct attention toward more reliable cues like F0 and intensity.
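As a miniature illustration of the model-comparison logic behind these studies (fitting listener ratings with and without duration as a predictor), the toy simulation below generates synthetic data in which both cues rise for new words, as in production, but the simulated listener weights only intensity. It uses plain least squares on invented numbers; it is not the authors' linear mixed-effects analysis, and every coefficient and noise level here is an assumption.

```python
import random

random.seed(42)

# Synthetic trials: intensity and duration both increase for "new" words
# (as speakers do), but the simulated rating tracks intensity only.
intensity, duration, rating = [], [], []
for _ in range(500):
    new = 1.0 if random.random() < 0.5 else 0.0
    i = new + random.gauss(0, 0.5)
    d = new + random.gauss(0, 0.5)
    intensity.append(i)
    duration.append(d)
    rating.append(2.0 * i + random.gauss(0, 0.5))  # duration is ignored

def ols_rss(xcols, y):
    """Residual sum of squares for y ~ intercept + xcols (normal equations)."""
    cols = [[1.0] * len(y)] + list(xcols)
    k = len(cols)
    a = [[sum(ci * cj for ci, cj in zip(cols[r], cols[c])) for c in range(k)]
         for r in range(k)]
    b = [sum(ci * yi for ci, yi in zip(cols[r], y)) for r in range(k)]
    for p in range(k):  # Gauss-Jordan elimination solves a * beta = b
        piv = a[p][p]
        for c in range(k):
            a[p][c] /= piv
        b[p] /= piv
        for r in range(k):
            if r != p:
                f = a[r][p]
                for c in range(k):
                    a[r][c] -= f * a[p][c]
                b[r] -= f * b[p]
    preds = [sum(b[j] * cols[j][t] for j in range(k)) for t in range(len(y))]
    return sum((yi - pi) ** 2 for yi, pi in zip(y, preds))

rss_intensity = ols_rss([intensity], rating)
rss_duration = ols_rss([duration], rating)
rss_both = ols_rss([intensity, duration], rating)
```

With this setup, the intensity-only model fits far better than the duration-only model, and adding duration to the intensity model improves fit only negligibly, mirroring the qualitative pattern reported above.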
5. Conclusions

The studies discussed in the three sections above suggest that acoustic prominence is a complex phenomenon. It is defined by more than a single source, may have more than one function, and can be realized in a variety of ways. We have found that (1) prominence is not categorical and can vary continuously with discourse structure; (2) the prominence of a word can depend on both the difficulty of production and the importance of the word in conversation and that these can have differing effects; and (3) listeners are sensitive to prominence cues that correlate with marking importance, not with those that correlate with production difficulty. All three of these findings
suggest that ‘‘prominence’’ is a simple label for a phenomenon that is potentially complex and multifaceted. Although this idea breaks with traditional approaches to prominence, it potentially explains past discrepant findings and may ultimately unify approaches to acoustic prominence across fields. I should emphasize that we cannot make the strong claim that the various acoustic realizations of prominence act as direct transducers of cognitive states. Although across experiments we see that duration correlates with production difficulty, and intensity and F0 correlate with the marking of important information for the listener, more work needs to be done to determine whether these links might change under different conditions. We know that languages vary in whether acoustic information is even used to signal prominence, but the Multiple Source approach may allow us to systematically investigate how languages might differ and examine whether some sources of prominence are universal while others are language specific. For example, discourse prominence is signaled syntactically, not prosodically, in French. One might expect duration effects linked to production difficulty to still occur in French, but not the marking of discourse information with F0 and intensity. Without understanding potential sources of prominence, it is difficult to know the extent to which prominence differs cross-linguistically. The Multiple Source approach does suggest that researchers interested in prosody must take multiple sources of prominence into account when designing experiments, especially when acoustic measures are taken as a proxy measure of prominence. Certainly, this is currently practiced to some extent. For example, researchers do not interpret lengthening that might occur on a word before a prosodic break as a marker of prominence, since duration in this context is associated with the word’s position in the sentence’s prosodic structure.
However, these data suggest that researchers must take similar precautions when examining the link between properties of the sound wave and discourse structure.
REFERENCES

Allopenna, P. D., Magnuson, J. S., & Tanenhaus, M. K. (1998). Tracking the time course of spoken word recognition using eye movements: Evidence for continuous mapping models. Journal of Memory and Language, 38, 419–439.
Ariel, M. (1990). Accessing noun-phrase antecedents. New York, NY: Routledge.
Arnold, J. (1998). Reference form and discourse patterns. Unpublished doctoral dissertation, Stanford University.
Arnold, J. E. (2008). Reference production: Production-internal and addressee-oriented processes. Language and Cognitive Processes, 23, 495–527.
Aylett, M., & Turk, A. (2004). The smooth signal redundancy hypothesis: A functional explanation for relationships between redundancy, prosodic prominence, and duration in spontaneous speech. Language and Speech, 47, 31–56.
Baayen, R. H., Davidson, D. J., & Bates, D. M. (2008). Mixed-effects modeling with crossed random effects for subjects and items. Journal of Memory and Language, 59, 390–412.
Bard, E. G., Anderson, A. H., Sotillo, C., Aylett, M., Doherty-Sneddon, G., & Newlands, A. (2000). Controlling the intelligibility of referring expressions in dialogue. Journal of Memory and Language, 42, 1–22.
Bard, E. G., & Aylett, M. (1999). The dissociation of deaccenting, givenness and syntactic role in spontaneous speech. Proceedings of ICPhS-99, San Francisco, CA.
Beckman, M. E. (1986). Stress and non-stress accent. Riverton, NJ: Foris.
Bell, A., Brenier, J. M., Gregory, M., Girand, C., & Jurafsky, D. (2009). Predictability effects on durations of content and function words in conversational English. Journal of Memory and Language, 60, 92–111.
Bolinger, D. (1963). Length, vowel, juncture. Linguistics, 1, 5–29.
Bolinger, D. (1972). Accent is predictable (if you’re a mind reader). Language, 48, 633–644.
Bolinger, D. L. M. (1986). Intonation and its parts: Melody in spoken English. Stanford, CA: Stanford University Press.
Chafe, W. (1987). Cognitive constraints on information flow. In R. Tomlin (Ed.), Coherence and grounding in discourse (pp. 21–51). Amsterdam: John Benjamins.
Cole, J., Mo, Y., & Hasegawa-Johnson, M. (2008). Signal-based and expectation-based factors in the perception of prosodic prominence. Paper presented at Laboratory Phonology 11, Wellington, New Zealand.
Cruttenden, A. (1997). Intonation (2nd ed.). New York, NY: Cambridge University Press.
Dahan, D., Tanenhaus, M. K., & Chambers, C. G. (2002). Accent and reference resolution in spoken-language comprehension. Journal of Memory and Language, 47, 292–314.
Fowler, C., & Housum, J. (1987). Talkers’ signaling of new and old words produced in various communicative contexts. Language and Speech, 28, 47–56.
Fry, D. B. (1955). Duration and intensity as physical correlates of linguistic stress. Journal of the Acoustical Society of America, 27, 765–768.
Gahl, S., & Garnsey, S. M. (2004). Knowledge of grammar, knowledge of usage: Syntactic probabilities affect pronunciation variation. Language, 80, 748–775.
Gee, J., & Grosjean, F. (1983). Performance structures: A psycholinguistic appraisal. Cognitive Psychology, 15, 411–458.
Gordon, P. C., Grosz, B. J., & Gilliom, L. A. (1993). Pronouns, names, and the centering of attention in discourse. Cognitive Science, 17, 311–347.
Gregory, M. (2001). Linguistic informativeness and speech production: An investigation of contextual and discourse-pragmatic effects on phonological variation. Unpublished doctoral dissertation, University of Colorado.
Gundel, J. K., Hedberg, N., & Zacharski, R. (1993). Cognitive status and the form of the referring expressions in discourse. Language, 69, 274–307.
Gussenhoven, C. (1983). A semantic analysis of the nuclear tones of English. Bloomington, IN: Indiana University Linguistics Club.
Gussenhoven, C., Repp, B., Rietveld, A., Rump, H., & Terken, J. (1997). The perceptual prominence of fundamental frequency peaks. Journal of the Acoustical Society of America, 102, 3009.
Halliday, M. A. K. (1967). Intonation and grammar in British English. The Hague, Paris: Mouton.
Isaacs, A. M., & Watson, D. G. (2008). Comprehension and resynthesis of duration and pitch in ambiguous words. Paper presented at Experimental and Theoretical Advances in Prosody, Ithaca, NY.
Isaacs, A. M., & Watson, D. G. (2009). Speakers and listeners don’t agree: Audience design in the production and comprehension of acoustic prominence. Poster presented at The 22nd Annual CUNY Conference on Human Sentence Processing, Davis, CA.
Kochanski, G., Grabe, E., Coleman, J., & Rosner, B. (2005). Loudness predicts prominence: Fundamental frequency lends little. Journal of the Acoustical Society of America, 118, 1038.
Ladd, D. R. (1996). Intonational phonology. New York, NY: Cambridge University Press.
Lam, T., Watson, D. G., & Arnold, J. E. (2008). Effects of repeated mention and predictability on the production of acoustic prominence. CUNY Conference on Human Sentence Processing, Chapel Hill, NC.
Lam, T., Watson, D. G., & Arnold, J. E. (2009). Do repeated mention and expectancy independently affect acoustic prominence? The 22nd Annual CUNY Conference on Human Sentence Processing, Davis, CA.
Levy, R., & Jaeger, T. F. (2007). Speakers optimize information density through syntactic reduction. Advances in Neural Information Processing Systems (NIPS), 19, 849–856.
Lieberman, P. (1960). Some acoustic correlates of word stress in American English. Journal of the Acoustical Society of America, 32, 451–454.
Lieberman, P. (1963). Some effects of the semantic and grammatical context on the production and perception of speech. Language and Speech, 6, 172–175.
Pierrehumbert, J. (1980). The phonology and phonetics of English intonation. Unpublished doctoral dissertation, MIT.
Pierrehumbert, J., & Hirschberg, J. (1990). The meaning of intonational contours in the interpretation of discourse. In P. R. Cohen, J. Morgan & M. E. Pollack (Eds.), Intentions in communication (pp. 271–311). Cambridge, MA: MIT Press.
Pluymaekers, M., Ernestus, M., & Baayen, R. H. (2005a). Articulatory planning is continuous and sensitive to informational redundancy. Phonetica, 62, 146–159.
Pluymaekers, M., Ernestus, M., & Baayen, R. H. (2005b). Lexical frequency and acoustic reduction in spoken Dutch. Journal of the Acoustical Society of America, 118, 2561–2569.
Schwarzschild, R. (1999). Givenness, AVOIDF and other constraints on the placement of accent. Natural Language Semantics, 7, 141–177.
Selkirk, E. O. (1984). Phonology and syntax: The relation between sound and structure. Cambridge, MA: MIT Press.
Selkirk, E. O. (1996). Sentence prosody: Intonation, stress and phrasing. In J. A. Goldsmith (Ed.), The handbook of phonological theory. Cambridge, MA: Blackwell.
Shattuck-Hufnagel, S., & Turk, A. E. (1996). A prosody tutorial for investigators of auditory sentence processing. Journal of Psycholinguistic Research, 25, 193–247.
Terken, J., & Hirschberg, J. (1994). Deaccentuation of words representing ‘‘given’’ information: Effects of persistence of grammatical function and surface position. Language and Speech, 37, 125–145.
Terken, J., & Nooteboom, S. (1987). Opposite effects of accentuation and deaccentuation on verification latencies for given and new information. Language and Cognitive Processes, 2, 145–163.
’t Hart, J., Collier, R., & Cohen, A. (1990). A perceptual study of intonation: An experimental-phonetic approach to speech melody. New York, NY: Cambridge University Press.
Truckenbrodt, H. (1999). On the relation between syntactic phrases and phonological phrases. Linguistic Inquiry, 30, 219–255.
Wagner, M., & Watson, D. G. (submitted). Experimental and theoretical advances in prosody: A review. Language and Cognitive Processes.
Watson, D., Arnold, J. E., & Tanenhaus, M. K. (2005). Not just given and new: The effects of discourse and task based constraints on acoustic prominence. Poster presented at the 2005 CUNY Human Sentence Processing Conference, Tucson, AZ.
Watson, D. G. (2008). Production processes and prosody. Paper presented at ETAP 2008: Experimental and Theoretical Advances in Prosody, Ithaca, NY.
Watson, D. G., Arnold, J. E., & Tanenhaus, M. K. (2008). Tic tac toe: Effects of predictability and importance on acoustic prominence in language production. Cognition, 106, 1548.
Watson, D. G., & Gibson, E. E. (2004). The relationship between intonational phrasing and syntactic structure in language production. Language and Cognitive Processes, 19, 713–755.
CHAPTER FIVE
Defining and Investigating Automaticity in Reading Comprehension

Katherine A. Rawson

Contents
1. Introduction 186
2. Defining Automaticity in Reading Comprehension 187
   2.1. Property-List Accounts of Automaticity 187
   2.2. Process-Based Theories of Automaticity 188
3. Investigating Automaticity in Reading Comprehension 193
   3.1. Direct Tests of Memory-Based Automaticity in Reading Comprehension 193
   3.2. Indirect Evidence for Memory-Based Automaticity in Reading Comprehension 206
4. Investigating Automaticity in Reading Comprehension: Outstanding Issues 215
   4.1. Generality of Memory-Based Automaticity in Reading Comprehension 215
   4.2. Individual Differences in Memory-Based Automaticity 218
   4.3. Further Development of Memory-Based Theories of Automaticity 220
5. Redefining Automaticity in Reading Comprehension 225
Acknowledgments 226
References 227
Abstract

In the literature on reading comprehension, automaticity has traditionally been defined in terms of properties of performance (e.g., speed, effort). Here, I advocate for a more powerful approach based on contemporary theories that conceptualize automaticity in terms of cognitive mechanisms that underlie practice effects on performance. To illustrate the utility of automaticity theories for understanding reading comprehension, the bulk of the chapter focuses on one particular kind of automaticity theory, which states that practice leads to
Psychology of Learning and Motivation, Volume 52, ISSN 0079-7421, DOI: 10.1016/S0079-7421(10)52005-X
© 2010 Elsevier Inc. All rights reserved.
186
Katherine A. Rawson
decreasing involvement of algorithmic processing and increasing involvement of memory-based processing. I review evidence from recent studies specifically designed to diagnose memory-based automaticity in reading comprehension and findings from earlier studies that provide indirect evidence for this account. Finally, I consider directions for future research and theory development to address outstanding issues concerning the nature of automaticity in reading comprehension.
1. Introduction

Imagine if we all awoke one morning to find that we were no longer able to process written text—for most, the loss would mean professional paralysis, the entire public and private education system would come to a grinding halt, and many leisure and domestic activities could no longer be performed. Clearly, reading comprehension is an integral part of the lives of most individuals in a literate society. Ironically, most of us take reading comprehension for granted because of the speed, ease, and frequency with which we read, and outlandish thought experiments are needed for us to appreciate the importance of this sophisticated cognitive skill. However, anyone reading this chapter was once a beginning reader struggling to sound out or recognize words, who somehow through years of practice achieved the remarkable accomplishment of becoming a skilled reader for whom reading comprehension now seems largely automatic. Two key questions arise from these observations: What does it mean to say that reading comprehension is automatic? And how does reading comprehension become automatic? Not only are these questions of basic theoretical interest, they also have important implications for the estimated 43% of adults in the U.S. who have only a basic or below-basic level of prose literacy (National Center for Education Statistics, 2006). Importantly, where we search for answers to the second question (how reading comprehension becomes automatic) depends critically on our answer to the first question (what automaticity is). Accordingly, Section 2 of this chapter considers how automaticity has traditionally been defined in the literature on reading comprehension and then advocates for a more useful approach to conceptualizing automaticity. Subsequent sections then focus on theories and empirical investigations of how reading comprehension becomes automatic.
Section 3 describes evidence from current and past research, whereas Section 4 focuses on directions for future research and theory development to address outstanding issues concerning the nature of automaticity in reading comprehension. Finally, issues concerning the definition of automaticity are revisited in Section 5.
Automaticity in Reading Comprehension
187
2. Defining Automaticity in Reading Comprehension

2.1. Property-List Accounts of Automaticity

Automaticity is a ubiquitous concept in the literature on reading comprehension. General models of comprehension often include assumptions concerning the automaticity of text processing (e.g., Kintsch, 1998; Perfetti, 1988). Similarly, theories of how specific component processes involved in reading comprehension operate (e.g., lexical processing, syntactic parsing, inferencing) also commonly include claims regarding the automaticity of these processes (e.g., Brown, Gore, & Carr, 2002; Flores d’Arcais, 1988; Greene, McKoon, & Ratcliff, 1992; McKoon & Ratcliff, 1992; Noveck & Posada, 2003). Automaticity also plays a prominent role in theoretical accounts of individual differences in comprehension skill within healthy adult populations (e.g., Walczyk, 2000) and those involving individuals with brain damage or disability (e.g., Kilborn & Friederici, 1994). Because most people (researchers and laypersons alike) have an intuitive sense of what it means to say that something is automatic, it is perhaps not surprising that automaticity is not explicitly defined in much of the reading comprehension research in which the concept is invoked. When automaticity is defined, the virtually exclusive approach is to do so in terms of one or more properties of performance (i.e., property-list accounts of automaticity). These accounts generally accord with our intuitive sense of what automaticity refers to, such as ‘‘quick, easy, and outside of conscious awareness.’’ Intuition and convention notwithstanding, problems arise with this approach to defining automaticity. Both within and across areas of research in the reading comprehension literature, a persistent problem is inconsistency in the set of properties considered necessary and sufficient to define automaticity. Rawson (2004) provides more extensive discussion and illustration of this problem, so I only briefly summarize it here.
In that study, I sampled 14 papers in which automaticity had been described in terms of properties. These property lists consisted of as few as two and as many as nine properties, and 14 different properties were invoked across lists. Most strikingly, no two property lists were the same, and the properties included on most lists (speed, resource dependence, obligatoriness, and openness to awareness) were explicitly excluded on at least one other list. Inconsistencies of this sort make it difficult to compare theoretical claims about the involvement of automatic processing in reading comprehension and may even give rise to spurious theoretical debates (e.g., for discussion of an extended but ultimately unresolved debate concerning automatic inference processes, see Rawson, 2004).
Even if agreement could be reached concerning necessary and sufficient properties, the larger problem with property-list accounts is that they are not explanatory. Delimiting the properties associated with nonautomatic versus automatic performance does not explain what underlies automatization or what gives rise to the properties of interest. For example, according to the compensatory-encoding model of individual differences in reading comprehension, less skilled readers ‘‘possess subcomponents that are less automated’’ (Walczyk, 2000, p. 560). In terms of the properties identified with automaticity in this model, the reading processes of less skilled readers presumably are slower, more effortful, require more attention, are more susceptible to strategic control, and are more selective, serial, flexible, error prone, and susceptible to interference. However, exhaustive documentation of these differences would still leave open the question of why less proficient versus more proficient readers differ in these ways and would be largely uninformative with respect to predicting what kind of training would most likely overcome these deficiencies.
2.2. Process-Based Theories of Automaticity

What then is the alternative to defining automaticity in terms of properties? Due in large part to limitations of the sort described above, contemporary theories from basic research on automaticity have turned instead to conceptualizing automaticity in terms of underlying cognitive mechanisms that give rise to properties of interest, rather than in terms of the properties themselves. These process-based theories are most concerned with explaining how the representations and processes that underlie task performance change with practice. Note the subtle but important shift in the nature of the question, from automaticity as an end state to automatization as a dynamic process, with the focus on why properties of performance change as a result. In particular, the robust finding that practice yields a negatively accelerated speed-up in performance time is considered the signature pattern of automatization. Figure 1 depicts hypothetical response time curves to illustrate the qualitative pattern of speed-up with practice, and process-based theories are primarily focused on explaining this effect (to foreshadow, the three hypothetical conditions in Figure 1 are used to illustrate predictions of specific theories below). To explain practice effects on the speed with which a task is performed, process-based theories have postulated several different cognitive mechanisms. A detailed and exhaustive review of these theories is beyond the scope of this chapter, so I limit discussion to a summary of the basic theoretical claims of each kind of theory. Given that the primary goal of this chapter is to illustrate how process-based theories can be used to further our understanding of automaticity in reading comprehension, the bulk of the chapter
[Figure 1 shows hypothetical response-time curves plotted against practice block for three conditions: novel items with a complex algorithm, repeated items with a complex algorithm, and repeated items with a simpler algorithm.]

Figure 1 Hypothetical pattern of response times illustrating the basic pattern of speed-ups with practice, as well as hypothetical differences in practice effects on response time as a function of kind of item and the complexity of the algorithm.
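The negatively accelerated speed-up in Figure 1 is often formalized as a power function of practice, RT = a + b * N^(-c). The sketch below generates curves of this shape for the three hypothetical conditions; the parameter values are invented purely for illustration and come from no dataset.

```python
def response_time(n, asymptote, gain, rate):
    """Power-law response time after n practice blocks: a + b * n^(-c)."""
    return asymptote + gain * n ** (-rate)

blocks = range(1, 11)
# Repeated items speed up steeply; novel items benefit only from
# item-general gains, so their curve is shallower (illustrative rates).
novel_complex = [response_time(n, 500, 700, 0.2) for n in blocks]
repeated_complex = [response_time(n, 500, 700, 0.7) for n in blocks]
repeated_simpler = [response_time(n, 400, 500, 0.7) for n in blocks]
```

Each curve decreases block by block, with the size of the improvement shrinking across blocks, which is the qualitative signature that process-based theories set out to explain.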
will then focus on applying one of these process-based theories to investigate automaticity in reading comprehension.

2.2.1. Attention-Based Theories

The central claim of attention-based theories of automaticity is that performance becomes faster with practice due to changes in the amount or kind of information attended to during task performance. For example, a recent attention-based theory of automaticity is the information reduction hypothesis (Haider & Frensch, 1996, 1999). According to this account, with practice, individuals learn to focus attention on the most task-relevant information and to ignore irrelevant or redundant information. The reduction in the amount of information processed improves the speed of task performance. Several studies have reported results consistent with this theory (Haider & Frensch, 1996; Lee & Anderson, 2001; Schneider, Dumais, & Shiffrin, 1984; Schneider & Shiffrin, 1977). For example, Haider and Frensch presented letter strings containing a number that indicated how many letters were skipped in the sequence (e.g., A [3] E F G H, with 3 indicating that three letters were skipped between A and E). Participants had to indicate as quickly as possible whether the string preserved the consecutive order of letters in the alphabet. For incorrect strings, the error always involved the initial letter-digit-letter portion of the sequence, whereas the remaining letters were always correct (e.g., in A [3] F G H I, A [3] F is incorrect
Katherine A. Rawson
whereas G H I is correct). With practice, individuals learned to attend only to the initial letter-digit-letter triplet and to ignore the remaining letters. The extent to which attentional shifts contribute to the automatization of reading comprehension is currently unknown. Intuitively, the contribution of these attention-based mechanisms in reading comprehension would seem minimal, given that most texts are unlikely to include information that is completely irrelevant to the topic or purposefully distracting. However, the finding in eye-tracking research that adult readers are more likely to skip over function words than content words (Rayner, 1998) might be taken as evidence for the plausibility of this mechanism during reading.

2.2.2. Algorithm Efficiency Theories

The central claim of algorithm efficiency theories is that practice improves the efficiency of the underlying algorithmic processes that compute interpretations of task stimuli, which in turn speeds task performance. Theorists have proposed several ways in which the efficiency of algorithmic processing may be improved, most notably in versions of ACT theory (e.g., Anderson, 1982, 1987, 1996; Anderson & Lebiere, 1998). Within ACT-R, an algorithm involves the execution of a sequence of productions, which function like ‘‘if-then’’ statements that indicate what action or transformation should occur given the current state of the system (e.g., ‘‘IF goal is to find radius of circle and diameter is known, THEN divide diameter by two’’). Algorithm efficiency may improve when two or more productions within a sequence are combined to reduce the overall number of computational steps that must be performed to complete the algorithm (Blessing & Anderson, 1996). Algorithm efficiency may also improve by tuning the strength of productions in procedural memory.
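The composition idea can be sketched in a few lines. This is a toy illustration only: ACT-R productions are symbolic pattern-action pairs, not Python functions, and the circumference rule here is an assumed extension of the radius example in the text.

```python
import math

# Toy productions: each maps a problem state (a dict) to a new state.

def find_radius(state):
    # "IF goal is to find radius and diameter is known, THEN divide diameter by two"
    return dict(state, radius=state["diameter"] / 2)

def find_circumference(state):
    # Assumed second rule: "IF radius is known, THEN circumference = 2 * pi * radius"
    return dict(state, circumference=2 * math.pi * state["radius"])

def composed(state):
    # Composition collapses the two firings into a single production,
    # halving the number of computational steps for this algorithm.
    return dict(state, radius=state["diameter"] / 2,
                circumference=math.pi * state["diameter"])

two_step = find_circumference(find_radius({"diameter": 4.0}))  # two firings
one_step = composed({"diameter": 4.0})                         # one firing
```

Both routes reach the same final state; the composed production simply gets there in fewer steps, which is the sense in which composition improves algorithm efficiency.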
The strength of a given production may be incremented when its execution leads to a correct result; likewise, the strength of a production may be decremented when its execution leads to an incorrect result. In this way, the strengthening process may increase the likelihood that effective productions are subsequently selected for execution and decrease the likelihood that ineffective productions are selected. ACT-R also assumes that the strength of a production influences how quickly it can be executed on subsequent processing trials. Importantly, note that algorithms involved in computing interpretations of task stimuli are task-specific but item-general—that is, the algorithms can compute an interpretation for any token of a given stimulus type within a task domain, regardless of whether that token is familiar or novel. Thus, algorithm efficiency gains produce item-general practice effects, where the speed of processing improves across stimuli of a given type, even for particular stimuli that have not been practiced before (as depicted for the hypothetical novel item condition in Figure 1). Recent research has shown item-general effects of practice with linguistic units of various sorts, including syntactic structures and noun–noun conceptual combinations
(e.g., Rawson, 2004; Rawson & Middleton, 2009; Rawson & Touron, 2009). This type of practice gain contrasts with item-specific practice effects (discussed further below), which are gains that are limited to specific stimuli that have been practiced before. These two effects are not mutually exclusive, in that both types of gain may be observed with practice of a given task.

2.2.3. Memory-Based Theories

The final class of process-based theories considered here is primarily intended to explain item-specific practice effects and is the focus of the remainder of this chapter. The central claim of memory-based theories of automaticity is that practice leads to a qualitative change in underlying processes, with a shift away from algorithmic interpretation of stimuli to retrieval of prior interpretations from memory. The progenitor of contemporary memory-based accounts is instance theory (Logan, 1988, 1992, 1995). According to instance theory, interpretation of a stimulus is based on algorithmic processing on the initial encounters of that stimulus, with each interpretation stored in long-term memory as a separate instance. Upon subsequent encounters of that stimulus, interpretation may be based either on algorithmic processing or on retrieval of a previously stored interpretation. The algorithm and retrieval routes are assumed to race in parallel, with interpretation based on the output of whichever route finishes first. Importantly, each instance is assumed to race against all other instances to be retrieved, with normally distributed finishing times for each instance. As the number of instances increases (through repeated encounters of specific stimuli), the likelihood that an instance is retrieved quickly enough to beat the algorithm also increases. Although the algorithm may still win the race on some proportion of trials, the main claim is that speed-ups with practice are due to increasing involvement of retrieval for repeated stimuli on later trials.
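Instance theory's race can be illustrated with a minimal simulation. The parameter values below (mean finishing times, standard deviation) are arbitrary choices for illustration, not fits to any data set:

```python
import random
import statistics

def trial_rt(n_instances, mu_alg=800.0, mu_ret=900.0, sd=150.0, rng=random):
    """One simulated trial: the algorithm races against every stored instance,
    each with a normally distributed finishing time; the response is based
    on whichever route finishes first."""
    algorithm = rng.gauss(mu_alg, sd)
    retrieval = min((rng.gauss(mu_ret, sd) for _ in range(n_instances)),
                    default=float("inf"))
    return min(algorithm, retrieval)

def mean_rt(n_instances, trials=5000, seed=1):
    rng = random.Random(seed)
    return statistics.mean(trial_rt(n_instances, rng=rng) for _ in range(trials))

# As instances accumulate across repetitions, the fastest retrieval time
# beats the algorithm more and more often, so mean response time falls in
# a negatively accelerated fashion.
speedup_curve = [mean_rt(n) for n in (0, 1, 2, 4, 8, 16)]
```

Note that even with a single instance whose mean finishing time is slower than the algorithm's, the minimum of the two samples pulls the mean response time down, which is why the race mechanism produces gains from the very first repetition.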
For example, if you were asked to solve ‘‘24 × 7 = ?,’’ you would likely have to use multiplication rules to arrive at the solution. However, if repeatedly asked to solve this same problem, at some point you would presumably forego computation and instead simply retrieve the solution (168) directly from memory. Two offspring of Logan’s instance theory share the same core assumption that speed-ups with practice reflect shifts from algorithmic processing to retrieval, but they hold somewhat different representational and processing assumptions. The component power laws theory (or CMPL; Rickard, 1997, 1999, 2004) assumes that interpretations of a given stimulus are stored as a prototype (rather than as separate instances) that accrues strength in memory with increasing practice with that stimulus. CMPL also assumes that the algorithm and retrieval routes to interpretation do not run in parallel. Rather, only one of the two processes is initially engaged on any processing trial, and the other process is only executed if the initially selected process fails.
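The ‘‘power laws’’ in CMPL’s name refer to the standard finding that each route’s practice function is well described by a power function of the amount of practice. Schematically, with made-up parameter values (not estimates from any experiment):

```python
def power_law_rt(n_trials, asymptote=500.0, gain=1500.0, rate=0.8):
    # RT(N) = a + b * N^(-c): asymptote a, initial gain b, learning rate c.
    # Parameter values are illustrative only.
    return asymptote + gain * n_trials ** (-rate)

# Successive block-to-block gains shrink with practice: the negatively
# accelerated speed-up that is the signature pattern of automatization.
gains = [power_law_rt(n) - power_law_rt(n + 1) for n in range(1, 8)]
```

Early in practice each additional trial buys a large reduction in response time; late in practice the same trial buys almost nothing, which is exactly the qualitative shape of the curves in Figure 1.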
The other recent memory-based theory of automaticity is the exemplar-based random walk theory (or EBRW; Palmeri, 1997, 1999), which extends instance theory in several key ways. Palmeri notes that as originally proposed, instance theory is a pure instance model, in that the same stimulus must be presented to retrieve prior interpretations. In contrast, EBRW allows for retrieval of prior interpretations for stimuli that are similar to the current stimulus, and the speed with which instances are retrieved is a function of the similarity of those instances to the current stimulus. Given this assumption of similarity-based retrieval, a given stimulus may retrieve instances from more than one response category, thus introducing the potential for response competition. For example, encountering the letter string BUNT in a word identification task would presumably retrieve not only instances of ‘‘bunt’’ but also ‘‘bunk’’ and ‘‘runt.’’ EBRW assumes that each retrieved instance directs a random walk toward a response threshold for the particular interpretation contained in that instance (and as a result, away from the threshold for a competing response). Eventually, an interpretation for a stimulus is selected when the response threshold for one interpretation is reached. In contrast, instance theory assumes that the first instance retrieved determines the interpretation that is adopted on the current processing trial. Which memory-based theory is best suited to explain automaticity in reading comprehension? Without further research, one can only speculate. However, EBRW may hold the most promise for two reasons. First, EBRW provides an important extension of the original instance theory by postulating how processing proceeds when different responses are possible for a given stimulus, which commonly occurs in reading comprehension.
Whereas previous research on memory-based theories has typically involved unambiguous stimuli with only one possible response (e.g., in alphabet arithmetic, A + 2 always equals C), many of the units of information processed in reading comprehension permit more than one response. For example, the word ‘‘calf’’ may refer to a baby cow on some occasions but a leg muscle on other occasions. To the extent that prior interpretations of ‘‘calf’’ are retrieved during processing, some will include ‘‘baby cow’’ and others will include ‘‘leg muscle.’’ EBRW postulates a random walk mechanism that would explain how competition between these two mutually exclusive meanings would be resolved (see Section 3.2.3). Second, EBRW postulates that the algorithm and retrieval routes operate simultaneously, with interpretation based on the output of whichever process completes first. This assumption of parallel processing may be more reasonable for reading comprehension than CMPL’s assumption that algorithm and retrieval cannot run simultaneously. Reading comprehension involves a complex system of cognitive processes that must operate in a coordinated fashion to interpret many different kinds of information (sounds, letters, words, concepts, propositions, syntax, etc.). Whereas
some component processes are likely to heavily involve memory-based processing, other components will be restricted to algorithmic processing in most cases (see Section 4.1). If algorithm and retrieval processes cannot operate simultaneously, it is unclear how all of the various components involved in reading comprehension are able to contribute to the developing text representation in real time. As mentioned earlier, the remainder of the chapter will focus on applying memory-based theories to understanding automaticity in reading comprehension. In the next section, I first discuss empirical evidence from recent studies that were specifically designed to demonstrate memory-based automatization in reading comprehension. I then discuss earlier studies that were not originally designed to explore memory-based automatization but nonetheless provide indirect evidence consistent with memory-based theories. To foreshadow, many of the findings discussed below are consistent with all three memory-based theories. In these cases, the theories will be discussed as a collective. For some findings, however, a greater degree of precision in explanation is desirable. In these cases, discussion will focus on EBRW in particular.
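EBRW's random walk resolution of response competition (e.g., between the two meanings of ‘‘calf’’ above) can be sketched as follows. This is a toy illustration with assumed instance counts and an assumed response threshold, not Palmeri's quantitative model (similarity weighting and the parallel race with the algorithm are omitted):

```python
import random

def ebrw_walk(counts, threshold=3, rng=random):
    """Toy EBRW-style random walk over two candidate interpretations.
    counts: number of stored instances for each interpretation; retrieval
    probability is proportional to counts, and each retrieved instance steps
    the walk toward its own interpretation's response threshold."""
    a, b = sorted(counts)
    p_a = counts[a] / (counts[a] + counts[b])
    position, steps = 0, 0
    while abs(position) < threshold:
        position += 1 if rng.random() < p_a else -1
        steps += 1
    return (a if position > 0 else b), steps

rng = random.Random(42)
# Practice history dominated by "baby cow" instances: fast, consistent choice.
biased = [ebrw_walk({"baby cow": 9, "leg muscle": 1}, rng=rng) for _ in range(1000)]
# Evenly split history: response competition slows the walk.
even = [ebrw_walk({"baby cow": 5, "leg muscle": 5}, rng=rng) for _ in range(1000)]
```

The design choice to express competition as opposing steps is what lets EBRW predict both which interpretation wins and how long the decision takes: lopsided instance counts yield fast, consistent responses, whereas balanced counts yield slower responses because the walk meanders between the two thresholds.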
3. Investigating Automaticity in Reading Comprehension

Acquiring skill in reading comprehension takes years of practice and involves many small incremental improvements in the speed (and accuracy) of many different component processes. These incremental improvements likely involve several different mechanisms—indeed, the attention-based, algorithm efficiency, and memory-based theories described above are not mutually exclusive. However, the relative contributions of these mechanisms may differ widely as a function of task domain. Thus, positive evidence for memory-based automatization in other cognitive tasks does not guarantee its involvement in reading comprehension. What evidence exists for memory-based automatization in reading comprehension?
3.1. Direct Tests of Memory-Based Automaticity in Reading Comprehension

To investigate memory-based automatization in reading comprehension, we must first consider the two main empirical footprints of memory-based processing. The first concerns algorithm complexity effects (i.e., processing time differences due to more vs. less complex algorithmic processing) and how they change with practice. All three memory-based theories predict that algorithm complexity effects will be apparent at the beginning of practice
with novel stimuli (i.e., when interpretation is based on algorithmic processing) but will diminish with repetition of those items (i.e., as interpretation is increasingly based on retrieval rather than the algorithm). The basic qualitative pattern is illustrated by comparing response times in the two hypothetical repeated conditions in Figure 1. For example, Palmeri (1997) reported patterns of this sort in a dot counting task. Participants were repeatedly presented arrays of 6–11 dots and were asked to report as quickly as possible how many dots each contained. At the beginning of practice, response times were longer for 11-dot arrays than for 6-dot arrays. Upon initial encounters of the arrays, individuals presumably used a counting algorithm to compute each response, and arrays requiring more counting took longer than arrays requiring less counting. However, by the end of practice, response times differed minimally as a function of numerosity. Presumably, individuals shifted to retrieving answers directly from long-term memory, and thus numerosity no longer mattered (for similar effects in alphabet arithmetic, see Klapp, Boches, Trabert, & Logan, 1991; Logan & Klapp, 1991). Second, memory-based theories predict item-specific practice effects, in that speed-ups with practice due to shifting from algorithm to retrieval will be limited to repeated stimuli for which there are prior interpretations stored in memory. Among the memory-based theories, this prediction may be softened somewhat for EBRW, given that it allows for practice effects to accrue to new stimuli if they are sufficiently similar to repeated stimuli. Nonetheless, the general prediction is that speed-ups with practice should be greater for repeated stimuli than for novel stimuli of the same type. The basic qualitative pattern is illustrated by comparing response times in the two complex algorithm conditions in Figure 1. 
For example, Logan and Klapp (1991) reported faster response times for alphabet arithmetic equations that were repeated during training than for equations only presented once at the end of training. Recent research has examined the extent to which the two empirical footprints of memory-based automatization—decreasing algorithm complexity effects and item-specific practice effects—can be observed during reading comprehension. Of course, reading comprehension is not a process but rather a system of coordinated processes. A nonexhaustive list of component processes involved in reading comprehension includes sublexical processes (e.g., phonological, orthographic, and morphological processing), lexical processes (e.g., word recognition, word meaning processes), supralexical processes (e.g., syntactic and thematic parsing processes that identify the grammatical and semantic relationships between words), and inferential processes (e.g., forward and backward causal inferences, elaborative inferences, pragmatic inferences). Thus, memory-based automatization must be investigated within a particular component process in the comprehension system. Below I describe studies of memory-based automatization in two different component processes.
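Both empirical footprints can be generated by a simple race between an algorithm and retrieval. The toy simulation below is in the spirit of Palmeri's dot-counting task, but all parameter values (counting time per dot, retrieval time distribution) are arbitrary assumptions for illustration, not fits to his data:

```python
import random

def dot_rt(n_dots, n_reps, rng, base=400.0, per_dot=250.0,
           mu_ret=900.0, sd=100.0):
    """One simulated dot-counting trial. Counting time grows with numerosity
    (the algorithm complexity effect); each prior encounter of the same array
    adds one retrievable instance that races against counting."""
    counting = rng.gauss(base + per_dot * n_dots, sd)
    retrieval = min((rng.gauss(mu_ret, sd) for _ in range(n_reps)),
                    default=float("inf"))
    return min(counting, retrieval)

def mean_rt(n_dots, n_reps, trials=3000, seed=0):
    rng = random.Random(seed)
    return sum(dot_rt(n_dots, n_reps, rng) for _ in range(trials)) / trials

# Footprint 1: the numerosity effect is large for novel arrays but shrinks
# once repeated arrays can be answered by retrieval.
complexity_early = mean_rt(11, 0) - mean_rt(6, 0)
complexity_late = mean_rt(11, 20) - mean_rt(6, 20)
# Footprint 2: item-specific gains -- a repeated array is answered faster
# than a novel array of the same numerosity.
item_specific_gain = mean_rt(11, 0) - mean_rt(11, 20)
```

Once retrieval dominates, response time is governed by the retrieval distribution rather than the counting algorithm, so the 11-dot versus 6-dot difference all but disappears, and only arrays with stored instances enjoy the speed-up.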
3.1.1. Resolving Syntactic Ambiguity

Rawson (2004) presented readers with short narratives containing target sentences that were either syntactically ambiguous or unambiguous. For example, one story contained a sentence that began ‘‘The troops dropped . . .’’ The sentence is syntactically ambiguous at the verb ‘‘dropped,’’ because more than one syntactic structure is possible at that point. The sentence could continue with ‘‘dropped’’ as the main verb (as in Sentence A) or as a constituent of a reduced relative clause (as in Sentence B); these cases are commonly referred to as main verb/reduced relative (MV/RR) ambiguities.

A. The troops dropped from the plane.
B. The troops dropped from the plane could overshoot the target.
C. The troops who were dropped from the plane could overshoot the target.

Previous research has established that readers typically resolve MV/RR ambiguities by assuming that the verb is the main verb of the sentence. However, the ambiguous target sentences in the Rawson (2004) materials instead disambiguated with a relative clause. Because readers initially compute a main-verb interpretation at the point of the ambiguous verb, upon subsequent encounter of the actual main verb in the disambiguating region (e.g., ‘‘could overshoot’’ in Sentence B), reanalysis is necessary to correct the initial misinterpretation. Reanalysis leads to an increase in reading times in the disambiguating region, relative to a control condition involving unambiguous versions of the target sentence, as in Sentence C. In this sentence, the role of ‘‘dropped’’ as part of a relative clause is explicitly indicated by ‘‘who were.’’ Thus, initial interpretation is correct and no reanalysis is needed in the subsequent disambiguating region. To illustrate how materials of this sort are useful for diagnosing memory-based automatization during reading, I briefly summarize methods and results from Experiment 4 of Rawson (2004).
In the repeated condition, stories in which the target sentence was ambiguous (as in Sentence B) or unambiguous (as in Sentence C) were each presented eight times (see sample story in Table 1). In the unrepeated condition, stories containing an ambiguous target sentence were only presented once in Practice Block 2, 4, 6, or 8. The primary dependent variable was reading time in the disambiguating region. To revisit, memory-based theories predict two basic patterns (illustrated in Figure 1), diminishing algorithm complexity effects and item-specific practice effects. Predictions concerning algorithm complexity effects involve the repeated ambiguous and repeated unambiguous conditions. On Trial 1, reading times in the disambiguating region (‘‘could overshoot’’) will be longer in Sentence B than in Sentence C due to reanalysis of the initial misinterpretation of ‘‘dropped’’ as the main verb in Sentence B. The
critical prediction concerns reading times at the end of practice. If readers shift to retrieval of prior interpretations, the correct interpretation of ‘‘dropped’’ will be retrieved in both conditions, avoiding the need for reanalysis in Sentence B by the end of practice. As shown in Figure 2, results confirmed the predicted pattern. Reading times in the disambiguating region on Trial 1 were significantly longer in the repeated ambiguous condition than in the repeated unambiguous condition, but reading times in the two conditions did not differ significantly by the end of practice.

Table 1 Sample Text From Rawson (2004).

Jones had been assigned to the press conference at the Capitol. Only recently hired by the paper, it was the most important assignment he had yet been given. Congress announced that the budget for the next fiscal year had been approved. Military officials (who were) delivered the reports were unhappy with the cuts. Jones thought that playing up the controversy might make for a better article.

Note: For illustrative purposes, the ambiguous verb is underlined and the disambiguating region is italicized (the phrase in parentheses was presented in the unambiguous version of the target sentence).
[Figure 2 plots reading time (ms) in the disambiguating region against practice block (1–8) for three conditions: unrepeated ambiguous, repeated ambiguous, and repeated unambiguous.]

Figure 2 Mean reading time (ms) in the disambiguating region of syntactically ambiguous and unambiguous target sentences during practice, as a function of repetition and practice block. Error bars represent standard error of the mean (adapted from Figure 8 and Figure 9 in Rawson, 2004).
The prediction concerning item-specific practice effects involves the repeated ambiguous and unrepeated ambiguous conditions. As described above, by the end of practice in the repeated ambiguous condition, readers are presumably retrieving the correct interpretation of ‘‘dropped’’ and thus avoiding reanalysis in the disambiguating region. In contrast, reanalysis will be required for unrepeated ambiguous items throughout practice because initial misinterpretation of ‘‘dropped’’ cannot be avoided via retrieval of prior interpretations. Consistent with this prediction, reading times were significantly faster in the repeated ambiguous condition than in the unrepeated ambiguous condition throughout practice. Using variations of the method described above, Rawson (2004) reported evidence from five experiments demonstrating retrieval of prior interpretations in syntactic parsing. However, this study represented the only direct test of memory-based theories of automaticity in reading comprehension, and several questions remained concerning the generality of memory-based automatization in reading comprehension. First, to what extent does memory-based automatization play a role in comprehension processes other than syntactic parsing? Second, does memory-based processing support interpretation when repeated items are transferred to new contexts? Third, do both practice effects and transfer effects obtain when repeated items are reencountered after a delay? Fourth, do the empirical footprints of memory-based automatization obtain with age groups other than young adult readers? In the next section, I summarize research providing answers to each of these questions.

3.1.2. Conceptual Combination

To extend to another component process, Rawson and Middleton (2009) examined the involvement of shifts from algorithm to retrieval during conceptual combination, one of the many semantic processes involved during reading comprehension.
Conceptual combination refers to the combination of two or more concepts to elaborate characteristics of the base concept or to create a new concept. In English, conceptual combinations typically involve adjective-noun combinations (e.g., ‘‘prickly arguments’’) or noun–noun combinations (e.g., ‘‘banana fight’’), both of which are quite common in natural language (see examples in Table 2). Paralleling the syntactic ambiguity materials used by Rawson (2004), Rawson and Middleton developed novel noun–noun combinations (e.g., ‘‘bee spider’’) that afforded more than one possible interpretation. Several norming studies were conducted to establish the normatively preferred or dominant meaning of each combination, and then a plausible alternative or subordinate meaning was also selected. For example, without any disambiguating context, most individuals assume that ‘‘bee spider’’ refers to a spider that looks like a bee, whereas a plausible alternative meaning is a spider that eats bees. Each combination was then embedded in a short story containing
Table 2 Examples of Conceptual Combinations.
‘‘Redsled, on the west slope of the divide, was fissured with thermal springs which attracted tourists, snowmobilers, skiers, hot and dusty ranch hands, banker bikers dropping fifty-dollar tips.’’ (Annie Proulx, Close Range: Wyoming Stories, p. 48) ‘‘Basically, we’re the bullet sponge.’’ (First Lt. Daniel Wright, executive officer of an American unit in Afghanistan, whose function is to draw insurgents away from more populated areas, creating security elsewhere, New York Times, 11/10/2008) They play in cemeteries now, he thought, and tried to imagine a world where children had to play in cemeteries—death parks. (Stanley Elkin, The Living End, p. 80) ‘‘From attic trapdoors, blind robot faces peered down with faucet mouths gushing green chemical. . . But the fire was clever. It had sent flames outside the house, up through the attic to the pumps there. An explosion! The attic brain which directed the pumps was shattered into bronze shrapnel on the beams... The house shuddered, oak bone on bone. . .’’ (Ray Bradbury, The Martian Chronicles, p. 254) Note: Above are examples of conceptual combinations (underlined) from natural language samples and the literary contexts in which they appeared.
two critical sentences. The first sentence introduced the novel combination, and the second target sentence included a disambiguating region that stated the intended meaning of the combination (either the dominant meaning or the subordinate meaning; see the sample text at the top of Table 3). Using these materials, Rawson and Middleton conducted three experiments involving various manipulations to further explore the generality of memory-based processing. Across experiments, several different findings converged to provide further evidence for memory-based automatization during reading comprehension. All of the important manipulations and critical effects reported across experiments by Rawson and Middleton (2009) were replicated in a recent unpublished study using their materials, which I summarize here to more concisely illustrate the key findings. During the first session, stories containing either the dominant version or the subordinate version of the target disambiguating sentence were presented once in each of four blocks of practice (i.e., the repeated dominant and repeated subordinate conditions, respectively). Two days later, each of the repeated stories was presented four more times. In both practice sessions, stories containing subordinate target sentences were each presented only once at the beginning or end of practice (i.e., the unrepeated subordinate condition).
Table 3 Sample Materials From Rawson and Middleton (2009).
Professor Dennison consistently had a high level of enrollment in his physics course not only because he was a fabulous teacher but because he was a peculiar, funny sort of man, and the students loved him. His physics demonstrations were clever and entertaining. For example, once he plopped down cross-legged on a platform with wheels, firmly grasped a large fire extinguisher, and shot himself across the room with it to demonstrate principles of momentum and inertia. His other extremely entertaining feature was his wardrobe. One day he came to class in bunny slippers. On another occasion, he wore a beret to complement his garish zebra tie. It was a tie [with black and white zebra stripes/with cartoon zebras on it]. And, according to rumor, he never fails to don, for one day a year in late September, his authentic Bavarian lederhosen, which is his way of commemorating the German Oktoberfest. Without a doubt, Professor Dennison is an extremely free-spirited individual. Target sentence used in transfer task: Jonathon put on his zebra tie in protest of the wedding his wife was dragging him to, but she was not amused by the tie [with black and white zebra stripes/with cartoon zebras on it]. Note: For illustrative purposes, the novel combination is underlined and the disambiguating region is italicized (the dominant version of the disambiguating region is the first phrase in the bracket and the subordinate version is the second phrase in the bracket).
Figure 3 reports reading times in the disambiguating region of target sentences in each condition during the two practice sessions. First, reading times were significantly longer for repeated subordinate items than for repeated dominant items on Trial 1 but then converged by Trial 8 (i.e., diminishing effects of algorithm complexity). Second, reading times in the disambiguating region were faster for repeated subordinate items than for unrepeated subordinate items (i.e., item-specific practice effects). These two findings parallel those reported by Rawson (2004), thus extending the evidence for memory-based automatization into semantic processing. Concerning the question of whether memory-based processing supports interpretation when repeated items are encountered in new contexts, a transfer task was administered in a third session 2 days after practice. To mask the real purpose of the transfer task, participants were given cover task instructions explaining that they would complete a series of measures of basic reading comprehension ability. One of the tasks was a sentence sensibility judgment task, in which participants read a list of sentences one at a time and were asked to indicate if each one was sensible or insensible. A minority of the sentences in the list were actually new sentence frames for the combinations that had been repeatedly presented in the stories during the practice sessions (see example transfer sentence in Table 3). Importantly,
[Figure 3 plots reading time (ms) in the disambiguating region against practice block (1–8) for three conditions: unrepeated subordinate, repeated subordinate, and repeated dominant.]

Figure 3 Mean reading time (ms) in the disambiguating region of target sentences during practice, as a function of the meaning of the conceptual combination and practice block. Error bars represent standard error of the mean.
in each sentence, the combination was followed by a disambiguating region that contained either the same meaning as during practice or the nonpracticed meaning. Figure 4 reports reading times in the disambiguating region of target sentences in each condition of the transfer task. If encountering repeated items in a new context minimizes the involvement of memory-based processing, interpretation of the combinations in the transfer sentences would revert to algorithmic processing. Given that the algorithm is most likely to generate the dominant meaning of the combinations, one would predict only a main effect of transfer meaning, with reading times significantly faster in the dominant versus subordinate transfer sentences regardless of the meaning of the combinations during practice. In contrast, the observed crossover interaction implicates retrieval of prior interpretations. Consider the subordinate transfer condition. For items that had been practiced with their subordinate meaning, these stored interpretations were presumably retrieved upon encounter of the combination in the transfer sentence. When the subsequent disambiguating region contained the subordinate meaning, no reanalysis was required. In contrast, if the disambiguating region contained the dominant meaning, reanalysis would be needed and reading times would increase as a result. The same logic applies to the pattern in the dominant transfer condition.
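The logic of these competing predictions can be made concrete with schematic response-time rules. All values are arbitrary illustrative times in milliseconds, chosen only to display the qualitative patterns, not estimates from the experiment:

```python
BASE, REANALYSIS_COST, SUBORDINATE_COST = 1600.0, 500.0, 300.0  # arbitrary ms

def rt_retrieval_account(practiced, transfer):
    # Memory-based account: the practiced meaning is retrieved on transfer,
    # so reanalysis is needed only when the transfer sentence disambiguates
    # to the nonpracticed meaning -> a crossover interaction.
    return BASE + (REANALYSIS_COST if transfer != practiced else 0.0)

def rt_algorithm_account(practiced, transfer):
    # Algorithm-only account: practice history is irrelevant; the subordinate
    # meaning is simply harder to compute -> a main effect of transfer
    # meaning only.
    return BASE + (SUBORDINATE_COST if transfer == "subordinate" else 0.0)

conditions = [(p, t) for p in ("dominant", "subordinate")
              for t in ("dominant", "subordinate")]
retrieval_pattern = {c: rt_retrieval_account(*c) for c in conditions}
algorithm_pattern = {c: rt_algorithm_account(*c) for c in conditions}
```

Under the retrieval account, whichever meaning matches the practice history is read faster, producing the crossover; under the algorithm-only account, the practiced-meaning factor drops out entirely, so the observed crossover favors the retrieval account.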
[Figure 4 plots reading time (ms) in the disambiguating region for subordinate versus dominant meanings in transfer, separately for items practiced with the subordinate versus dominant meaning.]

Figure 4 Mean reading time (ms) in the disambiguating region of target sentences during the transfer task, as a function of the meaning of the conceptual combination during practice and during transfer. Error bars represent standard error of the mean.
Concerning the question of memory-based processing across delays, note that both practice effects (Figure 3) and transfer effects (Figure 4) obtained across a delay. Regarding the transfer effects, 2 days separated the second practice session and the transfer task. Regarding the practice effects, 2 days separated Block 4 and Block 5 of practice. Although some loss occurred across the delay (reading times were significantly slower in Block 5 than in Block 4), much of the speed-up gained in Session 1 was preserved across the delay (reading times were significantly faster in Block 5 than in Block 1). Additionally, reading times in Block 5 were still significantly faster for repeated subordinate items than for unrepeated subordinate items, suggesting that interpretation of the repeated subordinate items was primarily based on retrieval of interpretations stored during Session 1. Thus, the key patterns implicating memory-based automatization obtained when items were reencountered after a delay. Finally, concerning the question of whether evidence for memory-based automatization in reading comprehension obtains with age groups other than young adult readers, Rawson and Touron (2009) presented younger and older adult readers with stories similar to those developed by Rawson and Middleton (2009), containing a novel conceptual combination followed by a subsequent disambiguating sentence. Stories containing either the dominant version or the subordinate version of the disambiguating sentence were presented once in each of several blocks of practice (i.e., the repeated dominant and repeated subordinate conditions). Another set of
202
Katherine A. Rawson
stories containing subordinate disambiguating sentences were each presented only once at some point in practice (i.e., the unrepeated subordinate condition). Both age groups showed the two empirical footprints of memory-based automatization: Reading times in the disambiguating region were significantly longer for repeated subordinate items than for repeated dominant items at the beginning of practice but then converged by the end of practice (i.e., diminishing effects of algorithm complexity), and reading times were faster for repeated subordinate items than for unrepeated subordinate items (i.e., item-specific practice effects). The primary age difference was that older adults required more trials of practice to shift to retrieval of prior interpretations. Interestingly, a second experiment indicated that older adults’ slower shift was not due to memory deficits but rather to some reluctance to rely on retrieval. Most important, the results across both experiments provided initial evidence that older adults also exhibit memory-based automatization during reading comprehension.

In all of the studies discussed to this point, the primary dependent variable was reading times in the disambiguating region of target sentences, because memory-based theories make a priori predictions for how reading times in this region should change as a function of practice. In contrast, the memory-based theories do not make strong a priori predictions for the pattern of reading times in the region of the sentence that contains the conceptual combination (e.g., the underlined portion of the sentence, ‘‘With its advanced technology, the cheetah bike was the most impressive of all the toys’’).[1] Nonetheless, post hoc analyses of reading times in this region revealed a relatively consistent and informative pattern of results across several experiments.
Specifically, for each participant in each practice block, mean reading time for the combination region in the repeated dominant condition was subtracted from mean reading time for the combination region in the repeated subordinate condition. The mean difference across participants in each practice block is reported in Figure 5; given the exploratory nature of these analyses, results are reported from eight different experiments. Although the magnitudes of the effects differ somewhat across experiments, all data sets show the same basic pattern. First, on Trial 1, the difference in reading times between the repeated dominant and repeated subordinate conditions was not significant. The memory-based processing theories assume that the first time a to-be-repeated combination is encountered, no prior interpretations are available to be retrieved and thus an

[1] During the experiments, the region containing the combination (‘the cheetah bike’) and the spillover region (‘was the most impressive’) were presented separately. However, the effects described below tended to manifest in both regions, so I report analyses in which reading times are collapsed across these two regions for the sake of simplicity.
203
Automaticity in Reading Comprehension
[Figure 5: panels A, B, C, D, E, F, G1, G2, G3, H1, and H2, each a line graph; y-axes show the mean difference in reading time in the combination region (subordinate condition minus dominant condition), spanning roughly −80 to 240 ms across panels; x-axes show practice trial.]
Figure 5 Mean difference in reading time (ms) in the region of the sentence containing the conceptual combination in the subordinate versus dominant condition (positive values indicate longer reading times in the subordinate condition). Error bars represent standard error of the mean. Values along x-axes indicate practice trial. Panels A and B report data from Rawson and Touron (2009), Experiments 1 and 2. Panels C and D report unpublished data. Panels E and F report data from Rawson and Middleton (2009), Experiments 1 and 2. Panels G1–G3 report data from Sessions 1–3 of Rawson and Middleton (2009), Experiment 3. Panels H1 and H2 report data from Sessions 1 and 2 in the unpublished study described in Section 3.1.2.
interpretation must be computed. All combinations are likely to be initially interpreted with the dominant meaning, and thus no difference in reading times in the two conditions would be expected on the first trial. However, on Trial 2, reading times in the combination region were significantly greater in the repeated subordinate condition than in the repeated dominant condition in all eight experiments. How might the memory-based theories explain this finding? One possibility is that the elevated reading time in the subordinate condition on Trial 2 reflects competition between the algorithm and retrieval routes. On Trial 2, the correct interpretation stored on the previous trial is now available to be retrieved. If the algorithm and retrieval routes operate in parallel, the two processes may output an interpretation at about the same time, because retrieval is still relatively slow early in practice. In the repeated dominant condition, both algorithm and retrieval would arrive at the dominant meaning. In contrast, in the repeated subordinate condition, computation would generate the dominant meaning but retrieval would produce the subordinate meaning. The elevated reading time in the repeated subordinate condition thus may reflect some form of interference between the two competing interpretations. Competition between the two routes also provides a plausible explanation of the pattern of reading times later in practice. In contrast to the significant difference in reading times on Trial 2, differences in reading times on subsequent trials within an initial practice session (Panels A–F, G1, and H1 in Figure 5) tended to be minimal in all experiments. According to instance theory and EBRW, as the number of stored interpretations increases across subsequent trials, retrieval speed continues to improve. As a result, retrieval is increasingly likely to output an interpretation prior to the completion of the algorithm, avoiding interference. 
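The race dynamics just described can be illustrated with a small simulation. This is only a sketch, not a model fitted to the data reported here: the timing parameters (`algo_mean`, `retr_mean`, and their spreads) are arbitrary assumptions, chosen to mimic the instance-theory assumption that the algorithm races against one retrieval runner per stored instance and the fastest finisher determines the response.

```python
import random

random.seed(1)

def trial_rt(n_instances, algo_mean=1200.0, algo_sd=150.0,
             retr_mean=1400.0, retr_sd=400.0):
    """One trial: the interpretation algorithm races against one retrieval
    runner per stored instance; the fastest finisher determines the response."""
    algo = random.gauss(algo_mean, algo_sd)
    if n_instances == 0:
        return algo, "algorithm"
    retrieval = min(random.gauss(retr_mean, retr_sd) for _ in range(n_instances))
    return (algo, "algorithm") if algo <= retrieval else (retrieval, "retrieval")

results = {}
for n in [0, 1, 2, 5, 10]:
    sims = [trial_rt(n) for _ in range(5000)]
    mean_rt = sum(rt for rt, _ in sims) / len(sims)
    p_retrieval = sum(route == "retrieval" for _, route in sims) / len(sims)
    results[n] = (mean_rt, p_retrieval)
    print(f"instances={n:2d}  mean RT={mean_rt:6.0f} ms  P(retrieval wins)={p_retrieval:.2f}")
```

As the number of stored instances grows, the minimum retrieval time drops, so mean RT falls and retrieval increasingly beats the algorithm, which is the qualitative pattern the practice data show.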
Finally, consider the pattern of reading times in later sessions of practice that took place after a delay from an initial practice session (Panels G2, G3, and H2 in Figure 5). On the first trial in each of these sessions, reading times were again elevated in the subordinate condition. This pattern also follows from the rationale above. If some forgetting occurs over the delay between sessions, retrieval will be slower at the outset of the next session. Thus, algorithm and retrieval are once again more likely to output an interpretation concurrently, which would result in some interference in the subordinate condition. Thus, the idea of competition between the algorithm and retrieval routes provides a plausible account of the pattern of reading times across trials and across sessions. Note that this account assumes that the algorithm and retrieval routes were operating concurrently in order for their outputs to have interfered with one another. Accordingly, this explanation is only afforded by instance theory and EBRW, given that CMPL assumes that the algorithm and retrieval routes do not run in parallel. However, neither EBRW nor instance theory explicitly describes how outputs from the two
processes might interact with one another, and thus their account of the pattern of interference observed here would still be somewhat incomplete. An alternative to the idea of competition between the two routes is that the elevated reading time on Trial 2 in the subordinate condition reflects competition within the retrieval route only. To revisit, all theories assume that the first time an item is encountered, interpretation is based on algorithmic processing. Thus, on Trial 1, combinations are presumably interpreted initially with the dominant meaning. Although subsequent disambiguating information corrects this initial misinterpretation in the subordinate condition, the initial dominant interpretation, although ultimately incorrect, may nonetheless persist in memory. In other words, the possibility is that two instances are stored for the combination: the interpretation from initial processing of the combination and the interpretation from reanalysis in the disambiguating region. This possibility is consistent with findings from research on syntactic parsing, showing that initial incorrect interpretations of syntactic ambiguities persist in memory in addition to the final correct syntactic interpretation that is stored after reanalysis (Christianson, Hollingworth, Halliwell, & Ferreira, 2001; Ferreira, Christianson, & Hollingworth, 2001). If so, on Trial 2, both the initial dominant interpretation and the ultimate subordinate interpretation stored on Trial 1 would be available for retrieval. Neither instance theory nor CMPL currently includes assumptions that would allow for competition between interpretations during retrieval. In contrast, EBRW assumes iterative retrieval of instances, with each retrieved instance directing a random walk toward the particular interpretation contained in that instance until one response threshold is reached. 
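The random-walk assumption can also be sketched in a few lines. In this toy version (the threshold of 3 and the instance counts are arbitrary assumptions for illustration, not values from EBRW), each retrieved instance moves a counter one step toward the interpretation it contains:

```python
import random

random.seed(2)

def walk(n_dominant, n_subordinate, threshold=3):
    """EBRW-style walk: repeatedly retrieve a random stored instance and step
    toward its interpretation; respond once either threshold is reached."""
    store = ["dom"] * n_dominant + ["sub"] * n_subordinate
    pos, steps = 0, 0
    while abs(pos) < threshold:
        pos += 1 if random.choice(store) == "sub" else -1
        steps += 1
    return steps, ("sub" if pos > 0 else "dom")

def mean_steps(n_dom, n_sub, reps=20000):
    return sum(walk(n_dom, n_sub)[0] for _ in range(reps)) / reps

early = mean_steps(1, 1)  # Trial 2: one dominant and one subordinate instance stored
late = mean_steps(1, 5)   # later practice: subordinate instances outnumber the dominant one
print(f"Trial 2 (1 dom vs 1 sub): mean steps = {early:.1f}")
print(f"Later practice (1 dom vs 5 sub): mean steps = {late:.1f}")
```

The evenly matched store on Trial 2 produces a long, meandering walk, whereas the lopsided store later in practice reaches threshold quickly, mirroring the elevated Trial 2 reading times and their subsequent disappearance.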
On Trial 2 in the subordinate condition, extra retrieval steps would be needed to reach threshold because of the competing interpretations. Assuming that the subordinate meaning was more likely to ultimately win the retrieval competition (as suggested by the pattern of reading times in the subsequent disambiguating region described earlier), an increasing number of subordinate interpretations would be encoded. As a result, the dominant interpretation would be less competitive with increasing practice, which would explain the minimal elevation of reading times in the subordinate condition on later trials. The one finding that poses difficulty for this account concerns the reemergence of elevated reading times in the subordinate condition on the first trial of later practice sessions. To explain this pattern, one would need to assume that the instances containing subordinate interpretations were more prone to forgetting than the instance containing the dominant interpretation, which would functionally increase the competitiveness of the dominant instance. On one hand, if primacy effects obtain in retrieval of this sort, they would favor the dominant instance. On the other hand, they would likely be offset by frequency and recency effects that would favor the
subordinate interpretation. Thus, this retrieval competition account does not provide a straightforward explanation for the reemerging elevation of subordinate reading times in later sessions. Regardless of whether the dual-route competition account or the retrieval competition account is considered more viable, EBRW is the best equipped of the memory-based automaticity theories to accommodate both of these possibilities.

The preceding discussion highlights how consideration of automaticity theories in reading comprehension will advance our understanding of both. Automaticity theories provide relatively detailed explanations of how processing during reading comprehension can change with practice. In turn, the testing of automaticity theories in reading comprehension provides rich data sets that can constrain those theories. Extending automaticity theories to reading comprehension also reveals how the assumptions of these theories will need to be expanded to accommodate more complex tasks with less discrete stimuli than have previously been examined in prior automaticity research (a point that will be revisited in Section 4.3).
3.2. Indirect Evidence for Memory-Based Automaticity in Reading Comprehension

Taken together, findings from the studies described above provide foundational evidence for memory-based automatization of reading comprehension processes. Memory-based theories of automaticity—EBRW in particular—accrue further support to the extent that they provide plausible explanations for many empirical findings that have been reported in previous research. I discuss several illustrative examples below.

3.2.1. Verb Bias Effects

Another kind of syntactic ambiguity that is common in English concerns the case in which a noun phrase following a verb could be either the direct object of that verb (as in Sentence D) or the subject of a sentence complement (as in Sentence E), commonly referred to as DO/SC ambiguities. If readers initially assume ‘‘the valuable book’’ is a direct object in Sentence E, reanalysis will be necessary upon reaching the disambiguating region (‘‘had been stolen’’). As a result, reading times in this region would be longer in Sentence E than in its unambiguous counterpart, Sentence F, in which the syntactic role of ‘‘the valuable book’’ is explicitly signaled by ‘‘that.’’ Across all DO/SC ambiguities in English, verbs are more likely to be followed by a direct object than by a sentence complement. However, individual verbs differ in the likelihood of being followed by a direct object versus a sentence complement (referred to as verb bias). For example, ‘‘read’’ is more likely to be followed by a direct object, whereas ‘‘believed’’ is more likely to be followed by a sentence complement. To what extent does this verb-specific information factor into readers’ initial parsing of DO/SC ambiguities?
D. The librarian read the valuable book.
E. The librarian read the valuable book had been stolen from the archives.
F. The librarian read that the valuable book had been stolen from the archives.
G. The librarian believed the valuable book had been stolen from the archives.
H. The librarian believed that the valuable book had been stolen from the archives.

Trueswell, Tanenhaus, and Kello (1993) presented readers with ambiguous and unambiguous versions of sentences containing a sentence complement. Importantly, the main verb in each sentence was either DO-biased or SC-biased. For sentences with DO-biased verbs, reading times in the disambiguating region were significantly longer in the ambiguous condition (Sentence E) than in the unambiguous condition (Sentence F). In contrast, for sentences with SC-biased verbs, reading time differences were minimal (Sentences G and H). Findings across three experiments converged on the conclusion that ‘‘subcategorization information is accessed as soon as a verb is encountered, and it has immediate effects on the processing of information that follows the verb’’ (p. 548). More recently, work by Hare, McRae, and Elman (2003, 2004) has suggested even finer-grained item specificity in verb bias effects. Hare et al. noted that many verbs have more than one sense (referred to as polysemy). For example, ‘‘found’’ can refer to the locating of some entity (‘‘He found the baseball’’) or to achieving some realization or understanding (‘‘He found that his watch had stopped’’). Results of corpus analyses and norming studies revealed sense-specific biases for polysemous verbs (Hare et al., 2003). For example, for contexts in which ‘‘found’’ meant locating, it was more likely to be followed by a direct object than by a sentence complement (i.e., the ‘‘locating’’ sense of ‘‘found’’ is DO-biased).
In contrast, for contexts in which ‘‘found’’ meant realizing, it was more likely to be followed by a sentence complement than by a direct object (i.e., the ‘‘realizing’’ sense of ‘‘found’’ is SC-biased). Hare et al. then proceeded to embed the polysemous verbs in contexts that were semantically more consistent with one sense than the other. For example, the context in Item I favors the sense of ‘‘admit’’ that refers to confessing or conceding (which is SC-biased), whereas the context in Item J favors the sense that refers to permitting entry (which is DO-biased). In all cases, the target sentences contained a sentence complement that was unambiguously marked with ‘‘that’’ on half of the trials.

I. For over a week, the trail guide had been denying any problems with the two high-school kids walking the entire Appalachian Trail. Finally, he admitted (that) the students had little chance of succeeding.
J. The two freshmen on the waiting list refused to leave the professor’s office until he let them into his class. Finally, he admitted (that) the students had little chance of getting into the course.

Hare et al. found that reading times in the disambiguating region (e.g., ‘‘had little chance’’) were significantly longer in the ambiguous condition than in the unambiguous condition when the preceding semantic context supported the sense of the verb that was DO-biased (Item J) but not when the preceding context supported the SC-biased sense (Item I). In subsequent work, Hare et al. (2004) performed detailed analyses of materials used in earlier verb bias studies that had reported conflicting results and found that differences in the sense-specific biases of the polysemous verbs that had been used reconciled much of the apparent inconsistency. In the terminology of automaticity theories, the interpretation of DO/SC ambiguities may be accomplished by item-general algorithmic processing or by item-specific memory-based processing. To the extent that interpretation is driven by algorithmic processing, DO/SC ambiguities would be resolved initially in favor of the direct object structure, regardless of the particular verb used in the sentence. However, the item-specific effects reported by Trueswell et al. (1993) implicate retrieval of specific information from prior encounters of particular verbs. Furthermore, the sense-specific effects reported by Hare et al. (2003) suggest that retrieval of information from prior encounters may be guided by semantic overlap between the current context and the previous contexts in which those verbs were encountered. This possibility is most easily conceptualized in terms of EBRW.
Assuming that each prior instance contains information about the meaning of the verb as well as the syntactic frame in which it participated on that encounter (an assumption that will be revisited in Section 4.3.2), those instances with greater similarity to the current stimulus (i.e., those that share the same meaning of the verb in the current context) would be retrieved more quickly and thus would direct the random walk toward the threshold for the particular syntactic frame those instances contain.

3.2.2. Object-Relative and Subject-Relative Clauses

The syntactic parsing research discussed thus far has concerned the potential role of memory-based processing in resolving syntactic ambiguities. What role might memory-based processing play in the interpretation of unambiguous structures? In psycholinguistic research, among the most widely studied unambiguous structures are sentences containing object-relative clauses (‘‘that the secretary disliked’’ in Sentence K) or subject-relative clauses (‘‘that disliked the secretary’’ in Sentence L). Research has shown that readers generally have greater difficulty interpreting object-relative (OR) clauses than subject-relative (SR) clauses (e.g., King & Just, 1991;
Traxler, Morris, & Seely, 2002). Theoretical explanations for the increased difficulty of interpreting OR clauses commonly appeal to the notion of greater demands on working memory, either because more constituents must be maintained before syntactic assignments can be made or because of interference between the two noun phrases that must be maintained simultaneously (for a summary of these theories, see Reali & Christiansen, 2007a).

K. The accountant that the secretary disliked lost some important files.
L. The accountant that disliked the secretary lost some important files.

Almost all research on OR/SR clauses has used sentences such as the examples above, in which the relative clause contained a common noun (secretary) rather than a proper name (‘‘that Sally disliked’’) or a pronoun (‘‘that you disliked’’). However, recent research has suggested that the increased difficulty of parsing OR clauses versus SR clauses may in part be due to differences in the frequencies with which OR versus SR clauses containing common nouns are encountered in the language. Reali and Christiansen (2007a) conducted corpus analyses revealing that 66% of OR clauses contained pronouns, whereas only 35% of SR clauses contained pronouns. Furthermore, OR clauses were much more likely than SR clauses to contain first-, second-, or third-person personal pronouns (e.g., I, you, she, they), whereas SR clauses were much more likely than OR clauses to contain third-person impersonal or nominal pronouns (e.g., it, someone). In subsequent experiments, Reali and Christiansen (2007a) presented readers with sentences containing either an OR clause or an SR clause for self-paced reading. Importantly, the target clauses contained either a first-person pronoun, a second-person pronoun, a third-person personal pronoun, or a third-person impersonal pronoun (in Experiments 1–4, respectively).
In contrast to the normative pattern of longer reading times for OR clauses than for SR clauses, reading times were faster for OR clauses than for SR clauses when they contained a first-, second-, or third-person personal pronoun. Reading times were only longer for OR versus SR clauses when they contained a third-person impersonal pronoun, consistent with the pattern of co-occurrences in the corpus analyses. Reali and Christiansen (2007a) suggested that these biases may reflect the encoding of ‘‘schematized relative clause representations formed by sequential material with shared parts, such as (Relativizer) I VERB’’ (p. 19), as a result of frequent encounters of OR clauses containing pronouns. Memory-based theories of automaticity would suggest an even finer-grained unit of encoding may be at work here (a possibility also acknowledged by Reali and Christiansen, in addition to the more abstract OR clause template suggested). In terms of EBRW, each time a relative clause is encountered, the interpretation of that clause is stored as an instance. Presumably, the instance
contains the specific words in the clause along with their syntactic and thematic role assignments on that particular encounter. If that particular sequence of words is encountered again, interpretation may be based on retrieval of the prior interpretation for that specific sequence, rather than on algorithmic processing at a more abstract or item-general level. Of course, for this account to be viable, readers would need to have encountered the specific word sequences contained in OR clauses enough times for retrieval to be fast and reliable enough to beat the algorithm. On the face of it, this assumption might seem tenuous. However, an informal analysis of the materials used by Reali and Christiansen (2007a) suggests that the specific word sequences contained in OR clauses may be encountered more frequently than might be assumed on intuition alone. Table 4 reports the mean (and range) of page counts in Google[2] across the OR and SR clauses used in the target sentences for each of Reali and Christiansen’s experiments. For example, the clause ‘‘that you visited’’ appeared on 133,000 Google pages; the average page count across all OR clauses used in Experiment 1 was 49,622. Likewise, ‘‘that visited you’’ appeared on 1060 Google pages; the average page count across all SR clauses used in Experiment 1 was 1813. Although Reali and Christiansen’s clauses all began with ‘‘that,’’ I also included Google counts for clauses beginning with ‘‘who’’ and ‘‘whom,’’ given that EBRW allows for retrieval of highly similar instances. Consistent with Reali and Christiansen’s finding that reading times were faster for OR clauses than for SR clauses when they contained a first-, second-, or third-person personal pronoun, Google counts for the particular tokens used in the materials for Experiments 1–3 were greater for the OR clauses than for the SR clauses.
Also consistent with the finding that reading times were longer for OR clauses than for SR clauses when they contained a third-person impersonal pronoun, Google counts were lower for the OR clause tokens than for the SR clause tokens used in Experiment 4. Obviously, the average adult reader has experienced much less language input than is accessible via Google. Is it plausible that Reali and Christiansen’s participants could have acquired enough stored instances of these specific phrases from lifetime language exposure for memory-based processing of these items to manifest during the experiment? Assume that a typical undergraduate research participant has experienced 15 years of reading (including magazines, newspapers, books, letters, etc.) for 1 hour a day at an average reading rate across years of 200 words per minute, and 20 years encoding spoken language (including conversation, television, radio, music, etc.) for 3 hours a day at 150 words per minute. Based on these estimates, an average college student has processed more than 260 million words of input.
[2] Keller and Lapata (2003) demonstrated high correlations between page counts for bigrams in Google and frequency counts from standard corpus analyses.
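The exposure estimate above is simple arithmetic and can be reproduced directly (ignoring leap days):

```python
# Lifetime language-exposure estimate from the text: 15 years of reading at
# 1 hour/day and 200 words/minute, plus 20 years of spoken-language input
# at 3 hours/day and 150 words/minute.
reading_words = 15 * 365 * 60 * 200       # years x days x minutes x words/minute
spoken_words = 20 * 365 * 3 * 60 * 150    # years x days x hours x minutes x words/minute
total_words = reading_words + spoken_words
print(f"reading: {reading_words:,}")      # 65,700,000
print(f"spoken:  {spoken_words:,}")       # 197,100,000
print(f"total:   {total_words:,}")        # 262,800,000, i.e., more than 260 million
```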
Table 4 Mean Counts in Google for Object-Relative and Subject-Relative Clauses in Target Sentences used by Reali and Christiansen (2007a).

                                      OR clause                      SR clause
Experiment 1
  That you visited/visited you (a)    49,622 (102–213,000)           1813 (33–9700)
  Who you visited/visited you         9167 (2–44,300)                16,194 (82–139,000)
  Whom you visited/visited you        14,038 (8–105,000)             81 (0–348)
  Total                               72,827 (112–306,300)           18,088 (115–149,048)
Experiment 2
  That I hated/hated me               1,041,705 (525–10,900,000)     7866 (107–38,200)
  Who I hated/hated me                85,931 (2–876,000)             54,755 (905–386,000)
  Whom I hated/hated me               40,718 (312–234,000)           222 (6–1050)
  Total                               1,168,353 (2414–11,864,100)    62,842 (1583–425,250)
Experiment 3
  That they liked/liked them          84,460 (606–541,000)           5245 (7–35,300)
  Who they liked/liked them           4024 (5–24,800)                19,743 (209–124,000)
  Whom they liked/liked them          28,932 (10–338,000)            78 (0–696)
  Total                               117,416 (621–903,800)          25,067 (216–159,996)
Experiment 4
  That it followed/followed it        145,630 (621–891,000)          151,452 (138–833,000)
  Who it followed/followed it         997 (0–13,000)                 118,162 (164–749,000)
  Whom it followed/followed it        643 (0–3030)                   197 (0–1680)
  Total                               147,271 (485–907,030)          269,811 (361–1,317,259)

(a) Values for one item removed as outliers.
Note: Values in parentheses report the range of counts contributing to each mean. Google counts were performed in June 2009.
Comparing the number of page counts in Google to frequency counts in the British National Corpus (2007) (which contains about 100 million words from spoken and written language) for the 48 constituent words used in the
target clauses (you, I, me, they, them, it, that, who, whom, and the 39 verbs) suggests about a 3900:1 ratio between the number of times an item occurs in Google versus in a 260-million-word corpus. If so, scaling down the ‘‘total’’ values reported in Table 4 would suggest that the typical undergraduate research participant had encountered the particular pronoun + verb OR clauses used in Experiments 1–3 an average of 116 times. By comparison, the research described in Section 3.1 showed that readers shifted from algorithm to retrieval-based interpretation after only 2–7 encounters of ambiguous verbs or conceptual combinations. Consistent with this analysis, Reali and Christiansen (2007b) constructed sentences that were all of the same OR-clause type (pronoun + verb) that contained either a high-frequency phrase token (‘‘I met’’ in Sentence M) or a low-frequency phrase token (‘‘I distrusted’’ in Sentence N), based on Google counts. When sentences were presented in a difficulty rating task, sentences with high-frequency tokens were rated as less difficult than those with low-frequency tokens. Note that because the two sentences contain the exact same words, the difference in difficulty is most reasonably attributed to the phrases in which the words occur. Similarly, when sentences were presented for self-paced reading, reading times were faster for sentences with high-frequency tokens than for those with low-frequency tokens. Across sentences, regression analyses further revealed that as the difference in the frequency of the two tokens increased, the difference in reading times for the two versions of the sentence increased.

M. The attorney who I met distrusted the detective who sent a letter on Monday.
N. The attorney who I distrusted met the detective who sent a letter on Monday.

As Reali and Christiansen noted, ‘‘Distributional properties of language are often described without considering the differences between type and token frequencies’’ (p. 168).
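Scaling the Table 4 OR-clause totals by the estimated 3900:1 ratio reproduces the average of 116 lifetime encounters cited above:

```python
# "Total" Google page counts for the OR clauses in Experiments 1-3 (Table 4),
# scaled down by the ~3900:1 Google-to-lifetime-input ratio estimated in the text.
or_clause_totals = {1: 72_827, 2: 1_168_353, 3: 117_416}
scaled = {exp: count / 3900 for exp, count in or_clause_totals.items()}
for exp, n in scaled.items():
    print(f"Experiment {exp}: ~{n:.0f} estimated lifetime encounters")
mean_encounters = sum(scaled.values()) / len(scaled)
print(f"mean across Experiments 1-3: ~{mean_encounters:.0f}")  # ~116
```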
Consistent with memory-based theories of automaticity, their results clearly indicate that token frequencies in the language are predictive of reading performance.

3.2.3. Resolving Lexical Ambiguity

As Duffy, Morris, and Rayner (1988) summarized, ‘‘Research on lexical ambiguity has focused on two general questions. First, what meaning or meanings are retrieved during the initial lexical access process for ambiguous words? Second, what effect does preceding sentence context have on the access process?’’ (p. 429). To address these questions, Duffy et al. developed sentences that each contained two clauses. One clause contained either a target word with more than one meaning (e.g., ‘‘pitcher’’ can refer to a container for liquids or to a baseball player) or a matched control word with
only one meaning (‘‘whiskey’’). The other clause contained information indicating which meaning of the ambiguous target word was intended in the current sentence. The design included two key manipulations. First, half of the ambiguous target words were biased (i.e., one of their meanings is much more frequent in the language than the other; e.g., ‘‘port’’ refers to a place where boats dock much more frequently than to an alcoholic beverage), and the other half were nonbiased (i.e., their two meanings have similar frequencies in the language; e.g., ‘‘pitcher’’ refers to containers and baseball players with similar frequencies). Second, the disambiguating clause appeared either after the clause containing the target word (Sentences O and Q) or before the clause containing the target word (Sentences P and R). For biased words, the disambiguating clause always supported the subordinate meaning.

O. Of course the pitcher (whiskey) was often forgotten because it was kept on the back of a high shelf.
P. Because it was kept on the back of a high shelf, the pitcher (whiskey) was often forgotten.
Q. Last night the port (soup) was a great success when she finally served it to her guests.
R. When she finally served it to her guests, the port (soup) was a great success.

Because of the complexity of the design, Duffy et al.’s results are most easily understood if described in terms of the example sentences above, although of course the pattern reported was based on mean reading times across items in each condition. First, for Sentence O, reading times were longer on the word ‘‘pitcher’’ than on ‘‘whiskey.’’ Given that ambiguous targets and unambiguous control words were matched for word frequency, EBRW would assume that both words have a similar number of instances stored from prior encounters.
However, about half of the instances for ‘‘pitcher’’ include the container meaning, whereas the other half include the baseball player meaning, so each next instance retrieved has the potential to direct the random walk toward a different threshold. Thus, more retrievals would be needed to reach one of the thresholds for ‘‘pitcher’’ than for ‘‘whiskey,’’ in which all instances direct the random walk toward the same response threshold. This analysis also suggests that the random walk would only reach the contextually appropriate threshold about half of the time for ‘‘pitcher’’; consistent with this idea, reading times in the subsequent disambiguating region of the sentence were longer after ‘‘pitcher’’ versus ‘‘whiskey,’’ presumably reflecting reanalysis. In contrast, for Sentence P, reading times did not differ significantly on the word ‘‘pitcher’’ versus ‘‘whiskey.’’ To revisit, EBRW assumes that the speed with which instances are retrieved is a function of the similarity of those instances to the current stimulus. A reasonable additional assumption is that each prior instance contains not only the interpretation of the target word but also information about the context in which it was encountered
214
Katherine A. Rawson
(an assumption that will be revisited in Section 4.3.2). If so, contexts in which ‘‘pitcher’’ referred to a container are likely to be more similar to one another—and importantly, to the current context preceding the ambiguous word—than contexts in which ‘‘pitcher’’ referred to a baseball player. Thus, ‘‘container’’ instances would outrun ‘‘baseball player’’ instances on average, functionally minimizing competition from the ‘‘baseball player’’ instances and expediting the walk toward the threshold for the contextually appropriate container meaning.[3] Similar to the pattern in Sentence P, reading times in Sentence Q did not differ significantly on the word ‘‘port’’ versus ‘‘soup,’’ but presumably for a different reason. Given that ‘‘port’’ is biased, most of its instances include the boat dock meaning, whereas relatively few include the alcoholic beverage meaning. Given that the preceding context does not favor either meaning, all instances presumably have the same average retrieval speed. Thus, ‘‘boat dock’’ instances are much more likely to be retrieved based on frequency alone, directing the random walk quickly toward the dominant meaning (cf. the unambiguous control word ‘‘soup,’’ in which all instances direct the random walk toward the same response threshold). In this case, the dominant meaning turns out to be contextually incorrect, and in fact reading times in the subsequent disambiguating region are elevated relative to the control condition. Finally, in Sentence R, reading times were longer for ‘‘port’’ than for ‘‘soup.’’ Although ‘‘alcoholic beverage’’ instances now have the advantage of faster retrieval speed due to greater similarity with the current context, it is presumably not enough to completely overcome their disadvantage with respect to frequency. Thus, ‘‘alcoholic beverage’’ and ‘‘boat dock’’ instances both remain competitive in this race, the former due to similarity and the latter due to frequency.
Elevated reading times for ‘‘port’’ presumably reflect the greater number of retrievals needed to overcome competition from ‘‘boat dock’’ instances in the walk toward the ‘‘alcoholic beverage’’ threshold. Of course, the random walk process allows for the possibility that the contextually inappropriate ‘‘boat dock’’ threshold is reached on some trials, in which case elevated reading times may also partially reflect reanalysis to reconcile with the preceding context. To explain this pattern (which subsequently has been termed the subordinate bias effect), Duffy et al. (1988) proposed a reordered access model, according to which ‘‘prior disambiguating context serves to increase the
[Footnote 3] A potential problem for EBRW (and memory-based theories in general) is the finding that reading times on ‘pitcher’ and ‘whiskey’ were similar. Although the context in Sentence P would increase retrieval speed for ‘container’ instances over ‘baseball player’ instances of ‘pitcher’, there would still be about half as many ‘container’ instances for ‘pitcher’ as the total number of instances for ‘whiskey’. Thus, EBRW would expect reading times to be faster for ‘whiskey’ than for ‘pitcher’ based on frequency alone. Although resolution of this issue must await further investigation, one possible explanation is that the contexts used in Duffy et al.’s sentences are more consistently similar to contexts in which the ambiguous words appear than to contexts in which the unambiguous words appear. If so, the frequency disadvantage could be offset by a similarity-based advantage in retrieval speed for ‘container’ instances of ‘pitcher’.
Automaticity in Reading Comprehension
215
availability of the appropriate meaning without inhibiting the inappropriate meaning’’ (p. 437). However, they state that ‘‘we are deliberately avoiding a specification of the mechanisms by which context affects access’’ (p. 431). The theoretical account that EBRW provides is largely consistent with the reordered access model, in that it assumes that context increases the retrieval speed of instances with the appropriate meaning. However, EBRW provides a specification of the mechanism that underlies the interplay between frequency and contextual information.
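EBRW's treatment of these effects can be made concrete with a small simulation. The sketch below (Python; the threshold and weight values are invented for illustration, not fitted to Duffy et al.'s data) implements a two-threshold random walk in which each retrieved instance steps the walk toward the threshold of the meaning that instance contains:

```python
import random

def ebrw_walk(meaning_weights, threshold=5, rng=None):
    """One EBRW-style random walk over a two-meaning competition.

    meaning_weights: (w_a, w_b), the relative retrieval rates for
    instances supporting meaning A versus meaning B (reflecting
    frequency times context similarity; values are illustrative).
    Returns (number of retrievals, winning meaning).
    """
    rng = rng or random
    w_a, w_b = meaning_weights
    pos, steps = 0, 0
    while abs(pos) < threshold:
        # Each retrieval steps the walk toward the threshold of the
        # meaning contained in the retrieved instance.
        pos += 1 if rng.random() < w_a / (w_a + w_b) else -1
        steps += 1
    return steps, ("A" if pos > 0 else "B")

def mean_steps(weights, n=2000, seed=0):
    rng = random.Random(seed)
    return sum(ebrw_walk(weights, rng=rng)[0] for _ in range(n)) / n

# Unambiguous control ("whiskey"): all instances point one way.
unambiguous = mean_steps((1.0, 0.0))
# Nonbiased ambiguous word ("pitcher") in a neutral context: the two
# meanings' instances pull the walk in opposite directions.
balanced = mean_steps((0.5, 0.5))
print(unambiguous, balanced)  # the balanced walk needs far more retrievals
```

Raising one meaning's weight (via contextual similarity, as in Sentence P, or frequency bias, as for ‘‘port’’) shortens the walk toward that meaning's threshold, which is the mechanism behind each cell of the pattern described above.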
4. Investigating Automaticity in Reading Comprehension: Outstanding Issues

To summarize, several recent studies have directly tested predictions of memory-based theories of automaticity in reading comprehension and have provided foundational evidence for the role of memory-based automatization in syntactic and semantic processes. Further support for the viability of memory-based automatization in reading comprehension comes from the facility with which memory-based theories—EBRW in particular—provide relatively straightforward explanations for findings reported in earlier research on syntactic and lexical processes. Taken together, the direct and indirect evidence suggests that memory-based processing may play a significant role in the automatization of reading comprehension processes. Below, I outline several key directions for future research and further theory development to address outstanding issues concerning the nature of automaticity in reading comprehension.
4.1. Generality of Memory-Based Automaticity in Reading Comprehension

On one hand, the work summarized above makes significant strides toward establishing the generality of memory-based automatization in reading comprehension. My colleagues and I have found evidence for shifts from algorithm to retrieval in two different comprehension processes, across delays, and when linguistic units are transferred to new contexts (Rawson, 2004; Rawson & Middleton, 2009; Rawson & Touron, 2009). On the other hand, the available evidence represents only the first of many steps that must be taken to establish the extent to which memory-based processing underlies automaticity in reading comprehension, when one considers the sizeable number of component processes involved in reading comprehension. Furthermore, a given component may process several different kinds of input. For example, Rawson focused on the resolution of MV/RR syntactic ambiguities, but this represents only one of many syntactic structures processed by the syntactic parsing system.
As discussed in Section 2.2, several different mechanisms are thought to contribute to practice effects on the speed and accuracy of cognitive task performance. Automaticity theorists are forthcoming in acknowledging that any given mechanism may play a prominent role in some cognitive tasks but contribute minimally in others (e.g., Blessing & Anderson, 1996; Haider & Frensch, 1996; Logan, 1988). Thus, evidence for the role of memory-based processing in interpreting one type of syntactic structure (MV/RR ambiguity) and one type of conceptual combination (noun–noun combinations) does not ensure its involvement in other components of reading comprehension. Fortunately, a strength of memory-based theories of automaticity is that they afford straightforward predictions concerning components in which memory-based automatization is more versus less likely to play a role during reading comprehension. To illustrate, consider the factors that determine the involvement of memory-based processing according to EBRW. First, the extent to which interpretation is based on retrieval depends on the likelihood that retrieval beats the algorithmic route in the race to produce an interpretation. Thus, the role of memory-based processing will depend both on retrieval speed and on algorithm speed. As currently formulated, EBRW assumes that retrieval speed is largely determined by three factors: the number of stored instances (increasing the number of runners in the race increases the likelihood that one will finish quickly), the similarity of those instances to the current stimulus (more similar instances run more quickly), and the number of mutually exclusive interpretations stored across the various instances (in EBRW, any step toward one response threshold is a step away from others, so the number of competitors will tend to increase the time it takes for the walk to reach any one threshold). 
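These three factors can be illustrated with a toy race model. In the sketch below (Python; the exponential finish times and all parameter values are my own illustrative assumptions, not part of EBRW's formal specification), retrieval wins whenever the fastest stored instance finishes before a fixed-duration algorithm:

```python
import random

def retrieval_time(n_instances, similarity, rng):
    """Time for the fastest instance to finish (a race among runners).

    Each instance's finish time is sampled from an exponential whose
    rate scales with its similarity to the current stimulus; the
    minimum over instances is the retrieval time.
    """
    return min(rng.expovariate(similarity) for _ in range(n_instances))

def p_retrieval_wins(n_instances, similarity, algorithm_time,
                     trials=5000, seed=1):
    """Estimate how often retrieval beats the algorithmic route."""
    rng = random.Random(seed)
    wins = sum(retrieval_time(n_instances, similarity, rng) < algorithm_time
               for _ in range(trials))
    return wins / trials

# More stored instances -> more runners -> retrieval is more likely
# to beat the algorithm.
few  = p_retrieval_wins(n_instances=2,  similarity=1.0, algorithm_time=0.5)
many = p_retrieval_wins(n_instances=20, similarity=1.0, algorithm_time=0.5)
# A small instance set can still be highly competitive against a
# slow algorithm.
few_slow = p_retrieval_wins(n_instances=2, similarity=1.0,
                            algorithm_time=3.0)
print(few, many, few_slow)
```

The same sketch also captures the qualification about algorithm speed discussed later in this section: holding the instance set constant, slowing the algorithm sharply increases the probability that retrieval wins the race.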
Accordingly, EBRW first predicts that memory-based processing will play a role in a given reading component to the extent that particular tokens of the type processed by that component repeat frequently in the language. Logically, smaller units of information repeat more frequently than larger units composed of those constituents. For example, a token phoneme repeats more frequently than any of the token words that contain it. Likewise, a token word repeats more frequently than any of the token phrases that contain it. More generally, this leads to the prediction that the involvement of memory-based processing will be negatively related to the grain size of the informational unit. However, EBRW suggests two qualifications to this basic prediction. First, the advantage of frequency may be offset to the extent that the interpretations stored in those instances are inconsistent with one another. For example, the relative clause ‘‘that he visited’’ is encountered much less frequently than the pronoun ‘‘he.’’ However, retrieved instances of the syntactically unambiguous ‘‘that he visited’’ will consistently direct the walk toward the OR-clause threshold. In contrast, ‘‘he’’ has been used to refer to a functionally infinite
number of different people or entities, and thus retrieved instances containing interpretations of ‘‘he’’ are unlikely to converge on any one response threshold. As a result, memory-based processing is more likely to contribute to interpretation of the larger syntactic unit than the smaller pronoun constituent within it (more generally, EBRW would predict that memory-based processing plays a negligible role in establishing the referents of pronouns). The second qualification to the prediction of a negative relationship between memory-based processing and information grain size concerns algorithm speed. A smaller set of stored instances may be more competitive against a slow algorithm than a larger set of stored instances would be against a fast algorithm. For example, consider conceptual combination. Prior research in which novel conceptual combinations were presented for speeded judgments of sensibility has reported response times between 1000 and 1500 ms (e.g., Gagné & Shoben, 1997; Storms & Wisniewski, 2005), suggesting that algorithmic processing of conceptual combinations is relatively slow (at least as compared to the speed of sublexical and lexical processes). It is perhaps not surprising then that the automaticity research involving conceptual combinations showed that retrieval was fast enough to reliably beat the algorithm after only 2–7 repetitions of unfamiliar conceptual combinations (Rawson & Middleton, 2009; Rawson & Touron, 2009; and the unpublished study reported in Section 3.1.2). Thus, EBRW supports predictions regarding differences in the involvement of memory-based automatization across components of the reading comprehension system. Likewise, EBRW also predicts systematic variability in the role of memory-based automatization within a particular component, given differences between tokens in their frequency and the degree of similarity across encounters.
EBRW’s account of variability within a system may also potentially provide some explanation for mixed results within areas of reading comprehension research. For example, several studies have replicated the subordinate bias effect described in Section 3.2.3 (e.g., Binder & Rayner, 1998; Folk & Morris, 2003; Kambe, Rayner, & Duffy, 2001). However, other lexical ambiguity studies have failed to find subordinate bias effects, instead reporting findings that appear to suggest selective activation of subordinate meanings of ambiguous words (e.g., Martin, Vu, Kellas, & Metcalf, 1999; Vu, Kellas, Metcalf, & Herman, 2000; Vu, Kellas, & Paul, 1998). Subtle differences between studies in the absolute and relative frequencies of target words and their meanings as well as strength of contextual support have been implicated in these apparent inconsistencies, and EBRW provides a relatively straightforward account of how these factors would produce different patterns of performance. Finally, to the extent that memory-based automatization is less likely to contribute directly to higher level comprehension components (those that process larger units of information that repeat less frequently), it may still make an indirect contribution to the operation of these processes. Previous
research has shown that cognitive resource demands decrease with increasing shifts from algorithmic processing to memory-based interpretation of stimuli (Klapp et al., 1991). Given evidence that the various component processes involved in the reading comprehension system share processing resources (Rawson, 2007), the shift from algorithmic processing to retrieval in lower level components may free processing resources for faster and more accurate operation of higher level components. Additionally, the outputs of lower level components are often the input to higher level components, and thus retrieval-based processing at lower levels will benefit higher level processing by providing their input more quickly.
4.2. Individual Differences in Memory-Based Automaticity

Another issue concerns the extent to which memory-based theories (and other process-based theories of automaticity more generally) can explain individual differences in reading comprehension skill, including differences between age groups as well as proficiency differences between readers within an age group. Currently, only one study has directly examined age differences in memory-based automatization in reading comprehension (Rawson & Touron, 2009), and no study has directly examined the role of memory-based automaticity in proficiency differences between same-age readers. Regarding age differences, Rawson and Touron (2009) found relatively minimal differences between younger and older adults in memory-based automaticity. To revisit, both age groups showed the two empirical footprints of memory-based automatization (i.e., diminishing effects of algorithm complexity and item-specific practice effects). Although older adults initially appeared to require more trials of practice to shift to memory-based processing, results from the second experiment indicated that older adults’ slower shift was due to reluctance to rely on retrieval rather than to memory deficits. This pattern is perhaps surprising, given earlier research establishing age differences in simple associative learning during both encoding and retrieval (e.g., Dunlosky, Hertzog, & Powell-Moman, 2005; Old & Naveh-Benjamin, 2008). These findings support the expectation that older adults would have poorer encoding of instances during reading and/or greater difficulty retrieving those instances on subsequent encounters of repeated stimuli. However, in Rawson and Touron’s experiments, all practice was contained within one practice session and no transfer task was administered.
To the extent that older adults experience greater forgetting across delays (Giambra & Arenberg, 1993; Jenkins & Hoyer, 2000), the possibility remains that older adults will show reduced memory-based processing due to memory deficits after a delay. Likewise, age differences in the involvement of memory-based processing may emerge when items are reencountered in new contexts.
At the other end of the age continuum, developmental research is clearly needed to directly investigate the role of memory-based automatization in reading comprehension for younger children. An increasing amount of developmental research has documented associations between the fluency of young children’s language processing and statistical regularities in the language (e.g., Bannard & Matthews, 2008; Kidd, Brandt, Lieven, & Tomasello, 2007; Saffran, 2003). Accordingly, memory-based theories show promise for providing a unified theoretical framework for understanding how beginning readers transition to skilled reading. Regarding proficiency differences between readers within an age group, an intriguing possibility is that higher-skill and lower-skill readers differ in the involvement of memory-based processing during reading. Greater reliance on retrieval versus algorithmic processing would explain faster reading speeds for higher-skill versus lower-skill readers (e.g., Bell & Perfetti, 1994; Jackson, 2005; Jenkins, Fuchs, van den Broek, Espin, & Deno, 2003). Additionally, the idea that memory-based processing in lower level reading components indirectly benefits higher level components (see Section 4.1) would explain better performance on comprehension tests for higher-skill versus lower-skill readers. Memory-based processing may also lead to faster and more accurate processing to the extent that interpretations based on item-specific biases are more likely to be correct than interpretations based on item-general biases (e.g., for the DO/SC ambiguities described in Section 3.2.1, memory-based interpretation for verbs biased toward SC structures would avoid errors and reanalysis costs from adopting the item-general bias toward the DO structure). Of course, this proposal raises the question of why higher-skill readers would be more likely to rely on retrieval.
The most straightforward answer from EBRW would be that higher-skill readers have more stored instances, which would make retrieval more competitive against algorithmic processing. Intuitively, readers likely differ in the number of stored instances due to differences in the amount of practice (i.e., reading experience). Indeed, Stanovich and colleagues have consistently shown that measures of print exposure predict performance on various indices of reading skill, including phonological coding, orthographic knowledge, spelling, vocabulary, verbal fluency, and comprehension performance (e.g., Cunningham & Stanovich, 1991; Stanovich & Cunningham, 1992; Stanovich & West, 1989; West & Stanovich, 1991). Although suggestive, findings involving coarse-grain measures of this sort do not uniquely support memory-based theories of automaticity (e.g., higher levels of practice would presumably also yield more efficient item-general algorithms) and do not diagnose the relative contributions of the various automaticity mechanisms to these associations between experience and reading skill. Nonetheless, EBRW (and memory-based theories more generally) provides a detailed specification of how the representations
and processes that underlie reading comprehension change with practice and why the observed associations would arise. Beyond individual differences in the number of stored instances, other nonexclusive factors may also underlie individual differences in the involvement of memory-based automatization in reading comprehension. First, readers may differ in the kind or quality of stored instances, due to differences in the integrity of encoding or in the information stored in each instance. In other words, two readers exposed to the same stimuli the same number of times will not necessarily (or even likely) come away with identical sets of stored instances. Second, readers may differ in the efficiency of the item-general algorithms against which the retrieval route must compete. Given the same set of stored instances, retrieval would presumably be less likely to contribute to interpretation for a reader with a highly efficient algorithm versus a reader with an inefficient algorithm. To the extent that higher-skill readers have more efficient algorithms, it is thus possible that higher-skill readers may actually show weaker evidence for memory-based processing during reading comprehension. We are currently conducting a large individual-differences study in my lab to begin exploring these possibilities.
4.3. Further Development of Memory-Based Theories of Automaticity

In large part, memory-based theories of automaticity—particularly EBRW—were successful in explaining findings discussed throughout Section 3. However, several aspects of the results considered above also revealed limitations in the theoretical assumptions of these theories. Other findings in the extant literature on reading comprehension further suggest that the assumptions of these theories will require revision if they are to provide a complete account of the role of memory-based automatization in reading comprehension. Below, I consider two key ways in which some modification of the memory-based theories would improve their explanatory power.

4.3.1. Forgetting

Several of the experiments described in Section 3.1 found evidence for forgetting across delays. For example, as shown in Figure 3, reading times were significantly slower at the outset of the second practice session than at the end of the first practice session (see also Rawson, 2004; Rawson & Middleton, 2009). Rickard (2007) has also demonstrated slowing of response times across delays between practice sessions in other cognitive tasks. The finding of forgetting across a delay is not surprising. What is surprising is that this fundamental aspect of memory has been largely ignored by memory-based theories of automaticity. As Rickard rightly noted, ‘‘a complete theory of human skill acquisition must account for the effects of the delays between
sessions on learning and performance’’ (p. 297). Forgetting is likely to be a significant factor influencing the role of memory-based automaticity in reading comprehension in particular, given the typical time intervals between repetitions of linguistic units in natural language. Given that memory-based theories assume that basic memory processes underlie speed-ups with practice, the addition of a forgetting parameter to reflect negatively accelerated loss of prior interpretations over time would seem straightforward (cf. the robust pattern found in accuracy measures of explicit memory; Rubin & Wenzel, 1996). Nevertheless, findings from skill acquisition research suggest that capturing the magnitude and rate of loss of memory-based automaticity across delays may not be so simple. For example, in recent research involving an alphabet arithmetic task (Wilkins & Rawson, 2009), we found markedly greater loss across a delay for item-specific practice effects (indicative of memory-based automaticity) versus item-general practice effects (indicative of gains in algorithm efficiency), which has implications for the competitiveness of retrieval against the algorithm after a delay. Regarding rate of loss, Anderson, Fincham, and Douglass (1999) proposed that the overall strength of an item in memory is the sum of individual trace strengths that each decays as a function of the age of the trace. In their model, the age of a trace is defined in terms of number of blocks of practice (e.g., the trace for an item presented in Block 2 has an age of 6 blocks in Block 8). Importantly, to fit data both within and across sessions, the estimate of age contained two components, age = x + m * H, where x is the age of the trace in blocks at the end of a practice session, m is the number of days between the previous session and the current practice session, and H is the number of blocks equivalent to the elapse of 1 day.
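This age computation, and the idea of item strength as a sum of decaying trace strengths, can be sketched directly (Python; H is set here to 9.8, the average of Anderson et al.'s estimates, and the power-function decay exponent is an illustrative assumption rather than their fitted value):

```python
def trace_age(blocks_since_end, days_elapsed, H=9.8):
    """Age of a memory trace in practice-block units, following
    Anderson, Fincham, and Douglass (1999): age = x + m * H, where
    x is the trace's age in blocks at the end of its session, m is
    the number of days since that session, and H is the number of
    blocks equivalent to one elapsed day."""
    return blocks_since_end + days_elapsed * H

def item_strength(trace_ages, decay=0.5):
    """Overall item strength as the sum of power-decaying trace
    strengths (the decay exponent here is illustrative only)."""
    return sum(age ** -decay for age in trace_ages if age > 0)

# A trace from Block 1 of a 20-block session is 19 blocks old at the
# end of the session, but ages only 2 * 9.8 = 19.6 more blocks over
# the next 2 days.
end_of_session = trace_age(19, 0)   # 19.0
two_days_later = trace_age(19, 2)   # 19 + 19.6 = 38.6
print(end_of_session, two_days_later)
```

Because age grows by only H block-equivalents per real day, strength loss across a delay is much shallower than clock time alone would suggest, which is what motivated Anderson et al.'s intervening-events interpretation discussed next.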
Across experiments, estimates of H ranged from 4.6 to 14.0, with an average of 9.8. So, the trace of an item presented in Block 1 of a 20-block practice session would be 19 blocks old at the end of the session but would only age another 19.6 blocks across the next 2 days. Based on these parameter estimates, Anderson et al. proposed that ‘‘clock time is not the right way to think of the critical variable. It might indicate that the critical variable is the number of intervening events and that there are fewer of these after the experimental session ends’’ (p. 1123). This interpretation highlights why developing more complete models of memory-based automaticity in reading comprehension may not be a simple matter. In general, more powerful memory-based theories would not only assume that loss occurs but would also include explicit assumptions concerning why loss occurs (e.g., interference from intervening events).

4.3.2. What’s in an Instance?

This question represents perhaps the greatest challenge for fully understanding the role of memory-based automatization in reading comprehension. Virtually all prior research on memory-based theories has involved simple
cognitive tasks (e.g., alphabet arithmetic, dot counting). In these tasks, the stimulus is discrete (e.g., A + 2), there is only one processing goal for that stimulus (find the sum), there is only one possible solution (A + 2 always equals C), and the context in which the problem is encountered does not matter. Although a small number of automaticity studies have examined other aspects of the information encoded in an instance (e.g., Logan, 1990; Logan, Taylor, & Etherton, 1996), the implicit or explicit assumption in most research on memory-based theories is that instances simply contain information about a discrete, unambiguous stimulus and a response. In contrast, in reading comprehension, the functional stimulus is much less obvious—when reading a textbook chapter, is a ‘‘stimulus’’ the chapter, paragraph, sentence, clause, phrase, word, morpheme, phoneme, or alphanumeric character? Additionally, for a stimulus at any given grain size, there is usually more than one processing goal—for a given word, processing goals include word identification, syntactic role assignment, meaning assignment, and thematic role assignment.[4] Furthermore, as repeatedly illustrated in discussion of prior research in Section 3, many linguistic stimuli afford more than one possible interpretation. Moreover, the context in which a linguistic stimulus is encountered clearly does influence processing. Thus, the answer to ‘‘What’s in an instance?’’ seems much less straightforward in reading comprehension. A reasonable starting assumption consistent with memory-based theories is that the stimulus is defined by the grain size of the linguistic unit handled by a particular interpretive process, and what is stored in an instance is the stimulus plus the interpretation generated by that process. Consider the simple sentence, ‘‘The veterinarian found the calf.’’ For meaning assignment, syntactic role assignment, and thematic role assignment, the grain size of the stimulus is presumably the word.
Assuming that these are three separate processes, applying them to the three content words in ‘‘The veterinarian found the calf’’ would yield nine instances, as illustrated in Panel A of Figure 6. However, this simplistic account is ill-equipped to handle many findings in the literature. For example, various sources of evidence suggest that the language processing system is sensitive to probabilistic information about associations between word meanings, syntactic roles, and thematic roles (e.g., thematic agents are most commonly subjects, and thematic patients are most commonly direct objects). Similarly, syntactic category information can influence the resolution of word meaning (e.g., the noun ‘‘duck’’ refers to a waterfowl, whereas the verb ‘‘duck’’ refers to a crouching motion; Folk & Morris, 2003). Furthermore, not all reading comprehension theories assume
[Footnote 4] In lay terms, thematic roles concern who did what to whom, when, where, and with what.
[Figure 6. Panels A–E each represent the number and content of instances stored from processing the example sentence ‘‘The veterinarian found the calf’’ based on different theoretical assumptions about what information is stored in an instance (see Section 4.3.2 for further explanation).

Panel A — nine atomic instances, one per content word per process:
veterinarian: meaning = animal doctor | syntactic role = subject | thematic role = agent
found: meaning = located | syntactic role = main verb | thematic role = action
calf: meaning = baby cow | syntactic role = direct object | thematic role = patient

Panel B — three instances, one per content word, each bundling all three kinds of information (e.g., stimulus = found; meaning = located; syntactic role = main verb; thematic role = action).

Panel C — three supralexical instances over the stimulus ‘‘veterinarian found calf,’’ one per process: one containing the three word meanings, one the three syntactic roles, and one the three thematic roles.

Panel D — a single supralexical instance combining all of the above: veterinarian = animal doctor/subject/agent; found = located/main verb/action; calf = baby cow/direct object/patient.

Panel E — a subset of atomic instances (e.g., veterinarian = subject; found = located; found = main verb; found = action; calf = direct object) linked by associations between instances generated concurrently by different processes.]
that meaning assignment, syntactic role assignment, and thematic role assignment are separate processes (MacDonald, Pearlmutter, & Seidenberg, 1994). If so, another possibility is that instances for a given word contain multiple kinds of interpretive information (see Panel B in Figure 6). However, even these more information-rich instances are insufficient to account for some of the findings discussed in Section 3. For EBRW to explain the verb bias effects described in Section 3.2.1, I added the assumption that instances stored from prior encounters include not only information about the syntactic role of the verb itself (e.g., ‘‘found’’ ¼ main verb) but also about the larger syntactic structure in which it participated. Of course, syntactic role assignment is just one aspect of syntactic processing, which also involves representing the grammatical relationships between words in a sentence. Given that the functional stimuli for syntactic parsing processes include multiword phrases, the instances generated would thus include the syntactic structure information assumed to underlie verb bias effects (as in Panel C of Figure 6, either in addition to or instead of the more atomic instances depicted in Panels A and B). Supralexical instances of this sort would also support EBRW’s account of the OR/SR clause research described in Section 3.2.2. Hare et al.’s (2003) demonstration of sense-specific verb biases (e.g., ‘‘found’’ is more likely to take a direct object when it means ‘‘locate’’ vs. ‘‘realize’’) further suggests that verbs participate in instances containing both meaning and syntactic structure information (as in Panel D of Figure 6). Superordinate instances of this sort may also support EBRW’s interpretation of the subordinate bias effect described in Section 3.2.3, which required the assumption that prior instances of ambiguous words contain information about the context in which the word was encountered along with the word’s meaning in that context. 
However, increasingly hefty instances begin to strain the starting assumption that what is stored in an instance is the stimulus plus the interpretation generated by a particular process—what process would generate instances of this sort? An alternative to assuming superordinate instances would involve revising the assumption currently held by memory-based theories that instances are independent of one another, to permit associations between atomic instances that are generated concurrently by different processes (Panel E of Figure 6). More generally, the preceding discussion suggests that theoretical assumptions about what information is represented in an instance should be constrained by theories of reading comprehension. Further interplay between automaticity theories and reading comprehension theories will likely benefit both domains, to the extent that it leads to further specification of what information is ultimately represented in memory as a consequence of language processing.
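One way to make these representational alternatives concrete is to render them as simple data structures. The sketch below (Python; all field names and values are hypothetical illustrations of the Figure 6 alternatives, not a proposal from the literature) contrasts atomic instances, richer bundled instances, and atomic instances linked by associations:

```python
# Panel A style: atomic instances -- one instance per word per
# interpretive process (meaning, syntactic role, thematic role).
atomic = [
    {"stimulus": "found", "meaning": "located"},
    {"stimulus": "found", "syntactic_role": "main verb"},
    {"stimulus": "found", "thematic_role": "action"},
    # ...plus six more instances for "veterinarian" and "calf"
]

# Panel B style: one richer instance per word, bundling all of the
# interpretive information generated for that word.
rich = {
    "stimulus": "found",
    "meaning": "located",
    "syntactic_role": "main verb",
    "thematic_role": "action",
}

# Panel E style: instances stay atomic, but associations record which
# instances were generated concurrently by different processes
# (here, pairs of indices into `atomic`).
associations = [(0, 1), (0, 2), (1, 2)]

# Retrieving one atomic instance can then follow its associations to
# recover the other concurrently generated information.
linked_to_first = [atomic[j] for (i, j) in associations if i == 0]
print(linked_to_first)
```

The design choice matters for the theory: bundled instances (Panel B and beyond) require some process that produces them, whereas associated atomic instances relax only the independence assumption while leaving each instance's origin unchanged.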
Automaticity in Reading Comprehension
225
5. Redefining Automaticity in Reading Comprehension

Although much of the discussion above was focused on evaluating the viability of one class of process-based account (memory-based theories, with particular emphasis on EBRW) for explaining automaticity in reading comprehension, the intent was also to illustrate the promise of process-based theories of automaticity for advancing our understanding of reading comprehension more generally. First, given that process-based theories are concerned with explaining how the representations and processes that underlie task performance change with practice, they promote research questions that treat reading comprehension as a dynamic and probabilistic system rather than questions about static, dichotomous end states. For example, an issue of heated debate for several years within the reading comprehension literature concerned which kinds of inference were made automatically and which were not (McKoon & Ratcliff, 1992; Singer, Graesser, & Trabasso, 1994), with particular focus on which inferences were fast and obligatory. In contrast to the dichotomous questions that often arise from property-based debates (e.g., Are causal inferences fast? Are causal inferences obligatory?), process-based theories of automaticity motivate more dynamic, probabilistic questions (e.g., How do the speed and/or likelihood of causal inferences change with experience, and why?). Second, process-based theories of automaticity may provide unified accounts of many different language processing phenomena both within and across research areas. Within an area of research on a particular reading comprehension process (e.g., conceptual combination), some theories may focus on the processing of novel stimuli (e.g., Estes, 2003; Wisniewski & Middleton, 2002), whereas other theories focus on the processing of familiar stimuli (e.g., Andrews, Miller, & Rayner, 2004; Juhasz, Inhoff, & Rayner, 2005).
Process-based theories of automaticity can provide the missing link by describing how stimuli transition from novel to familiar. Process-based theories of automaticity also provide theoretical architectures that can unify accounts of language processing across areas of research on different component processes. For example, as illustrated here, EBRW provides a unitary theoretical account of item-specific practice effects in lexical ambiguity resolution, syntactic parsing, and conceptual combination. Third, process-based theories change the grain size at which automaticity is conceptualized. For example, although one or more component processes within the reading comprehension system may increasingly rely on memory-based processing with practice, at least some of the component processes in the system are unlikely to do so. Thus, according to memory-based theories, it is a misnomer to describe reading comprehension as automatic. Indeed, it is likely a misnomer to describe a particular
226
Katherine A. Rawson
component process as automatic, except as a matter of degree. According to memory-based theories, performance is automatic when based on retrieval of prior interpretations for particular stimuli. If so, then automaticity is most precisely defined at the level of a processing episode for a particular item, and a component can only be described as automatic in relative terms (the proportion of processing episodes for which interpretation was based on retrieval of prior interpretations). This chapter began with two key questions: What does it mean to say that reading comprehension is automatic? And how does reading comprehension become automatic? In the literature on reading comprehension, automaticity has traditionally been defined in terms of properties of performance. In contrast, I have advocated for conceptualizing automaticity in terms of underlying cognitive mechanisms that give rise to properties of interest rather than in terms of the properties themselves. Concerning what it means to say that reading comprehension is automatic, process-based theories of automaticity arguably provide more powerful and precise answers than property-list accounts. For example, according to a memory-based account, ‘‘Automaticity is memory retrieval: Performance is automatic when it is based on single-step direct-access retrieval of past solutions from memory’’ (Logan, 1988, p. 493). According to the algorithm efficiency account provided by ACT-R, ‘‘To an approximation, we may say that a production is automatic to the degree that it is strong’’ (Anderson, 1992, p. 170). Of course, a more radical answer may involve doing away with the ‘‘A’’ word altogether. To revisit, one perennial problem with property-list accounts has been ambiguity in the necessary and sufficient properties of automaticity.
Whereas one solution involves explicit declaration of definitional properties (e.g., ‘‘Is syntactic parsing automatic, where automatic refers to fast and obligatory?’’), another solution is to skip the middleman altogether (‘‘Is syntactic parsing fast and obligatory?’’). The same logic applies to process-based theories of automaticity (e.g., ‘‘Is syntactic parsing automatic, where automatic refers to processing based on the retrieval of past interpretations of specific stimuli?’’ vs. ‘‘To what extent does syntactic parsing involve retrieval of past interpretations of specific stimuli?’’). In closing, although process-based theories of automaticity will require further specification to realize their potential for more completely explaining the dynamic and complex comprehension system, these theories will promote systematic exploration of how reading comprehension processes and representations change with experience.
ACKNOWLEDGMENTS

Preparation of this chapter was supported by a Collaborative Award from the James S. McDonnell Foundation 21st Century Science Initiative in Bridging Brain, Mind and Behavior. Thanks to John Dunlosky for helpful comments on an earlier version of this chapter.
REFERENCES

Anderson, J. R. (1982). Acquisition of a cognitive skill. Psychological Review, 89, 369–406.
Anderson, J. R. (1987). Skill acquisition: Compilation of weak-method problem solutions. Psychological Review, 94, 192–210.
Anderson, J. R. (1992). Automaticity and the ACT* theory. American Journal of Psychology, 105, 165–180.
Anderson, J. R. (1996). ACT: A simple theory of complex cognition. American Psychologist, 51, 355–365.
Anderson, J. R., Fincham, J. M., & Douglass, S. (1999). Practice and retention: A unifying analysis. Journal of Experimental Psychology: Learning, Memory, and Cognition, 25, 1120–1136.
Anderson, J. R., & Lebiere, C. (1998). The atomic components of thought. Mahwah, NJ: Erlbaum.
Andrews, S., Miller, B., & Rayner, K. (2004). Eye movements and morphological segmentation of compound words: There is a mouse in mouse trap. European Journal of Cognitive Psychology, 16, 285–311.
Bannard, C., & Matthews, D. (2008). Stored word sequences in language learning. Psychological Science, 19, 241–248.
Bell, L. C., & Perfetti, C. A. (1994). Reading skill: Some adult comparisons. Journal of Educational Psychology, 86, 244–255.
Binder, K. S., & Rayner, K. (1998). Contextual strength does not modulate the subordinate bias effect: Evidence from eye fixations and self-paced reading. Psychonomic Bulletin & Review, 5, 271–276.
Blessing, S. B., & Anderson, J. R. (1996). How people learn to skip steps. Journal of Experimental Psychology: Learning, Memory, and Cognition, 22, 576–598.
British National Corpus, Version 3 (BNC XML Edition, 2007). Distributed by Oxford University Computing Services on behalf of the BNC Consortium. URL: http://www.natcorp.ox.ac.uk/.
Brown, T. L., Gore, C. L., & Carr, T. H. (2002). Visual attention and word recognition in Stroop color naming: Is word recognition ‘‘automatic’’? Journal of Experimental Psychology: General, 131, 220–240.
Christianson, K., Hollingworth, A., Halliwell, J. F., & Ferreira, F. (2001). Thematic roles assigned along the garden path linger. Cognitive Psychology, 42, 368–407.
Cunningham, A. E., & Stanovich, K. E. (1991). Tracking the unique effects of print exposure in children: Associations with vocabulary, general knowledge, and spelling. Journal of Educational Psychology, 83, 264–274.
Duffy, S. A., Morris, R. K., & Rayner, K. (1988). Lexical ambiguity and fixation times in reading. Journal of Memory and Language, 27, 429–446.
Dunlosky, J., Hertzog, C., & Moman-Powell, A. (2005). The contribution of mediator-based deficiencies to age-related differences in associative learning. Developmental Psychology, 41, 389–400.
Estes, Z. (2003). A tale of two similarities: Comparison and integration in conceptual combination. Cognitive Science, 27, 911–921.
Ferreira, F., Christianson, K., & Hollingworth, A. (2001). Misinterpretations of garden-path sentences: Implications for models of sentence processing and reanalysis. Journal of Psycholinguistic Research, 30, 3–20.
Flores d’Arcais, G. B. (1988). Automatic processes in language comprehension. In G. Denes, C. Semenza, & P. Bisiacchi (Eds.), Perspectives on cognitive neuropsychology (pp. 91–114). Hillsdale, NJ: Lawrence Erlbaum.
Folk, J. R., & Morris, R. K. (2003). Effects of syntactic category assignment on lexical ambiguity resolution in reading: An eye movement analysis. Memory & Cognition, 31, 87–99.
Gagné, C. L., & Shoben, E. J. (1997). Influence of thematic relations on the comprehension of modifier-noun combinations. Journal of Experimental Psychology: Learning, Memory, and Cognition, 23, 71–87.
Giambra, L. M., & Arenberg, D. (1993). Adult age differences in forgetting sentences. Psychology and Aging, 8, 451–462.
Greene, S. B., McKoon, G., & Ratcliff, R. (1992). Pronoun resolution and discourse models. Journal of Experimental Psychology: Learning, Memory, and Cognition, 18, 266–283.
Haider, H., & Frensch, P. A. (1996). The role of information reduction in skill acquisition. Cognitive Psychology, 30, 304–337.
Haider, H., & Frensch, P. A. (1999). Eye movement during skill acquisition: More evidence for the information-reduction hypothesis. Journal of Experimental Psychology: Learning, Memory, and Cognition, 25, 172–190.
Hare, M., McRae, K., & Elman, J. L. (2003). Sense and structure: Meaning as a determinant of verb subcategorization preferences. Journal of Memory and Language, 48, 281–303.
Hare, M., McRae, K., & Elman, J. L. (2004). Admitting that admitting verb sense into corpus analyses makes sense. Language and Cognitive Processes, 19, 181–224.
Jackson, N. E. (2005). Are university students’ component reading skills related to their text comprehension and academic achievement? Learning and Individual Differences, 15, 113–139.
Jenkins, J. R., Fuchs, L. S., van den Broek, P., Espin, C., & Deno, S. L. (2003). Sources of individual differences in reading comprehension and reading fluency. Journal of Educational Psychology, 95, 719–729.
Jenkins, L., & Hoyer, W. J. (2000). Instance-based automaticity and aging: Acquisition, reacquisition, and long-term retention. Psychology and Aging, 15, 551–565.
Juhasz, B. J., Inhoff, A. W., & Rayner, K. (2005). The role of interword spaces in the processing of English compound words. Language and Cognitive Processes, 20, 291–316.
Kambe, G., Rayner, K., & Duffy, S. A. (2001). Global context effects on processing lexically ambiguous words: Evidence from eye fixations. Memory & Cognition, 29, 363–372.
Keller, F., & Lapata, M. (2003). Using the web to obtain frequencies for unseen bigrams. Computational Linguistics, 29, 459–484.
Kidd, E., Brandt, S., Lieven, E., & Tomasello, M. (2007). Object relatives made easy: A cross-linguistic comparison of the constraints influencing young children’s processing of relative clauses. Language and Cognitive Processes, 22, 860–897.
Kilborn, K. W., & Friederici, A. D. (1994). Cognitive penetrability of syntactic priming in Broca’s aphasia. Neuropsychology, 8, 83–90.
King, J., & Just, M. A. (1991). Individual differences in syntactic processing: The role of working memory. Journal of Memory and Language, 30, 580–602.
Kintsch, W. (1998). Comprehension: A paradigm for cognition. Cambridge, UK: Cambridge University Press.
Klapp, S. T., Boches, C. A., Trabert, M. L., & Logan, G. D. (1991). Automatizing alphabet arithmetic: II. Are there practice effects after automaticity is achieved? Journal of Experimental Psychology: Learning, Memory, and Cognition, 17, 196–209.
Lee, F. J., & Anderson, J. R. (2001). Does learning a complex task have to be complex?: A study in learning decomposition. Cognitive Psychology, 42, 267–316.
Logan, G. D. (1988). Toward an instance theory of automatization. Psychological Review, 95, 492–527.
Logan, G. D. (1990). Repetition priming and automaticity: Common underlying mechanisms? Cognitive Psychology, 22, 1–35.
Logan, G. D. (1992). Shapes of reaction-time distributions and shapes of learning curves: A test of the instance theory of automaticity. Journal of Experimental Psychology: Learning, Memory, and Cognition, 18, 883–914.
Logan, G. D. (1995). The Weibull distribution, the power law, and the instance theory of automaticity. Psychological Review, 102, 751–756.
Logan, G. D., & Klapp, S. T. (1991). Automatizing alphabet arithmetic: I. Is extended practice necessary to produce automaticity? Journal of Experimental Psychology: Learning, Memory, and Cognition, 17, 179–195.
Logan, G. D., Taylor, S. E., & Etherton, J. L. (1996). Attention in the acquisition and expression of automaticity. Journal of Experimental Psychology: Learning, Memory, and Cognition, 22, 620–638.
MacDonald, M. C., Pearlmutter, N. J., & Seidenberg, M. S. (1994). Lexical nature of syntactic ambiguity resolution. Psychological Review, 101, 676–703.
Martin, C., Vu, H., Kellas, G., & Metcalf, K. (1999). Strength of discourse context as a determinant of the subordinate bias effect. Quarterly Journal of Experimental Psychology, 52A, 813–839.
McKoon, G., & Ratcliff, R. (1992). Inference during reading. Psychological Review, 99, 440–466.
National Center for Education Statistics. (2006). National assessment of adult literacy (NAAL): A first look at the literacy of America’s adults in the 21st century. Institute of Education Sciences, U.S. Department of Education, NCES 2006-470.
Noveck, I. A., & Posada, A. (2003). Characterizing the time course of an implicature: An evoked potentials study. Brain and Language, 85, 203–210.
Old, S. R., & Naveh-Benjamin, M. (2008). Differential effects of age on item and associative measures of memory: A meta-analysis. Psychology and Aging, 23, 104–118.
Palmeri, T. J. (1997). Exemplar similarity and the development of automaticity. Journal of Experimental Psychology: Learning, Memory, and Cognition, 23, 324–354.
Palmeri, T. J. (1999). Theories of automaticity and the power law of practice. Journal of Experimental Psychology: Learning, Memory, and Cognition, 25, 543–551.
Perfetti, C. A. (1988). Verbal efficiency in reading ability. In M. Daneman, G. E. MacKinnon, & T. G. Waller (Eds.), Reading research: Advances in theory and practice (Vol. 6, pp. 109–143). San Diego, CA: Academic Press.
Rawson, K. A. (2004). Exploring automaticity in text processing: Syntactic ambiguity as a test case. Cognitive Psychology, 49, 333–369.
Rawson, K. A. (2007). Testing the shared resource assumption in theories of text processing. Cognitive Psychology, 54, 155–183.
Rawson, K. A., & Middleton, E. L. (2009). Memory-based processing as a mechanism of automaticity in text comprehension. Journal of Experimental Psychology: Learning, Memory, and Cognition, 35, 353–369.
Rawson, K. A., & Touron, D. R. (2009). Age differences and similarities in the shift from computation to retrieval during reading comprehension. Psychology and Aging, 24, 423–437.
Rayner, K. (1998). Eye movements in reading and information processing: 20 years of research. Psychological Bulletin, 124, 372–422.
Reali, F., & Christiansen, M. H. (2007a). Processing of relative clauses is made easier by frequency of occurrence. Journal of Memory and Language, 57, 1–23.
Reali, F., & Christiansen, M. H. (2007b). Word chunk frequencies affect the processing of pronominal object-relative clauses. Quarterly Journal of Experimental Psychology, 60, 161–170.
Rickard, T. C. (1997). Bending the power law: A CMPL theory of strategy shifts and the automatization of cognitive skills. Journal of Experimental Psychology: General, 126, 288–311.
Rickard, T. C. (1999). A CMPL alternative account of practice effects in numerosity judgment tasks. Journal of Experimental Psychology: Learning, Memory, and Cognition, 25, 532–542.
Rickard, T. C. (2004). Strategy execution in cognitive skill learning: An item-level test of candidate models. Journal of Experimental Psychology: Learning, Memory, and Cognition, 30, 65–82.
Rickard, T. C. (2007). Forgetting and learning potentiation: Dual consequences of between-session delays in cognitive skill learning. Journal of Experimental Psychology: Learning, Memory, and Cognition, 33, 297–304.
Rubin, D. C., & Wenzel, A. E. (1996). One hundred years of forgetting: A quantitative description of retention. Psychological Review, 103, 734–760.
Saffran, J. R. (2003). Statistical language learning: Mechanisms and constraints. Current Directions in Psychological Science, 12, 110–114.
Schneider, W., Dumais, S. T., & Shiffrin, R. M. (1984). Automatic and control processing and attention. In R. Parasuraman & R. Davies (Eds.), Varieties of attention (pp. 1–27). New York: Academic Press.
Schneider, W., & Shiffrin, R. M. (1977). Controlled and automatic human information processing: I. Detection, search, and attention. Psychological Review, 84, 1–66.
Singer, M., Graesser, A. C., & Trabasso, T. (1994). Minimal or global inference during reading. Journal of Memory and Language, 33, 421–441.
Stanovich, K. E., & Cunningham, A. E. (1992). Studying the consequences of literacy within a literate society: The cognitive correlates of print exposure. Memory & Cognition, 20, 51–68.
Stanovich, K. E., & West, R. F. (1989). Exposure to print and orthographic processing. Reading Research Quarterly, 24, 402–433.
Storms, G., & Wisniewski, E. J. (2005). Does the order of head noun and modifier explain response times in conceptual combination? Memory & Cognition, 33, 852–861.
Traxler, M. J., Morris, R. K., & Seely, R. E. (2002). Processing subject and object relative clauses: Evidence from eye movements. Journal of Memory and Language, 47, 69–90.
Trueswell, J. C., Tanenhaus, M. K., & Kello, C. (1993). Verb-specific constraints in sentence processing: Separating effects of lexical preference from garden-paths. Journal of Experimental Psychology: Learning, Memory, and Cognition, 19, 528–553.
Vu, H., Kellas, G., Metcalf, K., & Herman, R. (2000). The influence of global discourse on lexical ambiguity resolution. Memory & Cognition, 28, 236–252.
Vu, H., Kellas, G., & Paul, S. T. (1998). Sources of sentence constraint on lexical ambiguity resolution. Memory & Cognition, 26, 979–1001.
Walczyk, J. J. (2000). The interplay between automatic and control processes in reading. Reading Research Quarterly, 35, 554–566.
West, R. F., & Stanovich, K. E. (1991). The incidental acquisition of information from reading. Psychological Science, 2, 325–329.
Wilkins, N. J., & Rawson, K. A. (2009). Loss of cognitive skill across delays: Constraints for theories of cognitive skill acquisition. Manuscript under review.
Wisniewski, E. J., & Middleton, E. L. (2002). Of bucket bowls and coffee cup bowls: Spatial alignment in conceptual combination. Journal of Memory and Language, 46, 1–23.
CHAPTER SIX

Rethinking Scene Perception: A Multisource Model

Helene Intraub

Contents
1. Introduction  232
2. Scene Perception as an Act of Spatial Cognition  235
  2.1. Definitions: What is a Scene?  235
  2.2. An Illustrative Anecdote  237
  2.3. A Multisource Model of Scene Representation  238
  2.4. Boundary Extension as a Source Monitoring Error  240
  2.5. Effects of Divided Attention and Stimulus Duration on Boundary Extension  242
3. Multisource Scene Representation: Behavioral and Neuroimaging Picture Studies  244
  3.1. Denoting a Location: The Importance of View-Boundaries  244
  3.2. Boundary Extension and Scene-Selective Regions of the Brain  247
4. Multisource Scene Representation: Exploring Peripersonal Space  248
  4.1. Haptic Exploration: Sighted Observers and a Deaf and Blind Observer  251
  4.2. Cross-Modal Boundary Extension  255
  4.3. Monocular Tunnel Vision and Boundary Extension  256
  4.4. Possible Clinical Implications  258
5. Summary and Conclusions  259
Acknowledgment  261
References  261
Abstract

Traditional approaches to scene perception begin with the visual input and track its progress through a series of very short-term memory buffers. While providing explanations for errors of omission (e.g., change blindness), such models are not as well suited for explaining rapid errors of commission, such as boundary extension [Intraub, H., & Dickinson, C. A. (2008). False memory 1/20th of a second later: What the early onset of boundary extension reveals about perception. Psychological Science, 19, 1007–1014]. I will present a multisource
Psychology of Learning and Motivation, Volume 52, ISSN 0079-7421, DOI: 10.1016/S0079-7421(10)52006-1. © 2010 Elsevier Inc. All rights reserved.
model of scene perception that begins instead with an egocentric reference frame. Even when the primary input is visual, the content that fills out this framework is derived from multiple sources (e.g., visual sensory, amodal, conceptual, and contextual). The multisource framework provides a novel explanation of boundary extension as a source monitoring error [Johnson, M. K., Hashtroudi, S., & Lindsay, D. S. (1993). Source monitoring. Psychological Bulletin, 114, 3–28] that arises when one attempts to discern which portion of the entire scene representation matches the visual sensory source. Behavioral and neuroimaging research with pictures and research on visual and haptic exploration of peripersonal space will be discussed. In the multisource view, scene perception is an act of spatial cognition that subserves many modalities—one of which is vision.
1. Introduction

We sit in rooms, walk through forests, work at desks, and cook in kitchens. How well can we remember specific views of the world we interact with every day? Research on visual scene perception and memory provides us not only with examples of stunning recognition memory performance for massive numbers of photographs (Standing, 1973; Standing, Conezio, & Haber, 1970), but also with stunning failures to recognize even sizeable changes in a photograph across a brief transient (Rensink, O’Regan, & Clark, 1997) or while a change slowly unfolds right in front of the viewer’s eyes (Simons, Franconeri, & Reimer, 2000). Contrasts such as these have fueled debates about the extent to which we can retain the details of what we see (e.g., Henderson & Hollingworth, 2003; Rensink, 2000). Boundary extension (memory beyond the edges of a view; Intraub & Richardson, 1989; see Michod & Intraub, 2009 for an overview) raises a different set of challenges for understanding scene representation because it reflects neither the retention of detail nor the loss of detail; instead it is an error of commission. People remember seeing what was not visible, under conditions that are not expected to induce false memory. Over the past 20 years, boundary extension has been reported under conditions that would normally be expected to support excellent memory, for example, low memory load (as few as 1–3 pictures; Bertamini, Jones, Spooner, & Hecht, 2005; Dickinson & Intraub, 2008; Intraub & Dickinson, 2008; Intraub, Gottesman, Willey, & Zuk, 1996; Intraub, Hoffman, Wetherhold, & Stoehs, 2006), distinctive pictures, and instructions that, following Intraub and Richardson (1989), are worded to draw as much attention to the background and layout of the picture as to the main object(s). Boundary extension has been reported for observers ranging in age from 6 to 84 years old (Seamon, Schlegel, Hiester, Landau, & Blumenthal, 2002, including children with Asperger’s syndrome,
Chapman, Ropar, Mitchell, & Ackroyd, 2005). There is also evidence to suggest that infants as young as 3–4 months of age are subject to the same anticipatory spatial error (Quinn & Intraub, 2007). How soon after the picture is gone does boundary extension occur? Generally speaking, errors of commission are thought to require heavy cognitive loads (e.g., long retention intervals, large stimulus sets, or stimuli that are confusable because they bear semantic or physical similarities) or inattention (cf. Koriat, Goldsmith, & Pansky, 2000). With this in mind, our first formal tests of boundary extension (Intraub & Richardson, 1989) were administered after relatively long retention intervals. The recall/drawing task was administered after retention intervals of 35 min or 2 days (e.g., see Figure 1, left column). Because of reports of excellent picture recognition
Figure 1. Top row shows close-up views of scenes, middle row shows representative participants’ drawings from memory, and the bottom row shows a more wide-angle view of the same scenes. Left column includes part of Figure 1 in Intraub and Richardson (1989) and right column includes part of Figure 1 in Intraub et al. (1996).
memory, we administered our recognition/rating test (in which observers reported if the test picture was the same or showed more or less of the scene on a five-point scale) 2 days after participants had studied 20 pictures for 15 s each. Boundary extension occurred in all of these tests. Subsequent research revealed that our caution was misplaced—boundary extension was evident in recognition/ratings within minutes of observers viewing 18 pictures for 15 s each (Intraub, Bender, & Mangels, 1992). It was evident in drawings made minutes after viewing seven pictures for 250 ms each (at a rate of 1 every 5 s; see Figure 1, right column). In other experiments, observers’ ratings revealed boundary extension when, on each trial, three pictures were briefly presented in succession (325–333 ms each), followed by a 1-s mask and a test picture that the observer immediately rated (Bertamini et al., 2005; Intraub et al., 1996). The same outcome was observed when the masked interval was decreased to as little as 42 ms (Dickinson & Intraub, 2008): an interval commensurate with a saccade. In other research, single pictures were presented for 250 ms, masked for 2 s, and then participants adjusted the boundaries of a test picture to recreate the remembered view; their adjustments resulted in boundary extension (Intraub et al., 2006). Finally, in the most rapid test to date, using the recognition/rating task, Intraub and Dickinson (2008) presented a single picture that was briefly interrupted by a distracting visual noise mask for either 250 ms or 42 ms before reappearing to be rated. On trials where no change at all was made to the picture, observers tended to rate the same view as showing less of the scene than before. A disruption of visual sensory input for less than 1/20th of a second was sufficient for boundary extension to occur in memory for a single picture on each trial. This poses a vexing problem for current models of scene processing.
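The logic of the recognition/rating measure can be made concrete with a hypothetical scoring sketch (this is not the authors’ analysis code; the coding scheme and the ratings below are invented for illustration). Ratings on the five-point scale are coded so that negative values mean the test picture appears closer up (shows less) than the remembered view; because a boundary-extended memory makes an identical test picture look relatively close-up, a mean rating reliably below zero on same-view trials is the signature of boundary extension.

```python
from math import sqrt
from statistics import mean, stdev

# Hypothetical coding of the five-point boundary rating scale: negative
# values mean the test picture looks closer up (shows less) than remembered.
SCALE = {"much closer up": -2, "closer up": -1, "same": 0,
         "more wide-angle": 1, "much more wide-angle": 2}

def boundary_score(ratings):
    """Mean signed rating on same-view trials; values below 0 point to
    boundary extension (the identical picture seems to show less)."""
    return mean(ratings)

def t_against_zero(ratings):
    """One-sample t statistic testing whether the mean rating differs from 0."""
    return mean(ratings) / (stdev(ratings) / sqrt(len(ratings)))

# Invented ratings from one observer on trials where the test picture
# was identical to the studied picture.
responses = ["closer up", "same", "closer up", "much closer up",
             "closer up", "same", "more wide-angle", "closer up"]
coded = [SCALE[r] for r in responses]
print(boundary_score(coded), round(t_against_zero(coded), 2))
```

In the published studies the comparison against the scale midpoint is made across many observers and items; the sketch only fixes the direction of the coding so that ‘‘rating the same view as showing less’’ yields a negative score.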
In general, these models work as follows. Given a brief presentation of a picture (e.g., 250 ms, a ‘‘fixation’s worth’’) with no subsequent mask, a visual sensory representation will be briefly maintained in the visual sensory register (e.g., Loftus, Johnson, & Shimamura, 1985). If the stimulus presentation is masked, aspects of the visual input will immediately be retained in one or more (depending on the specific model) very short-term memory buffers. Candidate buffers include transsaccadic memory (Irwin, 1991, 1993), visual short-term memory (VSTM; Phillips, 1974), conceptual short-term memory (CSTM, which momentarily stores the general concept of the scene; Potter, 1976, 1999), and ultimately, if attention is maintained, some of this information may be consolidated in long-term memory. These models are well established in the field of visual cognition and have motivated critical questions about how quickly a picture of a scene can be identified (within 150 ms; Potter, 1976; e.g., Thorpe, Fize, & Marlot, 1996; VanRullen & Thorpe, 2001), what elements in the picture trigger rapid scene categorization (global layout characteristics are sufficient,
e.g., Biederman, 1981; Greene & Oliva, 2009), how much of the scene can be retained once the stimulus is gone (a controversy that persists; Henderson & Hollingworth, 2003; O’Regan & Noë, 2001; Simons & Rensink, 2005), and what poststimulus factors influence memory for a scene (e.g., visual masking vs. conceptual masking; Intraub, 1984; Loftus & Ginn, 1984; Potter, 1976). However, this type of model neither predicts nor provides a good explanation for a rapidly occurring error of commission. To explain boundary extension within this framework, we must either propose the existence of yet another very short-term buffer (e.g., a ‘‘scene extrapolation buffer’’) or add this computational feature to a limited capacity buffer already in the model (e.g., transsaccadic memory)—a solution that would incorporate boundary extension into the model, but in an ad hoc manner that does not have explanatory power.
2. Scene Perception as an Act of Spatial Cognition

I would like to present an alternative framework for consideration—a multisource model of scene perception (Intraub & Dickinson, 2008) in which scene representation is not conceptualized as a visual representation, even when the stimulus is a picture. At its core, scene representation is an act of spatial cognition. An egocentric spatial framework serves as the ‘‘bones’’ of scene perception, which are then ‘‘fleshed out,’’ in the case of a picture, by visual representation, amodal perception, and associations to both general and specific contexts. The approach I will describe bridges three subfields of cognition that are generally studied in isolation from one another: visual cognition, spatial cognition, and models of long-term memory (specifically, source monitoring; Johnson, Hashtroudi, & Lindsay, 1993; Lindsay, 2008). It also has the benefit of providing a foundation for thinking about scene perception in a way that is not necessarily tied to the visual modality. The model requires a fresh look at some of the definitions and assumptions that have guided research on visual scene perception since the 1970s. I will discuss a new take on some old terminology that I believe can benefit discussion of scene perception in general, but that is required to lay a foundation for explaining the multisource model.
2.1. Definitions: What is a Scene?

Is the photograph in Figure 1 (top of left column) a scene? The expected reply would be a resounding ‘‘yes’’ because scenes are most frequently described in the literature as views of the world. For example, in their working
definition of a scene, Henderson and Hollingworth (1999) captured the implicit characterization that is embraced by many:

In research on high-level scene perception, the concept of scene is typically defined (though often implicitly) as a semantically coherent (and often nameable) view of a real-world environment comprising background elements and multiple discrete objects arranged in a spatially licensed manner. Background elements are taken to be larger-scale, immovable surfaces and structures, such as ground, walls, floors, and mountains, whereas objects are smaller-scale discrete entities that are manipulable (e.g., can be moved) within the scene (p. 244).
The authors themselves were quick to point out that this working definition has shortcomings; in particular, they raised the problem of scope: at what point is a picture too close-up to be considered a view of a scene (e.g., the contents of a drawer) or too encompassing (an aerial view)? The problem of scope, they pointed out, is typically avoided in studies ". . . by using views of environments scaled to a human size. So an encompassing view of a kitchen . . . would be considered a good scene. . ." (p. 244). This working definition has led to other problems as well, notably disagreements about whether or not a particular set of pictures can "count" as a set of scenes. For example, if multiple objects are required before a picture can be a scene, then the photograph in Figure 1 (top of right column), which contains a single object against a background, is not a scene. This is a tricky claim, because the photograph was in fact a view of the world snapped by a photographer. Requirements such as "semantic coherence" and "spatially licensed" organization, and the expectation of "encompassing views," have also raised problems. For example, Zelinsky and Loschky (2005) needed to justify their choice of a photograph of toys in a crib for a visual search experiment that they argued was relevant to search in the world, because nothing constrained the placement of the toys (in the way, for example, that the placement of large appliances is constrained in a kitchen). However, the close-up view of the crib is a plausible view that a parent or baby might frequently see. Similarly, Intraub (2004) needed to justify the use of objects on desktops and on portions of the floor in a 3D boundary extension experiment, because these were not "encompassing" views. Yet they are normal views during day-to-day perception. Why should these stimuli be deemed unsuitable for studying aspects of scene perception and memory?
I believe that many of these arguments arise because the definition of a scene throughout the field frequently focuses on acceptable characteristics of the displays that we use in our research, rather than on characteristics of the 3D world. Our view of a kitchen is sometimes an encompassing view, but not always. In a small kitchen, a view of the stove may preclude a view of the refrigerator because the refrigerator is on the opposite wall, behind the observer. In the world, viewers are embedded within the scenes they observe. We do stand
Rethinking Scene Perception: A Multisource Model
by a desk and look down at the desktop, or kneel on the floor to pick up a child's scattered toys. The ultimate goal of scene perception research, after all, is to understand how we perceive our surroundings, not how we understand pictures. This goal is clearly laid out in most papers, but later on the same term, "scene," that is used to refer to the world is also used to refer to the pictures that serve as stimuli. What I will argue is that a photograph can never be a scene, nor can it show an entire scene, any more than an observer can see his or her surroundings all at once (e.g., Hochberg, 1986; Intraub, 2007). Eye movements, head movements, and body movements are necessary to explore scenes in the world; we are embedded within the scenes with which we interact. I suggest that the term scene be used only to refer to surrounding environments in the world. The pictures we use in our research are not in themselves scenes. Throughout this chapter I will refer to them as proxy views to remind the reader (and myself) that the pictures we use in our research are only 2D surrogates for actual views of the scenes that surround us in the world. I suggest that scene representation be reserved for referring to the mental representation of the surrounding 3D world elicited by a view. If we ask people to remember the proxy view we showed, consider that we are only asking them about one part of their scene representation—their memory for the pictured view.
These definitions will allow us to clearly separate four distinct concepts in discussing research on scene perception and memory: (a) the surrounding 3D scene in the world where the picture was taken, (b) the 2D proxy for a view of that world (a photograph, computer-generated image, or line drawing), (c) the mental representation of the surrounding 3D scene elicited by the view, and (d) the representation of the details originally contained in the proxy view [e.g., textures, colors, objects (identity, orientation, and so forth)]. By accepting the idea that a scene refers to the world surrounding an observer, any view (a wide-angle view of a kitchen or a close-up view of a desktop) is a view of a scene. A view can contain a single object, a pile of objects bearing a scrambled relation to one another (e.g., a picture of a junkyard), or the inside of a drawer. Carmela Gottesman and I have suggested that a more fundamental characterization of what a picture must include to be understood as a view of a scene is the depiction of a location that is cropped by the picture's edges (Gottesman & Intraub, 2002; Intraub, Gottesman, & Bills, 1998). Thus, even an aerial photograph (as in Zelinsky & Schmidt, 2009) is a view: it is a view from a satellite that continues beyond the edges of the given photograph.
2.2. An Illustrative Anecdote

To illustrate the concept of a multisource scene representation, I will provide the following anecdote. Recently, while preparing a talk, I looked closely at the picture at the top of the left column in Figure 1 and was
shocked to realize that I may have misunderstood it for the past 20 years! I had always perceived the picture to be a view of trash cans awaiting trash pick-up at the curb in a suburban neighborhood. I thought that the tripod and camera were positioned in the street, with a nonspecific suburban house behind the camera. In fact, I had acted on this interpretation, requesting that subsequent student photographers refrain from setting tripods in the street for safety reasons. What I had now focused upon were the crossbeams of the fence, noticing that they were the type that usually (but not always) signify the "inside" of a fence. The picture may not have been taken on the street at all! My curiosity piqued, I decided to ask a colleague whether, based on his memory for the picture, he had a sense of what was behind the camera. Without hesitating he immediately replied, "The owner's house." When queried, he said that he had always thought of it as trash cans in the owners' backyard, opposite their house. Was the picture taken in a backyard or out on the street? I asked my graduate student what she thought was behind the camera in the hope of breaking the tie. Without hesitating she replied, "The other side of the alley." I was nonplussed—what alley? The ensuing conversation enlightened us both as to the trash storage methods in different suburban areas of our country. In the Southwest, where she had lived all her life, trash cans were kept in fenced alleyways between suburban homes—something that neither I nor my colleague had experienced in the Eastern and Midwestern suburbs with which we were familiar. We all saw the same proxy view, we all remembered the key content of the proxy (the trash cans, the lid, and the wooden fence), but in addition, we all adamantly claimed to have had a scene representation that captured our understanding of the view and the space in which the camera had been embedded (i.e., "I always thought of it that way").
The representation went beyond the picture and beyond the region that is included when boundary extension takes place. It included the world behind the viewer. Yet, it is important to note that none of us believed that we had seen what was behind the camera.[1]
2.3. A Multisource Model of Scene Representation

As mentioned earlier, traditional approaches to scene processing have focused on memory for the content of the proxy view. The representation is thus based on a single source—the visual input. Intraub and Dickinson (2008) proposed that instead scene representation might be best conceptualized as a multisource representation, even when the observer is simply viewing a picture. The "starting point" in this approach is not the visual sensory input, as in the traditional approach to scene perception, but an underlying spatial structure (frame of reference) that the human observer
[1] I thank James E. Hoffman and Kristin O. Michod for sharing these descriptions.
brings to the event (whether the experience involves exploring the 3D world, watching a movie, or looking at a picture). Researchers have explored many different kinds of spatial reference frames, and different terminologies are used to describe them. Three general categories and typical terminologies are egocentric (e.g., right now, the desk is in front of me), allocentric (e.g., the armchair is on the wall opposite the desk, adjacent to the door), and geographic (e.g., the room is in a house, at the southwestern corner of Pennsylvania; see Allen, 2004, for discussions of different ways of conceptualizing frames of reference). Thus, in this view, at its core, scene perception is an act of spatial cognition. In the case of a single novel view, the most prominent framework would be an egocentric frame of reference that includes the observer's sense of "in front of me," "on my left," "on my right," "above me," "below me," and "behind me" (for a discussion of imagined space, see Bryant, Tversky, & Franklin, 1992; Franklin & Tversky, 1990; Tversky, 2009). Although I will focus on the egocentric reference frame here, it is expected that other frames of reference can also be activated by a proxy view (e.g., a geographic reference frame, as when one is asked the location of a familiar store in a picture; Epstein & Higgins, 2007). The key assumption is that a proxy view will trigger a number of mental activities: (a) visual processing of the proxy view, (b) amodal perception of the objects (Kanizsa, 1979) and surfaces (Nakayama, He, & Shimojo, 1995; Yin, Kellman, & Shipley, 2000) just beyond the boundaries of the view, and (c) categorization of the view (e.g., basic-level categories of natural landscapes such as desert, field, forest, lake, mountain, ocean, and river; Greene & Oliva, 2009) and contextual associations elicited by objects in the view (e.g., Bar, 2004).
These multiple sources of input are organized within the egocentric spatial structure surrounding the viewpoint taken by the observer. In the case of a photograph, the observer takes the viewpoint of the camera (as is also true in viewing films; Hochberg, 1986; Intraub, 2007). Just as the visual field is graded, decreasing in resolution from the fovea outward (see O’Regan, 1992), scene representation is also graded, extending well beyond the visual information in the proxy view. For example, consider a briefly presented picture with the observer fixating the center. The best visual resolution would be at the point of fixation, shading off toward the boundaries of the picture. Where the visual sensory input abruptly ends at the picture’s boundaries, perception does not end. Amodal perception allows the viewer to perceive the scene beyond the edges of the view by completing any objects cropped by the picture’s boundaries (e.g., Kanizsa, 1979) and by continuing cropped surfaces (Fantoni, Hilger, Gerbino, & Kellman, 2008; Yin et al., 2000). Although this is amodal perception (not perception of the visual input), it is nonetheless crucial for comprehending the view. Without it, observers would interpret the proxy view in Figure 1 (top of column 1) as a view of broken trash cans
(i.e., chopped in half) and a fragment of a broken fence (with the tops of the pickets chopped off). But that is not what we perceive. The visual input at the picture's edges tightly constrains amodal perception just beyond the boundaries of the view (cf. Nakayama et al., 1995). World knowledge would support these constraints as well (one knows the expected shape of a trash can). Indeed, all of the participants' drawings of this proxy view in Intraub and Richardson (1989) depicted unbroken (whole) trash cans and the continuation of the fence at the top and side boundaries, as well as the lid at the bottom (as shown in Figure 1). Constraints become less specific for regions that are farther away from the visual information (e.g., the "nonspecific suburban house" behind the camera mentioned earlier, or, in the case of a nonspecific location such as a close-up of a cup on a table, merely "a wall in the room" behind the camera, without a particular type of room being specified). Thus, many aspects of scene representation that were not visually present in the proxy view will be shared across observers: the continuation of the view just beyond the boundaries (as in boundary extension), the categorization of the view as an outdoor scene (that must have a sky above), and, at least for viewers familiar with these types of trash cans and fences in the United States, the interpretation of the view as a suburban scene. However, individual experiences will cause divergences (e.g., when different observers interpreted the trash cans and fence as being in an alley, in front of the house, or in the backyard). When I suggest that scene perception is an act of spatial cognition, I mean that all of these sources of information are organized within an egocentric spatial framework, resulting in a scene representation that is laid out in terms of the space around the viewer.
While the proxy view is visible, viewers have no difficulty discriminating the currently present visual sensory input from the amodally perceived continuation of objects and surfaces beyond the view-boundaries. Put simply, one can simultaneously perceive that these are normal, intact trash cans, yet report that the picture only permits us to see a part of each one. Given the ease of discriminating these sources of information when the picture is present, why then do observers falsely remember having seen beyond the edges of the view when the visible stimulus is interrupted for less than 1/20th of a second before reappearing at test (i.e., boundary extension; Intraub & Dickinson, 2008)?
2.4. Boundary Extension as a Source Monitoring Error

According to the multisource model, once the sensory input is gone, even for a moment, the observer's only recourse is to rely upon memory of the experience. That is, memory for information that had originally been visually perceived (which itself varies in terms of resolution), memory for information that had originally been generated through amodal perception, and memory for the contextual layout that had been elicited. Although we ask the
observer to remember the proxy view and compare it with the test item, it is not simply memory for the proxy view that the observer has. Instead, the observer has in mind the representation of a scene and must now determine which part of that scene representation had been contained within the boundaries of the proxy view—that is, had a visual source. Thus, although unintended, the boundary memory task may in fact be a source monitoring task (Johnson, 2006; Johnson et al., 1993; Lindsay, 2008), more specifically a reality monitoring task (Johnson & Raye, 1981), because we are asking people to decide where the externally presented information ended and the amodally generated information began. The fundamental insight of Johnson and her colleagues is that the source of a memory is not stored in the form of a tag that specifies where the information came from. In most cases, determining the source of a memory is an attribution that is based upon the type and quality of details (perceptual, contextual, semantic, or emotional) that characterize that representation (Johnson, 2006; Johnson et al., 1993; Lindsay, 2008). Different sources are associated with different profiles of characteristics, and source monitoring is a process by which the individual determines which profile the memory best fits. Most of the time, a memory may fall squarely within a category (e.g., "I saw that with my own eyes"). But sometimes it may not. So, for example, if memory for a dream includes characteristics that are unusual for dreams (very high levels of perceptual detail, a well-integrated context, strong emotional response, and clearly defined co-temporal events) but are hallmarks of perceptually based experiences, one might mistakenly attribute the experience to perception. This also explains mundane mental puzzles, such as a common rumination that plagues travelers after leaving for a trip: did I actually turn off the stove before leaving, or did I just think about turning it off?
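The profile-matching idea described above can be made concrete with a small sketch. This is only an illustrative toy, not a model drawn from the source monitoring literature: the three source categories, the three characteristic dimensions, and every numeric value below are invented for the example.

```python
# Toy illustration of source-monitoring attribution: a memory is assigned
# to whichever source profile its characteristics most resemble.
# All numbers are invented for illustration only.

SOURCE_PROFILES = {
    # typical characteristic strengths (0-1): perceptual detail,
    # contextual integration, emotional response
    "perceived": {"perceptual": 0.9, "contextual": 0.8, "emotional": 0.6},
    "dreamed":   {"perceptual": 0.3, "contextual": 0.2, "emotional": 0.7},
    "imagined":  {"perceptual": 0.2, "contextual": 0.4, "emotional": 0.3},
}

def attribute_source(memory: dict) -> str:
    """Return the source whose profile is closest (squared distance)."""
    def distance(profile: dict) -> float:
        return sum((memory[k] - profile[k]) ** 2 for k in memory)
    return min(SOURCE_PROFILES, key=lambda s: distance(SOURCE_PROFILES[s]))

# A dream remembered with unusually high perceptual detail and context
# gets misattributed to perception -- the kind of error described above.
vivid_dream = {"perceptual": 0.85, "contextual": 0.75, "emotional": 0.7}
print(attribute_source(vivid_dream))  # -> perceived
```

The point of the sketch is only that attribution is a comparison of characteristic profiles rather than a lookup of a stored source tag: a memory with an atypical profile for its true source will be assigned to the wrong category.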
Source monitoring has been able to provide an account of many well-known long-term memory errors (see Lindsay, 2008, for a review). What we propose is that the same monitoring process can explain boundary extension. In this view, the boundary-extended region people erroneously remember having seen is not computed in a very short-term buffer after the stimulus is masked. The continuation of the view was always part of the scene representation (i.e., the trash cans were always perceived as whole, and the background was always perceived as continuing beyond the edges of the view). Although the two are readily distinguishable while the visual sensory input is available, distinguishing between memory for the visual information at the periphery of the picture and memory for the amodally generated information just beyond it is much more difficult. The individual must decide at what point the image is just not detailed enough to have been visual. This results in a source monitoring error, in which memory for amodally perceived information that had been tightly constrained by the visual information (and context) is now attributed to having been seen—that is, boundary extension. Although boundary extension is an error with
respect to the proxy view, it is typically a good prediction of upcoming layout just beyond the view and as such may assist in the integration of views (Intraub, 1997, 2002).
2.5. Effects of Divided Attention and Stimulus Duration on Boundary Extension

If boundary extension is a source monitoring error, then we should be able to draw on the source monitoring model to predict ways of influencing the size of the boundary extension error. How might this be accomplished? Once again, while the visual information is present, there is a very sharp and discontinuous delineation between the current visual information and the amodal continuation of the scene. After a brief interruption, the sharpness decreases ("flattens out") in the remembered representation, because visual memory is not a photographic representation of the sensory input. Thus, memory for details in the periphery of the picture and memory for the highly constrained amodal continuation of those details will share many characteristics, causing a source monitoring error. Perhaps, if we were to flatten the gradient between the two further, the threshold for deciding where the visual information ended would be lowered, and the observer would accept a slightly greater swath of amodally perceived space as having been seen before. The outcome of a series of experiments that tested the effect of divided attention on boundary extension is consistent with this interpretation. Dividing attention did not result in an increase in random errors when participants rated the remembered boundaries of a proxy view; instead, when visual attention was divided in a dual-task situation, boundary extension increased. That is, more "nonvisually derived" information was attributed to vision than when attention was not divided. In the initial experiments, Intraub, Daniels, Horowitz, and Wolfe (2008) presented participants with close-up or wider-angle versions of simple scenes, similar to those used in previous boundary extension experiments. However, superimposed on each picture were randomly positioned block 2s and 5s (as shown in Figure 2). Visual
Figure 2 Example of a pair of close-up and wide-angle views with superimposed 2’s and 5’s. From Intraub et al. (2008).
attention was manipulated with a search task in which observers had to report the number of 5s (there could be zero, one, or two on any trial). This is a very difficult search task, made all the more challenging by limiting the display time to only 750 ms on each trial and by presenting the numerals on a photograph. There were three independent conditions: memory only, dual task (giving the search task priority), or search only. In the memory-only and dual-task conditions, after a masked interval, a test picture appeared and participants rated it on the standard five-point boundary recognition/rating scale (as "same," "more close-up," or "more wide-angle" than before). Although the search task was extremely difficult, participants performed above chance in both the search-only condition and the dual-task condition. Critically, search performance was the same across those conditions, demonstrating that dual-task participants had indeed given priority to the search task. When boundary ratings were compared between the dual-task and memory-only conditions, both yielded significant boundary extension; however, boundary extension was greater when attention was divided. In both conditions, on trials on which the stimulus and test pictures were the same close-up view, participants rated the test picture as looking too close-up (indicating that they remembered the original with extended boundaries). When the stimulus and test pictures were different, the typical distractor asymmetry indicative of boundary extension occurred; the stimulus and test pictures were rated as more similar when the close-up was the stimulus than vice versa. This asymmetry signifies boundary extension because, when the closer view is the stimulus, boundary extension in memory would cause a wider view at test to be a fairly good match, whereas boundary extension would have the opposite effect were the wider-angle view presented first.
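The logic of the distractor asymmetry can be illustrated with a small numerical sketch. The "view width" units and the 15% extension factor below are invented for illustration; nothing here is data from the experiments described above.

```python
# Toy illustration (invented numbers, not data from the chapter) of why a
# boundary-extended memory produces the distractor rating asymmetry.

EXTENSION = 1.15  # assume the remembered view is 15% wider than the studied view

def remembered_width(studied_width: float) -> float:
    """Width of the view as it is remembered, with extended boundaries."""
    return studied_width * EXTENSION

def mismatch(studied_width: float, test_width: float) -> float:
    """Discrepancy between memory and the test view (smaller = better match)."""
    return abs(remembered_width(studied_width) - test_width)

CLOSE, WIDE = 100.0, 130.0  # close-up and wider-angle versions of one scene

# Identical close-up at test still mismatches memory: it looks "too close-up".
same_view = mismatch(CLOSE, CLOSE)        # about 15
# Close-up studied, wide test: memory has already drifted toward the wide view.
close_then_wide = mismatch(CLOSE, WIDE)   # about 15
# Wide studied, close test: memory has drifted even farther from the close-up.
wide_then_close = mismatch(WIDE, CLOSE)   # about 49.5

assert close_then_wide < wide_then_close  # the rating asymmetry
```

Under this toy assumption, a wider test view is a fairly good match to the extended memory of a close-up, but a close-up test view is a poor match to the extended memory of a wide view, which is the asymmetry the rating data show.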
Clearly, dividing attention did not introduce random error, but instead (in terms of the multisource framework) increased the acceptance of amodally generated information as having been seen before. The results could not be attributed to observers in the memory-only condition capitalizing on their less demanding task to develop strategies for "beating" the expected rating task (e.g., verbalizing, "the shoe is 0.5 cm from the right edge"), because the same results were replicated under conditions in which, instead of being tested after each trial, the memory test was deferred until the end of the experiment and all participants were naïve as to the nature of the test until it was administered. Again, search was above chance and did not differ between the baseline-search and dual-task conditions; boundary extension was greater in the dual-task than in the memory-only condition; and the rating asymmetry in response to distractors was obtained. Additional tests of the source monitoring hypothesis will have to be conducted. Some tentative support comes from a comparison of boundary extension for the same pictures shown at different stimulus durations.
If we accept that the overlap in similarity between visually generated and amodally generated memory might be greater if visual detail were reduced relative to the highly constrained amodal information, then reducing stimulus duration might be another way to "flatten" the difference between information from the visual source and that derived from amodal perception. Intraub et al. (1996) reported greater boundary extension for 250-ms pictures than for 4.5-s pictures when they were shown at the same rate (one every 5 s). As in the divided attention experiments, rather than introducing more random error into the ratings, the briefer stimulus duration resulted in greater boundary extension. However, this was only a single test with a very small stimulus set (seven stimuli). In ongoing research, Christopher Dickinson and I found in one experiment that boundary extension increased as stimulus duration decreased from 500 to 250 to 100 ms (with rate of presentation held constant). This occurred for multiobject scenes. However, in other research, using tight close-ups of single objects, no difference was obtained as a function of stimulus duration. In this case, the amount of boundary extension was very great in all conditions, so it may have swamped any detectable differences at the durations tested, but the results at this point are unclear. Future research will explore this further and will also explore other means of increasing or decreasing the difference between memory for the internally generated amodal perception and memory for the visually presented information.
3. Multisource Scene Representation: Behavioral and Neuroimaging Picture Studies

In the visual cognition literature, pictorial stimuli have often been referred to as if they were endpoints on a scale of simple to complex visual stimuli. For example, in early research on scene perception, Potter (1976) described the photographs in her study as "complex, meaningful visual events." However, as research has progressed, we find that views of scenes may instead be a distinctive class of stimuli that differ in important ways from other types of visual stimuli.
3.1. Denoting a Location: The Importance of View-Boundaries

Beginning with the behavioral research, boundary extension is not elicited by all picture boundaries. If a display does not depict a location—a view of an otherwise continuous scene—boundary extension does not occur. This is not to say that observers' memories are error free, but that the errors, instead of reflecting boundary extension, tend to be bidirectional, indicating an
Figure 3 Example of close-up and wider angle views of an object on a blank background versus a meaningful background presented to participants in Intraub et al. (1998).
averaging effect—regression to the mean object size (Intraub et al., 1998; see also Gottesman & Intraub, 2002; Legault & Standing, 1992). Figure 3 shows examples of pictures in which the same-sized objects are presented both with and without a location specified (i.e., a blank background or a background that other observers had described as an asphalt road; Intraub et al., 1998). A pattern of errors consistent with boundary extension occurred for pictures containing backgrounds (i.e., that showed a partial view of a continuous scene). However, the error pattern was different when the background was blank—in this case, regression to the mean object size occurred instead (i.e., large objects were remembered as smaller, and small objects were remembered as larger). This suggests that depiction of a background is important for a multisource scene representation to be elicited. However, we found that if we recruited participants' scene knowledge and required them to imagine a specified background (e.g., an asphalt road with the shadow of the cone) while viewing pictures of objects without backgrounds, then boundary extension occurred. In fact, the ratings were indistinguishable from those obtained when the background was visually presented. We demonstrated that this was not an artifact of requiring an imagination task, because when another group of participants was required to imagine colors on the outline objects with blank backgrounds (and no mention of a scene was made), the error pattern shifted back to one that reflected regression to the mean object size.
In the same vein, Gottesman and Intraub (2002) demonstrated that either boundary extension or regression to the mean object size would be observed depending on whether people were led to interpret a blank background as being the location on which an object was photographed (i.e., as part of a scene) or as an unrelated background. The latter was achieved by placing a cutout of a photographed object on a white background in front of the observer. Ultimately, the two conditions were the same: an object on a white background. When that background was understood as being unrelated to the picture, a bidirectional averaging error occurred (big objects were remembered as smaller, and small objects as larger), but when it was understood as depicting the location at which the picture was taken, boundary extension occurred. In terms of the model, a multisource scene representation had been elicited. Interpreting the edge of the picture's white background as a view-boundary appeared to be a critical factor in eliciting boundary extension. The distinctive nature of view-boundaries has been illustrated in other research in which different "types" of surrounding boundaries were compared. Figure 4 shows two of several proxy views that were presented to participants in Gottesman and Intraub (2003). Both pictures yielded boundary extension beyond their edges. Both also include a surrounding border within the picture. In the picture of the sandal and towel on the grass, the edges of the towel surround the sandal; these edges parallel the picture's view-boundaries but are not themselves view-boundaries—they are the edges of an object (the towel), not the edges of a view. In the picture of the desktop, however, among the objects on the desk is a framed photograph, and that photograph has its own view-boundaries.
Participants did not remember seeing a greater expanse of the towel around the sandal, but did remember having seen a greater expanse of the background around the fork (in the picture within the picture). Thus, a boundary
Figure 4 The edges of both pictures provide a view-boundary: inside the picture on the left is an object boundary (the edges of the towel that surround the sandal) and inside the picture on the right is another view-boundary (the edges of the picture on the desk, i.e., the picture within a picture). Based upon two figures in Gottesman and Intraub (2003).
inside a picture appears to elicit boundary extension only when it is a view-boundary. Similarly, DiCola and Intraub (2004) demonstrated that when an object was occluded by a view-boundary, participants remembered having seen more of the object than was visible. In contrast, when the same object was occluded by another object within the scene (with an identical amount of occlusion), this unidirectional error did not occur.
3.2. Boundary Extension and Scene-Selective Regions of the Brain

There are parallel observations in neuroimaging studies that suggest special properties of views of scenes. Paralleling the presence or absence of boundary extension as a function of whether the background depicts a view of a scene or is blank, the parahippocampal place area (PPA) was reported to respond most strongly to views of locations in space (e.g., a room with objects, or an empty corner of a room with no objects), but to respond poorly to objects that were not depicted in a location. Similarly, paralleling the behavioral work showing that imagining a background when looking at a picture with a blank background results in boundary extension (Intraub et al., 1998), PPA responded strongly when participants were required to imagine a location (O'Craven & Kanwisher, 2000). Whereas PPA is thought to respond most strongly to the local spatial layout in a specific view, the retrosplenial cortex (RSC) is thought to be involved in integrating a local view within a broader spatial context—perhaps being related to navigation and recognition of places (Epstein & Higgins, 2007). To determine whether these scene-selective areas would respond to boundary extension in memory, Park, Intraub, Yi, Widders, and Chun (2007) presented observers with a series of photographs in which there were repetitions. In the critical conditions, the repetition was not an identical view; a close-up view would later be followed by a wider view of the same scene, or vice versa. Observers were simply instructed to remember the photographs. The use of repetition was a critical feature of the design; a reduction in the neural response in a predefined area of the brain the second time a stimulus is presented is thought to indicate that the brain area has treated the two stimulus events as the same (a habituation effect; Turk-Browne, Scholl, & Chun, 2008). Put simply, novel items should elicit greater activity than repeated items.
The rationale was that if PPA and RSC are not responsive to boundary extension in memory, then repetition of the same scene (whether a slightly wider view was shown first or second) should result in similar reductions in the neural response. However, if these brain areas respond to boundary extension (i.e., if the extended region is accepted as having been seen before) then, following the asymmetrical response pattern described earlier, a closer picture followed by a wider picture should result in greater
248
Helene Intraub
attenuation than a wider picture followed by a closer picture. Why? Because if the closer picture is first and is remembered with extended boundaries, then when the wider view is presented it should look somewhat similar to what the observer remembers (as in ''seen that before''), whereas if the wider view is first, the close-up would appear to be a very different view, with any boundary extension exaggerating the difference (as in ''that's new'').

As shown in Figure 5, the PPA and the RSC responses yielded an asymmetry. When a closer view was followed later by a wider view, the neural response to the wider view decreased, but when the wider view was followed by the closer view, the neural response to the second view was just as strong as it was to the first (indicating that the region responded to this picture as if it were new). This asymmetry was not observed in the lateral occipital cortex (LOC), which is associated with object recognition. In this area, the size of the object is not expected to matter, just its identity; so whereas PPA and RSC showed the asymmetry, LOC clearly did not; the same habituation of the neural response occurred irrespective of which view was presented first (as in ''seen that before'').

These results suggest that both PPA and RSC are sensitive to boundary extension. However, a subtle difference between the two conditions supported the distinction described earlier between PPA and RSC. Although the PPA showed some sensitivity to boundary extension in the critical conditions, it also responded as if it were retaining some of the specific layout information from the local proxy view, because neural attenuation occurred in the two identity conditions (i.e., a close-up followed by the same close-up, or a wide-angle view followed by the same wide-angle view). In other words, repetition of the close-up was recognized as a repetition of the same view. So the results were mixed for PPA.
However, the RSC responded to the identity conditions differently, showing no habituation: there was no attenuation of the response when the same pictures were repeated. This provides converging evidence for the idea that RSC responds to the integration of an individual view within a larger scenic structure. The lack of attenuation in the identity conditions suggests that the first presentation had been remembered within a larger scene context, and thus its repetition was responded to as novel. The results suggest that both scene-selective areas responded to boundary extension to some degree, but that the RSC was more attuned to placement of the specific view within a more expansive framework.
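The habituation logic behind this design can be made concrete with a small sketch. The numbers below are purely illustrative (not data from Park et al., 2007); the point is only that the asymmetry is computed as the difference in repetition attenuation between the close-wide and wide-close orders.

```python
def attenuation(initial, repeated):
    """Repetition attenuation: the drop in response from the first to the
    second presentation. Positive values indicate habituation, i.e., the
    region treated the repeat as "seen before"."""
    return initial - repeated


# Hypothetical percent-signal-change pairs (initial, repeated) for one ROI.
roi = {
    "close_wide": (0.35, 0.22),  # wider repeat treated as familiar
    "wide_close": (0.35, 0.34),  # closer repeat treated as essentially new
}

asymmetry = attenuation(*roi["close_wide"]) - attenuation(*roi["wide_close"])
# A positive asymmetry is the signature of boundary extension in memory;
# an ROI like LOC, habituating equally in both orders, would yield roughly 0.
shows_boundary_extension = asymmetry > 0
```

An object-recognition region such as LOC would show similar attenuation in both orders, so the same computation would return an asymmetry near zero.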
4. Multisource Scene Representation: Exploring Peripersonal Space

Pictures do not surround the observer, usually subtend a relatively small visual angle (as compared with the scope of the visual field), are 2D representations, and in many ways differ dramatically from the experience of
249
Rethinking Scene Perception: A Multisource Model
Figure 5 Boundary extension in the PPA and RSC but not in the LOC: a representative participant's PPA, RSC, and LOC ROIs are shown on a Talairach-transformed brain. Examples of the close-wide and wide-close viewing conditions are presented in the top row. Hemodynamic responses for close-wide and wide-close conditions are shown for each ROI. Error bars indicate standard error of the mean (S.E.M.). The bottom row shows the same asymmetry in behavioral responses of these participants in a test outside the scanner. Based on Figure 2 in Park et al. (2007).
viewing the world. Still, they have proved to be valuable tools that allow us to learn about scene perception. It is generally assumed that the similarities between viewing pictures and viewing the world outweigh the differences. I have always embraced this position, although as I began to think about boundary extension in more spatial (than visual) terms, I became concerned about the validity of this assumption. This is because in real space, when looking at an occluded view (e.g., through a window), rich spatial information derived from stereopsis, motion parallax, and the relation of the edges of that view to one's body could serve to constrain scene representation and prevent an error like boundary extension from occurring. If boundary extension is fundamental to scene perception, then if we were to set up views in the real world that bear similarity to those presented in pictorial form, we would expect people to remember seeing beyond the edges of the view in the world as well. If it is a picture memory error, then people may be able to remember the location of the edges of a view through a window that is directly in front of them.

If boundary extension did occur in real space, then this would also provide an opportunity to test another key assumption of the multisource model. If, at its core, scene perception begins with a surrounding spatial framework that is then filled in by various sources of information, then we should be able to see some similarities between scene representations that are initiated through vision and scene representations initiated through touch (more specifically, touch and movement, i.e., haptic exploration). If a person were to feel a space within the boundaries of a window-like opening, would they experience boundary extension? That is, would they remember having felt beyond the boundaries of the view?
Unlike vision, which is a distal sense with a very small foveal region and a large low-acuity periphery, touch is a contact sense with multiple high-acuity regions (i.e., the five fingertips, which can be thought of as five ''foveae'' on each hand) and a relatively small low-acuity periphery (e.g., the size of a hand). The span of the hands is much smaller than the span of the eyes. It is possible that these differences would make it more likely that people could correctly retain the expanse of the ''view''; they may allow for memory beyond the view in the case of vision but not in the case of haptic exploration. In fact, the notion that ''touch teaches vision'' has a long history in psychology, including the sense that haptic input can serve to test the reliability of certain types of visual cues (see Atkins, Fiser, & Jacobs, 2001). However, in spite of the many differences between vision and haptics, the brain is presented with a common challenge: in both modalities, a coherent, continuous representation of the world must be established based upon successive, discrete sensory inputs. Whether we are viewing a room or haptically exploring a room in darkness, we can never explore it all at once; the environment must be sampled a part at a time.
This point is one that William James drew upon in his seminal argument against the theory that blind individuals must represent the spatial world in a radically different way than sighted individuals. He pointed out that the apparent piecemeal nature of haptic input is no different than the ''innumerable stoppings and startings of the eyeballs'' during visual perception, and that these two examples of successive inputs are likely integrated in similar ways (James, 1890).
4.1. Haptic Exploration: Sighted Observers and a Deaf and Blind Observer

To begin to explore these questions, Intraub (2004) set up the following conditions. Six small regions in two adjacent rooms were set up on tabletops and on the floor, with a window-like apparatus around each. Semantically related objects were placed in reasonable relation to one another within each region, as shown in Figure 6. The ''windows'' used in the visual condition were attached to an expanse of cloth to block participants' view of the background outside the window, and the window frame was very flat so that it would not occlude participants' view of the surface within the window. In addition, the window frame was placed directly on the background surface so that no matter how the viewer shifted his or her position, we would know for certain that he or she could not see beyond the edges of the view (which is not the case for a typical window, which is positioned between the viewer and the viewed surface). In the haptic condition, the window frame provided the same-sized ''view,'' but was made of wood that was high enough to prevent people from accidentally feeling outside the stimulus area when they explored the regions while blindfolded. Examples of the two types of windows are shown in Figure 7.

Similar to the proxy views (close-up pictures) used in boundary extension research, observers were positioned so that they were very close to the stimulus areas (as shown in Figure 7). Thus, these stimuli allowed natural views in real space that were similar to views already tested in other research (e.g., Gottesman & Intraub, 2003).
Close viewing also provided a conservative test of whether or not boundary extension would occur when viewing real spaces, because in peripersonal space (where viewed objects are close enough to grasp) one would expect distance and area judgments to be more accurate than if one had studied a distant view through a typical window (see Previc, 1998, for a review of theories regarding perception of near versus far space).

All observers were blindfolded and escorted to each stimulus region so that they would experience it only from the experimenter's designated viewpoint. In the vision condition, the blindfolds were removed for 30 s while observers studied the view. In the haptic condition, observers' hands were placed at the center of the region and they were instructed to feel
Desk: 22″ × 16″ (56 cm × 41 cm)
Toys: 24″ × 24″ (61 cm × 61 cm)
Bureau: 20″ × 17″ (51 cm × 43 cm)
Bedroom: 19″ × 14″ (48 cm × 36 cm)
Sink: 15″ × 19″ (38 cm × 48 cm)
Gym: 18″ × 18″ (46 cm × 46 cm)
Figure 6 Stimulus regions and their dimensions (photographs were cropped to approximate the view through the ‘‘window’’). Based on Figure 2 in Intraub (2004).
everything up to but not outside the wooden boundaries during the 30-s inspection interval. In both cases, they were told to remember the areas in as much detail as possible. In all conditions, participants named the objects and provided a title for the view (so that we could check that identification and interpretation of the regions were the same across conditions; they were). With the blindfold in place, after having studied each of the six regions, participants were escorted to a waiting area while the windows were removed by the experimenters.
Figure 7 Visual exploration (A) and haptic exploration (B) of the ‘‘toys’’ scene (all borders were removed prior to test). Based on Figure 1 in Intraub (2004).
Upon returning to the regions, participants were asked, using the same modality as during study, to use a fingertip to show where each boundary had been located. Experimenters set the borders down at those locations. Participants were then allowed to make any adjustments necessary to the four boundaries to make the region the same as before.

In spite of their proximity to the windows and to the graspable objects, vision participants placed the boundaries farther out than they had been originally. The mean area remembered for each scene in each condition is shown in Figure 8. Vision participants increased the area of these views such that the mean area increase across scenes was 53% (which reflected boundary extension in both the length and width of the window). Haptic exploration
Figure 8 Mean percentage of the area of each region remembered by sighted participants in the visual and haptic conditions and the percentage of each region remembered by KC, who has been deaf and blind since early life. Error bars show the 0.95 confidence interval around each mean. (Boundary extension occurs when the mean remembered area is significantly greater than 100%, i.e., when 100% is not included in the confidence interval.) From Intraub (2004).
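The statistical criterion described in the caption can be sketched directly: compute each participant's remembered area as a percentage of the true region area, then check whether the 95% confidence interval around the mean excludes 100%. A minimal illustration with made-up scores (not the actual data), using a normal-approximation CI:

```python
import statistics

def area_pct_remembered(true_wh, remembered_wh):
    """Remembered area as a percentage of the true region area."""
    tw, th = true_wh
    rw, rh = remembered_wh
    return 100.0 * (rw * rh) / (tw * th)

def ci95(values):
    """Approximate 95% CI around the mean (normal approximation)."""
    m = statistics.mean(values)
    sem = statistics.stdev(values) / len(values) ** 0.5
    return (m - 1.96 * sem, m + 1.96 * sem)

# Hypothetical percent-area scores for one region (one score per participant).
scores = [138, 155, 149, 162, 144, 158, 151, 147]
lo, hi = ci95(scores)
# Boundary extension: the lower CI bound exceeds 100%, so 100% is excluded.
boundary_extension = lo > 100.0
```

Note that remembering each boundary only slightly farther out compounds multiplicatively: a region recalled at 120% of its true width and height is remembered at 144% of its true area.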
without vision also yielded boundary extension: it occurred in five of the six scenes for the blindfolded-sighted participants (the scene that did not yield boundary extension was remembered in a distorted manner because of an alignment illusion in the blindfolded condition; see Intraub, 2004, for details). Although robust and sizeable, the amount of boundary extension was clearly less than in the visual condition: on average, these participants increased the area by 17%.

Did boundary extension truly reflect haptic exploration? Because we were not testing ''haptic experts'' but people who were momentarily blindfolded, might this smaller boundary extension be the result of participants using visual imagination to support their exploration? (As discussed earlier, there is evidence that visual imagination can induce boundary extension; Intraub et al., 1998.) To address this issue, the experiment included a third condition with a single participant. KC was a 25-year-old student who had been both deaf and blind since early life. Her natural mode of exploring the world is haptic exploration, and as shown in Figure 8, her performance was very similar to that of the blindfolded-sighted observers. Sometimes her area increase was greater than that of the group, and sometimes the same; her increase was significantly smaller than the group mean for only one region. She experienced the same alignment error on the ''bureau'' region as the
blindfolded-sighted participants, and when ranked among the other haptic participants, she fell among the top extenders but was not an outlier. Although she was among the top extenders in the haptic condition, her boundary extension was always significantly smaller than the mean of the vision group (something, I might add, that KC found quite amusing). Clearly, boundary extension occurred in real space following visual inspection, haptic exploration (without vision), and exploration by a ''haptic expert.'' Just as clearly, those who explored the scenes visually made the largest errors. Why?
4.2. Cross-Modal Boundary Extension

There are several possible explanations of why visual exploration might lead to more expansive boundary extension. One possibility is that the difference does not reflect a difference in the representation, but instead reflects a bias during testing; people might simply have been more conservative in setting the boundaries manually at test when they could not see. In Intraub, Morelli, and Daniels (2009), we sought to determine if the difference between modalities observed in Intraub (2004) could be replicated using seven new stimulus regions, and if so, whether the difference is due to the mode of exploration during perception, the mode of exploration at test, or both.

We tested 80 participants in a 2 (input modality) × 2 (test modality) design. Participants perceived the regions using either vision or haptic exploration (without vision) and were then tested either using the same modality or the other modality. Boundary extension occurred for all seven regions in all four conditions. Results revealed an effect of input modality, no effect of test modality, and no interaction between the two. When the regions were explored visually, boundary extension was greater than when they were explored manually (without vision), irrespective of the test modality. This shows that boundary extension was unaffected by a cross-modal transfer at test. The decision about boundary location (perhaps, as discussed earlier, a source monitoring decision) was influenced by the modality used to originally perceive the region.

The direction of the effect (greater boundary extension in vision) can be explained in terms of the source monitoring account of boundary extension in the following way. The contact involved in manual exploration involves several high-acuity areas (five fingertips per hand) and a smaller peripheral region than vision.
Perhaps as a result, memory for felt space differs more sharply from the amodally generated continuation of the region beyond the boundaries. If this difference is more discontinuous than in vision (with its single point of fixation and very large periphery), it might result in a higher threshold for accepting amodally generated space as having been experienced through sensory input. Thus, this would result in less boundary
extension for the contact sense. This analysis certainly does not prove that source monitoring is involved, but is offered simply as a hypothesis that would be consistent with the multisource framework.

This suggests that boundary extension might be constrained, in part, by the nature of the input modality used to explore the world. During visual scanning, a small shift of the head and/or the eyes will bring a much larger new region into view than will a small shift in hand position during haptic exploration. In terms of the possible usefulness of boundary extension in anticipating upcoming layout, it would be more likely to help provide a sense of a continuous world if the extended region were large enough to facilitate integration with adjacent regions, but not so large as to be confusing or misleading given the characteristics of the input modality. Related to this, we found in another experiment in this series that when we allowed participants to use both vision and haptics simultaneously (bimodal input), boundary extension did not differ significantly from that obtained following haptic exploration alone. The haptic input apparently tempered the decision about where the edges of the region had been located.
4.3. Monocular Tunnel Vision and Boundary Extension

In two other experiments in this series, we sought to determine if the difference between the modalities reflected fundamental differences, or if it might actually be mediated by something relatively simple: the scope of each perceptual sample (i.e., the visual field is relatively large in comparison with the field of ''view'' associated with haptics). If the difference is simply related to the scope of each sample, then restricting the observer's field of view during visual exploration of the stimulus regions might reduce the amount of boundary extension they experience. That is, restricted viewing might make memory following visual exploration more similar to that observed following haptic exploration.

To test this, we created vision-blocking goggles, shown in Figure 9. There were two monocular tunnel vision conditions (large tunnel and small tunnel) in which vision was restricted, requiring participants to move their heads to inspect each of three stimulus regions. A binocular viewing condition (as in the previous experiments) served as a baseline control. An illustration of the presentation and test conditions for the small monocular tunnel condition is shown in Figure 10. Except for the fact that the two tunnel vision groups wore vision-blocking goggles with peepholes during study and test, the procedure was the same as in the previous experiments. Observers with the large monocular tunnel view could position their heads so that they could see all or almost all of the region at one time (depending on window size), but they still had to move their heads around when they wanted to gain a high-acuity view of different parts of the region. Observers
Figure 9 Large monocular tunnel vision goggles (3 cm peephole) and small monocular tunnel goggles (0.6 cm peephole) from the tunnel vision experiments in Intraub, Morelli, and Daniels (2009).
Figure 10 An illustration of the presentation (left) and test procedure with the tunnel vision goggles (right) in the tunnel vision experiments in Intraub, Morelli, and Daniels (2009).
with the small monocular tunnel view could see only a fraction of the stimulus region at a time. They could see an entire object at once only in the case of the smallest objects (miniature toy cars). These participants made frequent, extreme shifts of head position to study the stimulus regions. Study time was 30 s in all conditions.

The results were somewhat surprising in that the introduction of these extremely unnatural viewing conditions had no effect on memory. All three scenes were remembered with extended boundaries in all three conditions. On average, observers increased the areas of the regions by about one-third their original size, and no significant difference was obtained across the groups; the mean area increase for the binocular group, large monocular tunnel group, and small monocular tunnel group was 32%, 31%, and 37%, respectively.

We conducted one more tunnel vision experiment for two reasons. First, we wanted to determine if a factor that affected boundary extension
with pictures would have a similar effect on boundary extension for these 3D views. Specifically, we sought to determine if memory for more ''wide-angle'' views of the same objects and backgrounds (i.e., slightly larger window sizes) would yield less boundary extension. It is well established in the literature that wider-angle pictures, which include more continuous background between the objects and the edges of the view, elicit less boundary extension (e.g., Intraub et al., 1992). Second, to test the unlikely possibility that the three viewing conditions yielded the same amount of boundary extension because there was simply a favored location for placing the borders at test, we set the boundaries of the stimulus views in the new experiment to equal the mean positions provided by the previous participants. If this were a ''favored position,'' then no boundary extension would be expected this time. Given the lack of a difference between the two tunnel conditions, we tested only the two extremes (binocular viewing and small monocular tunnel viewing).

Although the boundaries were set at the mean locations chosen by the previous participants, boundary extension occurred in both conditions. Consistent with the results of picture studies, boundary extension was reduced for these ''wider views'' compared with the previous experiment. The mean area increase was 7% in the binocular condition and 14% in the small monocular tunnel condition. Again, there was no significant difference between the two groups; if anything, as in the previous experiment, the means favored greater boundary extension in the small monocular tunnel group rather than less (which would be expected if the limited spatial scope had made vision more like haptics).
4.4. Possible Clinical Implications

Simply reducing the size of each sample during exploration did not reduce boundary extension. The results suggest that it is the nature of the modality, rather than the spatial scope of each sample, that affects memory. This outcome is interesting in light of a clinical observation made by ophthalmologists treating retinitis pigmentosa (a progressive eye disease in which the patient loses peripheral vision). Some patients, when first reporting that they have an eye problem, are surprisingly unaware that the problem involves major losses of peripheral vision, a phenomenon that has sometimes been attributed to denial (D. Lindsey, personal communication, December 9, 2005). Our results suggest a possible alternative. The lack of sensitivity to this dramatic deficit in the sensory input might in part reflect an intact multisource scene representation that masks the true nature of the patient's problem. Like our small tunnel participants, although of course much more gradually, these patients begin to search differently (increasing their head movements), but are still able to perceive and remember a
coherent, continuous scene; what is missing from peripheral vision is augmented by nonvisual sources in their multisource representation.
5. Summary and Conclusions

The multisource model presented here provides a possible framework for rethinking visual scene representation. According to this view, at its core, scene perception is an act of spatial cognition that builds upon a deeply maintained sense of surrounding space; as discussed here, an egocentric frame of reference (although other frameworks can be implemented as well; Epstein & Higgins, 2007). This reference frame is filled in by vision, amodal perception (Kanizsa, 1979; Nakayama et al., 1995; Yin et al., 2000), and contextual information that is garnered through categorization of the global layout of a view (see Greene & Oliva, 2009) and through contextual associations that are triggered by the objects (Bar, 2004). The idea that associations evoke probable contexts is consistent in a broad sense with Bar's (2004) multiplexer model, although in the multisource model described here, these associations, along with the other sources of input, are organized within a surrounding spatial framework.

The detailed quality and specificity of the representation is graded, just as the visual field itself is graded. The graded acuity present in visual and haptic input may serve to enhance incorporation of new sensory inputs into an active scene representation. Thus, our representation of a scene always goes beyond the sensory input. In the case of a single view, as discussed here, the representation becomes less constrained by specific visual information the farther a location is from the boundary. Thus, the boundary extension error is relatively small: only the most highly constrained region will be mistaken for having been seen (or touched, in the case of haptics without vision). Given the multisource nature of the representation, boundary extension can be explained by adopting, without change, a model designed to explain many long-term memory phenomena: the source monitoring model (Johnson et al., 1993; Lindsay, 2008).
What is interesting is that in this context, source monitoring can provide an explanation of a memory error that occurs following a very brief break in the sensory input, ranging from a second or two (Bertamini et al., 2005; Intraub et al., 1996, 2006) down to intervals lasting less than 1/20th of a second (Dickinson & Intraub, 2008; Intraub & Dickinson, 2008). The notion here is that scene ''extrapolation'' does not take place after the stimulus is gone. Instead, the boundary-extended region was already present in the scene representation while the visual input was available, in the form of amodal perception and contextual associations. It is only after a break in the sensory input, when the observer is
forced to monitor memory and to make a source attribution, that highly constrained amodal information is attributed to having been seen.

Current visual processing models are by their nature single-source models and thus provide no predictions about scene representation based upon other modalities. In contrast, the multisource model has at its core a spatial frame of reference (in our discussion, an egocentric frame of reference) that is filled in by multiple sources of input, with or without the inclusion of vision. This raises the expectation that there will be similarities in scene perception and memory across modalities. Such similarities have been observed with respect to boundary extension. Memory beyond the boundaries occurs following visual and haptic exploration (without vision) of the same 3D regions. Boundary extension also occurs when a background is imagined (no sensory modality; Intraub et al., 1998). A key point of the multisource framework is that whichever modality may be in the fore, scene representation will be a multisource representation that captures the continuity of layout in the world. This can be thought of as an example of situated perception (Barsalou, 1999), in which a view (a part of a scene) instantiates an encompassing scene representation.

Cognitive neuroscience research has provided interesting support for the notion of a multisource scene representation. This can be seen in research on the neural response to different types of questions about a picture that draw on different frames of reference (e.g., the neural activity associated with what is happening in the picture, e.g., ''a party''; the layout of the immediate view; or the integration of that view within a larger geographic framework; Epstein & Higgins, 2007). Evidence for the role of expected contexts evoked by associations with objects has also been reported (Bar, 2004).
This has resulted in an interesting controversy about whether PPA and RSC should be conceptualized as scene-selective regions, as originally thought (e.g., Epstein, 2009; Epstein & Kanwisher, 1998), or as part of a network of more abstract conceptual associations (Bar, 2004). These controversies have yet to play out, and they raise interesting questions about scene perception and its relation to other aspects of cognition. However, in the context of exploring objects in locations with the eyes or hands, I suggest that the underlying organizing structure that allows us to understand our world is not so much an abstract schema (Hochberg, 1986; Intraub, 1997) as a concrete sense of surrounding space in relation to the observer. The world is continuous and surrounds us; we are embedded within an environment and navigate through it. Sensory input can never provide access to the details of our surroundings all at once, creating one of the classic puzzles that challenge theories of perception (e.g., Hochberg, 1986; O'Regan, 1992; Rensink, 2000). A multisource scene representation provides one possible explanation of how observers perceive a continuous world that they can sample only a part at a time, and why observers tend to remember seeing beyond the physical boundaries of a view just moments after that view is gone.
ACKNOWLEDGMENT This work was supported in part by NIMH Grant MH54688.
REFERENCES

Allen, G. (2004). Human spatial memory: Remembering where. Mahwah, NJ: Erlbaum.
Atkins, J. E., Fiser, J., & Jacobs, R. A. (2001). Experience-dependent visual cue integration based on consistencies between visual and haptic percepts. Vision Research, 41, 449–461.
Bar, M. (2004). Visual objects in context. Nature Reviews: Neuroscience, 5, 617–629.
Barsalou, L. W. (1999). Perceptual symbol systems. Behavioral and Brain Sciences, 22, 577–609.
Bertamini, M., Jones, L. A., Spooner, A., & Hecht, H. (2005). Boundary extension: The role of magnification, object size, context, and binocular information. Journal of Experimental Psychology: Human Perception and Performance, 31(6), 1288–1307.
Biederman, I. (1981). On the semantics of a glance at a scene. In M. Kubovy & J. R. Pomerantz (Eds.), Perceptual organization (pp. 213–253). Hillsdale, NJ: Erlbaum.
Bryant, D. J., Tversky, B., & Franklin, N. (1992). Internal and external spatial frameworks for representing described scenes. Journal of Memory and Language, 31, 74–98.
Chapman, P., Ropar, D., Mitchell, P., & Ackroyd, K. (2005). Understanding boundary extension: Normalization and extension errors in picture memory among normal adults and boys with and without Asperger's syndrome. Visual Cognition, 12(7), 1265–1290.
Dickinson, C. A., & Intraub, H. (2008). Transsaccadic representation of layout: What is the time course of boundary extension? Journal of Experimental Psychology: Human Perception and Performance, 34, 543–555.
DiCola, C., & Intraub, H. (2004). Reconstructing scenes: View-boundaries vs. object-boundaries. Vision Sciences Society Meeting, Sarasota, FL.
Epstein, R. A., & Higgins, J. S. (2007). Differential parahippocampal and retrosplenial involvement in three types of visual scene recognition. Cerebral Cortex, 17, 1680–1693.
Epstein, R., & Kanwisher, N. (1998). A cortical representation of the local visual environment. Nature, 392, 598–601.
Epstein, R. Al., & Ward, E. J. (2009). How reliable are visual context effects in the parahippocampal place area? Cerebral Cortex, Advance Access published on June 16, 2009; doi:10.1093/cercor/bhp099. Fantoni, C., Hilger, J. D., Gerbino, W., & Kellman, P. J. (2008). Surface interpolation and 3D relatability. Journal of Vision, 8(7)doi:10.1167/8.7.29 29, 1–19, http:// journalofvision.org/8/7/29/. Franklin, N., & Tversky, B. (1990). Searching imagined environments. Journal of Experimental Psychology: General, 119, 63–76. Gottesman, C. V., & Intraub, H. (2002). Surface construal and the mental representation of scenes. Journal of Experimental Psychology: Human Perception and Performance, 28(3), 589–599. Gottesman, C. V., & Intraub, H. (2003). Constraints on spatial extrapolation in the mental representation of scenes: View-boundaries vs. object-boundaries. Visual Cognition, 10(7), 875–893. Greene, M. R., & Oliva, A. (2009). Recognition of natural scenes from global properties: Seeing the forest without representing the trees. Cognitive Psychology, 58, 137–176. Henderson, J. M., & Hollingworth, A. (1999). High-level scene perception. Annual Review of Psychology, 50, 243–271.
262
Helene Intraub
Henderson, J. M., & Hollingworth, A. (2003). Eye movements and visual memory: Detecting changes to saccade targets in scenes. Perception & Psychophysics, 65, 58–71. Hochberg, J. (1986). Representation of motion and space in video and cinematic displays. In K. J. Boff, L. Kaufman & J. P. Thomas (Eds.), Handbook of perception and human performance Vol. 1(pp. 22.1–22.64). New York: Wiley. Intraub, H. (1984). Conceptual masking: The effects of subsequent visual events on memory for pictures. Journal of Experimental Psychology: Learning, Memory, and Cognition, l0, 115–125. Intraub, H. (1997). The representation of Visual Scenes. Trends in the Cognitive Sciences, 1, 217–221. Intraub, H. (2002). Anticipatory spatial representation of natural scenes: momentum without movement?. Visual Cognition, 9, 93–119. Intraub, H. (2004). Anticipatory spatial representation in a deaf and blind observer. Cognition, 94, 19–37. Intraub, H. (2007). Scene perception: The world through a window. In M. A. Peterson, B. Gillam & H. A. Sedgwick (Eds.), The mind’s eye: Julian Hochberg on the perception of pictures, films, and the world (pp. 454–466). New York: Oxford University Press. Intraub, H., Bender, R. S., & Mangels, J. A. (1992). Looking at pictures but remembering scenes. Journal of Experimental Psychology: Learning, Memory, and Cognition, 18(1), 180–191. Intraub, H., Daniels, K. K., Horowitz, T. S., & Wolfe, J. M. (2008). Looking at scenes while searching for numbers: Dividing attention multiplies space. Perception & Psychophysics, 70, 1337–1349. Intraub, H., & Dickinson, C. A. (2008). False memory 1/20th of a second later: What the early onset of boundary extension reveals about perception. Psychological Science, 19, 1007–1014. Intraub, H., Gottesman, C. V., & Bills, A. J. (1998). Effects of perceiving and imagining scenes on memory for pictures. Journal of Experimental Psychology: Learning, Memory, and Cognition, 24(1), 186–201. Intraub, H., Gottesman, C. V., Willey, E. V., & Zuk, I. 
J. (1996). Boundary extension for briefly glimpsed photographs: Do common perceptual processes result in unexpected memory distortions? Journal of Memory and Language, 35, 118–134. Intraub, H., Hoffman, J. E., Wetherhold, C. J., & Stoehs, S.-A. (2006). More than meets the eye: The effect of planned fixations on scene representation. Perception & Psychophysics, 68(5), 759–769. Intraub, H., Morelli, F., & Daniels, K. K. (2009). Exploring the world by eye and by hand. Manuscript in preparation. Intraub, H., & Richardson, M. (1989). Wide-angle memories of close-up scenes. Journal of Experimental Psychology: Learning, Memory, and Cognition, 15(2), 179–187. Irwin, D. E. (1991). Information integration across saccadic eye movements. Cognitive Psychology, 23, 420–456. Irwin, D. E. (1993). Perceiving an integrated visual world. In D. E. Meyer & S. Kornblum (Eds.), Attention and performance 14: Synergies in experimental psychology, artificial intelligence, and cognitive neuroscience (pp. 121–142). Cambridge, MA: MIT Press. James, W. (1890). The principles of psychology. Vol. II New York: Holt and Company. Johnson, M. K. (2006). Memory and reality. American Psychologist, 61, 760–771. Johnson, M. K., Hashtroudi, S., & Lindsay, D. S. (1993). Source Monitoring. Psychological Bulletin, 114, 3–28. Johnson, M. K., & Raye, C. L. (1981). Reality monitoring. Psychological Review, 88, 67–85. Kanizsa, G. (1979). Organization in vision. New York: Praeger. Koriat, A., Goldsmith, M., & Pansky, A. (2000). Toward a psychology of memory accuracy. Annual Review of Psychology, 51, 481–537. Legault, E., & Standing, L. (1992). Memory for size of drawings and of photographs. Perceptual and Motor Skills, 75, 121.
Rethinking Scene Perception: A Multisource Model
263
Lindsay, D. S. (2008). Source monitoring. In J. Byrne (Ed.), Cognitive psychology of memory. Vol. 2 of learning and memory: A comprehensive reference, 4 vols (pp. 325–348). Oxford: Elsevier. Loftus, G. R., & Ginn, M. (1984). Perceptual and conceptual processing of pictures. Journal of Experimental Psychology: Learning, Memory and Cognition, 10, 435–441. Loftus, G. R., Johnson, C. A., & Shimamura, A. P. (1985). How much is an icon worth? Journal of Experimental Psychology: Human Perception and Performance, 11, 1–13. Michod, K., & Intraub, H. (2009). Boundary extension. Scholarpedia, 4(2), 3324. Nakayama, K., He, Z. J., & Shimojo, S. (1995). Visual surface representation: A critical link between lower-level and higher level vision. In S. M. Kosslyn & D. N. Osherson (Eds.), Invitation to Cognitive Science (pp. 1–70). Cambridge, MA: MIT Press. O’Craven, K. M., & Kanwisher, N. (2000). Mental imagery of faces and places activates corresponding stimulus-specific brain regions. Journal of Cognitive Neuroscience, 12, 1013–1023. O’Regan, J. K. (1992). Solving the ‘‘real’’ mysteries of visual perception: The world as an outside memory. Canadian Journal of Psychology, 46, 461–488. O’Regan, J. K., & Noe¨, A. (2001). A sensorimotor account of vision and visual consciousness. Behavioral and Brain Sciences, 24, 939–1011. Park, S. J., Intraub, H., Yi, D.-J., Widders, D., & Chun, M. M. (2007). Beyond the edges of a view: Boundary extension in human scene-selective visual cortex. Neuron, 54(2), 335–342. Phillips, W. A. (1974). On the distinction between sensory storage and short-term visual memory. Perception & Psychophysics, 16(2), 283–290. Potter, M. C. (1976). Short-term conceptual memory for pictures. Journal of Experimental Psychology: Human Learning and Memory, 2(5), 509–522. Potter, M. C. (1999). Understanding sentences and scenes: The role of conceptual shortterm memory. In V. Coltheart (Ed.), Fleeting memories: Cognition of brief visual stimuli (pp. 13–46). 
Cambridge, MA: MIT Press. Previc, F. H. (1998). The neuropsychology of 3-D space. Psychological Bulletin, 124, 123–164. Quinn, P. C., & Intraub, H. (2007). Perceiving ‘‘outside the box’’ occurs early in development: Evidence for boundary extension in 3- to 7-month-old infants. Child Development, 78, 324–334. Rensink, R. A. (2000). The dynamic representation of scenes. Visual Cognition, 7(1–3), 17–42. Rensink, R. A., O’Regan, J. K., & Clark, J. J. (1997). To see or not to see: The need for attention to perceive changes in scenes. Psychological Science, 8, 368–373. Seamon, J. G., Schlegel, S. E., Hiester, P. M., Landau, A. M., & Blumenthal, B. F. (2002). Misremembering pictured objects: People of all ages demonstrate the boundary extension illusion. American Journal of Psychology, 115, 151–167. Simons, D. J., Franconeri, S. L., & Reimer, R. L. (2000). Change blindness in the absence of visual disruption. Perception, 29, 1143–1154. Simons, D. J., & Rensink, R. A. (2005). Change blindness: Past, present, and future. Trends in Cognitive Sciences, 9(1), 16–20. Standing, L. (1973). Learning 10, 000 pictures. The Quarterly Journal of Experimental Psychology, 25, 207–222. Standing, L., Conezio, J., & Haber, R. N. (1970). Perception and memory for pictures: Single-trial learning of 2500 visual stimuli. Psychonomic Science, 19, 73–74. Thorpe, S., Fize, D., & Marlot, C. (1996). Speed of processing in the human visual system. Nature, 381(6), 520–522. Turk-Browne, N. B., Scholl, B. J., & Chun, M. M. (2008). Habituation in infant cognition and functional neuroimaging. Frontiers in Human Neuroscience, 2, 16.
264
Helene Intraub
Tversky, B. (2009). Spatial cognition: Embodied and situated. In P. Robbins & M. Aydede (Eds.), The Cambridge handbook of situated cognition (pp. 201–216). Cambridge, MA: Cambridge University Press. VanRullen, R., & Thorpe, S. J. (2001). Is it a bird? Is it a plane? Ultra-rapid visual categorisation of natural and artifactual objects. Perception, 30, 655–668. Yin, C., Kellman, P. J., & Shipley, T. F. (2000). Surface integration influences depth discrimination. Vision Research, 40(15), 1969–1978. Zelinsky, G. J., & Loschky, L. C. (2005). Eye movements serialize memory for objects in scenes. Perception & Psychophysics, 67, 676–690. Zelinsky, G. J., & Schmidt, J. (2009). An effect of referential scene constraint on search implies scene segmentation. Visual Cognition, 17, 1004–1028.
CHAPTER SEVEN
Components of Spatial Intelligence

Mary Hegarty

Contents
1. Introduction
2. Identifying Components of Spatial Thinking
   2.1. Spatial Ability Measures
   2.2. Examination of Spatial Expertise
3. Two Components of Spatial Intelligence
4. Flexible Strategy Choice as a Component of Spatial Intelligence
   4.1. Strategies in Spatial Abilities Tests
   4.2. Strategies in Complex Spatial Tasks
5. Representational Metacompetence as a Component of Spatial Intelligence
   5.1. Using Representations
   5.2. Choosing Representations
6. Concluding Remarks
Acknowledgments
References
Abstract

This chapter identifies two basic components of spatial intelligence, based on analyses of performance on tests of spatial ability and on complex spatial thinking tasks in domains such as mechanics, chemistry, medicine, and meteorology. The first component is flexible strategy choice between mental imagery (or mental simulation more generally) and more analytic forms of thinking. Research reviewed here suggests that mental simulation is an important strategy in spatial thinking, but that it is augmented by more analytic strategies such as task decomposition and rule-based reasoning. The second is meta-representational competence [diSessa, A. A. (2004). Metarepresentation: Native competence and targets for instruction. Cognition and Instruction, 22, 293–331], which encompasses the ability to choose the optimal external representation for a task and to use novel external representations productively. Research on this aspect of spatial intelligence reveals large individual differences in the ability to adaptively choose and use external visual–spatial representations for a task. This research suggests that we should not just think of interactive external visualizations as ways of augmenting spatial intelligence, but also consider the types of intelligence that are required for their use.

Psychology of Learning and Motivation, Volume 52
ISSN 0079-7421, DOI: 10.1016/S0079-7421(10)52007-3
© 2010 Elsevier Inc. All rights reserved.
1. Introduction

When we think about how best to arrange suitcases to fit them in the trunk of a car, do a jigsaw puzzle, or plan the route to a friend's house, we are engaging in spatial thinking. Architects think spatially when they design a new house, and financial analysts think spatially when they examine graphs of the rising and falling prices of different stocks. Spatial thinking is also central to many scientific domains. For example, geologists reason about the physical processes that lead to the formation of structures such as mountains and canyons, chemists develop models of the structure of molecules, and zoologists map the tracks of animals to gain insights into their foraging behavior. Spatial thinking involves thinking about the shapes and arrangements of objects in space and about spatial processes, such as the deformation of objects and the movement of objects and other entities through space. It can also involve thinking with spatial representations of nonspatial entities, for example, when we use an organizational chart to think about the structure of a company or a graph to evaluate changes in the cost of health care. This chapter is about spatial intelligence, which can be defined as adaptive spatial thinking. The word intelligence also brings to mind the concept of individual differences in ability, and this concept is also central to the research reviewed here. A recent report (National Research Council, 2006) claimed that spatial intelligence is "not just undersupported but underappreciated, undervalued, and therefore underinstructed" (p. 5) and called for a national commitment to the development of spatial thinking across all areas of the school curriculum. A new interest in spatial thinking has also been stimulated by the development of powerful information technologies that can support this form of thinking.
With developments in computer graphics and human–computer interaction, it is now easy to "visualize" information, that is, create external visual–spatial representations of data and interact with these visualizations to learn, understand, and solve problems. For example, medical students now learn anatomy by interacting with computer visualizations that they can rotate at will, children learn world geography by flying over the earth using Google Earth, and scientists gain insight into their data by visualizing and interacting with multidimensional plots. New technologies have primarily been seen as supporting spatial thinking (Card, Mackinlay, & Shneiderman, 1999; Thomas & Cook, 2005), but there are also questions about what types of spatial thinking they demand for their use. There is emerging support for the idea that aspects of spatial intelligence can be developed through instruction and training. For example, performance on tests of spatial ability and laboratory tasks such as mental rotation can be improved with practice (e.g., Kail, 1986; Wright, Thompson, Ganis, Newcombe, & Kosslyn, 2008), with instruction (Gerson, Sorby, Wysocki,
& Baartmans, 2001) and even by experience playing video games (Feng, Spence, & Pratt, 2007; Terlecki, Newcombe, & Little, 2008). There is also evidence that the effects of training transfer to other spatial tasks that are not practiced (Wright et al., 2008), and that they endure for months after the training experience (Terlecki et al., 2008). But in training spatial thinking, what exactly should we instruct? If we are to be most effective in fostering spatial thinking, we need to identify the basic components of this form of thinking so that training can be aimed at these fundamental components.
2. Identifying Components of Spatial Thinking

2.1. Spatial Ability Measures

One approach to identifying basic components of spatial thinking is to examine what is measured by spatial ability tests. This approach is taken in much current research on spatial thinking. Studies of spatial training have used classic tests of spatial visualization ability, such as the Paper Folding Test (Eckstrom, French, Harman, & Dermen, 1976) and the Vandenberg Mental Rotations Test (Vandenberg & Kuse, 1978). Sample items from these tests are shown in Figure 1. These tasks have been used both to train aspects of spatial thinking and to measure the effects of other forms of spatial experience (Feng et al., 2007; Terlecki et al., 2008; Wright et al., 2008).

[Figure 1: (A) Paper Folding task item with answer choices A–E; (B) Vandenberg Mental Rotation task item.]
Figure 1 Sample item of the type used in the Paper Folding task and sample item from the Vandenberg Mental Rotation Test. In the Paper Folding Test, the diagrams on the left show a piece of paper being folded and a hole being punched in the paper. The task is to say which of the five diagrams on the right shows how the paper will look when it is unfolded. In the Vandenberg Mental Rotation Test, the task is to determine which two of the four figures on the right are rotations of the figure on the left.
Furthermore, many claims about the importance of spatial intelligence in scientific thinking cite evidence that thinking in various scientific domains, such as physics, chemistry, geology, and mathematics, is correlated with performance on spatial visualization tests such as these (Casey, Nuttall, & Pezaris, 1997; Coleman & Gotch, 1998; Keehner et al., 2004; Kozhevnikov, Motes, & Hegarty, 2007; Orion, Ben-Chaim, & Kali, 1997). There are clear advantages of using spatial ability tests as one starting point in examining spatial thinking. There has been a long tradition of measuring and classifying these abilities, resulting in the development of many standardized tests (Eliot & Smith, 1983). From a long history of factor analytic studies, we also have a good understanding of the basic dimensions of spatial thinking that these tests measure (Carroll, 1993; Hegarty & Waller, 2006). There has also been extensive research identifying the cognitive processes involved in performing these tasks (e.g., Just & Carpenter, 1985; Lohman, 1988; Pellegrino & Kail, 1982), the dimensions (speed of processing, strategies, etc.) in which more and less able people differ, and the relation of test performance to fundamental theoretical constructs such as working memory (Miyake, Rettinger, Friedman, Shah, & Hegarty, 2001; Shah & Miyake, 1996). Thus, one current approach in my laboratory uses spatial ability measures as a starting point, examining the range of strategies that people use in performing some of the most common measures of spatial ability and the degree to which they use similar strategies across different tests. But there are also disadvantages of confining ourselves to the study of spatial ability measures in identifying the basic components of spatial thinking. This is primarily because the development of spatial ability measures has not been systematic or theoretically motivated.
Most spatial ability measures were developed for practical reasons, such as predicting performance in various occupations (mechanic, airplane pilot, etc.), and the tests that came into wide use were those that were successful in prediction (Smith, 1964). Although there are dozens of published tests of spatial ability (Eliot & Smith, 1983), there was no systematic attempt to first identify the basic components of spatial intelligence and develop tests that measured each one (Hegarty & Waller, 2006). The components identified by factor analyses and meta-analyses are informative, but are strongly determined by the frequency of use of the various tests, because tests that are not used frequently are not included in meta-analyses, or have limited influence on the factors identified. It is possible, therefore, that we might miss key aspects of spatial thinking when we focus exclusively on what these tests predict.
2.2. Examination of Spatial Expertise

A complementary approach, adopted in my laboratory over the last few years, is to examine domains of expertise that demand spatial thinking, to analyze the types of tasks that experts in these domains have to accomplish, and the spatial
cognitive processes with which they struggle. In addition to examining spatial expertise "in the wild," we attempt to bring these cognitive processes under experimental control and develop standardized measures of these processes so that we can study them in more detail in the laboratory. This approach allows us to identify spatial cognitive processes that appear to be important to spatial thinking in the real world and that are not always well captured by existing tests of spatial ability. As a result, we have studied a wider range of spatial thinking tasks than are measured by the most commonly used spatial ability tests, and we do not consider a correlation with spatial ability measures to be a prerequisite for classifying a task or domain as involving spatial thinking. To date, we have examined aspects of spatial thinking in medicine (surgery, radiology, and learning anatomy) (Hegarty, Keehner, Cohen, Montello, & Lippa, 2007; Keehner, Hegarty, Cohen, Khooshabeh, & Montello, 2008; Keehner, Lippa, Montello, Tendick, & Hegarty, 2006; Stull, Hegarty, & Mayer, 2009), meteorology (Hegarty, Canham, & Fabrikant, in press; Hegarty, Smallman, Stull, & Canham, 2009), mechanical reasoning (Hegarty, 1992, 2004), and physics problem solving (Kozhevnikov et al., 2007), and a current research project examines spatial thinking in organic chemistry. These domains have several important characteristics in common. First, they are concerned with encoding, maintaining, and inferring information about spatial structures and processes. For example, in learning anatomy, medical students need to learn the shapes of three-dimensional objects, the spatial relations between parts of these structures, and how they are connected in three-dimensional space, while chemists have to reason about how atoms combine in characteristic substructures to compose complex molecules.
In terms of processes, mechanics have to infer how the parts of an engine should move, on the basis of its structure (e.g., the shape, material composition, and connectivity of its parts) to diagnose why a faulty engine is not working properly. Meteorologists need to infer how pressure systems develop and interact with the surface topography of a region and moisture in the atmosphere to cause heat waves, thunderstorms, and sundowner winds. As we will see, thinking in these domains relies on some of the same cognitive processes as psychometric measures of spatial visualization, which measure ability to mentally rotate objects, imagine objects from different perspectives, imagine folding and unfolding of pieces of paper, and mentally construct patterns from elementary shapes. However, one distinctive characteristic of expert spatial thinking is that it typically involves imagining structures and processes that are much more complex than those contained in spatial ability items. There are important questions, therefore, about how the ability to imagine simple objects such as cubes, and simple transformations such as rotations, scales up to such complex cognitive activities as imagining the structure and functioning of the digestive system or predicting the development of a hurricane. A second distinctive characteristic of expert spatial thinking is that it increasingly involves using and interacting with external visual–spatial
representations. Biologists, architects, and technicians have relied on printed diagrams such as cross sections, exploded views, and orthographic projections ever since the Renaissance (Ferguson, 2001), meteorologists rely heavily on satellite and infrared images (Trafton & Hoffman, 2007), and chemists have developed several diverse ways of representing a molecule that facilitate different types of chemical problem solving (Stieff, 2007). Choosing the right external representation for a task can be an important component of spatial thinking. Furthermore, with developments in imaging technologies, computer graphics, and human–computer interaction, spatial thinking depends on the ability to interact effectively with instruments and powerful external visualizations. Today's meteorologists work with interactive systems in which they can add and subtract meteorological variables to weather maps at will, while also superimposing satellite and infrared imagery on these maps (Trafton & Hoffman, 2007). In these situations, adaptive spatial thinking also involves using interactive visualizations to their best advantage.
3. Two Components of Spatial Intelligence

In this chapter, I propose two basic components of spatial intelligence. The first is flexible strategy choice between mental imagery (or mental simulation more generally) and more analytic forms of thinking. The second is meta-representational competence (diSessa, 2004), which encompasses the ability to choose the optimal external representation for a task, to use novel external representations productively, and to invent new representations as necessary. In a sense, both components are about representation use, with the first being about the choice and use of internal representations and the second about the choice and use of external representations. These are not the only components of spatial intelligence, and the types of spatial thinking reviewed here exclude important aspects of spatial intelligence, including the ability to navigate, learn spatial layout, and update one's position in the environment (e.g., Hegarty, Montello, Richardson, Ishikawa, & Lovelace, 2006; Loomis, Lippa, Klatzky, & Golledge, 2002; Montello, 2005). While there is evidence for excellent performance in both aspects of spatial intelligence reviewed here, there is also evidence for lack of competence among many individuals, along with suggestions for how these aspects of spatial intelligence might be fostered.
4. Flexible Strategy Choice as a Component of Spatial Intelligence

The dominant factor identified in factor analyses of spatial ability tests is labeled "spatial visualization" (Carroll, 1993). Tests that load on this factor include paper folding tests and three-dimensional mental rotation
tests (see the examples in Figure 1) as well as form board and surface development tests (Hegarty & Waller, 2006). The name of this factor suggests that people literally "visualize" to solve items on these tests, that is, they construct mental images of the objects shown in the test and use analog imagery transformation processes to mentally simulate the required transformations and reveal the answers. The most classic spatial thinking task studied by cognitive psychologists is mental rotation, and a fundamental claim about mental rotation is that it is an analog process, such that the time taken to rotate an image is linearly related to the size of the angle of rotation (Shepard & Metzler, 1971). Given that mental rotation is a common task in spatial ability tests (e.g., Vandenberg & Kuse, 1978), this might lead us to assume that tests of spatial visualization are pure tests of the ability to construct and transform mental images. However, there is a history of studies showing that people use a variety of strategies on spatial ability tests, including more analytic strategies (Geisser, Lehmann, & Eid, 2006; Hegarty & Waller, 2006; Lohman, 1988). Recent research in my laboratory suggests that while analog imagery processes are important in solving items from tests of spatial visualization, these processes are also augmented by more analytic forms of thinking such as task decomposition and rule-based reasoning. I therefore propose that spatial intelligence involves not just visualization ability, but flexible strategy choice between visualization and more analytic thinking processes. The following sections illustrate the interplay between these two types of thinking, first in solving items from tests of spatial ability, and then in more complex spatial thinking tasks in the domains of mechanics, medicine, and chemistry.
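The analog character of mental rotation is usually quantified by fitting a straight line to response times as a function of angular disparity, with a positive slope (ms per degree of rotation) taken as the signature of an analog process. A minimal sketch of such a fit follows; the angles and response times are fabricated for illustration and are not data from any study discussed here.

```python
# Least-squares fit of response time (RT) against rotation angle.
# Data below are invented; a linear RT-by-angle relation is the kind of
# pattern Shepard and Metzler (1971) reported for mental rotation.

def fit_line(xs, ys):
    """Return (slope, intercept) of the ordinary least-squares line."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

angles = [0, 40, 80, 120, 160]           # angular disparity in degrees
rts = [1000, 1750, 2500, 3250, 4000]     # hypothetical mean RTs in ms

slope, intercept = fit_line(angles, rts)
# These made-up data are exactly linear: RT = 1000 + 18.75 * angle,
# i.e., each additional degree of rotation costs 18.75 ms.
```

A roughly constant per-degree time cost is consistent with analog rotation; systematic departures from linearity are one way that analytic strategies can show up in response-time data.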
4.1. Strategies in Spatial Abilities Tests

In recent studies in my laboratory, we asked students to think aloud while solving items from two commonly used tests of spatial visualization ability, the Paper Folding Test (Eckstrom et al., 1976) and the Vandenberg Mental Rotations Test (Vandenberg & Kuse, 1978); sample items are shown in Figure 1. Each of these tests is made up of two sections in which students are given 3 min to solve 10 test items. In the first study, students took the first section of each test under the normal timing conditions and then gave a think-aloud protocol while solving the items in the second section (Hegarty, De Leeuw, & Bonura, 2008). On the basis of these protocols, we identified several strategies that students used in each of the tests and created "strategy choice" questionnaires that listed the different strategies identified in the protocols. Then a second group of 37 students (18 male, 19 female) were asked to complete these strategy choice questionnaires after taking computer-administered versions of the Paper Folding and Mental Rotation tests.
Table 1  Strategy Use in the Paper Folding Test.

Strategy                                                      Number of      Correlation
                                                              participants   with score

Imagery strategies
  I imagined folding the paper, punching the hole, and
    unfolding the paper in my mind                            34 (92%)       0.02
  I started at the last step shown and worked backward to
    unfold the paper and see where the holes would be         17 (46%)       0.27
Spatial analytic strategy
  First, I figured out where one of the holes would be and
    then eliminated answer choices that did not have a hole
    in that location                                          14 (38%)       0.03
Pure analytic strategies
  I figured out how many folds/sheets of paper were punched
    through/I figured out how many holes there would be in
    the paper at the end                                      25 (68%)       0.44**
  I used the number of holes/folds to eliminate some of the
    answer choices                                            20 (54%)       0.26

** p < 0.01. The first column indicates the number of participants who reported the strategy and the second column shows the point-biserial correlation between use of the strategy and test score.
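The point-biserial correlations in Table 1 relate a binary variable (whether a participant reported the strategy) to a continuous one (test score). As an illustration with invented data (not the chapter's), the coefficient can be computed directly from the two group means, and it is numerically identical to a Pearson correlation with the strategy coded 0/1:

```python
import math

def point_biserial(reported, scores):
    """Correlation between a 0/1 variable (strategy reported or not)
    and a continuous variable (test score). Uses population SDs."""
    n = len(scores)
    mean = sum(scores) / n
    sd = math.sqrt(sum((s - mean) ** 2 for s in scores) / n)
    yes = [s for r, s in zip(reported, scores) if r == 1]
    no = [s for r, s in zip(reported, scores) if r == 0]
    p = len(yes) / n
    return (sum(yes) / len(yes) - sum(no) / len(no)) / sd * math.sqrt(p * (1 - p))

# Invented data: whether each of 8 participants reported a strategy,
# and their test scores.
reported = [1, 1, 1, 0, 0, 1, 0, 1]
scores = [12, 10, 11, 6, 7, 9, 5, 13]
r_pb = point_biserial(reported, scores)  # positive: strategy users scored higher
```

Because the coefficient is just a Pearson correlation on dichotomously coded data, its sign directly reflects whether strategy users scored higher (positive) or lower (negative) than non-users.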
Table 1 lists the strategies identified for the Paper Folding Test, the number of students who reported using each of these on the strategy checklist, and the correlation of use of each strategy with score on the test. Most participants indicated that they had the experience of visualizing the folding of the paper and noting where the holes would be. Several students additionally reported that they worked backward to unfold the paper and figure out where the holes would be. But in addition to these strategies, which we classified as mental imagery strategies, students used a number of more analytic strategies. One was to note the location of where the hole was punched, and to check the answer choices to see if there were any choices that did not have a hole in this location (there is almost always a hole in the location at which the hole was punched after the unfolding process). Using this strategy can eliminate 25% of the answer choices across the test items. Another was to count the number of folds of paper that were punched through to determine how many holes there should be. This was classified as an analytic or rule-based strategy in that it involves abstracting nonspatial information (the number of holes) from the problem information. Using this strategy can eliminate 61% of the answer choices.
Although most students reported using the imagery strategies, they also reported using more analytic strategies. Students reported using 3.09 different strategies on average (SD = 1.29), and the number of strategies that they reported was correlated with their score on the Paper Folding Test (r = 0.40, p = 0.01). In particular, students who reported determining the number of holes in the final answer choice had significantly higher scores on the test (M = 10.8, SD = 3.4) than those who did not report using this strategy (M = 7.25, SD = 3.7). Students who additionally reported that they explicitly used the number of holes to eliminate answer choices had slightly higher scores (M = 10.6, SD = 4.1) than those who just reported counting the number of holes (M = 10.3, SD = 1.6), but this was not a significant difference. This study suggests that although mental imagery may be the dominant strategy that people use to solve paper folding items, imagery is typically augmented by more analytic strategies, and using at least one of these analytic strategies is correlated with test performance. A similar conclusion was reached with respect to the Vandenberg Mental Rotation Test. As Table 2 shows, again most students reported using a mental imagery strategy (either imagining the rotation of the objects or imagining changing their perspective with respect to the objects), but there were also a variety of analytic strategies used, including spatial analytic strategies that abstract the relative directions of the different segments of the object, and more abstract analytic strategies in which participants counted the number of cubes in the different segments of the object. Students reported 3.46 strategies on average; in the case of this test, the number of strategies used was not correlated with test score (r = 0.18). One notable strategy, previously identified by Geisser et al. (2006), was to compare the directions of the two end arms of the object.
This strategy highlights a difference between some items on this paper-and-pencil test and the reaction time task used by Shepard and Metzler (1971). In the Shepard and Metzler task, the foils are always mirror images. However, in the Vandenberg Mental Rotation Test, 35% of the foils differ from the standard object in shape, and this can be detected by examining the two end arms of the object. For example, in the item in Figure 1B, it can be seen that in the standard object on the left, the two end arms are perpendicular to each other, whereas for the first answer choice on the right, the two ends are parallel. Students who reported comparing the directions of the two end arms of the object had significantly higher scores on the Mental Rotation Test (M = 9.7, SD = 4.1) than those who did not report using this strategy (M = 6.6, SD = 5.0). Performance was not significantly related to use of any of the other strategies.

There are some limitations to this strategy choice study. It is based on self-reports, which raises questions about whether all the strategies that students used were available to conscious awareness, and about the suggestive nature of giving students a checklist of strategies. However, our earlier study, in which
Mary Hegarty

Table 2 Strategy Use in the Vandenberg Mental Rotation Test.

Mental imagery strategies
  "I imagined one or more of the objects turning in my mind": 34 participants (92%); r = 0.14
  "I imagined the objects being stationary as I moved around them to view them from different perspectives": 13 (35%); r = 0.02
Spatial analytic strategies
  "I noted the directions of the different sections of the target with respect to each other and checked whether these directions matched the answer choices": 30 (81%); r = 0.07
  "I figured out whether the two end arms of the target were parallel or perpendicular to each other and eliminated answer choices in which they were not parallel": 23 (62%); r = 0.33*
Pure analytic strategy
  "I counted the number of cubes in different arms of the target and checked whether this matched in the different answer choices": 20 (54%); r = 0.12
Test-taking strategy
  "If an answer choice was hard to see, I skipped over it and tried to respond without considering that choice in detail": 8 (22%); r = 0.03

* p < 0.05. The first value for each strategy is the number of subjects who reported the strategy; the second is the point-biserial correlation between use of the strategy and test score.
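The point-biserial correlations in Table 2 are simply Pearson correlations computed between a binary indicator (did the student report the strategy or not) and the continuous test score. A minimal sketch of the computation follows; the data here are invented for illustration and are not the chapter's data.

```python
def point_biserial(used_strategy, scores):
    """Point-biserial correlation: Pearson r between a 0/1 strategy-use
    indicator and a continuous test score (pure-Python, no dependencies)."""
    n = len(scores)
    mean_u = sum(used_strategy) / n
    mean_s = sum(scores) / n
    cov = sum((u - mean_u) * (s - mean_s)
              for u, s in zip(used_strategy, scores)) / n
    var_u = sum((u - mean_u) ** 2 for u in used_strategy) / n
    var_s = sum((s - mean_s) ** 2 for s in scores) / n
    return cov / (var_u ** 0.5 * var_s ** 0.5)

# Invented example: 8 test-takers; those who reported the strategy (1)
# tend to score higher, yielding a positive point-biserial correlation.
used = [1, 1, 1, 0, 0, 1, 0, 0]
score = [12, 9, 11, 5, 7, 10, 6, 8]
r = point_biserial(used, score)
```

Because the strategy indicator is dichotomous, the same r can equivalently be computed from the difference between the group means of strategy users and non-users, scaled by the score's standard deviation.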
the strategies were determined on the basis of concurrent verbal protocols (Hegarty, De Leeuw, et al., 2008), also revealed that determining the number of holes for the Paper Folding Test and comparing the directions of the two end arms of the object in the Mental Rotation Test were significantly correlated with performance, consistent with the results reported here. Furthermore, in current research we are asking students first to give a retrospective verbal protocol after they complete each test, and then rank the strategies on strategy checklists in order of how much they used them. This research shows that students rank more strategies when given a checklist than are identified from their verbal protocols, but the strategies that they rank the highest are almost always the ones that are identified on the basis of the verbal protocols, providing further validity for the strategy checklists. In summary, studies on typical tests of spatial visualization suggest that
"visualizing" spatial transformations (also referred to as using mental imagery or mental simulation) is an important strategy used on these tests, but more analytic strategies are also used.
4.2. Strategies in Complex Spatial Tasks

We now turn to a consideration of more complex spatial thinking tasks in domains such as mechanics, medicine, and chemistry. Examining spatial thinking in these domains reveals that many of the processes and structures that professionals have to think about in the real world are considerably more complex than those included in psychometric tests of spatial ability. In these situations, it is even more evident that visualization or other forms of mental imagery are augmented by more analytic processes. Two types of analytic thinking in particular, task decomposition and rule-based reasoning, are important ways in which visualization is augmented in complex spatial thinking.

4.2.1. Task Decomposition

Consider the types of spatial thinking that mechanical engineers engage in when designing a new device, or car mechanics engage in when diagnosing what is wrong with your engine. These probably involve imagining how the machine works, but this in turn does not just involve mentally rotating one rigid object (as do measures of mental rotation). Instead it involves imagining different motions (rotations, translations, ratcheting, etc.) of many components and how these motions interact to accomplish the function of the machine. It is implausible that people could imagine these complex interactions within the limited capacity of working memory.

One way in which people augment analog thinking processes in this situation is by task decomposition. That is, they use a "divide and conquer" approach to mentally simulate the behavior of complex mechanical systems piecemeal rather than holistically. Take, for example, the pulley system in Figure 2. When you pull on the rope of this pulley system, all of its parts move at once. However, when asked to infer how the system works, people appear to work through the causal chain of events and infer the motion of one component at a time.
Hegarty (1992) showed people diagrams of pulley systems such as the one in Figure 2 and asked them to verify statements about how different components would move (e.g., "When the rope is pulled, the lower pulley turns clockwise") while measuring response time and eye fixations. When asked to infer the motion of a particular component (say the middle pulley), eye fixations indicated that people looked at that component, and at components earlier in the causal chain of events (i.e., the upper rope and pulley), but not at components later in that chain of events (see the patterns of eye fixations for different trials in Figure 2). Time to infer the motion of a
[Figure 2 shows the pulley system diagram and the numbered sequences of eye fixations for three trials, with the statements "The upper pulley turns counterclockwise," "The middle pulley turns counterclockwise," and "The lower pulley turns clockwise."]
Figure 2 Sequence of eye fixations on three different mechanical reasoning trials in which the subject had to infer how a pulley in a pulley system would turn when the rope was pulled. The subject's task was to say whether each sentence was true or false.
component was also linearly related to the position of the component in the causal chain of events, with later components in the causal chain taking longer (Hegarty, 1992). This pattern of results suggests that people accomplish the task of inferring the motion of a complex system by decomposing the task into a sequence of relatively simple interactions (e.g., how the motion of a rope causes a pulley to rotate). This account is consistent with artificial intelligence models in proposing that mechanical reasoning involves sequentially propagating the effects of local interactions between components (e.g., DeKleer & Brown, 1984).

At the same time, there is evidence that people are using mental simulation processes involving spatial working memory rather than merely applying verbally encoded rules when they imagine each of the "links" in the causal chain. Visual–spatial working memory loads interfere more with mechanical reasoning than do verbal working memory loads (Sims & Hegarty, 1997). Similarly, mechanical reasoning interferes more with visual–spatial than with verbal memory loads, suggesting that mechanical reasoning depends on representations in visual–spatial working memory (cf. Logie, 1995). Furthermore, when asked to "think aloud" while they infer how parts of a machine work, people tend to express their thoughts in gestures rather than in words (Hegarty, Mayer, Kriz, & Keehner, 2005; Schwartz & Black, 1996), and asking people to trace an irrelevant spatial pattern while reasoning about mechanical systems impairs their reasoning (Hegarty et al., 2005).

In summary, I have argued that mechanical reasoning involves first decomposing the task into one of inferring the motion of individual components and then using mental image transformations to simulate the motion of each component in order of the causal chain of events in the machine's functioning (Hegarty, 1992, 2004).
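The sequential propagation described here, in the spirit of the qualitative-physics models of DeKleer and Brown (1984), can be sketched as a toy program that infers each component's motion from the previous link in the causal chain. The component names and local rules below are invented for illustration, chosen so the outputs match the three trial statements in Figure 2; this is not a model from the studies themselves.

```python
# Toy causal-chain propagation for a pulley system: each component's
# motion is inferred from the motion of the previous component, one
# link at a time, rather than simulating the whole system at once.
# Component names and rules are illustrative, not from Hegarty (1992).

def opposite(direction):
    return "counterclockwise" if direction == "clockwise" else "clockwise"

# Each entry: (component, local rule mapping incoming motion -> motion).
CHAIN = [
    ("upper pulley",  lambda prev: "counterclockwise"),  # driven directly by the pulled rope
    ("middle pulley", lambda prev: prev),                # assumed same-direction rope drive
    ("lower pulley",  lambda prev: opposite(prev)),      # assumed crossed-over rope drive
]

def infer_motions(chain):
    """Propagate motion through the chain in causal order."""
    motions = {}
    state = None
    for name, rule in chain:
        state = rule(state)
        motions[name] = state
    return motions

motions = infer_motions(CHAIN)
```

The point of the sketch is structural: inferring the motion of a late component requires walking through every earlier link, which mirrors the finding that response times grow linearly with a component's position in the causal chain.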
A similar type of task decomposition can be seen in a recent study in which we examined people's ability to infer the appearance of a cross section of a three-dimensional object. This task was inspired by research on spatial thinking in medicine (Hegarty et al., 2007), especially the skills needed to use medical imaging technologies such as MRI and ultrasound. For example, when a radiologist or surgeon inspects a medical image of some part of the anatomy, he or she has to be able to imagine what the cross section of the anatomy should look like, in order to notice and diagnose an abnormality (e.g., a tumor).

In a series of studies (Cohen & Hegarty, 2007; Keehner et al., 2008), we examined performance in a laboratory-based task in which people had to infer and draw the cross section of an anatomy-like object which was oval in shape, with some ducts running through it (see Figure 3). While performing this task, participants had access to a three-dimensional model of the object and they could rotate this object around either its horizontal or vertical axis. Tables 3 and 4 present think-aloud protocols of two different participants performing two different trials of this task. It can be seen that both decompose the task. They first determine what the outside shape of the cross section should look like,
[Figure 3 panels: Trial 2 (correct cross section and drawing by participant 2, Slice 02) and Trial 6 (correct cross section and drawing by participant 5, Slice 06).]
Figure 3 Examples of two different trials from the cross section test, the correct cross section for each trial, and the cross sections drawn by the participants whose verbal protocols appear in Tables 3 and 4.
Table 3 Protocol of a Participant (Participant 2) Drawing the Cross Section Corresponding to Trial 2 in Figure 3.

Infers outside shape:
  S: So, uh, a vertical cross-section, so... [switches to vertical animation] we're gonna start with an ellipse, [draws an ellipse]
Infers number of ducts:
  S: and we've only cut through one branch...
  E: Okay
Infers shape of the duct:
  S: ...and it seems to be pretty head-on, so it seems that the cut is perpendicular to the direction of the branch. [rotating vertical animation]
  E: Okay
Infers location of the duct:
  S: And now I'm just trying to figure out where in this circle now along the horizontal axis that, that cut will be... [Rotating animation back and forth to arrow view]
  E: Okay. And you're using the rotation...
  S: And yes so I'm rotating to figure that out. It looks like it ought be pretty central, [Makes a small circle with his finger on the computer animation]
  S: so I'm gonna give it a circle right about here. [draws duct]

The drawing that this participant produced is shown in Figure 3.
then infer how many ducts there should be in the drawn cross section, next they infer the shape of the ducts, and finally they figure out where the ducts should be. As in the mechanical reasoning example above, the participants do not seem to visualize rotating and slicing the object as a whole. Instead they use a divide and conquer approach to accomplish the task.

4.2.2. Rule-Based Reasoning

Another way in which imagery-based processing or mental simulation is augmented by more analytic thinking is that it often leads to the observation of regularities, so that rule-based reasoning takes over. For example, take the gear problem in Figure 4. When Schwartz and Black (1996) asked people to solve problems like this, their gestures indicated that they initially mentally simulated the motion of the individual gears, but on the basis of these simulations, people discovered the simple rule that any two interlocking gears move in opposite directions. Participants then switched to a rule-based strategy, but reverted to the mental simulation strategy again when given a
Table 4 Protocol of a Participant (Participant 5) Drawing the Cross Section Corresponding to Trial 6 in Figure 3.

Infers outer shape:
  S: All right, this is a vertical cut, oblong exterior shape [draws outer shape]
Infers number of ducts:
  S: We're cutting the same, uhm, branch, uh, fork in two places. So it will be a two internal structure slice
Infers duct shape:
  S: So we can see that the top, the top is gonna be the angle. One is gonna be oblong in one direction and the other one's gonna be oblong in another direction
  S: Let me make that cut. [jitters horizontal animation] [switches to vertical animation and rotates it ending at default view]
Infers location of the ducts:
  S: Structure is roughly equidistant between the middle and outer sides [points with pencil toward monitor] So... [draws duct] But it's about in the center [erases] [rotates horizontal animation to arrow view] [redraws first duct] [draws second duct]
  S: Okay

The drawing that this participant produced is shown in Figure 3.
[Figure 4 shows a gear train driven by a handle, with answer arrows labeled A and B and the question: "When the handle is turned in the direction shown, which direction will the final gear turn? (if either, answer C.)"]

Figure 4 Example of a gear problem from Hegarty et al. (2005). Reprinted by permission of the publisher (Taylor & Francis, Ltd, http://www.tandf.co.uk/journals).
novel type of gear problem. Schwartz and Black proposed that people use mental simulation in novel situations in which they do not have an available rule or when their rules are inadequate (e.g., are too narrow for the situation at hand). Rule-based reasoning can also be seen in the protocols in Tables 3 and 4, in which the participants seem to just retrieve the knowledge that a vertical cut of an egg-shaped object will result in an oval outside shape of the cross section.

Rule-based reasoning was also observed by Stieff (2007) in examining problem solving in organic chemistry. An important topic in organic chemistry is stereochemistry, in which students learn to understand the three-dimensional structure of molecules. Molecules that contain the same atoms may or may not have the same three-dimensional structure; for example, two molecules made up of the same atoms may be mirror images of each other (known as enantiomers in chemistry) or may have the same structure. One way of determining whether two molecules have the same structure or are enantiomers is to attempt to mentally rotate one molecule into the other. However, chemists have also developed a heuristic for making this judgment. It turns out that if two of the bonds to a central carbon atom are identical, the molecule is symmetrical and will always superimpose on its mirror image. Stieff found that beginning students almost always used a mental rotation strategy when determining whether two molecules had the same structure, but expert chemists were much more likely to use the analytical strategy, especially when the molecule was symmetrical. Furthermore, the novices readily adopted the analytical strategy when it was taught to them.
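The regularity that Schwartz and Black's participants discovered for gear trains (adjacent meshed gears turn in opposite directions) reduces to a parity rule, which makes the contrast between simulation and rule-based reasoning concrete. The sketch below is illustrative only; it is not the authors' model, and the function name is invented.

```python
def final_gear_direction(first_direction, n_gears):
    """Rule-based shortcut: adjacent meshed gears turn in opposite
    directions, so the final gear's direction depends only on the
    parity of the number of gears in the chain."""
    if n_gears % 2 == 1:
        return first_direction
    return "counterclockwise" if first_direction == "clockwise" else "clockwise"
```

A mental-simulation strategy would step through each gear in turn (work growing with chain length), whereas the rule answers in one step regardless of how many gears intervene, which is one way to think about why experts' rule-based strategies are so much more efficient for routine problems.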
In examining the use of visualization versus analytical strategies in domains such as mechanics and chemistry, researchers have suggested that visual–spatial strategies are default domain-general problem solving heuristics that are used by novices or by experts in novel situations, whereas rule-based analytic strategies are learned or discovered in the course of instruction and are used by experts in routine problem solving (Schwartz & Black, 1996; Stieff, 2007).

In summary, I have argued that one component of spatial intelligence may be flexible strategy choice in solving spatial problems. The studies reviewed in this section of the chapter suggest that simulating spatial transformations (e.g., using visual imagery or "simulation") is an important strategy in mechanical reasoning, chemistry problem solving, inferring cross sections of three-dimensional objects, and performance of psychometric spatial abilities tests. But in each of these cases, more analytic strategies are also used. These can involve decomposing the problem, such that less information needs to be visualized at a time, or the abstraction of nonspatial information and the application of rules to generate an answer or eliminate answer choices. One tentative interpretation of these results is that spatial visualization is an effortful process, and the best spatial thinkers are those
who augment visualization with more analytic strategies, and use these analytic strategies when they can, so that they visualize only the information that they need to represent and transform in order to solve a problem.

This characterization of successful spatial problem solvers as flexibly switching between imagery and more analytical thinking processes is consistent with studies examining the relation between spatial ability and working memory (Kane et al., 2004; Miyake et al., 2001). These studies indicate that as spatial ability tests get more complex, they share more variance with executive working memory tasks and do not just depend on spatial working memory. The correlation with executive working memory may reflect adaptive strategy choice or the application of more analytic strategies.

It is also consistent with a new characterization of the visualizer–verbalizer cognitive style proposed by Kozhevnikov and colleagues (Kozhevnikov, Hegarty, & Mayer, 2002; Kozhevnikov, Kosslyn, & Shephard, 2005). This research has revealed that people who are identified as having a "visualizer" as opposed to a "verbalizer" cognitive style can in fact be classified into two groups, based on their spatial ability. High-spatial visualizers tend to abstract only the information necessary to solve a spatial problem and are successful in solving spatial ability test items, but are less successful on problems that depend on vivid detailed mental images. In contrast, low-spatial visualizers have detailed and vivid imagery, but tend to represent irrelevant details when performing tests of spatial ability and therefore do not do well. It appears that high-spatial visualizers may use more analytic strategies that abstract only the spatial information necessary to solve spatial problems.

Finally, this characterization informs the classic imagery debate.
Kosslyn and his colleagues (Kosslyn, Thompson, & Ganis, 2006) have argued that people solve a variety of spatial thinking problems by constructing and transforming visual images, whereas Pylyshyn (2003) has argued that these tasks can be solved on the basis of tacit knowledge and that the basic representation underlying these tasks may be propositional. The research reported here suggests that the use of visual imagery versus more analytic strategies may not be an either–or situation. Instead, adaptive spatial thinking may depend on choosing between available strategies, which might be imagery based or more analytic, and at least in some cases, use of imagery can lead to the noticing and abstraction of regularities (i.e., tacit knowledge) that can then be used to solve problems in the future, without depending on effortful visualization processes.
5. Representational Metacompetence as a Component of Spatial Intelligence

In addition to flexible strategy choice, our observations of expert spatial thinking reveal that another component of spatial intelligence is adaptive use of external visual–spatial representations. In medicine, not
just radiologists but also surgeons increasingly rely on a variety of imaging technologies such as ultrasound and magnetic resonance imaging, and interactive computer visualizations are prevalent in medical education, for example, in teaching anatomy. Similarly, meteorologists work with massively interactive visualizations in which they can add and subtract the predictions of weather models for different meteorological variables (pressure, rainfall, etc.) on remotely sensed satellite images. Scientists and intelligence analysts alike can explore multivariate data with powerful interactive visualizations (Thomas & Cook, 2005). These visualizations have the power to augment spatial thinking, for example, by providing external visualizations of phenomena that are too complex to be visualized internally. But they also depend on intelligence for their use. Specifically, to use these visualizations effectively, a user has to choose which information to visualize, how to visualize the information in support of a specific task, and how to manipulate the visualization system to create that representation.

In research on children's scientific problem solving, diSessa (2004) introduced the term meta-representational competence to refer to the ability to create new representations, choose the best representation for a particular task, and understand why particular representations facilitate task performance. This ability goes beyond the capacity to understand the conventions of a particular type of representation (such as a graph, map, or diagram). It includes the ability to use a novel type of display without instruction, and to choose the optimal type of display for a given task. It is therefore a form of metacognition about displays.
In this section, I examine meta-representational competence as a component of adult spatial intelligence, outlining evidence for individual differences in both using representational systems, and choosing the optimal representation for a given task.
5.1. Using Representations

In research inspired by the use of new technologies in medicine, my colleagues and I have found large individual differences in ability to use interactive visualizations. Take, for example, the task shown in Figure 3 in which people had to infer and draw the cross section of a three-dimensional anatomy-like object. While performing this task, participants had access to an interactive visualization that they could rotate to see different views of the object. In some experiments (exemplified by the protocols in Tables 3 and 4), they could rotate the 3D object around either its horizontal or vertical axis using separate interactive animations, and in other experiments the interface was a three degrees-of-freedom inertial tracker which could be rotated in three dimensions and which produced corresponding real-time rotations of the computer visualization (Cohen & Hegarty, 2007; Keehner et al., 2008).
The protocols in Tables 3 and 4 illustrate productive use of the horizontal and vertical rotations to which the participants had access. After determining how many ducts there would be in the cross section, both participants rotated the online visualizations to determine where these ducts would be located in the resulting cross section. Specifically, they rotated the visualization to what we called the "arrow view," which is the view of the object that one would see if one was viewing it from the perspective of the arrow given in the problem statement (see Figure 3). Rotating the external visualization in this way relieves the participant of the need to mentally rotate the object or mentally change his or her perspective with respect to the object. This is an example of what Kirsh (1997) referred to as a complementary action, that is, an action performed in the world that relieves the individual of the need to perform an internal computation.

In our experiments, we found large individual differences in how people used the interactive visualizations to solve these problems, especially in how much they accessed the arrow view. For example, in one experiment in which there were 10 cross section trials, the number of trials on which participants accessed the arrow view ranged from 0 to 9 across individuals, and the amount of time spent on this view ranged from 0 to 35.8 s per trial. Access of the arrow view was correlated with ability to draw an accurate cross section but was not correlated with psychometric measures of spatial ability (mental rotation or perspective taking). Interestingly, seeing this arrow view was related to performance on the cross section task regardless of whether the participant actively manipulated the interface or passively viewed the results of another participant's interactions.
One study used a "yoked" design, such that each participant who used the interactive visualization to accomplish the task was "yoked" to a passive participant who viewed the visualizations of the interactive participant while he or she performed the task, but had no control of the visualization (Keehner et al., 2008, Experiment 2). The passive participants performed just as well as their interactive partners, and the quality of the cross sections drawn by both groups was related to how much they saw the arrow view. That is, participants performed well on the cross section drawing task if they viewed the three-dimensional visualization from the perspective of the arrow, regardless of whether they were actually the ones who interacted with the visualization to achieve this view or whether they were yoked to a participant who accessed this view.

In a final experiment (Keehner et al., Experiment 3), we created an animation that mimicked the interactions of the most successful active participants. This animation rotated to the arrow view, and then "jittered" back and forth around this view to reveal the three-dimensional structure of the object. Participants who viewed this animation were highly successful at the drawing task, although they had no control over the animation. Interestingly, those with higher spatial ability were most able to benefit from this animation, such that they drew better cross sections after seeing it.
Effective use of the interactive visualizations in this task demands several spatial thinking capabilities besides the ability to benefit from the arrow view in drawing the cross section. It also demands the ability to rotate the visualization to the arrow view, which in turn demands both (1) the metacognitive awareness of how accessing this view might help you accomplish the task and (2) the capability to use the interface to achieve this rotation. We found large individual differences in both of these aspects of task performance.

The following protocol of a low-spatial participant in one of our experiments illustrates a lack of metacognitive awareness. This participant is unable to discover how using the external visualization can help with the cross section task. She rotates the problem statement, printed on paper (see the example of a problem statement in Figure 3), to try to imagine a different view of the object, rather than rotating the external 3D visualization which would actually give her this view. She remarks that once she starts rotating the external visualization she gets disoriented:

I'm sure this could be helpful, but . . . I don't know, it isn't . . . I can't connect with it. [referring to the computer visualization] So. I'm turning it upside down, which may or may not be a good strategy, but it feels like it gives me something to do, to look at it from another way. [turning the page with the problem statement upside down] The computer, when it turns I have no . . . I feel like I have lost my bearings when I go with it, but with the book at least I have some, I have some grounding.
Finally, in a recent study in my laboratory (Stull et al., 2009) we found that even rotating a three-dimensional virtual object to a specified view can be difficult for some people (see also Ruddle & Jones, 2001). In this research, which concerned anatomy learning, participants learned the structure of a complex anatomical object (a vertebra, shown in Figure 5A) by manipulating a virtual model. They rotated the virtual object using the three degrees-of-freedom interface described above, in which rotations of the interface produce corresponding rotations of the virtual model. While learning the anatomy, participants performed 80 trials in which they were shown a cue card with views of the anatomy from different orientations in 3D space, each of which highlighted a particular anatomical feature that could be seen from this view (see the example in Figure 5B). Their task was to rotate the virtual model to that orientation and note the appearance and location of that feature. After this learning phase, we had them identify features from different orientations, to test their knowledge of the anatomy. There were large individual differences in accuracy, response time, and how directly participants rotated the object to the desired view, and participants with low scores on spatial ability tests had the most difficulty with
[Figure 5 panels A–D, with the transverse foramen labeled in two of the views.]

Figure 5 Examples of trials from the anatomy learning task studied by Stull, Hegarty, and Mayer (2009). The task is to rotate the visualization from the orientation shown on the left to the orientation shown on the right. The lower two images show the object with orientation references added.
this manual rotation task. That is, just rotating the external visualization to a specified view, shown in a picture, was challenging for some people. To alleviate their difficulties, we introduced "orientation references," that is, markers indicating the vertical and horizontal axes of the object (see Figure 5C and D). With these orientation references, participants were more successful in manipulating the virtual model, and in one experiment, low-spatial individuals were more successful in learning the structure of the anatomy with these orientation references.

In summary, external representations such as interactive 3D visualizations are often proposed as ways of augmenting spatial thinking (e.g., Card, Mackinlay, & Shneiderman, 1999). The studies presented in this section suggest that interactive 3D visualizations also depend on intelligence for their use. We have found individual differences in ability to discover how to use an external visualization to accomplish a task (in inferring cross sections), in how to manipulate a virtual object to a particular orientation (in our research on learning anatomy), and in ability to benefit from the most task-relevant view of an object (i.e., seeing the arrow view in the cross section task). These studies indicate that ability to use a novel external representation is not always a given, and provide evidence for individual differences in adult meta-representational competence.
5.2. Choosing Representations

Another aspect of meta-representational competence is the ability to choose the best representation for a given task. This type of spatial intelligence comes into play when I am analyzing some new data and use a graphing program to see the patterns in the data, or when I am making graphs to present the data in a paper. Should I use a bar graph or a line graph? If the data are multivariate, which variable should be on the x-axis? Research on graph comprehension has shown that people get different messages out of a graph depending on these decisions (Gattis & Holyoak, 1996; Shah & Carpenter, 1995; Shah, Mayer, & Hegarty, 1999). In map comprehension, different color or intensity values make the different variables displayed on the map more or less visually distinct (Yeh & Wickens, 2001). Display format can also affect problem solving with more abstract relational graphics (Novick & Catley, 2007) or even equations (Landy & Goldstone, 2007). But how good are people at choosing the best format of representation for a given situation?

In research on abstract spatial representations, Novick and colleagues (Novick, 2001; Novick & Hurley, 2001) found a high degree of competence among college students in ability to match the structure of a problem to a type of diagram (a matrix, a network, or a hierarchy). They argued that people have schemas that include applicability conditions for the different types of diagrams, and found that college students were often able to articulate these conditions. Similarly, educators and developmental psychologists have identified children's native competence to create appropriate representations, for example, in graphing motion and mapping terrain (Azevedo, 2000; diSessa, 2004; Sherrin, 2000). However, these researchers also point to limitations in this natural competence.
For example, children show a strong preference for realistic representations, even when less realistic representations are more effective for task performance. Recent research in my laboratory has focused on choice of representations as an aspect of adult meta-representational competence. Rather than examining more abstract diagrams that depict the structure of a problem or information space (cf. Novick, 2001), our research has focused on representations of objects and events that correspond to concrete entities that exist in real space. These representations include street maps, weather maps, and diagrams of mechanisms. As with all representations, the effectiveness of a map or diagram is relative to the task at hand, and effective diagrams typically abstract the most task-relevant information from the entity being represented. Thus, a street map is a more effective representation for finding your way in a new city than is a satellite image of the city. Cartographers, cognitive scientists, and designers of information visualizations all emphasize the importance of simplification and parsimony in designing representations (Bertin, 1983; Kosslyn, 1989; Tufte, 1983). For example, in his analysis of effective design of graphs based on perceptual
Components of Spatial Intelligence
287
processes, Kosslyn (1989) states as a cardinal rule that ‘‘no more or less information should be provided than is needed by the user’’ (p. 211). According to these experts, good external representations simplify and abstract from the real world that they represent. Our research suggests that, contrary to these principles of effective graphical design, when displays represent real-world entities, people have a bias toward preferring more detailed, realistic displays that represent their referents with greater fidelity over more simplified and parsimonious displays. In a preliminary study (Hegarty, Smallman, et al., 2009), we developed a questionnaire to evaluate intuitions about the effectiveness of animation, realism, showing the third dimension (3D), and detail in visual displays. This questionnaire was given to 739 undergraduate students. Students were asked about both their preferences for different attributes of displays and their ratings of the effectiveness of these different display attributes for a variety of everyday tasks that were chosen to be relevant to college students (e.g., learning from textbooks, navigating with the use of maps, and understanding weather reports). The following are sample items:
I learn more effectively from diagrams that depict objects in three dimensions.

I prefer to use a map that shows as many details as possible when I am trying to decide which route to follow.

When looking at a weather map on TV or the Internet, I prefer to have the movement of weather systems animated, so I can watch how the systems are moving across the country.

Participants responded by choosing a number from 1 (strongly disagree) to 7 (strongly agree), and all items were worded such that agreeing meant that one was in favor of the display characteristic described (realism, 3D, animation, or detail). Figure 6 shows the means and standard errors for the 17 individual scale items and also indicates which display enhancement (realism, 3D, etc.) was asked about in each item. The neutral response for each item was 4 (‘‘neither agree nor disagree’’), so mean values greater than 4 suggest that on average participants were in favor of display enhancements. What is striking here is the overall pattern of responding, which indicated that participants were almost always in favor of more enhanced displays. They preferred animated to static displays (the two longest bars in Figure 6 represent items that asked about animated displays), 3D to 2D displays, more detailed to less detailed displays, and more realistic to less realistic displays. The means for all but two items were significantly greater than 4. The two exceptions (items 11 and 12) were about using three-dimensional maps to find routes. These data indicate that undergraduate students have consistent intuitions that visual displays are more effective when they present more information or represent their referents with greater fidelity.
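The scoring logic behind this analysis is simple enough to sketch: each item's mean rating is compared with the scale midpoint of 4, and means above the midpoint are read as favoring the display enhancement. The fragment below is an illustration only, using invented ratings (the study's own analysis used significance tests on the full sample of 739 respondents); the item labels and data are hypothetical.

```python
# Illustrative sketch: comparing mean agreement ratings against the
# neutral midpoint of a 1-7 Likert scale. The ratings are made up.
from math import sqrt
from statistics import mean, stdev

NEUTRAL = 4  # "neither agree nor disagree" on the 1-7 scale

def summarize(ratings):
    """Return the mean rating and its standard error."""
    m = mean(ratings)
    se = stdev(ratings) / sqrt(len(ratings))
    return m, se

# Hypothetical responses for two items (not the study's data).
items = {
    "Animation item (hypothetical)": [6, 7, 5, 6, 7, 6, 5, 7, 6, 6],
    "3D-map item (hypothetical)":    [4, 3, 5, 4, 4, 3, 5, 4, 4, 4],
}

for label, ratings in items.items():
    m, se = summarize(ratings)
    leaning = "favors the enhancement" if m > NEUTRAL else "roughly neutral"
    print(f"{label}: mean = {m:.2f} (SE = {se:.2f}) -> {leaning}")
```

Under this reading, the first hypothetical item patterns like the animation items in Figure 6 (mean well above 4), while the second patterns like the two 3D route-map exceptions (mean at the midpoint).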
[Figure 6 appears here: a bar graph of mean agreement (1 = strongly disagree, 4 = neutral, 7 = strongly agree) for questionnaire items 1–17, each labeled by the display attribute it asked about (realism, detail, 3D, or animation).]
Figure 6 Mean scores for the 17 items on a questionnaire assessing students’ preferences and evaluations of the effectiveness of different types of visual displays. The y-axis indicates which type of display attribute (animation, 3D, realism or detail) is asked about in each questionnaire item.
Similar intuitions have been found among experts who work with visual displays. For example, Navy users prefer realistic 3D rendered icons of ships over less realistic, more abstract symbols in their tactical displays. But different ships are visually similar in the real world, so maximizing realism here has the unanticipated disadvantage of creating ship icons that are hard to discriminate. Consequently, people perform better with ‘‘symbicons’’ that pare down realism to maximize discriminability (Smallman, St John, Oonk, & Cowen, 2001). Similarly, in a recent study, participants from the US Navy were shown highly detailed 3D terrain maps and smoother, more simplified maps and asked to predict which would be more effective for laying routes across terrain. Participants predicted better performance with the most realistic detailed 3D maps, but in fact performed more accurately with the simplified maps that removed task-irrelevant details (Smallman, Cook, Manes, & Cowen, 2007). In summary, people prefer displays that simulate the real world with greater fidelity over simpler and more abstract displays (Scaife & Rogers, 1996; Smallman & St John, 2005). But in fact, display enhancements such as detail, animation, realism, and showing the third dimension do not consistently enhance performance and often impede it (e.g., Khooshabeh & Hegarty, 2009; Smallman & St John, 2005; Smallman et al., 2001; Tversky, Morrison, & Betrancourt, 2002; Zacks et al., 1998).
In more recent research, my colleagues and I have focused on choice of visual–spatial displays in the domain of weather forecasting (meteorology). Meteorology offers a rich domain in which to study issues regarding choice of visual–spatial displays. Forecasters use weather maps for a variety of tasks, including reconciling model data with observations, generating forecasts for different client needs, and issuing warnings of severe weather events. The displays that they use while performing these tasks typically show a variety of different meteorological variables (pressure, wind, temperature, etc.) superimposed on the map. Existing display systems give forecasters a great deal of flexibility and tailorability in terms of which variables are shown and how they are visualized (Trafton & Hoffman, 2007). Weather maps also have the advantage of being meaningful and relevant to both experts and novices (although of course, experts use a greater range of different displays than can be understood by novices). In an initial naturalistic study (Smallman & Hegarty, 2007), we gave 21 Navy weather forecasters the task of preparing a weather forecast for a fictional ship off the coast of California, asked them to save all the displays they created or accessed (e.g., from the World Wide Web) while working on the forecast, and later interviewed them about their display choices. We found that in general, the forecasters accessed weather maps that were more complex than they needed, displaying variables that were extraneous to their task. That is, the forecasters typically used displays that contained more variables than they said that they were thinking about while viewing those displays. Interestingly, this effect was exacerbated with forecasters of lower spatial ability. That is, low-spatial forecasters put more extraneous variables in their displays and their forecasts were somewhat less accurate. 
We then developed laboratory tasks that allowed us to examine the intuitions of both novice and expert users about display effectiveness under more controlled conditions (Canham et al., 2007; Hegarty, Smallman et al., 2008; Hegarty, Smallman et al., 2009). These studies also evaluated the actual effectiveness of different displays. One task (see example in Figure 7) examined participants’ intuitions about visual displays. In this task, participants were shown eight different weather maps varying in complexity, and had to choose the map they would use when performing a task such as comparing the pressure or temperature in different regions of the map. One of the maps showed only the information necessary to perform the task, while the others also showed extraneous variables, which were either off-task meteorological variables (e.g., adding temperature information to a display when the only variable to be compared was pressure) or realism (completely task-irrelevant terrain features and state boundaries). Participants were sometimes asked which map they would prefer to use, as in Figure 7, and were sometimes asked to choose the map with which they would perform most efficiently. We also measured participants’ performance of meteorological tasks with the different maps. In some studies (Canham et al., 2007; Hegarty,
[Figure 7 appears here: eight candidate weather maps (numbered 1–8) presented with the prompt ‘‘If you wanted to compare the wind direction at different points on the map, which of the displayed maps would you prefer to use?’’]
Figure 7 Example of an intuition trial from the meteorology studies.
[Figure 8 appears here: a less realistic map and a more realistic map, each marked with regions A, 1, and 2, presented with the prompt ‘‘Compared to region A, which region is most similar in wind direction (1 or 2)?’’]
Figure 8 Example of a comparison trial from the meteorology studies, showing a less realistic map (displaying only the task-relevant information) and a more realistic map. The task is to indicate which region (1 or 2) is most similar to region A in wind direction.
Smallman et al., 2008), the performance tasks involved reading and comparing values of meteorological variables in different regions of the maps. I will refer to this as the comparison task (see Figure 8). In another study, they involved inferring differences in wind speed from pressure differences across
[Figure 9 appears here: a less realistic map and a more realistic map, each marked with regions 1–4, presented with the prompt ‘‘Which region (1, 2, 3, or 4) has the strongest wind?’’]
Figure 9 Example of an inference trial from the meteorology studies, showing a less realistic map (displaying only the task-relevant information) and a more realistic map. The task is to say which of the four areas has the strongest wind, which in turn involves inferring wind strength from the pressure differential across the area.
an area (e.g., Figure 9) or inferring pressure differences from wind speed (Hegarty, Smallman et al., 2009). I will refer to these as inference tasks. In the case of inference tasks, novices were first taught the meteorological principles that they needed to make the necessary inferences. Participants’ map choices in the intuition task (example in Figure 7) indicated that about one third of the time novices preferred the more realistic maps that added terrain and state boundaries, although these additions provided no task-relevant information. Novices tended not to choose maps that included task-irrelevant meteorological variables. More expert participants (postgraduate students in meteorology at the Naval Postgraduate School in Monterey) chose maps that included both extraneous realism and extraneous meteorological variables. In general, participants’ map choices differed little between being asked with which map they would be most efficient and being asked which map they would prefer to use, indicating that they were not just responding on the basis of aesthetics. Turning to the measures of actual performance with the different maps, novices were very accurate (over 95%) for the comparison task and quite accurate (89%) for the inference task. Figures 8 and 9 show examples of the comparison and inference tasks with maps that displayed only the task-relevant information (less realistic maps) and maps that displayed the additional variables of terrain and state boundaries (more realistic maps). With the simplest (less realistic) maps, comparison task trials took about 5 s to complete. Adding realism to the weather maps added over half a second (10%) to average response times, and each extraneous meteorological variable on a weather map increased response time by about an additional half second. On the inference task, both the accuracy and the response times of novices suffered when realism was added to the maps (mean accuracy
decreased slightly from 91.2% to 87.7% and mean response time increased from 3.0 to 3.3 s). As might be expected, the more expert participants performed very accurately (over 96%) on both the comparison task and the inference task. However, additional variables on a map significantly slowed their performance. For example, on the comparison task, their response time increased from 4.9 s on less realistic maps to 5.1 s on realistic maps, and each additional meteorological variable increased their response time by about 0.2 s. While they were less affected by the addition of task-irrelevant variables than were novices, experts were also not immune to the effects of these extraneous variables. In summary, our research on choosing and using displays in the domain of meteorology indicates that novices and experts alike have a tendency to choose more realistic over less realistic displays, even though realism impairs their performance in simple display comprehension tasks. Some experts prefer not just realism but also maps that display extraneous meteorological variables. For example, in debriefing interviews, some meteorologists had the strong intuition that adding pressure to a map showing wind would enhance their performance, but in fact adding extraneous pressure information to any map significantly slowed performance. It is interesting to speculate about why people prefer more realistic and complex displays over simpler displays. Smallman and St John (2005) theorize that this stems from fundamental misconceptions about how perception works and the fidelity of what perception delivers. They argue that people possess ‘‘folk fallacies’’ that scene perception is simple, accurate, and rich, when, in fact, perception is remarkably complex, error-prone, and sparse.
These misconceptions result in a misplaced faith in realistic displays of the world that give users flawed, imprecise representations—a phenomenon they refer to as Naïve Realism. In the case of experts, we cannot rule out the possibility that they respond as they do because the laboratory tasks that we assigned them are simple and unfamiliar compared to their everyday tasks as meteorologists. Meteorologists may prefer maps with state boundaries because it is important to know where a weather pattern is developing in the everyday tasks that they perform as part of their jobs, and terrain may either reinforce this georeference or provide information about typical weather patterns that are likely to develop in different locations, for example, due to a nearby mountain range. This highlights a basic challenge in looking to expert spatial thinking to identify components of spatial intelligence. As tasks are simplified to bring them under experimental control and make them more doable by naïve participants, they can become less meaningful and identifiable to the real-world expert. We may have erred on the side of simplicity with our laboratory proxy tasks, but it is still surprising that experts did not exhibit better-calibrated
intuitions about the best displays for such simple tasks. Furthermore, the results from our laboratory studies are highly consistent with results from our earlier naturalistic observation of Navy weather forecasters, in which the task was similar to what they do every day. Even in this situation, participants accessed weather maps that were more complex than they needed, displaying variables that were extraneous to their task (Smallman & Hegarty, 2007). In summary, our research on choosing displays indicates that not just children but also adults have a bias to prefer realistic and complex displays over more parsimonious displays. Given the increasing availability of interactive, custom displays on the Internet as well as in professional settings, our research suggests that meta-representational competence is often lacking and may lead people to create graphical display configurations that actually impair their performance. Although new technologies have great potential to augment human intelligence, intelligence is also required for their use. Teaching people to appreciate the affordances of different display types and to critique displays, especially confronting the bias toward more realistic displays, may therefore present an interesting new opportunity for fostering spatial intelligence.
6. Concluding Remarks

In this chapter, I have argued that we need to look more broadly than psychometric tests of spatial ability to identify components of spatial intelligence or adaptive spatial thinking. The studies I have reviewed here have been based on one approach to identifying spatial intelligence: examining how experts in a variety of different domains think about spatial structures and processes. On the basis of these studies, I have identified two components of spatial intelligence. The first is flexible strategy choice between constructing and transforming mental images and more analytic thinking. I have demonstrated that even relatively simple spatial tasks included in psychometric tests of spatial ability involve analytic thinking, although people also report mental imagery as the dominant strategy by which they perform these tasks. With the more complex forms of thinking involved in mechanical reasoning and scientific thinking, it appears to be even more important to supplement mental imagery with more analytical forms of thinking. The second component of spatial intelligence that I have identified is what diSessa (2004) refers to as meta-representational competence, and includes the ability to choose the optimal external representation for a task, and to use novel external representations, such as interactive visualizations, effectively. I have identified large individual differences in this ability, which is becoming more important with developments in
information technology, which puts interactive display design in the hands of experts and novices alike. The research reviewed here has important implications for how to foster the development of spatial intelligence. Current approaches focus on training visualization ability. My research supports this approach, because it indicates that internal visualization is a basic strategy in spatial thinking. However, it also suggests that training in visualization should be supplemented by instruction in more analytic ways of thinking about space, and in the conditions under which analytic thinking can either supplement or replace more imagistic thinking. It also indicates that we should not just consider interactive external visualizations as ways of augmenting spatial intelligence, but should also consider the types of intelligence that are required to use these visualizations.
ACKNOWLEDGMENTS

I would like to thank my collaborators, especially Madeleine Keehner, Harvey Smallman, Mike Stieff, and Bonnie Dixon, for discussions of many of the issues in this chapter. This material was based in part upon work supported by the National Science Foundation under Grant Numbers 0313237 and 0722333, and by the Office of Naval Research under Grant Number N000140610163.
REFERENCES Azevedo, F. S. (2000). Designing representations of terrain: A study in meta-representational competence. Journal of Mathematical Behavior, 19, 423–480. Bertin, J. (1983). Semiology of Graphics. Wisconsin: University of Wisconsin Press (William J. Berg, Trans.). Canham, M., Hegarty, M., & Smallman, H. (2007). Using complex visual displays: When users want more than is good for them. In: Proceedings of the eighth international naturalistic decision making conference. Pacific Grove, CA, June 1st–4th. Card, S. K., Mackinlay, J. D., & Shneiderman, B. (1999). Readings in Information Visualization. Using Vision to Think, San Francisco, CA: Morgan Kaufmann. Carroll, J. (1993). Human cognitive abilities: A survey of factor-analytical studies. New York: Cambridge University Press. Casey, M. B., Nuttall, R. L., & Pezaris, E. (1997). Mediators of gender differences in mathematics college entrance test scores: A comparison of spatial skills with internalized beliefs and anxieties. Developmental Psychology, 33(4), 669–680. Cohen, C. A., & Hegarty, M. (2007). Individual differences in use of an external visualization while performing an internal visualization task. Applied Cognitive Psychology, 21, 701–711. Coleman, S. L., & Gotch, A. J. (1998). Spatial perception skills of chemistry students. Journal of Chemical Education, 75(2), 206–209. DeKleer, J., & Brown, J. S. (1984). A qualitative physics based on confluences. Artificial Intelligence, 24, 7–83. diSessa, A. A. (2004). Metarepresentation: Native competence and targets for instruction. Cognition and Instruction, 22, 293–331.
Eckstrom, R. B., French, J. W., Harman, H. H., & Dermen, D. (1976). Kit of factor-referenced cognitive tests. Princeton: Educational Testing Service. Eliot, J., & Smith, I. M. (1983). An international directory of spatial tests. Windsor, Berkshire: NFER-Nelson. Feng, J., Spence, I., & Pratt, J. (2007). Playing an action video game reduces gender differences in spatial cognition. Psychological Science, 18, 850–855. Ferguson, E. S. (2001). Engineering and the mind’s eye. Cambridge: MIT Press. Gattis, M., & Holyoak, K. J. (1996). Mapping conceptual to spatial relations in visual reasoning. Journal of Experimental Psychology: Learning, Memory & Cognition, 22, 231–239. Geisser, C., Lehmann, W., & Eid, M. (2006). Separating ‘‘rotators’’ from ‘‘non rotators’’ in the mental rotations test: A multivariate latent class analysis. Multivariate Behavioral Research, 41, 261–293. Gerson, H. B. P., Sorby, S. A., Wysocki, A., & Baartmans, B. J. (2001). The development and assessment of multimedia software for improving 3-D visualization skills. Computer Applications in Engineering Education, 9, 105–113. Hegarty, M. (1992). Mental animation: Inferring motion from static diagrams of mechanical systems. Journal of Experimental Psychology: Learning, Memory and Cognition, 18(5), 1084–1102. Hegarty, M. (2004). Mechanical reasoning as mental simulation. Trends in Cognitive Sciences, 8, 280–285. Hegarty, M., Canham, M., & Fabrikant, S. I. (in press). Thinking about the weather: How display salience and knowledge affect performance in a graphic inference task. Journal of Experimental Psychology: Learning, Memory and Cognition. Hegarty, M., De Leeuw, K., & Bonura, B. (2008). What do spatial ability tests really measure? In: Proceedings of the 49th meeting of the psychonomic society, Chicago, IL, November. Hegarty, M., Keehner, M., Cohen, C., Montello, D. R., & Lippa, Y. (2007). The role of spatial cognition in medicine: Applications for selecting and training professionals. In G.
Allen (Ed.), Applied spatial cognition. Mahwah: Lawrence Erlbaum Associates. Hegarty, M., Mayer, S., Kriz, S., & Keehner, M. (2005). The role of gestures in mental animation. Spatial Cognition and Computation, 5, 333–356. Hegarty, M., Montello, D. R., Richardson, A. E., Ishikawa, T., & Lovelace, K. (2006). Spatial abilities at different scales: Individual differences in aptitude-test performance and spatial-layout learning. Intelligence, 34, 151–176. Hegarty, M., Smallman, H. S., & Stull, A. T. (2008). Decoupling of intuitions and performance in the use of complex visual displays. In: Proceedings of the 30th annual meeting of the cognitive science society (pp. 881–886), Washington, DC, July. Hegarty, M., Smallman, H. S., Stull, A. T., & Canham, M. (2009). Naïve Cartography: How intuitions about display configuration can hurt performance. Cartographica, 44, 171–186. Hegarty, M., & Waller, D. (2006). Individual differences in spatial abilities. In P. Shah & A. Miyake (Eds.), Handbook of visuospatial thinking (pp. 121–169). Cambridge: Cambridge University Press. Just, M. A., & Carpenter, P. A. (1985). Cognitive coordinate systems: Accounts of mental rotation and individual differences in spatial ability. Psychological Review, 92, 137–172. Kail, R. (1986). The impact of practice on rate of mental rotation. Journal of Experimental Child Psychology, 42, 378–391. Kane, M. J., Hambrick, D. Z., Tuholski, S. W., Wilhelm, O., Payne, T. W., & Engle, R. W. (2004). The generality of working memory capacity: A latent-variable approach to verbal and visuospatial memory span and reasoning. Journal of Experimental Psychology: General, 133, 189–217. Keehner, M., Hegarty, M., Cohen, C. A., Khooshabeh, P., & Montello, D. R. (2008). Spatial reasoning with external visualizations: What matters is what you see, not whether you interact. Cognitive Science, 32, 1099–1132.
Keehner, M., Lippa, Y., Montello, D. R., Tendick, F., & Hegarty, M. (2006). Learning a spatial skill for surgery: How the contributions of abilities change with practice. Applied Cognitive Psychology, 20, 487–503. Keehner, M. M., Tendick, F., Meng, M. V., Anwar, H. P., Hegarty, M., Stoller, M. L., & Duh, Q. (2004). Spatial ability, experience, and skill in laparoscopic surgery. The American Journal of Surgery, 188, 71–75. Khooshabeh, P., & Hegarty, M. (in press). Inferring cross-sections: When internal visualizations are more important than properties of external visualizations. Human Computer Interaction. Kirsh, D. (1997). Interactivity and multimedia interfaces. Instructional Science, 25, 79–96. Kosslyn, S. M. (1989). Understanding charts and graphs. Applied Cognitive Psychology, 3, 185–226. Kosslyn, S. M., Thompson, W. L., & Ganis, G. (2006). The case for mental imagery. New York: Oxford University Press. Kozhevnikov, M., Hegarty, M., & Mayer, R. E. (2002). Revising the visualizer/verbalizer dimension: Evidence for two types of visualizers. Cognition and Instruction, 20, 47–77. Kozhevnikov, M., Kosslyn, S. M., & Shephard, J. M. (2005). Spatial versus object visualizers: A new characterization of visual cognitive style. Memory and Cognition, 33, 710–726. Kozhevnikov, M., Motes, M., & Hegarty, M. (2007). Spatial visualization in physics problem solving. Cognitive Science, 31, 549–579. Landy, D., & Goldstone, R. L. (2007). How abstract is symbolic thought? Journal of Experimental Psychology: Learning, Memory, and Cognition, 33, 720–733. Logie, R. H. (1995). Visuo-spatial working memory. Hove: Lawrence Erlbaum Associates. Lohman, D. F. (1988). Spatial abilities as traits, processes, and knowledge. In R. J. Sternberg (Ed.), Advances in the Psychology of Human Intelligence (pp. 181–248). Hillsdale: Erlbaum. Loomis, J. M., Lippa, Y., Klatzky, R. L., & Golledge, R. G. (2002). Spatial updating of locations specified by 3-D sound and spatial language. 
Journal of Experimental Psychology: Learning, Memory, & Cognition, 28, 335–345. Miyake, A., Rettinger, D. A., Friedman, N. P., Shah, P., & Hegarty, M. (2001). Visuospatial working memory, executive functioning and spatial abilities. Journal of Experimental Psychology: General, 130, 621–640. Montello, D. R. (2005). Navigation. In P. Shah & A. Miyake (Eds.), The Cambridge handbook of visuospatial thinking (pp. 257–294). Cambridge: Cambridge University Press. National Research Council. (2006). Learning to think spatially: GIS as a support system in the K-12 curriculum. Washington: National Research Council Press. Novick, L. R. (2001). Spatial diagrams: Key instruments in the toolbox for thought. In D. L. Medin (Ed.), The psychology of learning and motivation, Vol. 40 (pp. 279–325). San Diego: Academic Press. Novick, L. R., & Catley, K. M. (2007). Understanding phylogenies in biology: The influence of a Gestalt perceptual principle. Journal of Experimental Psychology: Applied, 13, 197–223. Novick, L. R., & Hurley, S. M. (2001). To matrix, network, or hierarchy: That is the question. Cognitive Psychology, 42, 158–216. Orion, N., Ben-Chaim, D., & Kali, Y. (1997). Relationship between earth-science education and spatial visualization. Journal of Geoscience Education, 45, 129–132. Pellegrino, J. W., & Kail, R. V. (1982). Process analyses of spatial aptitude. In R. J. Sternberg (Ed.), Advances in the psychology of human intelligence, Vol. 1 (pp. 311–365). Hillsdale: Erlbaum. Pylyshyn, Z. W. (2003). Seeing and visualizing: It’s not what you think. Cambridge: MIT Press. Ruddle, R. A., & Jones, D. M. (2001). Manual and virtual rotations of a three-dimensional object. Journal of Experimental Psychology: Applied, 7, 286–296. Scaife, M., & Rogers, Y. (1996). External cognition: how do graphical representations work? International Journal of Human-Computer Studies, 45, 185–213. Schwartz, D. L., & Black, J. B. (1996). 
Shuttling between depictive models and abstract rules: Induction and fall-back. Cognitive Science, 20, 457–497.
Shah, P., & Carpenter, P. (1995). Conceptual limitations in comprehending line graphs. Journal of Experimental Psychology: General, 124, 337–370. Shah, P., & Miyake, A. (1996). The separability of working memory resources for spatial thinking and language processing: An individual differences approach. Journal of Experimental Psychology: General, 125, 4–27. Shah, P., Mayer, R. E., & Hegarty, M. (1999). Graphs as aids to knowledge construction: Signaling techniques for guiding the process of graph comprehension. Journal of Educational Psychology, 91, 690–702. Shepard, R. N., & Metzler, J. (1971). Mental rotation of three dimensional objects. Science, 171, 701–703. Sherrin, B. (2000). How students invent representations of motion. Journal of Mathematical Behavior, 19, 399–441. Sims, V. K., & Hegarty, M. (1997). Mental animation in the visual-spatial sketchpad: Evidence from dual-task studies. Memory & Cognition, 25, 321–332. Smallman, H. S., Cook, M. B., Manes, D. I., & Cowen, M. B. (2007). Naïve Realism in terrain appreciation. In: Proceedings of the 51st annual meeting of the human factors and ergonomics society (pp. 1317–1321), Santa Monica, CA: Human Factors and Ergonomics Society. 1–5 October, Baltimore, MD. Smallman, H. S., & Hegarty, M. (2007). Expertise, spatial ability and intuition in the use of complex visual displays. In: Proceedings of the 51st annual meeting of the human factors and ergonomics society (pp. 2000–2004), Santa Monica: Human Factors and Ergonomics Society. Smallman, H. S., & St John, M. (2005). Naïve Realism: Misplaced faith in the utility of realistic displays. Ergonomics in Design, 13, 6–13. Smallman, H. S., St John, M., Oonk, H. M., & Cowen, M. B. (2001). ‘Symbicons’: A hybrid symbology that combines the best elements of SYMBols and ICONS. In: Proceedings of the 45th annual meeting of the human factors and ergonomics society (pp. 110–114), Santa Monica, CA: Human Factors and Ergonomics Society. 29 September–4 October, Baltimore, MD.
Smith, I. M. (1964). Spatial ability: Its educational and social significance. San Diego: Knapp. Stieff, M. (2007). Mental rotation and diagrammatic reasoning in science. Learning and Instruction, 17, 219–234. Stull, A. T., Hegarty, M., & Mayer, R. E. (2009). Orientation references: Getting a handle on spatial learning. Journal of Educational Psychology, 101, 803–816. Terlecki, M. S., Newcombe, N. S., & Little, M. (2008). Durable and generalized effects of spatial experience on mental rotation: Gender differences in growth patterns. Applied Cognitive Psychology, 22(7), 996–1013. Thomas, J. J., & Cook, K. A. (2005). Illuminating the path: Research and development agenda for visual analytics. Richland, WA: IEEE Press. Trafton, J. G., & Hoffman, R. R. (2007). Computer-aided visualization in meteorology. In R. R. Hoffman (Ed.), Expertise out of context: Proceedings of the sixth international conference on naturalistic decision making (pp. 337–358). New York: CRC Press. Tufte, E. R. (1983). The visual display of quantitative information. Cheshire, CT: Graphics Press. Tversky, B., Morrison, J., & Betrancourt, M. (2002). Animation: can it facilitate? International Journal of Human-Computer Studies, 57, 247–262. Vandenberg, S. G., & Kuse, A. R. (1978). Mental rotations, a group test of three-dimensional spatial visualization. Perceptual and Motor Skills, 47, 599–604. Wright, R., Thompson, W. L., Ganis, G., Newcombe, N. S., & Kosslyn, S. M. (2008). Training generalized spatial skills. Psychonomic Bulletin & Review, 15(4), 763–771. Yeh, M., & Wickens, C. D. (2001). Attentional filtering in the design of electronic map displays: A comparison of color coding, intensity coding, and decluttering techniques. Human Factors, 43, 543–562. Zacks, J. M., Levy, E., Tversky, B., & Schiano, D. J. (1998). Reading bar graphs: Effects of extraneous depth cues and graphical context. Journal of Experimental Psychology: Applied, 4, 119–138.
This page intentionally left blank
CHAPTER EIGHT
Toward an Integrative Theory of Hypothesis Generation, Probability Judgment, and Hypothesis Testing

Michael Dougherty, Rick Thomas, and Nicholas Lange

Contents
1. Introduction
2. A Computational Level of Analysis
   2.1. The Computational Problem
   2.2. A Solution to the Computational Problem
3. An Algorithmic Level of Analysis
   3.1. A Representational Instantiation
   3.2. Hypothesis Generation
   3.3. Probability Judgment
   3.4. Information Search and Hypothesis Testing
4. Discussion
   4.1. Bridging Memory Theory
   4.2. Bridging Categorization
   4.3. Bridging Judgment and Decision Making
References
Abstract
Basic-level memory processes have long been viewed as important for understanding judgment and decision-making behavior—a view that was established by Tversky and Kahneman [1974. Science 185, 1124–1131] in their seminal work on heuristics and biases, and which has been carried forth throughout the last 35 years. Nevertheless, theoretical progress in explicating the link between memory and judgment has been slow, despite the fact that many important advances in understanding memory have emerged. In this chapter, we provide an integrative account of judgment and decision making based on the HyGene model, which links the processes involved with judgment and choice to the predecisional processes of option generation and memory retrieval. HyGene is implemented as a fully functioning memory model, but incorporates processes that enable it to account for a variety of phenomena within the judgment and decision-making literature.

Psychology of Learning and Motivation, Volume 52, ISSN 0079-7421, DOI: 10.1016/S0079-7421(10)52008-5
© 2010 Elsevier Inc. All rights reserved.
1. Introduction
People often find themselves generating various options when considering where to eat dinner, whom to nominate for a student award, and which employees to promote. In some cases, one might be presented with a set of options from which to choose, as would be the case if you flipped through a restaurant guide when choosing where to eat. In many cases, however, the options are generated from long-term memory: People recollect dining options based on past experience, instructors remember the best students from their past classes, and managers are reminded of their best employees. One's memories, past experiences, and knowledge impose constraints on the multitude of options that could be considered and allow one to narrow down the potential set of restaurants, students, and employees to something cognitively tractable. At first glance, this option generation process may seem relatively simple and inconsequential. Yet the process is far from simple, and its consequences can have important implications. Consider, for example, the processes involved in diagnosing a patient. The process is initiated when the apprehension of the presenting symptoms prompts the physician to generate explanations (i.e., options or hypotheses) of the symptoms (Sox, Blatt, Higgins, & Marton, 2006). The process obviously does not stop there. Rather, this initial set of generated hypotheses sets off a chain reaction in which the physician uses the generated hypotheses to guide information search to test the validity of the various hypotheses under consideration, which in turn may trigger the generation of new hypotheses and new information search threads (Elstein & Schwarz, 2002; Weber, Böckenholt, Hilton, & Wallace, 1993). The physician must mentally navigate the vast semantic network of diseases to narrow in on a particular diagnosis.
The medical diagnosis example is but one of many instances in which people engage in hypothesis generation. Although hypothesis generation is an important component of many professional domains ranging from medicine (Elstein & Schwarz, 2002) to accounting (Libby, 1985), it is also quite common in a number of nonprofessional contexts. For example, people routinely generate hypotheses in social contexts as a way of explaining other people's behaviors, and everyone has probably experienced at least one episode of self-diagnosis—such as discerning whether a headache is the result of tension, a migraine, or a symptom of caffeine withdrawal. Simply put, hypothesis generation is a ubiquitous process, as it is the mind's way of making sense of the unending stream of sensory data that bombards us. In this chapter, we outline a new theory of option generation and illustrate the consequences of option generation for probability judgment
and hypothesis testing. We view options as a form of "hypotheses." Hypotheses are the events, alternatives, or options that guide our understanding of the world; they are the lenses through which we evaluate information; they serve as the basis for our choices; and they form the cornerstone of our probability judgments. Gettys and Fisher (1979) described hypothesis generation processes as predecision processes, since they take place prior to the decision process. The generation process constrains the set of options or hypotheses ultimately considered by the decision maker, and, as a consequence, these constraints can propagate into biases in choice, probability judgment, and information search. The goals of this chapter are threefold. Our first goal is to outline a general theoretical framework for understanding hypothesis generation and its relationship to probability judgment and hypothesis-testing processes. In pursuit of this goal, we will present a computational model of hypothesis generation and judgment called HyGene (which stands for hypothesis generation; Thomas, Dougherty, Sprenger, & Harbison, 2008), which postulates a tight connection between long-term memory, working memory, and a set of processes that enable one to estimate probability and engage in hypothesis testing. Our second goal is to illustrate the consequences of the generation process for probability judgment. In pursuit of this goal, we present both simulation and human experimental data illustrating how errors and biases in the generation process can cascade into errors and biases in judgment. Finally, our third goal is to outline the implications of HyGene for understanding the interplay between the statistical properties of the environment and hypothesis-testing behavior.
In pursuit of this goal, we use HyGene to illustrate how hypothesis generation processes can affect hypothesis-testing behavior, and how sampling biases can exacerbate biases in hypothesis generation and hypothesis-testing behavior. We show that people's mental representations of the statistical structure of the environment can influence, and be influenced by, hypothesis generation processes, which can, in turn, affect hypothesis testing.
2. A Computational Level of Analysis
HyGene is a model of memory that has been extended to account for judgment and decision-making behavior. At the highest level of analysis, HyGene can be seen as a computational model that merges principles of memory theory with principles of decision theory. However, these principles can also be instantiated algorithmically within the context of a specific representational system. In what follows, we describe HyGene at both levels of analysis: the computational and algorithmic levels. Throughout this
discussion, we use a medical diagnosis task as the drosophila for describing the HyGene architecture.
2.1. The Computational Problem
Throughout the 1960s, normative models, such as Bayes' theorem, were used as rough descriptions of judgment and decision-making behavior (Edwards, 1968). Today, however, the use of normative models as descriptions of behavior has given way to heuristic mechanisms (Gigerenzer, Todd, & the ABC Research Group, 1999; Tversky & Kahneman, 1974) and computational models based on principles derived from decades of research in cognitive science (Dougherty, Gettys, & Ogden, 1999; Juslin & Persson, 2002; Reyna & Brainerd, 1992). Nevertheless, normative models, in particular Bayes' theorem, still hold a special status within the judgment and decision-making literature. The central role of Bayes' theorem in decision-making research is due partly to the fact that it remains an important benchmark of rationality, and partly to the view that the computational goals of the human brain are inherently Bayesian (e.g., Tenenbaum, Griffiths, & Kemp, 2006; Xu & Tenenbaum, 2007). Despite its appeal as a rational model of behavior, however, Bayes' theorem lacks key elements that characterize real-world decision-making behavior. It is these elements that HyGene was designed to address. Perhaps the most important of these elements is the requirement of structuring the hypothesis space. That is, before one can choose among a set of hypotheses, rate the probability of a particular hypothesis, or engage in hypothesis testing, one must define the space of possibilities that are relevant for the particular decision task at hand. This is certainly true for real-world tasks such as medical diagnosis, where the physician is required to test individual hypotheses against at least one alternative (or differential) hypothesis. For example, Sox et al. (2006) suggest that the most important functions of a clinical interview are to "focus the investigation on the diseases suggested by the patient's appearance . . . to reduce to a manageable size the list of diseases that could be causing the patient's problem" (pp. 9–10). In this respect, normative models that describe how to integrate information to form a judgment (i.e., Bayes' theorem) are of little help. While Bayes' theorem provides a framework for integrating probabilities from a known set of possibilities, it is silent with respect to how to define the set of possibilities in the first place. To illustrate the importance, as well as the difficulty, of defining the hypothesis space, consider the odds form of Bayes' theorem:

P(H|D) / P(¬H|D) = [P(D|H) / P(D|¬H)] × [P(H) / P(¬H)],    (1)
where H corresponds to the hypothesis being evaluated (the focal hypothesis) and D corresponds to the information that is used to evaluate H. H might correspond to a particular disease hypothesis, such as pneumonia, whereas ¬H would correspond to a disease other than pneumonia (not pneumonia). The left-hand side of the equation corresponds to the posterior odds in favor of the focal hypothesis (pneumonia) given the observable pattern of data (i.e., the symptom pattern). The first component of the right-hand side corresponds to the likelihood ratio; this ratio can also be interpreted as representing the diagnosticity of the data for the focal hypothesis versus its alternatives. The second component corresponds to the prior odds in favor of the focal hypothesis compared to its alternatives. The strength of evidence for H (pneumonia) is readily assessed in terms of frequency of occurrence, but assessing ¬H (diseases other than pneumonia) requires that the decision maker generate a relevant set of alternatives. The same is true for assessing the diagnosticity of the data: Diagnosticity can be assessed only with respect to a specific reference class, and requires the decision maker to generate a set of plausible alternatives to serve as the basis for assessing the diagnosticity of the data for differentiating between the focal hypothesis and one of its alternatives. Returning to the pneumonia example, the diagnosticity of labored breathing for differentiating between pneumonia and influenza is quite high, whereas this diagnosticity ratio is presumably quite low when evaluated against emphysema and asthma. Note, however, that the set of relevant hypotheses that could be compared to pneumonia is ill-defined and potentially quite large. Thus, it is incumbent upon the decision maker to define the set of relevant hypotheses, and to narrow the set down to something manageable (Sox et al., 2006).
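To make the dependence on the alternative set concrete, the two odds computations for the pneumonia example can be sketched in a few lines. The numerical likelihoods and priors below are invented purely for illustration and are not clinical estimates:

```python
# Odds form of Bayes' theorem (Equation (1)); all numbers are illustrative only.
def posterior_odds(p_d_given_h, p_d_given_alt, prior_h, prior_alt):
    """Posterior odds = likelihood ratio (diagnosticity) x prior odds."""
    likelihood_ratio = p_d_given_h / p_d_given_alt
    prior_odds = prior_h / prior_alt
    return likelihood_ratio * prior_odds

# Labored breathing assumed common under pneumonia but rare under influenza,
# so the datum is diagnostic against that alternative (LR = 7) ...
odds_vs_influenza = posterior_odds(0.70, 0.10, prior_h=0.05, prior_alt=0.20)

# ... but assumed almost as common under asthma, so diagnosticity is low.
odds_vs_asthma = posterior_odds(0.70, 0.60, prior_h=0.05, prior_alt=0.03)
```

Note that both the likelihood ratio and the prior odds change with the choice of alternative, which is exactly why the normative machinery is silent until the decision maker supplies the hypothesis space.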
The problem of ill-defined hypothesis spaces was first identified by John Venn (see Kilinç, 2001), and is not unique to the Bayesian account of probability (see Hájek, 2007). We raise this issue here because the question of how to define the appropriate hypothesis space is fundamental to deriving a model of hypothesis generation, probability judgment, and hypothesis testing. We argue that the definition of accuracy for probability judgment and the definition of diagnosticity for evaluating hypothesis-testing behavior critically depend on how the decision maker structures his or her hypothesis space. Moreover, we argue that the particular hypothesis space that one brings to bear in a particular judgment task is dependent on memory-retrieval variables, semantic structure, and cognitive limitations.
2.2. A Solution to the Computational Problem
HyGene provides a cognitively tractable solution to the problem of defining the hypothesis space. According to HyGene, the hypothesis space that one brings to bear on a particular decision problem is constructed dynamically as information (i.e., data) is acquired by the decision maker. HyGene assumes
that the data observed by the decision maker activate a subset of hypotheses in semantic memory that are semantically or associatively related to the observed data. This subset, therefore, serves as the space of possibilities, and it is from this space that one generates hypotheses for active consideration. The general form of HyGene is based on three principles:

1. Data extracted from the environment are used as retrieval cues for retrieving hypotheses from long-term memory.
2. Working-memory (WM) processes constrain how many hypotheses one can actively maintain in the focus of attention.
3. Hypotheses that are generated and maintained in working memory serve as input into a comparison process for probability judgments and serve as the basis for hypothesis testing and information search via a process we call hypothesis-guided information search.

Principle 1 implies that hypothesis generation is a general case of cued recall, in which observed data are used to cue the retrieval of diagnostic hypotheses from either episodic long-term memory or knowledge. However, one major difference between cued recall and hypothesis generation tasks concerns the goal of the task. In many laboratory recall tasks, the goal is to retrieve a single item or event from memory. Accordingly, a single cue may be associated with a single memorandum. This contrasts with many hypothesis generation tasks, where a single cue may be associated with multiple memoranda. For example, a sore right abdomen is diagnostic of several conditions, including kidney infection, appendicitis, and bruising. Thus, the goal of the decision maker is to generate the set of plausible hypotheses that could explain the data, rather than generate a single explanation.
Our use of the term hypothesis generation in this context is limited to the case of retrieving hypotheses from memory, and does not include the process of discovering novel hypotheses that are not represented in semantic memory (for a discussion of discovery processes in HyGene, see Thomas et al., 2008). A second difference between recall tasks and hypothesis generation tasks is that in many recall tasks, the output of the recall process is an end in itself—once the recall process has terminated, the individual has reached the end state. In contrast, the retrieval component in a hypothesis generation task is the jumping-off point for a collection of higher-level processes aimed at assessing the probability of candidate hypotheses and/or engaging in hypothesis testing. Although the ultimate goal is to eventually arrive at the best explanation of the data, this process often requires that one evaluate and test multiple explanations to rule out alternative explanations.

Principle 2 embodies the assumption of limited cognitive resources. Specifically, we assume that the number of hypotheses one consciously entertains at any point in time is limited by WM constraints, and that WM places an upper limit on the number of hypotheses that the decision
maker can actively entertain at any point in time. Although individual differences in WM capacity are one form of limitation (Friedman & Miyake, 2004), the available resources for entertaining hypotheses can also be drained by concurrent divided attention, or by situational factors that promote stress or anxiety. Because it takes time to populate WM with hypotheses, time constraints can lead to the generation of fewer hypotheses (Dougherty & Hunter, 2003b). For example, an emergency room physician likely will be forced to truncate the retrieval process if the patient's presenting symptoms require immediate action, such as resuscitation. Thus, the conscious hypothesis space is a function of both WM limitations and factors that affect retrieval (Dougherty & Hunter, 2003a,b).

Principle 3 implies that the higher-level processes involved with probability judgment and hypothesis testing are based only on the hypotheses in WM, rather than the entire hypothesis space defined by the activated portion of semantic memory or a normatively defined set. Considerable work within the probability judgment literature is consistent with the idea that people make probability judgments using a comparison process, where the strength of evidence for a focal event is compared with the strength of evidence for a set of alternatives (Dougherty, 2001; Dougherty et al., 1999; Sprenger & Dougherty, 2006; Tversky & Koehler, 1994; Windschitl & Wells, 1998). Tversky and Koehler (1994) proposed support theory as a general framework for describing this comparison process. According to support theory, the probability of hypothesis A, rather than hypothesis B, is given by the evidential support for A, divided by the sum of the support for A and B: P(A, B) = s(A) / [s(A) + s(B)]. HyGene uses a support-theory-like process, but assumes that judgments are based only on those hypotheses contained within WM, and further specifies that the support values themselves reflect an underlying memory-strength variable.
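The comparison process can be sketched in a few lines. The support values below are arbitrary illustrative memory strengths, and the extension from the two-hypothesis case to a set of alternatives follows the normalization assumption described above:

```python
def support_probability(s_focal, s_alternatives):
    """Support-theory comparison: the judged probability of the focal
    hypothesis is its support divided by the summed support of the focal
    hypothesis plus the alternatives currently under consideration."""
    return s_focal / (s_focal + sum(s_alternatives))

# Hypothetical memory strengths for three hypotheses held in WM
p_pneumonia = support_probability(6.0, [3.0, 1.0])  # 0.6
```

Because the judgments are normalized over only the hypotheses held in WM, they sum to 1.0 across that set even when the objective hypothesis space is larger, which is how truncated generation inflates judged probabilities.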
The processes involved in hypothesis-guided information search have received much less attention in the literature. For our purposes, we consider two forms of hypothesis-guided information search. One form involves the top-down guidance of visual attention. Within the visual attention literature, work by Downing (2000), Moores, Laiti, and Chelazzi (2003), and Soto, Hodsoll, Rotshtein, and Humphreys (2008) indicates that the contents of working memory serve to modulate individuals' deployment of visual attention to stimuli in the visual environment. We argue that these visual attention processes are tantamount to hypothesis testing. For example, a physician who believes that a patient has measles will likely show attentional bias toward measles-related symptoms. This form of hypothesis-testing behavior is somewhat informal, and is assumed to be obligatory or automatic. The second form of hypothesis-testing behavior resembles the more formal process of diagnostic search and entails an explicit consideration of the fit of the data under multiple hypotheses. While the contents of WM
are still assumed to guide hypothesis testing, the decision maker is assumed to compare how well the data fit one hypothesis versus another. For example, a physician who is entertaining measles and hives as viable hypotheses might show attentional bias toward symptoms that enable him or her to discriminate between the two hypotheses. Note that by linking top-down visual search to the processes of hypothesis generation, HyGene provides an integrative framework that describes how contents generated from long-term memory can be used to guide the allocation of visual attention.

The idea that the contents of WM can guide visual search presupposes that visual search is (or can be considered) a goal-directed behavior, and that it serves an adaptive purpose. In the context of diagnostic inference, one important goal of top-down guided search is that of hypothesis testing. Within high-level decision-making tasks, one can think of the contents of working memory as providing a basis for directed external information search, which can be used to test the validity of the various hypotheses held in WM. HyGene assumes that the process of hypothesis-guided information search enables the decision maker to winnow the set of actively considered hypotheses to the most plausible hypothesis given the data. An important question about hypothesis-guided information search concerns the rules that the decision maker might apply to select the most diagnostic information. Below, we evaluate potential information search rules and present simulation results demonstrating their characteristics.
3. An Algorithmic Level of Analysis
The above computational analysis provides some of the necessary components of the HyGene model. In this section, we detail HyGene at an algorithmic level. We should point out that this algorithmic level reflects our approach to addressing the computational framework outlined above, but that other reasonable solutions likely exist. HyGene assumes three main memory constructs: (1) working memory, (2) exemplar or episodic memory, and (3) semantic memory. Working memory is used for the maintenance of the set of leading contenders (SOC), the hypotheses under active consideration. The SOC is a subset of the total possible set of hypotheses, maintained in WM, and the number of hypotheses in the SOC is limited by WM capacity. One may maintain fewer hypotheses in WM because of a relatively low WM capacity (an individual difference) or because WM capacity is being consumed by a secondary task. In keeping with research within the WM literature, we assume that working memory reflects one's ability to maintain goal-relevant
information (i.e., hypotheses) in the focus of attention in the face of distraction (Engle, Kane, & Tuholski, 1999).

Episodic memory consists of a collection of traces in long-term memory that represent the decision maker's past experiences. This database preserves the base rates of the events as experienced by the decision maker, and provides the record of the co-occurrences between the data (or cues) and the hypotheses. We assume that people's episodic memory representations preserve the probabilistic relationships between the hypotheses and data that naturally occur in their learning environment (Gigerenzer, Hoffrage, & Kleinbölting, 1991), but that the memory traces themselves are degraded copies of the experienced events. Given that the episodic memory representation preserves the probabilistic structure of the decision maker's environment, it can serve as a reasonable proxy for inferring which hypotheses plausibly explain a given pattern of observed data (Dobs) in the environment. The episodic memory representation can also be used for assessing the conditional probability of the various hypotheses (H) generated, given Dobs (Dougherty et al., 1999).

The third construct within HyGene is semantic memory, which is assumed to maintain both abstractions from the episodic system and generalized knowledge obtained outside of direct experience (e.g., medical school). However, it is important to note that semantic memory lacks reference to the naturally occurring base rates. That is, the abstracted representations contained within semantic memory contain no information about the frequency with which a particular hypothesis might occur, or what data are associated with that hypothesis.
For example, while episodic memory is assumed to contain a trace corresponding to each patient a physician encounters with influenza, pneumonia, or meningitis, semantic memory would be assumed to contain one and only one representation of each disease, regardless of the number of times it has been experienced. Working memory, episodic memory, and semantic memory are assumed to operate in concert with a set of retrieval operations, which together enable the decision maker to identify a set of plausible hypotheses. We identify two retrieval operations: A prototype-extraction process and a semantic-activation process. The prototype-extraction process involves the derivation of an ‘‘unspecified’’ probe from exemplar memory that is suggested by Dobs. The semantic-activation process involves a disambiguation process in which the unspecified probe is matched against hypotheses in semantic memory. Hypotheses that are sufficiently activated by the unspecified probe are generated from semantic memory and fed into the SOC, where they can be selected as the basis of choice, used in estimating conditional probabilities, or used in hypothesis-guided information search. The basic structure of HyGene is illustrated in Figure 1. For the sake of clarity, we characterize the processes in HyGene in terms of a series of steps. However, we do not suppose that these steps are necessarily carried out serially:
Figure 1. Flow diagram illustrating HyGene's hypothesis generation processes: environmental data (Dobs) activate traces in episodic memory (Step 1); an unspecified probe is extracted (Step 2) and matched against known hypotheses in semantic memory (Step 3); hypotheses whose activation exceeds that of the least active leading contender are added to the SOC (Step 4) until the count of retrieval failures T reaches Kmax; SOC hypotheses then feed probability judgment via the comparison process (Step 5) and the selection of new data for hypothesis testing via an information search heuristic (Step 6).
Step 1. Dobs (i.e., a symptom or a set of symptoms) are sampled from or are observed in the environment. This initial sampling of data serves to initiate the activation of traces in episodic memory that represent past instances of the target hypothesis that share features with Dobs.

Step 2. The traces in episodic memory that are activated above a threshold value (Ac) lead to the extraction of an unspecified probe that resembles those hypotheses that are most commonly (and strongly) associated with the data. The unspecified probe is much like a prototype representation, but one that has not yet been assigned category membership. The representation of the unspecified probe will be dominated by those events in episodic memory that closely match Dobs, and which are
frequently occurring. Thus, the unspecified probe contains information regarding the episodic base rates of the various hypotheses.

Step 3. The unspecified probe is matched against known hypotheses stored in semantic memory. This is done to identify the set of hypotheses that could explain the initial Dobs, symptoms that are comorbid with Dobs, and potential treatments that have been associated with Dobs in the past.

Step 4. Hypotheses are generated from semantic memory and placed in the SOC if they are sufficiently activated by the unspecified probe. The generation process involves stochastic sampling with replacement, where hypotheses are sampled from semantic memory according to their activation value. Thus, hypotheses that are more activated by the Dobs have a higher probability of being sampled. Hypotheses are recovered from semantic memory and placed into the SOC if their activation values are greater than that of the least active member of the SOC. Hypotheses in the SOC are referred to as "leading-contender hypotheses," because they represent the decision maker's leading explanations for the presenting symptoms. Steps 3 and 4 are responsible for generating the set of options, which will be used for probability judgment (Step 5) and information search (Step 6).

Step 5. The posterior probability of the focal hypothesis, P(Hi|Dobs), is given by comparing its (memory) strength to the (memory) strengths of all hypotheses in the SOC. A conditional probability is computed for each of the i hypotheses in the SOC. These probabilities are then normalized across all hypotheses in the SOC, which results in constrained additivity: The sum of the judgments for the hypotheses held in WM will equal 1.0. However, since the true hypothesis space may be larger than the one actually entertained by the decision maker, judgments are likely to be inflated relative to the objective probabilities.

Step 6.
Hypotheses in the SOC can be used for hypothesis-guided information search or to guide the selection of cues for hypothesis testing. We assume that diagnostic search can occur only when the decision maker is entertaining more than one hypothesis. Moreover, we postulate a consistency-checking process that eliminates leading contenders from the SOC that are inconsistent with Dobs (Fisher, Gettys, Manning, Mehle, & Baca, 1983). That is, we assume that a clinician rejects from the SOC any hypotheses that are inconsistent with the patient's symptoms.

The steps outlined above are assumed to form a dynamic and iterative process, in which the decision maker continually updates the SOC as new data are encountered in the environment and old hypotheses are rejected from the SOC. HyGene assumes that hypothesis generation processes stop either when there is no time left or after a threshold is met on the number of failed retrieval attempts (i.e., total retrieval failures).

The algorithm outlined in Figure 1 can be instantiated within any number of representational systems. For the time being, we have limited our modeling to the use of a vector-based model of memory. However,
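The sampling loop of Steps 3 and 4 can be sketched as follows. This is our own simplified rendering, not the published implementation: the parameter names (wm_capacity, k_max) are invented, and we count every unsuccessful retrieval attempt toward the stopping criterion, a detail the full model treats more carefully.

```python
import random

def generate_soc(semantic_activations, wm_capacity=4, k_max=5, rng=random):
    """Sketch of HyGene's generation loop: sample hypotheses from semantic
    memory with replacement, in proportion to their activation by the
    unspecified probe; a sample enters the SOC only if it exceeds the least
    active current member; stop after k_max failed retrieval attempts."""
    hypotheses = list(semantic_activations)
    weights = [semantic_activations[h] for h in hypotheses]
    soc = {}        # hypothesis -> activation (the leading contenders)
    failures = 0
    while failures < k_max:
        h = rng.choices(hypotheses, weights=weights)[0]
        act = semantic_activations[h]
        if h in soc:
            failures += 1                      # already a leading contender
        elif len(soc) < wm_capacity:
            soc[h] = act                       # WM not yet full
        elif act > min(soc.values()):
            del soc[min(soc, key=soc.get)]     # displace the weakest contender
            soc[h] = act
        else:
            failures += 1                      # not active enough to enter
    return soc
```

With a small WM capacity the loop reliably recovers the most strongly activated hypotheses, but weakly activated alternatives can fail to enter the SOC at all, which is the seed of the judgment inflation discussed under Step 5.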
Figure 2. Effect of distribution shape (balanced vs. unbalanced) on additivity of judgments for low- and high-span participants. The top panel plots mean additivity as a function of distribution and WM span; error bars represent 1 standard error. Data are from an unpublished study by Sprenger, Tomlinson, and Dougherty (2009). The bottom panel plots the corresponding HyGene predictions, with f = 4 modeling high-span and f = 2 modeling low-span participants.
we have begun exploring ways of extending the basic architecture to other representational systems, such as latent semantic analysis (Landauer & Dumais, 1997), which would enable HyGene to operate on top of a representation that captures the semantics of natural language.
3.1. A Representational Instantiation
The representational structure of our version of HyGene consists of ordered sets of features, with each set represented by a mini-vector, and each mini-vector consisting of N cells, where values of +1, 0, or −1 are randomly assigned to each cell with equal probability (Hintzman, 1988). (A more thorough description of HyGene is provided in Thomas et al., 2008.) Events that occur together in the environment are represented by a set of concatenated mini-vectors, such that one component can correspond to a hypothesis and the rest to data or context. For the purposes here, one can think of the hypothesis component as corresponding to a lexical label describing a particular disease, and the data components as corresponding to symptoms. Episodic events are learned through experience, and are encoded in episodic memory with varying degrees of fidelity. Encoding quality is modeled by the learning-rate parameter, L, which specifies how well the traces stored in memory correspond to the experienced event. L determines the probability that each feature in the experienced event is encoded into the corresponding memory trace vector, where 0 ≤ L ≤ 1: Nonzero features are coded as 0 with probability 1 − L, to represent the loss of information.

Retrieval involves computing the similarity between a probe vector, P, and each trace, Ti, in memory, M. The similarity metric used in HyGene is the dot product between the probe vector and the trace, as defined by Equation (2):

Si = (Σj=1..N Pj Tij) / Ni,    (2)

where Pj is a feature in the jth position of the probe, Tij is a feature in the jth position of the ith trace, and Ni is the number of features where Pj ≠ 0 or Tij ≠ 0. The activation, A, of trace i is given by cubing the value of S for each trace:

Ai = Si^3.    (3)

This cubing function enables those traces most similar to the probe to contribute more to the output of the model. Thus, the cubing function serves as a parameter-free weighting function that depends only on the similarity. Summing the values of Ai across all traces in memory gives what Hintzman (1988) called echo intensity, which he used to model recognition memory and frequency judgment. HyGene works slightly differently. Specifically, HyGene assumes a conditional memory search process, wherein similarity is computed on only a subset of episodic memory. The conditional memory search allows the model to partition episodic memory into set-relevant and set-irrelevant memory traces.
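The encoding and retrieval machinery of Equations (2) and (3) is compact enough to sketch directly. This is a simplified rendering under the assumptions above; representing vectors as plain Python lists is our convention, not the authors':

```python
import random

def encode_trace(event, learning_rate, rng=random):
    """Store an event in episodic memory: each feature survives with
    probability L and is otherwise coded as 0 (loss of information)."""
    return [f if rng.random() < learning_rate else 0 for f in event]

def similarity(probe, trace):
    """Equation (2): dot product over the Ni positions where either the
    probe or the trace has a nonzero feature."""
    n_relevant = sum(1 for p, t in zip(probe, trace) if p != 0 or t != 0)
    if n_relevant == 0:
        return 0.0
    return sum(p * t for p, t in zip(probe, trace)) / n_relevant

def activation(probe, trace):
    """Equation (3): cubing preserves sign while letting close matches
    dominate the output."""
    return similarity(probe, trace) ** 3
```

For example, a probe matching a trace on two of three shared nonzero positions yields S = 2/3 but an activation of only (2/3)^3, about 0.30, illustrating how the cube downweights partial matches.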
As an example, imagine that you are asked to judge the probability of some hypothesis H (e.g., pneumonia) given some observed data, D_obs (e.g., high fever, fluid in lungs): P(pneumonia | high fever ∩ fluid in lungs). We assume that the probability of H, pneumonia, is to be made conditional on D_obs, high fever ∩ fluid in lungs, such that participants first partition episodic memory, M, into the subset of K traces that contain data components sufficiently similar to D_obs. In our example, traces that contain data sufficiently similar to high fever ∩ fluid in lungs would be placed in the activated subset. Trace i is placed in the activated subset if and only if the
Michael Dougherty et al.
A_i between the D component of trace i and D_obs in the probe exceeds a threshold parameter, A_c:

A_i ≥ A_c.   (4)

Traces included in the activated subset are probed a second time by the H component (pneumonia) of the probe vector, with the sum of the activations across the K traces in the activated subset giving rise to the conditional echo intensity:

I_C = (Σ_{i=1}^{K} A_i) / K.   (5)

Here, I_C is the mean conditional echo intensity and K is the number of traces for which A_i ≥ A_c. Dougherty et al. (1999) used I_C to model conditional probability judgments within Minerva-DM. The computation of I_C resembles very closely a Bayesian estimator, though one that is implemented within the context of a memory model (see Dougherty et al., 1999 for a formal derivation). A key contribution of HyGene is the development of a process for generating hypotheses, which would then be used as input into Equation (5). Hypothesis generation in HyGene begins by deriving a content vector from the subset of K traces activated by the initial retrieval cue, D_obs. We refer to this content vector as the conditional echo content, C_C. The jth element of C_C is given by Equation (6):

C_C_j = Σ_{i=1}^{K} A_i T_ij,   (6)

where C_C_j is the conditional echo content for the jth element and K is the number of traces for which A_i ≥ A_c. The vector C_C often will have content feature values outside of the allowable feature range of −1 to +1. To solve this problem, the echo content vector is normalized by the absolute value of the largest content value. This ensures that any positive content value greater than +1.0 and any negative content value less than −1.0 fall within the allowable feature range of −1 to +1, while preserving the sign of the original content values. The output of the conditional echo content process is the creation of an unspecified probe.
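The conditional retrieval steps in Equations (4)-(6) can be sketched as follows, again under illustrative assumptions (traces laid out as [hypothesis features | data features]; the function and variable names are ours):

```python
# Sketch of Equations (4)-(6): conditional memory search, conditional echo
# intensity, and the normalized echo content (the "unspecified probe").
import numpy as np

def sim(p, t):
    rel = (p != 0) | (t != 0)
    n = rel.sum()
    return float(p @ t) / n if n else 0.0

def conditional_echo(memory, d_obs, a_c):
    """Traces whose data components activate above A_c (Eq. 4) form the
    activated subset; return I_C (Eq. 5) and the content vector C_C (Eq. 6)."""
    acts = np.array([sim(d_obs, t[-len(d_obs):]) ** 3 for t in memory])
    in_set = acts >= a_c
    if not in_set.any():
        return 0.0, None
    subset, k_acts = memory[in_set], acts[in_set]
    i_c = k_acts.mean()                        # Eq. (5): mean activation
    content = k_acts @ subset                  # Eq. (6): activation-weighted sum
    content = content / np.abs(content).max()  # rescale into [-1, +1]
    return i_c, content

# Two pneumonia traces [1, -1 | fever, fluid] and one flu trace [-1, 1 | fever, cough]:
memory = np.array([[1, -1, 1, 1, 0],
                   [1, -1, 1, 1, 0],
                   [-1, 1, 1, 0, 1]], dtype=float)
d_obs = np.array([1.0, 1.0, 0.0])              # observed: fever and fluid
i_c, probe = conditional_echo(memory, d_obs, a_c=0.2)
print(probe[:2])  # hypothesis portion of the unspecified probe
```

In this toy run the hypothesis portion of the probe recovers pneumonia's label features, because pneumonia traces dominate the activated subset — the base-rate sensitivity the chapter describes.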
The unspecified probe contains information about events that have been associated with the symptoms in the past (e.g., "high fever ∩ fluid in lungs"), and functions much like a prototype in that it does not represent any one thing within episodic or semantic memory. By conditionalizing on the subset of traces activated by D_obs (high fever ∩ fluid in lungs), the model is able to utilize the cluster of traces in episodic memory corresponding to hypotheses that are related to the known data. In this way, the conditionalization process enables HyGene to embody properties of Bayesian inference, in that the unspecified probe is sensitive to the
base rates of the hypotheses within the reference class defined by Dobs. This Bayesian-like process plays out in the hypothesis generation process.
3.2. Hypothesis Generation

The next step in the process involves determining the plausible identity of the unspecified probe. This process begins by matching it against known hypotheses in semantic memory, and culminates in the set of options that will be considered by the decision maker and used as the basis for probability judgment and hypothesis testing. Hypothesis generation involves matching the unspecified probe against all known hypotheses in semantic memory in parallel and computing their activation values. Hypotheses in semantic memory whose semantic activation (A_s) is greater than zero define the semantic hypothesis space, and it is from this semantic hypothesis space that hypotheses are sampled and potentially generated. Hypotheses are sampled probabilistically according to their activation values (cf. Luce's choice axiom; Luce, 1959) and are recovered from semantic memory and added to the SOC if their A_s exceeds ActMinH. Although the initial value of ActMinH = 0, ActMinH is assumed to be dynamically updated based on the activation values of the hypotheses that have been generated from semantic memory. In particular, ActMinH is always set equal to the activation value of the least active hypothesis in the SOC. Adjusting the value of ActMinH in this way ensures that the model will only generate additional hypotheses if they are better (i.e., more strongly activated) than the ones already contained within the SOC. The advantage of specifying ActMinH in this way is that it provides a parameter-free method for ensuring that the model becomes increasingly selective in generating hypotheses as a function of time. Retrieval from semantic memory involves sampling hypotheses sequentially and comparing the activation value of each sampled hypothesis to ActMinH, with successful retrieval achieved when A_s > ActMinH. However, this retrieval process cannot be carried out indefinitely, and must eventually be terminated. Thomas et al.
(2008) used a termination rule based on the number of consecutive retrieval failures, such that search was terminated when the number of consecutive retrieval failures reached the threshold parameter Tmax. Retrieval failures were defined as a failure to add a new hypothesis to the SOC (i.e., when A_s < ActMinH) on a particular retrieval attempt. However, subsequent work by Dougherty and Harbison (2007) and Harbison, Dougherty, Davelaar, and Fayyad (2009) revealed that search termination decisions in memory-retrieval tasks were better accounted for by the total number of retrieval failures accumulated throughout the retrieval episode.[2] Thus, recent versions of

[2] The total-number-of-retrieval-failures rule is identical to the rule used in search of associative memory (SAM), where it was modeled by Kmax (Raaijmakers & Shiffrin, 1981).
HyGene have adopted a new search termination rule based on the total number of retrieval failures, in which the model terminates hypothesis generation after Tmax total retrieval failures. Hypotheses in the SOC are ordered according to their activation values, with the member of the SOC with the highest A_s interpreted as the best explanation of D_obs. However, we also assume that the number of hypotheses that can be maintained in the SOC is limited by WM capacity. The WM-capacity parameter, f, specifies the upper limit on how many hypotheses can be held in working memory. It is important to point out that the generation process itself is not constrained by WM capacity; rather, it is the maintenance of hypotheses in WM that is limited. Indeed, there is an abundance of work on the role of working memory and divided attention at retrieval suggesting that retrieval processes are protected from the effects of divided attention (Craik, Govoni, Naveh-Benjamin, & Anderson, 1996; Naveh-Benjamin, Craik, Guez, & Dori, 1998; but see Fernandes & Moscovitch, 2000). What little effect working memory has on retrieval appears to operate only in circumstances in which there is a good deal of proactive interference in the retrieval task (Kane & Engle, 2000; Rosen & Engle, 1997). Thus, over the course of solving a decision problem, we assume that the decision maker may consider a rather large set of potential hypotheses, but that only a small subset of those hypotheses will be retained in WM and have a direct influence on judgment. HyGene also assumes a consistency-checking process, which involves paring hypotheses from the set of leading contenders if they contain data that are inconsistent with the available data. This process was elaborated by Dougherty, Gettys, and Thomas (1997) and Fisher et al. (1983), and is based on the observation that participants tend to eliminate hypotheses from consideration when the observed data conflict with the expected data given the hypothesis.
Within HyGene, consistency checking is based on the similarity between the ith mini-vector in Dobs and the corresponding D mini-vector in the generated hypothesis. Hypotheses are dropped from the SOC when the value of Si derived from this comparison is below zero.
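The generation-and-termination cycle described above can be sketched as follows. This is our own simplified rendering: names such as generate_soc, phi, and t_max, and the activation values, are illustrative, and the consistency-checking step is omitted.

```python
# Sketch of hypothesis generation: Luce-rule sampling by activation, a
# dynamic ActMinH threshold, WM capacity phi, and termination after t_max
# total retrieval failures. All names and values here are illustrative.
import numpy as np

rng = np.random.default_rng(1)

def generate_soc(semantic_acts, t_max=10, phi=4):
    """semantic_acts maps hypothesis label -> semantic activation A_s > 0."""
    names = list(semantic_acts)
    p = np.array([semantic_acts[n] for n in names])
    p = p / p.sum()                           # Luce's choice rule
    soc, act_min_h, failures = {}, 0.0, 0
    while failures < t_max:
        h = names[rng.choice(len(names), p=p)]
        if h not in soc and semantic_acts[h] > act_min_h:
            soc[h] = semantic_acts[h]
            if len(soc) > phi:                # WM full: drop the weakest member
                soc.pop(min(soc, key=soc.get))
            act_min_h = min(soc.values())     # dynamic ActMinH update
        else:
            failures += 1                     # total-retrieval-failures rule
    return soc

soc = generate_soc({"pneumonia": 0.9, "flu": 0.5, "cold": 0.2}, phi=2)
print(sorted(soc, key=soc.get, reverse=True))  # leading contenders first
```

Note that because ActMinH only ever rises, a hypothesis dropped from the SOC can never re-enter it — one way to see the "lock out" property the chapter returns to when discussing the encoding prediction failure.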
3.3. Probability Judgment

Probability judgments are derived using a version of the comparison process specified by Tversky and Koehler's (1994) support theory. Support theory postulates that the probability of a particular hypothesis is given by the ratio of its strength compared to its alternatives:

P(H_a, H_b) = S(H_a) / (S(H_a) + S(H_b)),   (7)
where P(H_a, H_b) is the probability of the focal hypothesis compared to its alternative, and where S(H_a) and S(H_b) correspond to the amount of evidential support for the focal and alternative hypotheses, respectively. The elegance of support theory lies in its broad applicability (Rottenstreich & Tversky, 1997; Sloman, Rottenstreich, Wisniewski, Hadjichristidis, & Fox, 2004; Tversky & Koehler, 1994). However, the weakness of the model is that it does not make strong assumptions about how support is assessed by the decision maker. While Tversky and Koehler (1994) did not instantiate the assessment of support within the context of memory, they did speculate that the support values for each hypothesis could be estimated using heuristic mechanisms such as availability and representativeness. HyGene uses support theory's comparison process but substitutes conditional echo intensities for the support values, and further postulates a capacity limitation on the number of hypotheses included in the comparison process, as specified by Equation (8):

P(H_i | D_obs) = I_{C_i} / Σ_{i=1}^{w} I_{C_i}.   (8)

Here, P(H_i | D_obs) is the probability of the ith hypothesis in the SOC, conditional on the subset of traces activated by D_obs, and w represents the number of hypotheses in the SOC, where w ≤ f. Thus, Equation (8) provides a memory-theoretic basis for the derivation of probability judgments that instantiates support theory's comparison process, and allows one to predict the relationship between retrieval from long-term memory, working memory, and probability judgment. The fact that the comparison process is carried out over only those hypotheses in WM implies that the total probability is partitioned over only those hypotheses contained in the SOC.
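Equation (8) amounts to normalizing the conditional echo intensities of the hypotheses that made it into working memory. A small illustration with assumed intensity values (the numbers and names are ours):

```python
# Sketch of Equation (8): judged probability of each SOC hypothesis is its
# conditional echo intensity divided by the sum over the w <= f hypotheses
# held in working memory. Intensities below are illustrative, not fitted.

def judged_probabilities(echo_intensities):
    total = sum(echo_intensities.values())
    return {h: i / total for h, i in echo_intensities.items()}

# Suppose only two of three relevant hypotheses were generated into the SOC:
soc = {"pneumonia": 0.6, "bronchitis": 0.3}
probs = judged_probabilities(soc)
print(probs["pneumonia"])  # approximately 2/3: inflated, since a third alternative was never generated
```

The judgments are additive within the SOC (they sum to 1), which is the constrained-additivity principle: probability that should have gone to unretrieved alternatives is redistributed over the hypotheses that were generated.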
This assumption leads to the principle of constrained additivity (for relevant data, see Dougherty & Hunter, 2003a), which states that judgments will be additive within the set of hypotheses explicitly considered by the decision maker and included in this comparison process. By consequence, the decision maker’s probability judgments will be, on average, excessive whenever he or she fails to consider all hypotheses within the normative set of hypotheses. However, if w equals the total number of possible hypotheses in the normative set then HyGene predicts that probability judgments will be additive. Given the framework presented above for describing hypothesis generation and probability judgment, the next question is whether HyGene anticipates common findings within the judgment literature. For this, we focus on a number of findings in the judgment literature related to the phenomenon of subadditivity, but note that HyGene makes predictions at the level of individual hypotheses as well. Subadditivity occurs when the sum of the subjective probabilities assigned to a set of mutually exclusive and exhaustive hypotheses exceeds
the probability of an implicit disjunction of those same hypotheses. Let {h_1, h_2, ..., h_k} ∈ H. Normatively, judgments should be additive, such that p(H) = Σ_{i=1}^{k} p(h_i). Subadditivity occurs when p(H) < Σ_{i=1}^{k} p(h_i), whereas superadditivity occurs when p(H) > Σ_{i=1}^{k} p(h_i). The majority of studies investigating additivity of judgments show a pronounced tendency toward subadditivity (Bearden & Wallsten, 2004; Brenner, 2003; Dougherty & Hunter, 2003a,b; Dougherty & Sprenger, 2006; Mulford & Dawes, 1999; Rottenstreich & Tversky, 1997; Sprenger & Dougherty, 2006; Tversky & Koehler, 1994; but see Sloman et al., 2004 for an example of superadditivity). However, this general tendency is characterized by a number of more nuanced effects. For example, Dougherty and Hunter (2003a,b) showed that the magnitude of subadditivity was affected by the distribution of the set of items being judged. Participants show less subadditivity when the distribution of the to-be-judged items is characterized by a single strong hypothesis (i.e., the distribution is unbalanced), compared to when the items are all relatively equal in strength (i.e., the distribution is balanced). This finding is similar to the alternative-outcomes effect identified by Windschitl and Wells (1998), but extends the effect to subadditivity. A second effect identified in the literature is the covariation between WM span and subadditivity. Subadditivity is more pronounced in participants with lower WM span (Dougherty & Hunter, 2003a,b; Sprenger & Dougherty, 2006), and when judgments are made under conditions of divided attention (Dougherty et al., 2009). The top panel of Figure 2 presents data from a previously unpublished study examining the effect of distribution shape on subadditivity for both high- and low-WM span participants (Sprenger, Tomlinson, & Dougherty, 2009).
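In code, the additivity check itself is straightforward; the judgment values below are invented for illustration, not data from the studies cited above:

```python
# Illustration of subadditivity: component judgments for a mutually
# exclusive, exhaustive set summing to more than the whole (here, 1.0).
# The numbers are invented for illustration.

def additivity(component_judgments, whole_judgment=1.0):
    total = sum(component_judgments)
    if total > whole_judgment:
        return "subadditive"
    if total < whole_judgment:
        return "superadditive"
    return "additive"

judgments = [0.40, 0.35, 0.30, 0.25]   # judged one hypothesis at a time
print(additivity(judgments))            # subadditive: the sum is 1.30
```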
In this experiment, 89 participants studied lists of menu items that were ordered over the course of 74 days by patrons of a hypothetical diner. For the balanced condition, the relative frequencies of the purchased items were 20, 10, 9, 9, 8, 8, 8, and 2. For the unbalanced condition, the relative frequencies of the items were 42, 20, 2, 2, 2, 2, 2, and 2. For example, Bob might order pancakes 20 times, fruit 10 times, cereal 9 times, and so forth. At the end of the 74 days, participants were asked to rate the probability that each of the items would be ordered on the following day (day 75). The experiment was constructed such that each distribution represented its own sampling space. Thus, normatively, the sum of the probabilities for each distribution is 100%. Working-memory span was measured using the operation span task (see Turner & Engle, 1989), and high- and low-span groups were created with a median split. As illustrated in the top panel of Figure 2, participants' judgments showed pronounced subadditivity. Moreover, the degree to which participants' judgments were subadditive was affected by the distribution of items being judged, and covaried with WM capacity. The finding of subadditivity is consistent with the typical finding in the literature on probability judgment,
and the relationships with distribution shape and WM replicate published work by Dougherty and Hunter (2003a,b). However, can HyGene account for these findings, and does it make further novel predictions? To evaluate HyGene's ability to account for the results of this experiment, we simulated the effect of manipulating the distribution of alternatives, WM capacity, and encoding quality. In the first simulation the model's performance was assessed across two values of f, to simulate low-WM span (f = 2) and high-WM span (f = 4). For this simulation, encoding quality was held constant at L = 0.8. In the second simulation, the performance of the model was evaluated across five levels of encoding quality, with L = 0.5, 0.6, 0.7, 0.8, and 0.9. For this simulation, WM capacity was held constant at f = 4. To explore the effect of distribution shape, both simulations used two distributions of hypotheses. For the balanced condition, eight hypotheses were stored in memory with trace frequencies of 10, 5, 5, 4, 4, 4, 4, and 1. For the unbalanced condition, eight hypotheses were stored in memory with trace frequencies of 15, 10, 7, 1, 1, 1, 1, and 1. These distributions closely approximate the form of those used in the experiment presented in Figure 2 (i.e., each item in the simulation was roughly half as frequent as those used in the experiment).[3] Each simulation consisted of 1000 trials with Tmax = 10 and Ac = 0.166. The bottom half of Figure 2 presents the simulation results for the effect of distribution on subadditivity, as a function of WM. Note that HyGene predicts greater subadditivity for the balanced distribution than for the unbalanced distribution, as well as a modest decrease in subadditivity as a function of WM span. Both results mirror the behavioral data presented in the top panel of Figure 2.
Figure 3 plots HyGene's predictions for hypothesis generation (top panel) and subadditivity (bottom panel) as a function of encoding quality for both balanced and unbalanced distributions. HyGene's hypothesis generation behavior is intuitive: The number of hypotheses generated by the model increases as encoding quality increases. However, HyGene's predictions for the effect of encoding quality on subadditivity are quite counterintuitive. Generally, HyGene predicts an increase in subadditivity with increases in encoding, with some nonmonotonicity for the balanced distribution. The simple explanation for these predictions is that increases in encoding quality lead to an increase in the strength of the focal. However, note that as encoding quality increases, the model generates more alternatives to include in the comparison process. Thus, the increase in strength resulting from increased encoding overwhelms the effect of including more alternatives in the comparison process. The net result, therefore, is that subadditivity is predicted to increase as a function of increased encoding, even

[3] The absolute frequencies used in the simulation are of less importance than the relative frequencies.
[Figure 3 here: line plots of the number of hypotheses generated (top panel) and mean predicted subadditivity (bottom panel) against the encoding parameter (L), from 0.5 to 0.9, for balanced and unbalanced distributions.]

Figure 3  HyGene predictions for the effect of encoding on hypothesis generation (top panel) and subadditivity (bottom panel), as a function of distribution shape.
though the model also predicts increases in the number of alternative hypotheses generated. The question, then, is whether such an effect of encoding on judgment bears out in behavioral data. The answer to this question is no. Using an experimental paradigm similar to the one used by Sprenger et al. (2009; see also Dougherty & Hunter, 2003a), Dougherty et al. (2009) investigated the effect of encoding quality on hypothesis generation and judgment (the distributions were identical to the ones used in the simulation). Encoding quality was manipulated by having participants complete a concurrent divided attention task during the learning phase. As in Dougherty and Hunter (2003a) and Sprenger et al. (2009), participants showed greater subadditivity for the balanced distribution than for the unbalanced distribution, and high-span participants showed less subadditivity than low-span participants. Additionally, as predicted by HyGene, participants generated more hypotheses in the high-encoding condition than in the low-encoding condition. However, the effect of
encoding on subadditivity was exactly opposite the pattern predicted by HyGene: Participants showed less subadditivity in the high-encoding condition than in the low-encoding condition. Moreover, the effect of encoding on subadditivity was completely mediated by the number of hypotheses generated. That is, once the effect of the number of hypotheses generated on judgment was factored out, the effect of encoding on subadditivity was virtually zero. This study indicates that, while HyGene can successfully predict a number of probability judgment phenomena, it produces falsifiable predictions. Indeed, HyGene does not anticipate a decrease in subadditivity with increased encoding, though it does correctly anticipate an increase in the number of generated hypotheses with increased encoding. Table 1 summarizes six empirical findings that we have attempted to model with HyGene, along with the explanation or mechanism responsible for HyGene's prediction. As can be seen, HyGene successfully accounts for five of these six findings. Some of these predictions follow directly from mechanisms built into the model (e.g., WM constraints are directly modeled by the parameter f), whereas others are entirely determined by factors external to the model assumptions (e.g., environmental constraints refer to the correspondence between the statistical structure of the decision maker's learning environment and the structure of long-term memory). Note that none of HyGene's predictions regarding probability judgment require ad hoc assumptions about the judgment process per se; the comparison process is assumed to be carried out without regard to processes such as anchoring and adjustment and without introducing weighting or discounting functions (e.g., prospect theory; Tversky & Kahneman, 1981). Rather, all of HyGene's probability judgment predictions are a function of the more basic processes involved with storage in long-term memory, retrieval from long-term memory, or maintenance in WM.
Moreover, HyGene does not require the introduction of heuristic mechanisms such as the availability or representativeness heuristics (Tversky & Kahneman, 1973): All of HyGene's predictions flow directly from the underlying architecture of the model. We suspect that the one effect unaccounted for by HyGene (the effect of encoding on subadditivity) stems from our use of ActMinH as a threshold for adding hypotheses to the SOC. ActMinH will tend to produce a "lock out" effect when the strongest (i.e., most highly activated) alternative is generated into the SOC first. Once the most activated hypothesis is generated, no other alternatives can be added to the SOC. This would also explain the model's rather modest increase in the number of hypotheses generated as a function of encoding.
3.4. Information Search and Hypothesis Testing

While the original impetus for developing HyGene was to account for the relationship between long-term memory, working memory, and probability judgment, our more recent developments extend the model to
Table 1  Six Empirical Findings from the Probability Judgment and Hypothesis Generation Literatures.

1. Alternative-outcomes effect. Judged probability and subadditivity are higher for hypotheses drawn from relatively uniform or balanced distributions compared to asymmetrical or unbalanced distributions (Dougherty & Hunter, 2003a,b; Sprenger et al., 2009; Windschitl & Wells, 1998).
   Predicted by HyGene? Yes.
   Explanation: Environmental constraint. Stronger hypotheses have a higher probability of being retrieved from memory and included in the comparison process. As the strength of the alternative hypotheses included in the comparison process increases, judged probability and subadditivity decrease. Thus, distributions with strong alternatives will lead to lower judgments.

2. Negative correlation between judgment and individual differences in working memory. Probability judgment and subadditivity are higher for low-WM span participants than for high-WM span participants (Dougherty & Hunter, 2003a,b; Sprenger & Dougherty, 2006).
   Predicted by HyGene? Yes.
   Explanation: Working-memory constraint. Individual differences in working memory are modeled by varying the WM parameter (f). f limits the number of hypotheses included in the comparison process, which in turn leads to higher judged probability and greater subadditivity.

3. Increased subadditivity under time pressure. Judgments are more subadditive when made under time pressure (Dougherty & Hunter, 2003b).
   Predicted by HyGene? Yes.
   Explanation: Retrieval constraint. Time pressure is modeled by the Tmax parameter, which sets the number of retrieval failures tolerated before terminating retrieval. Lower values of Tmax result in the generation of fewer hypotheses. Tmax may also reflect individual differences in decisiveness (Harbison et al., 2009).

4. Increased subadditivity with divided attention during judgment. Subadditivity increases when judgments are made under conditions of divided attention (Dougherty et al., 2009).
   Predicted by HyGene? Yes.
   Explanation: Working-memory constraint. Divided attention during judgment is modeled by varying the WM parameter (f).

5. Decreased subadditivity with increases in encoding. Subadditivity is higher when participants learn the to-be-judged items under conditions of divided attention (Dougherty et al., 2009).
   Predicted by HyGene? No.
   Explanation: Prediction failure. Interactions between encoding assumptions, environmental structure, and retrieval assumptions lead to an asymmetrical increase in the strength of the focal judgment with increases in encoding.

6. Increased hypothesis generation with increases in encoding. People generate fewer hypotheses when learning takes place under conditions of divided attention (Dougherty et al., 2009).
   Predicted by HyGene? Yes.
   Explanation: Attention constraint. Divided attention during learning is modeled with the learning-rate parameter (L). Decreases in L lead to reductions in recall.
hypothesis testing. Hypothesis testing involves searching for, or selecting, information in order to test the veracity of a hypothesis. A common finding in the literature is that of confirmation bias or pseudodiagnostic search (Beyth-Marom & Fischhoff, 1983; Klayman & Ha, 1987; Mynatt, Doherty, & Tweney, 1977; Sanbonmatsu, Posavac, & Kardes, 1998; Trope & Bassok, 1982; Trope & Mackie, 1987; Wason, 1966, 1968), whereby people are often observed to select information that is relevant for evaluating only a single hypothesis. Although a few studies have found evidence for diagnostic search (Mynatt, Doherty, & Dragan, 1993), there have been few attempts to define the predecision processes that lead to diagnostic versus pseudodiagnostic search. Moreover, no process model has been developed that links information search to the processes of memory retrieval. HyGene, of course, makes this link explicit by postulating that the output of the hypothesis generation process feeds into and constrains the information search algorithms used by the decision maker.

3.4.1. Information Search Heuristics

We assume that people invoke one of a small number of heuristic processes for guiding information search, all of which function by utilizing hypotheses maintained in the SOC. We focus on four heuristics and demonstrate their effects on information search: (1) memory-strength heuristic: choose the cue associated with the focal hypothesis that has the highest activation value; (2) dissimilarity heuristic: choose the cue that maximizes the dissimilarity between the focal hypothesis and another leading contender; (3) MSdiff heuristic: choose the cue that maximizes the difference in memory strengths between a focal hypothesis and leading contenders; and (4) Bayesian diagnosticity: choose the cue with the highest likelihood ratio.

Memory-strength heuristic.
When the decision maker generates one hypothesis, information search is assumed to be controlled by the memory strength of cues. That is, cues (e.g., symptoms) of the leading contender with the highest activation, as calculated by conditional echo intensity, are searched first, assuming that the costs of search between information sources are irrelevant. The symptoms that have the highest memory strength will tend to be those that are most prevalent, or encoded with higher fidelity, in the activated subset of episodic memory. Of course, the memory-strength heuristic can prefer nondiagnostic over diagnostic cues, because the cue that is most strongly associated with the hypothesis being tested does not have to be diagnostic. Thus, the HyGene information search model predicts most positive-testing behavior to occur when there is only one leading contender. Dissimilarity heuristic. When there are multiple leading contenders in working memory, the model engages in disconfirmation testing. One way to achieve disconfirmation is by computing the similarity between each of the cues associated with the leading-contender hypotheses. The cue-selection rule searches for information in the environment that corresponds to the cue
that is maximally dissimilar among the top two leading-contender hypotheses. For instance, assume the leading-contender hypotheses "appendicitis" and "hernia" both have the symptom "severe abdominal pain" but differ on the "fever" symptom (i.e., appendicitis is associated with high fever, hernia with no fever). Assuming both appendicitis and hernia are in WM, HyGene would choose to search for the cue "fever" in the patient because the fever symptom allows it to discriminate between appendicitis and hernia. Thus, HyGene predicts that cues (i.e., symptoms) that discriminate between the leading contenders (e.g., disease hypotheses) will be searched when more than one hypothesis is maintained in the SOC. Although using dissimilarity to select cues leads HyGene to search for information that discriminates among hypotheses in the SOC, the mechanism is not sensitive to the relative diagnosticity of the cues. The dissimilarity of cue values in working memory can be insensitive to diagnosticity, because the mechanism does not explicitly consider the prior probability of the cue. Nevertheless, using the dissimilarity of corresponding cues between hypotheses in the SOC is a simple heuristic process that entails very little computation on the part of the decision maker and rarely results in a preference for positive tests. Note that the dissimilarity heuristic can be used only when more than one hypothesis is contained within the SOC. MSdiff heuristic. HyGene can also employ a cue-selection strategy based on the ability of a cue to differentiate a focal hypothesis from leading contenders in working memory (i.e., the SOC). This form of search is close to the dissimilarity heuristic, but one in which the dissimilarity is weighted by traces in episodic memory.
Note that a focal hypothesis can be a hypothesis that the decision maker has been prompted to assess, a hypothesis the decision maker is motivated to assess, or the leading contender that is most likely (highest in activation) given the current state of information (data). One way HyGene can test a particular focal hypothesis is to assess the absolute difference between the memory strength of each cue in the focal prototype and the memory strength of the respective cue in the alternative prototypes, while trace memory is conditioned on the focal hypothesis, and then select the cue with the largest difference. This cue-selection strategy is referred to as MSdiff and is described by Equation (9):

MSdiff = max_{j=1..C} Σ_{i=1}^{W} | I_C(Focal_j | Focal) − I_C(LC_i,j | Focal) |,   (9)

where C equals the number of cues and W equals the number of leading contenders in the SOC. The Focal term refers to the conditional echo intensity of the cue level specified in the prototype of the focal hypothesis, where the focal hypothesis defines the conditional set. The LC_i term refers to the conditional echo intensity of the cue level specified in the prototype of a leading contender.
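The dissimilarity and MSdiff rules can both be sketched as argmax computations over cues. The prototypes and intensity values below are illustrative, as are the function names:

```python
# Sketches of the dissimilarity heuristic and the MSdiff rule (Eq. 9).
# Cue values and conditional echo intensities are assumed for illustration.
import numpy as np

def dissimilarity_cue(proto_a, proto_b):
    """Pick the cue on which the top two leading contenders differ most."""
    return int(np.argmax(np.abs(np.asarray(proto_a) - np.asarray(proto_b))))

def msdiff_cue(focal_intensities, contender_intensities):
    """Eq. (9), sketched: for each cue, sum |I_C(focal) - I_C(contender)|
    over the leading contenders (all conditioned on the focal hypothesis),
    then choose the cue with the largest summed difference."""
    focal = np.asarray(focal_intensities)
    diffs = sum(np.abs(focal - np.asarray(lc)) for lc in contender_intensities)
    return int(np.argmax(diffs))

# Appendicitis and hernia share "abdominal pain" (cue 0) but differ on
# "fever" (cue 1), so fever is the discriminating test.
appendicitis = [1, 1]    # pain present, high fever
hernia = [1, -1]         # pain present, no fever
print(dissimilarity_cue(appendicitis, hernia))  # 1: search the fever cue
```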
Bayesian diagnosticity. It is possible for HyGene to directly estimate the Bayesian diagnosticity of the cues and to select the cue with the highest diagnosticity. The formula for the Bayesian diagnosticity of a cue is given by Equation (10) (Bassok & Trope, 1984; Skov & Sherman, 1986; Trope & Bassok, 1982):

Cue diagnosticity = p(C_A) · [p(C_A | H_1) / p(C_A | H_2)] + p(C_B) · [p(C_B | H_1) / p(C_B | H_2)],   (10)

where C_A refers to the prototype value of the cue (e.g., rash) of the first hypothesis (H_1) and C_B refers to the prototype value of the cue (e.g., no rash) of the second hypothesis (H_2). The values p(C_A) and p(C_B) refer to the probabilities of encountering each value of the cue. The ratios in Equation (10) are the likelihood ratios for each value of the cue to differentiate between the competing hypotheses. Thus, Equation (10) captures the diagnosticity of a cue by taking the product between the base rate of a cue value and its likelihood ratio, summed over all possible values of the cue. HyGene does not represent conditional probabilities directly, but allows for conditional probabilities to be estimated from the trace memory using the conditional echo intensity calculation. Thus, cue diagnosticity can be derived by computing the ratio of two conditional echo intensities, as illustrated in Equation (11):

HyGene's cue diagnosticity = I_{C_A} · [I_{C_A|H_1} / I_{C_A|H_2}] + I_{C_B} · [I_{C_B|H_1} / I_{C_B|H_2}].   (11)
Cue diagnosticity derived using Equation (11) is monotonic with Bayesian diagnosticity. Note that the major difference between Equations (10) and (11) is that the probabilities in the Bayesian formula for diagnosticity have been replaced with HyGene's conditional echo intensity measure I_C. The computation of I_{C_A | H_1} (which corresponds to the numerator of the first likelihood ratio in Equation (10)) is achieved by computing the echo intensity of cue A (C_A) conditional on the subset of traces in exemplar memory that were activated above threshold by the H_1 portion of the memory probe. All of the components of Equation (11) can be estimated in this manner. It is important to note that HyGene's cue diagnosticity is based on the leading contenders that populate working memory. That is, the subjective diagnosticity of a cue depends on the particular hypotheses in the SOC. Thus, the subjective diagnosticity of a test will be at variance with its objective diagnosticity whenever the set of leading contenders in the SOC differs from the proper set of hypotheses. Also, if trace memory is biased (e.g., through biased sampling or search processes; Fiedler, 2000), the subjective diagnosticity of a cue will deviate from its objective diagnosticity.
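The relation between Equations (10) and (11) can be illustrated with a short worked sketch. The probability and intensity values below are hypothetical, chosen only for illustration, and the function names are ours rather than part of the published model code.

```python
# Worked sketch of Equations (10) and (11) for hypothetical values.

def bayesian_diagnosticity(p_a, p_ah1, p_ah2, p_bh1, p_bh2):
    """Equation (10): base rate of each cue value times its likelihood ratio."""
    p_b = 1.0 - p_a
    return p_a * (p_ah1 / p_ah2) + p_b * (p_bh1 / p_bh2)

def hygene_diagnosticity(i_a, i_ah1, i_ah2, i_b, i_bh1, i_bh2):
    """Equation (11): the same form, with probabilities replaced by
    conditional echo intensities estimated from trace memory."""
    return i_a * (i_ah1 / i_ah2) + i_b * (i_bh1 / i_bh2)

# A cue whose likelihoods differ across H1 and H2 (diagnostic) ...
diagnostic = bayesian_diagnosticity(0.5, 0.9, 0.1, 0.1, 0.9)
# ... versus a cue with identical likelihoods under both hypotheses.
nondiagnostic = bayesian_diagnosticity(0.5, 0.5, 0.5, 0.5, 0.5)

# If memory is unbiased, the echo intensities roughly track the probabilities
# and Equation (11) orders the cues the same way (monotonicity).
subjective = hygene_diagnosticity(0.48, 0.85, 0.12, 0.12, 0.11, 0.88)
print(diagnostic, nondiagnostic, subjective)
```

A nondiagnostic cue scores 1.0 (each likelihood ratio equals 1), so any cue scoring above 1.0 discriminates between the hypotheses; biased traces shift the subjective score away from the Bayesian one.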
Hypothesis Generation, Probability Judgment, and Hypothesis Testing
325
The heuristic mechanisms presented above represent our first attempt to integrate information search algorithms within the HyGene architecture. But do these algorithms make testable predictions that bear out in behavioral studies? Although a number of studies have investigated hypothesis-testing behavior, most have examined hypothesis testing under conditions in which participants are presented with the to-be-evaluated hypotheses. Moreover, these studies have focused on two general classes of phenomena: diagnostic and nondiagnostic search. While HyGene can account for diagnostic and nondiagnostic search behavior, it also makes a number of more nuanced predictions regarding the interplay between the statistical structure of the environment, hypothesis generation processes, working memory, and information search behavior. Thus, HyGene makes strong and testable predictions concerning information search, many of which lie outside the scope of prior research on hypothesis testing. Nevertheless, it is useful to examine the behavior of HyGene's information search algorithms across a variety of conditions to determine whether the model makes reasonable predictions.

3.4.2. Predicting Hypothesis-Guided Information Search Behavior

We evaluated the behavior of HyGene's information search algorithms across a variety of parameter values and ecologies, where the ecology defined the probabilistic relationship between the data and hypotheses. Our simulations were specifically designed to illustrate how the output of the hypothesis generation process affects hypothesis testing, and how ecological structure (the probabilistic relationship between data and hypotheses) can affect hypothesis generation. Table 2 provides the conditional probability distribution (i.e., disease by symptom matrix) that defines the ecology instantiated in the simulations.
The hypotheses (i.e., diseases) in the ecologies have equal prevalence in the sample of patients encoded by the model. Temperature and vision are used as Dobs to prompt the model to generate disease hypotheses. Only culture, eardrum, and balance are available to the model as tests of the generated hypotheses. Note that in the baseline ecology specified in Table 2, balance constitutes a positive-test cue: although vertigo is strongly associated with every disease, its conditional probability is constant across hypotheses, so the test cannot discriminate among them. We manipulated the baseline ecology specified in Table 2 in two ways to investigate the effects of ecological structure on hypothesis-testing behavior. The first is the positive-test absent manipulation, in which balance is changed from a positive-test cue to a merely nondiagnostic cue. This is accomplished by changing the conditional probability of vertigo from 0.9 in the baseline ecology (Table 2) to 0.5, and that of equilibrium from 0.1 to 0.5, for every disease. Thus, the positive-test absent manipulation eliminates the positive test from the ecology, because neither vertigo nor equilibrium is associated with the hypotheses above chance levels.
Table 2  Hypothesis by Data Conditional Probability Distributions that Define the Baseline Ecology and the Ecologies Defined by the Ecological Manipulations Implemented in the Hypothesis-Testing Simulations.

                 Temperature      Vision           Culture                Eardrum         Balance
                 Fever   98.6     Blurry   Clear   Bacterial  No growth   Convex   Flat   Vertigo   Equilibrium

Baseline ecology
  Metalytis      0.9     0.1      0.1      0.9     0.5        0.5         0.9      0.1    0.9       0.1
  Zymosis        0.5     0.5      0.1      0.9     0.5        0.5         0.1      0.9    0.9       0.1
  Gwaronia       0.5     0.5      0.9      0.1     0.1        0.9         0.5      0.5    0.9       0.1
  Descolada      0.5     0.5      0.9      0.1     0.9        0.1         0.5      0.5    0.9       0.1

Positive-test absent manipulation
  Metalytis      0.9     0.1      0.1      0.9     0.5        0.5         0.9      0.1    0.5*      0.5*
  Zymosis        0.5     0.5      0.1      0.9     0.5        0.5         0.1      0.9    0.5*      0.5*
  Gwaronia       0.5     0.5      0.9      0.1     0.1        0.9         0.5      0.5    0.5*      0.5*
  Descolada      0.5     0.5      0.9      0.1     0.9        0.1         0.5      0.5    0.5*      0.5*

Generation-strength manipulation
  Metalytis      0.9     0.1      0.1      0.9     0.5        0.5         0.9      0.1    0.9       0.1
  Zymosis        0.1*    0.9*     0.1      0.9     0.5        0.5         0.1      0.9    0.9       0.1
  Gwaronia       0.1*    0.9*     0.9      0.1     0.1        0.9         0.5      0.5    0.9       0.1
  Descolada      0.1*    0.9*     0.9      0.1     0.9        0.1         0.5      0.5    0.9       0.1

Probabilities marked with an asterisk (italicized in the original) indicate a modification from the baseline ecology.
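For concreteness, Table 2 can be transcribed in a few lines of code; the sketch below records p(first cue level | disease) for the baseline ecology and derives the two manipulations from it. The variable names are ours, chosen only for illustration.

```python
# Conditional probabilities from Table 2 (first level of each cue;
# the second level is the complement). Illustrative transcription only.

CUES = ["fever", "blurry", "bacterial", "convex", "vertigo"]

baseline = {
    #             fever blurry bacterial convex vertigo
    "metalytis": [0.9,  0.1,  0.5,  0.9,  0.9],
    "zymosis":   [0.5,  0.1,  0.5,  0.1,  0.9],
    "gwaronia":  [0.5,  0.9,  0.1,  0.5,  0.9],
    "descolada": [0.5,  0.9,  0.9,  0.5,  0.9],
}

# Positive-test absent: balance (vertigo) drops to chance for every disease.
positive_test_absent = {d: p[:4] + [0.5] for d, p in baseline.items()}

# Generation strength: fever's association with the alternatives drops to 0.1.
generation_strength = {d: (p[:] if d == "metalytis" else [0.1] + p[1:])
                       for d, p in baseline.items()}

print(positive_test_absent["zymosis"], generation_strength["zymosis"])
```

Written this way, it is easy to see that each manipulation changes a single column of the baseline matrix while leaving everything else intact.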
The second manipulation is the generation-strength manipulation, where the strength of association between fever and zymosis, gwaronia, and descolada is decreased from 0.5 to 0.1. The generation-strength manipulation makes it more likely that only metalytis will be generated when prompted with fever. Thus, pairwise differences between the baseline ecology and the ecological manipulations will allow us to investigate how hypothesis generation influences information search, particularly HyGene's preference for diagnostic, nondiagnostic, and positive tests. Separate simulations were run for the baseline ecology (Table 2) and each of the ecological manipulations discussed above. All other input and parameters of the simulations were identical. To explore the effects of learning on the model's behavior, encoding fidelity (L) and experience (E) were systematically manipulated. The parameter values used in the following simulations were as follows (and fully crossed in the experimental design of the simulations): L = {0.35, 0.65, 0.95}, Ac = {0.125}, Tmax = {7}, and E = {10, 50}. Hypothesis generation was initiated by probing episodic memory with a single Dobs mini-vector representing one of the three available symptoms (fever, blurry, or clear). From this point on, the model's hypothesis-testing behavior was entirely dictated by the number of hypotheses HyGene generated into the SOC. If the model generated only one hypothesis into the SOC, MSmax was implemented. When more than one hypothesis was generated, the diagnostic rule MSdiff was implemented. Figure 4 plots the effect of prompting the model with different symptoms on hypothesis generation. In the baseline and positive-test absent ecologies, fever generally prompts the generation of metalytis (in addition to one of the three remaining diseases), blurry generally prompts the generation of gwaronia and descolada, and clear generally prompts the generation of metalytis and zymosis.
The effect of the ecological manipulations is clear when comparing hypothesis generation for fever in the baseline and positive-test absent ecologies to that in the generation-strength ecology. The generation of metalytis increases under the generation-strength manipulation while the generation of the alternative disease hypotheses decreases. Thus, a far greater proportion of runs result in only one hypothesis being generated into working memory as a result of the generation-strength manipulation. We explore next how this difference in generation between the ecologies influences test preference. Increases in encoding (L) lead to increases in the number of hypotheses generated. It is also noteworthy that the model predicts a tradeoff between L and E. That is, increases in encoding fidelity enhance hypothesis generation most when the model has only limited experience, and vice versa. This tradeoff is important both theoretically and pragmatically, as it suggests that impoverished generation due to conditions interfering with encoding processes (e.g., dual-task conditions) can be remediated with increases in experience.
Figure 4  Proportion of times each hypothesis (metalytis, zymosis, gwaronia, or descolada) was generated across 500 simulated runs as a function of encoding (L) and patient symptom prompt (fever, blurry, and clear). Panels show the baseline, positive-test absent, and generation-strength ecologies. Note that E = 50 for the simulation results plotted.
HyGene's test preference across 500 iterations is plotted in Figure 5. The hypothesis-testing behavior plotted in the figure represents a mixture of strategies: the MSmax rule selected tests when only one disease hypothesis was generated, and the MSdiff rule selected tests when two or more hypotheses were generated. As previously mentioned, culture, eardrum, and balance are the only tests available to the model. Accordingly, the model has at its disposal two diagnostic tests, culture and eardrum, and one entirely nondiagnostic test, balance. The nondiagnostic test in the baseline ecology has a clear impact on the testing behavior of the model in response to each presenting symptom. In the positive-test absent ecology, by contrast, MSmax only rarely selected balance, as culture and eardrum were more strongly associated with the generated hypotheses. In turn, the model selects diagnostic tests a relatively large proportion of the time and appropriately appraises balance as uninformative.
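The mixture of strategies just described can be sketched as a simple dispatch on the size of the SOC. This is an assumption-laden illustration, not the authors' code; in particular, MSmax is read here as selecting the test most strongly associated with the lone hypothesis, and the intensity values are hypothetical.

```python
# Sketch of the strategy mixture: MSmax when the SOC holds one hypothesis,
# MSdiff when it holds several. Illustrative reconstruction only.

def select_test(soc_intensities):
    """soc_intensities: {hypothesis: [echo intensity per candidate test]}."""
    hyps = list(soc_intensities)
    n_tests = len(next(iter(soc_intensities.values())))
    if len(hyps) == 1:
        # MSmax: pick the test most strongly associated with the sole hypothesis.
        strengths = soc_intensities[hyps[0]]
        return max(range(n_tests), key=lambda i: strengths[i])
    # MSdiff: pick the test with the largest summed absolute difference
    # between the focal (first) hypothesis and the remaining contenders.
    focal, *others = (soc_intensities[h] for h in hyps)
    return max(range(n_tests),
               key=lambda i: sum(abs(focal[i] - o[i]) for o in others))

# One hypothesis in the SOC: the strongly associated (nondiagnostic) test wins.
print(select_test({"metalytis": [0.5, 0.5, 0.9]}))   # 2
# Two hypotheses: the discriminating test wins instead.
print(select_test({"metalytis": [0.9, 0.2, 0.9],
                   "zymosis":   [0.2, 0.3, 0.9]}))   # 0
```

The example makes the qualitative point of the simulations: with a single generated hypothesis the rule gravitates toward a strongly associated but nondiagnostic cue, whereas with multiple hypotheses the discriminating cue is preferred.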
Figure 5  Proportion of times the model selected each available test (culture, eardrum, or balance) as a function of encoding (L) and patient symptom prompt (fever, blurry, and clear). Panels show the baseline, positive-test absent, and generation-strength ecologies. Note that E = 50 for the simulation results plotted.
At a gross level, the hypothesis-testing behavior in the baseline ecology (top panel of Figure 5) favors the positive test (balance) and one of the diagnostic tests, in accordance with the presenting symptom. When presented with blurry, the model shows a preference for culture, which is diagnostic between gwaronia and descolada, the hypotheses the model generated a high proportion of the time in response to blurry. Because of the inverse pattern of generation demonstrated for clear, the model prefers the diagnostic eardrum in addition to balance (the positive test). When presented with fever, the model selects eardrum, in addition to the positive test balance, as eardrum is the most diagnostic test between metalytis and the remaining hypotheses.
The generation-strength manipulation demonstrates increased nondiagnostic/positive-test selection. Note that the generation-strength ecology differs from baseline only in the conditional probabilities of fever under zymosis, gwaronia, and descolada. This change in association considerably affects hypothesis generation in response to fever. Specifically, the model more often generates only one hypothesis and, as a result, implements MSmax a greater proportion of the time. Given that balance is a positive test, the model demonstrates a much greater preference for this cue. Thus, the sometimes disastrous effects of impoverished hypothesis generation are demonstrated by the generation-strength manipulation. We do want to raise a flag of caution concerning the rationality of positive-test selection. The ecologies explored in the simulations clearly demonstrate situations where positive-test selection is irrational. We believe, however, that in more complex ecologies than simulated here (e.g., hierarchical or branched ecologies) positive testing can indeed be rational. One can imagine that the ecologies investigated in Table 2 represent only a single cluster of diseases, where the positive-test cue could be diagnostic between clusters! Thus, it seems reasonable that decision makers might be sensitive to two kinds of diagnosticity: intercluster and intracluster. Cues that have high intercluster diagnosticity can be used efficiently to discriminate between clusters of diseases (e.g., the ability of a high white blood cell count to discriminate between injury and infection for patients complaining of pain in their abdomen). Alternatively, cues with high intracluster diagnosticity can be used to discriminate between diseases within a cluster (e.g., an X-ray can be used to discriminate different injuries to the abdomen) or even multiple clusters. Interestingly, it is not possible for cues to be high on both kinds of diagnosticity simultaneously.
That is, if a cue is highly diagnostic at discriminating between clusters of diseases, its ability to discriminate within clusters of diseases is necessarily lower. Moreover, cues can be nondiagnostic within clusters, between clusters, or both (i.e., completely nondiagnostic). Thus, we plan to test decision makers' sensitivity to different kinds of diagnosticity in hierarchically organized ecologies. Moreover, we expect that the preferences for different kinds of diagnosticity are closely related to the initial cues presented, hypothesis generation (i.e., the makeup of hypotheses in the SOC), and time course (i.e., early vs. late in search). It is also important to note that the conditions under which HyGene predicts positive-test selection are precisely the conditions where it has high accuracy in generation (Thomas et al., 2008). That is, HyGene predicts positive testing in situations where the hypothesis being tested is most likely the correct answer! Thus, the costs of the "irrationality" of positive testing may not be as high as previously believed once we consider the systematic effects in people's generation of hypotheses (for a similar argument, see Navarro & Perfors, 2009). It is difficult to reconcile how a particular
hypothesis-testing strategy can be tagged as irrational if one ignores the systematic reasons why particular hypotheses were generated to be tested in the first place. This is particularly the case if our speculation concerning the role of positive testing in complex ecologies discussed above proves correct.

3.4.3. The Impact of Sampling Biases on Hypothesis Generation and Information Search

Many of the events that people experience result from processes that yield systematic perturbations of the true statistical relationships in the environment (Dawes, 2005; Fiedler, 2000). In some cases, the causes of sampling biases are embedded within organizational structures. For example, the system of medical referrals used in the United States ensures that specialists see only patients relevant to their specialty area. Thus, a specialist's representation of the statistical relationships between the diseases and symptoms within a particular specialty area (e.g., cardiology) will deviate systematically from those of a generalist or of an individual with a different specialty area (e.g., endocrinology). The perceived diagnosticity of a particular symptom (e.g., high white blood cell count) for differentiating between diseases can therefore differ between specialists. In other cases, sampling biases may result from the natural functioning of the cognitive system. For example, inasmuch as the hypotheses held in WM influence the search for information in the environment, one's mental representation of the environmental structure will also be influenced: Hypothesis-guided information search not only affects what information we attend to in the environment, but also what information we store in memory (Fiedler, 2000). Unfortunately, people generally lack the metacognitive ability to correct for biased samples (Fiedler, 2000).
Given that people's experiences are rife with sampling biases, it is important to examine the conditions under which biased samples lead to miscalibration of beliefs and promote suboptimal, if not hazardous, information search. To illustrate the impact of biased sampling on hypothesis generation and information search, we simulated how the model's behavior is influenced by systematic sampling biases operating over the experience acquired by the model. The general methodology for this simulation was nearly identical to that used for the simulation above. Rather than prescribing the number of cases of each hypothesis to be encoded by the model, however, the selection of cases presented to the model was determined by whether a prespecified cue level (i.e., symptom) was present in a randomly generated case. Four conditions were defined by the specific selection criteria applied. On each run of the model, 500 patient cases were randomly generated and only those matching the specific selection criterion (i.e., a particular symptom) were presented to the model for encoding. One can think of this methodology as simulating the kind of biased samples acquired by a medical specialist, such as a cardiologist who receives referrals of
patients experiencing heart palpitations. For instance, in the fever condition, the portion of the 500 random cases presenting with fever was encoded by the model. As a result, the base rate of each disease hypothesis in LTM was proportional to the conditional probability between the symptom defining the selection criterion (e.g., fever) and each disease. These simulations used the baseline ecology presented in Table 2. Four symptoms were used as selection criteria to define the conditions instantiated in the simulation: fever, 98.6, blurry, and clear. An additional baseline condition was run in which the model was endowed with 200 traces of each hypothesis and no selection criterion was applied. Figure 6 plots the hypothesis generation exhibited by the model. Perhaps the most evident effect of the sampling biases on generation is the near complete retrieval failure suffered by the model when probed with the complement of the symptom that served as the basis for selection. For instance, when clear was the selection criterion, the model exhibited extreme retrieval failure given the presenting symptom blurry. This is because blurry represents the first level of the vision cue, which in this condition is rare in trace memory. Comparisons to the baseline reveal substantial deviations due to the selection bias. The most potentially hazardous divergences in generation occur when clear is the presenting symptom and 98.6 is the selection criterion, as well as when the presenting symptom is fever and the selection criterion is blurry. In the latter case, the generation behavior of the model practically inverts relative to the baseline. As a result of this dramatic disturbance in generation, the concomitant hypothesis-testing behavior substantially deviates from baseline.
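The selection-bias sampling scheme can be sketched in a few lines. The code below is an illustrative reconstruction (the function name and seed are ours), using the baseline conditional probabilities of fever from Table 2 and a uniform true base rate over the four diseases.

```python
import random

# Sketch of biased sampling by a selection criterion (here: fever).
# p(fever | disease) from the baseline ecology of Table 2.
P_FEVER = {"metalytis": 0.9, "zymosis": 0.5, "gwaronia": 0.5, "descolada": 0.5}

def biased_sample(n_cases=500, seed=0):
    """Encode only randomly generated patients who present with fever."""
    rng = random.Random(seed)
    encoded = []
    for _ in range(n_cases):
        disease = rng.choice(list(P_FEVER))       # uniform true base rates
        fever = rng.random() < P_FEVER[disease]   # symptom sampled per disease
        if fever:                                 # the selection criterion
            encoded.append(disease)
    return encoded

sample = biased_sample()
# Metalytis becomes overrepresented in memory relative to its true base
# rate of .25 (expected share approx. 0.9 / (0.9 + 0.5 + 0.5 + 0.5) = .375).
print(sample.count("metalytis") / len(sample))
```

Because only criterion-matching cases are encoded, the base rates in trace memory drift toward p(criterion | disease), which is exactly the distortion the simulations exploit.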
Examination of the model's test-selection behavior (bottom panel of Figure 6) reveals that the model's rank order of test preference is completely inverted by the selection bias. In terms of adjusting belief toward the hypothesis with the highest a priori likelihood (metalytis), this order of test preference is the most damaging that could occur for this condition, as the normatively diagnostic test for metalytis is the least preferred and is selected a scant 5% of the time. This is an interesting case, as the model is unable to select the appropriate diagnostic test despite generating multiple hypotheses. The selection bias has essentially isolated the model from the information that would allow it to overcome the biased generation. The other condition demonstrating biased generation (presenting symptom clear and selection criterion 98.6), however, did not produce biases in test selection. The overall maintenance of SOC composition permitted the model's test-selection behavior to agree with that of the baseline.
Figure 6  Proportion of times each hypothesis was generated (top panel) and each test selected (bottom panel) by patient symptom prompt and selection criterion (SC). Note that E = 50 for the simulation results plotted.
Simulations of the effect of hypothesis generation and biased samples on hypothesis testing yielded a number of interesting predictions. The next question is whether these predictions match up with established empirical findings. Unfortunately, few empirical studies have been specifically designed to examine the interplay between the factors that affect hypothesis generation and the effect of these generated hypotheses on information search. Thus, many of the predictions outlined above remain untested. However, there are a small number of commonly observed effects that speak to some of the more obvious predictions, including diagnostic and nondiagnostic search. These effects are presented in Table 3, along with a description of how HyGene accounts for them. It is important to reiterate that HyGene does not merely predict diagnostic and nondiagnostic search behavior; rather, it predicts the conditions under which these two types of behavior will occur. Table 4 presents several untested model predictions. Note that many of the predictions listed in this table stem from second- and third-order interactions between the structure of the ecology, retrieval and working-memory processes, and the specific search heuristic employed. Thus, according to HyGene, information search behavior in real-world tasks is not a straightforward application of a simple search strategy, but rather a complex interaction between the statistical structure of the decision maker's learning environment, basic memory processes, and the application of a search strategy that operates on the output of the memory system. Thus, much like the application of HyGene to probability judgment, the majority of HyGene's predictions regarding hypothesis testing flow directly from the model architecture, and are a direct consequence of memory-retrieval processes.
Thus, the process of hypothesis generation (i.e., memory retrieval) plays an important role in creating the necessary conditions for diagnostic search. However, at the same time, memory-retrieval processes can also create conditions that result in positive testing.
4. Discussion

Basic-level memory processes have long been instrumental in describing judgment and decision-making behavior, an insight that can be traced quite directly to the groundbreaking work of Tversky and Kahneman (1974) on heuristics and biases over 35 years ago. While this original work established the link between memory and judgment, theoretical progress in explicating this link has been slow. This has remained true despite the fact that many important theoretical advances within memory theory have since emerged, including the widespread use of computational models of memory.
Table 3  Three Empirical Findings from the Hypothesis-Testing Literature.

Empirical result: Nondiagnostic search: confirmation bias, positive-test bias, and pseudodiagnostic search (Beyth-Marom & Fischhoff, 1983; Klayman & Ha, 1987; Mynatt et al., 1977; Sanbonmatsu et al., 1998; Trope & Bassok, 1982; Trope & Mackie, 1987; Wason, 1966, 1968).
Predicted by HyGene? Yes.
Explanation based on HyGene: Retrieval constraint. When only one hypothesis is retrieved from memory, HyGene predicts that cues are selected on associative strength (i.e., MSmax), which can lead to nondiagnostic search.

Empirical result: Diagnostic search (Mynatt et al., 1993; Trope & Mackie, 1987).
Predicted by HyGene? Yes.
Explanation based on HyGene: Retrieval constraint. When multiple hypotheses are retrieved from memory given Dobs, HyGene predicts that cues are selected on their ability to differentiate the hypotheses (e.g., MSdiff), which leads to diagnostic search.

Empirical result: Covariation between confirmation bias and cognitive capacity (Stanovich & West, 1998).
Predicted by HyGene? Yes.
Explanation based on HyGene: Retrieval constraint and working-memory constraint. Individual differences in working memory are modeled by varying the WM parameter (f), which limits the number of hypotheses included in the SOC. Thus, decision makers with more cognitive capacity (i.e., higher f) can maintain more hypotheses in the SOC, which the model predicts will lead to less confirmation bias than in decision makers with lower capacity.

Empirical result: Nondiagnostic search with biased experience (Fiedler, 2000).
Predicted by HyGene? Yes.
Explanation based on HyGene: Ecological constraint. Associative strength between a cue and a hypothesis in trace memory is partially reflected by experience. Thus, highly associated data and hypotheses in the ecology will generally be highly associated in trace memory (i.e., cognitive adjustment), even if the experiences are the result of a sampling bias. HyGene therefore predicts that sampling biases can promote nondiagnostic search. Moreover, HyGene has no metacognitive mechanism that can correct associative strength with explicit knowledge of the sampling/selection bias.
Table 4  Three Predicted Empirical Findings of HyGene Concerning Hypothesis-Testing Behavior.

Predicted empirical result: Increase in nondiagnostic search under divided attention at encoding.
Explanation based on HyGene: Retrieval constraint. In general, the model tends to generate fewer hypotheses with poorer encoding, which predicts more nondiagnostic search under divided attention at encoding. Note that the effect of encoding is predicted to interact with experience; thus, the effect of divided attention at encoding can be ameliorated by increases in experience.

Predicted empirical result: Decrease in diagnostic search under time pressure.
Explanation based on HyGene: Retrieval constraint. The model tends to generate fewer hypotheses under time pressure, which predicts more nondiagnostic search.

Predicted empirical result: Less diagnostic search under divided attention at retrieval.
Explanation based on HyGene: Working-memory constraint. Decision makers with more cognitive capacity (i.e., higher f) can maintain more hypotheses in the SOC. Divided attention essentially lowers f, which the model predicts will impoverish generation, leading to confirmation bias.
One potential explanation for this slow progress is the tacit assumption by decision researchers that memory processes lie outside the scope of decision-making theories, and that whatever takes place in the memory system is irrelevant to understanding judgment and decision-making behavior. Indeed, even some contemporary models of judgment and decision making that make explicit assumptions about the role of memory in judgment and decision making often fail to directly model memory. Rather, judgment is often assumed to operate on the output of memory (cf. Goldstein & Gigerenzer, 2002). While such simplifying assumptions may prove convenient for modeling purposes, they undermine the models’ ability to make testable predictions, precisely because the output of memory is dependent on the functioning of memory (Dougherty, Franco-Watkins, & Thomas, 2008; see Pleskac, 2007 for a particularly compelling example). As we have demonstrated throughout this chapter, instantiating principles of judgment and decision making within a fully functioning model of memory can lead to a deeper understanding of judgment and decision-making
behavior, while providing a natural bridge between literatures that have historically been kept separate by research traditions.
4.1. Bridging Memory Theory

HyGene was founded on the idea that memory processes can be used to explain judgment and decision-making behavior. However, because HyGene's basic architecture is instantiated within the context of a memory model, in particular Hintzman's (1988) Minerva-2 memory model, it can still be used to model any memory-related phenomenon to which Minerva-2 has been applied. These include list-length effects, frequency discrimination, forgetting rates, levels-of-processing effects, cued recall, and schema abstraction (see Clark & Gronlund, 1996; Hintzman, 1986, 1988). Importantly, our use of Hintzman's (1988) model as a starting point for HyGene is somewhat arbitrary, and HyGene's core principles could be instantiated in any number of alternative memory models. Certainly, as computational models of memory continue to evolve, the representational assumptions underpinning HyGene will also need to evolve. The important point here is that HyGene requires a close connection to memory theory for it to retain its explanatory power.
4.2. Bridging Categorization

Categorization behavior is often studied using two-alternative forced-choice tasks, in which the participant is given a set of features representing a stimulus and asked to categorize it as an instance of category A or category B (Medin & Schaffer, 1978; Nosofsky & Zaki, 2003). In many ways, such categorization tasks map closely onto the type of tasks HyGene was designed to model. Although HyGene is able to model two-alternative forced-choice categorization tasks (e.g., by assuming that the two categories are in WM), it goes beyond simple categorization tasks to describe the processes that determine which categories the decision maker explicitly considers when making categorization decisions (i.e., the SOC). Our working assumption is that the first step in the categorization process is to generate a candidate set of potential categories or hypotheses; only after this first step can one begin to engage in the processes that lead to a categorization decision. In some cases, this first step may yield only a single potential hypothesis, in which case the categorization decision is determined entirely by the generation process and proceeds automatically. In other cases, this first step may yield multiple hypotheses, in which case the decision maker may seek additional information to distinguish between the candidate hypotheses. In neither case, however, can a categorization decision be made prior to the generation of potential hypotheses.
Hypothesis Generation, Probability Judgment, and Hypothesis Testing
4.3. Bridging Judgment and Decision Making

Our development of HyGene was predicated on the assumption that much of the behavior observed in judgment and decision-making tasks is affected by processes that take place well before one is asked to assess the probability of a particular hypothesis, choose among a set of options, or engage in hypothesis testing. To this end, HyGene provides a cognitive process model that describes the predecision process of hypothesis generation, and anticipates that processes such as encoding, experience, and information sampling can have concomitant effects on hypothesis generation and judgment processes. However, it is important to point out that HyGene is an extension of the Minerva-DM (Dougherty, 2001; Dougherty et al., 1999) model, which was developed to account for a variety of heuristics and biases in the probability judgment literature (e.g., availability, representativeness, conservatism, and overconfidence). Thus, HyGene provides a truly cumulative account of judgment and decision-making phenomena that simultaneously accounts for a number of historically important findings, generates novel predictions, and provides a process-level explanation of heuristic mechanisms previously viewed as functionally independent.
REFERENCES

Bassok, M., & Trope, Y. (1984). People's strategies for testing hypotheses about another's personality: Confirmatory or diagnostic? Social Cognition, 2, 199–216.
Bearden, J. N., & Wallsten, T. S. (2004). MINERVA-DM and subadditive frequency judgments. Journal of Behavioral Decision Making, 17, 349–363.
Beyth-Marom, R., & Fischhoff, B. (1983). Diagnosticity and pseudodiagnosticity. Journal of Personality and Social Psychology, 45, 1185–1195.
Brenner, L. (2003). A random support model of the calibration of subjective probabilities. Organizational Behavior and Human Decision Processes, 90, 87–110.
Clark, S. E., & Gronlund, S. D. (1996). Global matching models of recognition memory: How the models match the data. Psychonomic Bulletin & Review, 3(1), 37–60.
Craik, F. I., Govoni, R., Naveh-Benjamin, M., & Anderson, N. D. (1996). The effects of divided attention on encoding and retrieval processes in human memory. Journal of Experimental Psychology: General, 125, 159–180.
Dawes, R. M. (2005). An analysis of structural availability biases, and a brief study. In K. Fiedler & P. Juslin (Eds.), Information sampling and adaptive cognition (pp. 147–152). New York, NY: Cambridge University Press.
Dougherty, M. R., Franco-Watkins, A. M., Sprenger, A. M., Thomas, R. P., Abbs, B., & Lange, N. (2009). The effect of cognitive load on judgments of probability. Unpublished manuscript.
Dougherty, M. R., Franco-Watkins, A. M., & Thomas, R. (2008). Psychological plausibility of the theory of probabilistic mental models and the fast and frugal heuristics. Psychological Review, 115, 199–211.
Dougherty, M. R., & Harbison, J. I. (2007). Motivated to retrieve: How often are you willing to go back to the well when the well is dry? Journal of Experimental Psychology: Learning, Memory, and Cognition, 33, 1108–1117.
Michael Dougherty et al.
Dougherty, M. R., & Sprenger, A. (2006). The influence of improper sets of information on judgment: How irrelevant information can bias judged probability. Journal of Experimental Psychology: General, 135, 262–281.
Dougherty, M. R. P. (2001). Integration of the ecological and error models of overconfidence using a multiple-trace memory model. Journal of Experimental Psychology: General, 130, 579–599.
Dougherty, M. R. P., Gettys, C. F., & Ogden, E. E. (1999). Minerva-DM: A memory processes model for judgments of likelihood. Psychological Review, 106, 180–209.
Dougherty, M. R. P., Gettys, C. F., & Thomas, R. P. (1997). The role of mental simulation in judgments of likelihood. Organizational Behavior and Human Decision Processes, 70, 135–148.
Dougherty, M. R. P., & Hunter, J. E. (2003a). Hypothesis generation, probability judgment, and individual differences in working memory capacity. Acta Psychologica, 113, 263–282.
Dougherty, M. R. P., & Hunter, J. E. (2003b). Probability judgment and subadditivity: The role of working memory capacity and constraining retrieval. Memory & Cognition, 31, 968–982.
Downing, P. E. (2000). Interactions between visual working memory and selective attention. Psychological Science, 11(6), 467–473.
Edwards, W. (1968). Conservatism in human information processing. In B. Kleinmuntz (Ed.), Formal representations of human judgment (pp. 17–52). New York, NY: Wiley.
Elstein, A. S., & Schwarz, A. (2002). Clinical problem solving and diagnostic decision making: Selective review of the cognitive literature. British Medical Journal, 324, 729–732.
Engle, R. W., Kane, M. J., & Tuholski, S. W. (1999). Individual differences in working memory capacity and what they tell us about controlled attention, intelligence, and function of the prefrontal cortex. In A. Miyake & P. Shah (Eds.), Models of working memory: Mechanisms of active executive control (pp. 102–134). New York, NY: Cambridge University Press.
Fernandes, M. A., & Moscovitch, M. (2000). Divided attention and memory: Evidence of substantial interference effects at retrieval and encoding. Journal of Experimental Psychology: General, 129(2), 155–176.
Fiedler, K. (2000). Beware of samples! A cognitive-ecological sampling approach to judgment biases. Psychological Review, 107, 659–676.
Fisher, S. D., Gettys, C. F., Manning, C., Mehle, T., & Baca, S. (1983). Consistency checking in hypothesis generation. Organizational Behavior and Human Performance, 31, 233–254.
Friedman, N. P., & Miyake, A. (2004). The relations among inhibition and interference control functions: A latent-variable analysis. Journal of Experimental Psychology: General, 133, 101–135.
Gettys, C. F., & Fisher, S. D. (1979). Hypothesis plausibility and hypothesis generation. Organizational Behavior and Human Performance, 24, 93–110.
Gigerenzer, G., Hoffrage, U., & Kleinbölting, H. (1991). Probabilistic mental models: A Brunswikian theory of confidence. Psychological Review, 98, 506–528.
Gigerenzer, G., Todd, P. M., & the ABC Research Group. (1999). Simple heuristics that make us smart. New York, NY: Oxford University Press.
Goldstein, D. G., & Gigerenzer, G. (2002). Models of ecological rationality: The recognition heuristic. Psychological Review, 109, 75–90.
Hájek, A. (2007). The reference class problem is your problem too. Synthese, 156, 563–585.
Harbison, J. I., Dougherty, M. R., Davelaar, E. J., & Fayyad, B. (2009). On the lawfulness of the decision to terminate memory search. Cognition, 111, 397–402.
Hintzman, D. L. (1986). "Schema abstraction" in a multiple-trace memory model. Psychological Review, 93, 411–428.
Hintzman, D. L. (1988). Judgments of frequency and recognition memory in a multiple-trace memory model. Psychological Review, 95, 528–551.
Juslin, P., & Persson, M. (2002). PROBabilities from EXemplars (PROBEX): A "lazy" algorithm for probabilistic inference from generic knowledge. Cognitive Science: A Multidisciplinary Journal, 26(5), 563–607.
Kane, M. J., & Engle, R. W. (2000). Working-memory capacity, proactive interference, and divided attention: Limits on long-term memory retrieval. Journal of Experimental Psychology: Learning, Memory, and Cognition, 26, 336–358.
Kilinç, B. E. (2001). The reception of John Venn's philosophy of probability. In V. F. Hendricks, S. A. Pedersen, & K. F. Jorgensen (Eds.), Probability theory: Philosophy, recent history, and relations to science (Synthese Library, Vol. 297, pp. 97–124). Netherlands: Springer.
Klayman, J., & Ha, Y. (1987). Confirmation, disconfirmation, and information in hypothesis testing. Psychological Review, 94, 211–228.
Landauer, T. K., & Dumais, S. T. (1997). A solution to Plato's problem: The latent semantic analysis theory of acquisition, induction and representation of knowledge. Psychological Review, 104(2), 211–240.
Libby, R. (1985). Availability and the generation of hypotheses in analytical review. Journal of Accounting Research, 23, 646–665.
Luce, R. D. (1959). Individual choice behavior: A theoretical analysis. New York, NY: Wiley.
Medin, D. L., & Schaffer, M. M. (1978). Context theory of classification learning. Psychological Review, 85, 207–238.
Moores, E., Laiti, L., & Chelazzi, L. (2003). Associative knowledge controls deployment of visual selective attention. Nature Neuroscience, 6(2), 182–189.
Mulford, M., & Dawes, R. M. (1999). Subadditivity in memory for personal events. Psychological Science, 10, 47–51.
Mynatt, C. R., Doherty, M. E., & Dragan, W. (1993). Information relevance, working memory, and the consideration of alternatives. Quarterly Journal of Experimental Psychology, 46A, 759–778.
Mynatt, C. R., Doherty, M. E., & Tweney, R. D. (1977). Confirmation bias in a simulated research environment: An experimental study of scientific inference. Quarterly Journal of Experimental Psychology, 29, 85–95.
Navarro, D. J., & Perfors, A. F. (2009). The positive test strategy: Optimal evaluation of rationally selected rules. Unpublished manuscript.
Naveh-Benjamin, M., Craik, F. I., Guez, J., & Dori, H. (1998). Effects of divided attention on encoding and retrieval processes in human memory: Further support for an asymmetry. Journal of Experimental Psychology: Learning, Memory, and Cognition, 24, 1091–1104.
Nosofsky, R. M., & Zaki, S. R. (2003). A hybrid-similarity exemplar model for predicting distinctiveness effects in perceptual old–new recognition. Journal of Experimental Psychology: Learning, Memory, and Cognition, 29(6), 1194–1209.
Pleskac, T. J. (2007). A signal detection analysis of the recognition heuristic. Psychonomic Bulletin & Review, 14, 379–391.
Raaijmakers, J. G. W., & Shiffrin, R. M. (1981). Search of associative memory. Psychological Review, 88, 93–134.
Reyna, V. F., & Brainerd, C. J. (1992). A fuzzy-trace theory of reasoning and remembering: Paradoxes, patterns, and parallelism. In A. Healy, S. Kosslyn, & R. Shiffrin (Eds.), From learning processes to cognitive processes: Essays in honor of William K. Estes (pp. 235–259). Hillsdale, NJ: Erlbaum.
Rosen, V. M., & Engle, R. W. (1997). The role of working memory capacity in retrieval. Journal of Experimental Psychology: General, 126, 211–227.
Rottenstreich, Y., & Tversky, A. (1997). Unpacking, repacking, and anchoring: Advances in support theory. Psychological Review, 104, 406–415.
Sanbonmatsu, D. M., Posavac, S. S., & Kardes, F. R. (1998). Selective hypothesis testing. Psychonomic Bulletin & Review, 5, 197–220.
Skov, R. B., & Sherman, S. J. (1986). Information-gathering processes: Diagnosticity, hypothesis-confirmatory strategies, and perceived confirmation. Journal of Experimental Social Psychology, 22(2), 93–121.
Sloman, S., Rottenstreich, Y., Wisniewski, E., Hadjichristidis, C., & Fox, C. R. (2004). Typical versus atypical unpacking and superadditive probability judgment. Journal of Experimental Psychology: Learning, Memory, and Cognition, 30, 573–582.
Soto, D., Hodsoll, J., Rotshtein, P., & Humphreys, G. W. (2008). Automatic guidance of attention from working memory. Trends in Cognitive Sciences, 12, 342–348.
Sox, H., Blatt, M. A., Higgins, M. C., & Marton, K. I. (2006). Medical decision making. Philadelphia, PA: The American College of Physicians Press.
Sprenger, A., & Dougherty, M. R. (2006). Differences between probability and frequency judgments: The role of individual differences in working memory capacity. Organizational Behavior and Human Decision Processes, 99, 202–211.
Sprenger, A., Tomlinson, T., & Dougherty, M. R. (2009). The impact of alternative outcomes and working memory on judgments of probability. Unpublished manuscript.
Stanovich, K. E., & West, R. F. (1998). Who uses base rates and P(D/~H)? An analysis of individual differences. Memory & Cognition, 28, 161–179.
Tenenbaum, J. B., Griffiths, T. L., & Kemp, C. (2006). Theory-based Bayesian models of inductive learning and reasoning. Trends in Cognitive Sciences, 10(7), 309–318.
Thomas, R. P., Dougherty, M. R., Sprenger, A. M., & Harbison, J. I. (2008). Diagnostic hypothesis generation and human judgment. Psychological Review, 115, 155–185.
Trope, Y., & Bassok, M. (1982). Confirmatory and diagnosing strategies in social information gathering. Journal of Personality and Social Psychology, 43, 22–34.
Trope, Y., & Mackie, D. M. (1987). Sensitivity to alternatives in social hypothesis-testing. Journal of Experimental Social Psychology, 23, 445–459.
Turner, M. L., & Engle, R. W. (1989). Is working memory capacity task dependent? Journal of Memory and Language, 28, 127–154.
Tversky, A., & Kahneman, D. (1973). Availability: A heuristic for judging frequency and probability. Cognitive Psychology, 5, 201–232.
Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185, 1124–1131.
Tversky, A., & Kahneman, D. (1981). The framing of decisions and the psychology of choice. Science, 211, 453–458.
Tversky, A., & Koehler, D. J. (1994). Support theory: A nonextensional representation of subjective probability. Psychological Review, 101, 547–567.
Wason, P. C. (1966). Reasoning. In B. M. Foss (Ed.), New horizons in psychology I (pp. 135–151). Harmondsworth, UK: Penguin.
Wason, P. C. (1968). Reasoning about a rule. Quarterly Journal of Experimental Psychology, 20, 273–281.
Weber, E. U., Böckenholt, U., Hilton, D. J., & Wallace, B. (1993). Determinants of diagnostic hypothesis generation: Effects of information, base rates, and experience. Journal of Experimental Psychology: Learning, Memory, and Cognition, 19, 1151–1164.
Windschitl, P. D., & Wells, G. L. (1998). The alternative-outcomes effect. Journal of Personality and Social Psychology, 75, 1423–1441.
Xu, F., & Tenenbaum, J. B. (2007). Word learning as Bayesian inference. Psychological Review, 114(2), 245–272.
CHAPTER NINE
The Self-Organization of Cognitive Structure

James A. Dixon, Damian G. Stephen, Rebecca Boncoddo, and Jason Anastas

Contents
1. Introduction
   1.1. Structures and Physical Interactions
   1.2. Physical Interactions and Information Exchange
2. The Emergence of New Structure During Problem Solving
   2.1. Manipulating Force-Tracing Actions
   2.2. Force-Tracing Actions Predict Generalization of Alternation
   2.3. Force Tracing and the Discovery of Alternation in Preschoolers
3. Explaining the Relationship Between Action and New Cognitive Structure
4. A Physical Approach to Cognition
   4.1. A Brief Account of Self-Organization
   4.2. Entropy Changes in the Lorenz Model
5. Gear-System Reprise: The Self-Organization of Alternation
   5.1. Recurrence Quantification Analysis
   5.2. Phase-Space Reconstruction
   5.3. Changes in Dynamic Organization Predict the Discovery of Alternation
   5.4. Manipulating Input Entropy
   5.5. Interactions Across Scales: Power-Law Behavior
6. Dynamics of Induction in Card Sorting
7. General Discussion
   7.1. The Problem of New Structure for Information-Exchange Approaches
   7.2. A Physical Approach to New Structures in Cognition
   7.3. Information Exchange as a Type of Physical Regime
References
Psychology of Learning and Motivation, Volume 52 ISSN 0079-7421, DOI: 10.1016/S0079-7421(10)52009-7
© 2010 Elsevier Inc. All rights reserved.
Abstract

Explaining the emergence of new structure is a fundamental challenge for theories of cognition. In this chapter, we provide evidence for a physical account of the emergence of new cognitive structure, grounded in work on self-organization. After introducing the issue of new structure, we discuss the relation between a physical account of cognition and current approaches that ascribe causal power to information transfer. We then review results demonstrating a strong link between action and the development of a new representation during problem solving; these findings highlight the difficulty that information-transfer approaches encounter in explaining new structure. We next introduce self-organization, a phenomenon that occurs in open thermodynamic systems. Evidence for a central prediction from self-organization is reviewed: entropy should peak and then drop just prior to the emergence of a new cognitive structure. We further show that injecting entropy into the system sends the cognitive system through its transition to new structure more quickly. Finally, a set of converging predictions, changes in power-law behavior, is demonstrated for two tasks. The promise and challenges of developing a physical approach to cognition are discussed.
1. Introduction

A fundamental challenge for theories in psychology and related disciplines is to explain the emergence of new structure (Jensen, 1998; McClelland & Vallabha, 2009; Webster & Goodwin, 1996). New structures are seen in all domains with which psychology is concerned. From the changing morphology of neurons (Clark, Britland, & Connolly, 1993) to the coordinated behavior of a new walker (Adolph & Berger, 2006) to the acquisition of a new category (Smith, 2005), the sudden emergence of novel form is a ubiquitous phenomenon in the behavioral and cognitive sciences. In this chapter, we review recent work in which we have attempted to develop an account of the emergence of new structures in cognition. These investigations have, to a large extent, driven us into radical new territory methodologically and, more importantly, theoretically. First, we discuss some properties of cognitive structure and place the problem of new structure in the context of current metatheoretical assumptions about cognition. Next, we review evidence surrounding a phenomenon in which new structure emerges demonstrably and fairly rapidly. We then discuss the nonlinear dynamics of self-organization as a theoretical and methodological framework capable of addressing emergent structure. With
these tools in hand, we review evidence that shows that new cognitive structures are produced through self-organization.
1.1. Structures and Physical Interactions

Structures in psychology, from categories and concepts to motor actions, are temporal phenomena. That is, researchers attribute structure to the system when its behaviors appear systematically interrelated over time. For example, a child who responds similarly to items that share a common attribute (or collection of attributes) is considered to have a corresponding structure (i.e., category) for items of that type. Likewise, a child's temporal pattern of movements determines the degree to which he or she is considered to have acquired "walking." In all cases, the structure is thought to be a property of the system: some set of internal, physical relations obtain that allow the system to behave cohesively over time. This underlying assumption holds across the broad spectrum of theory in psychology. For example, in ecological theory, the system is "tuned" to detect affordances in the environment; that is, its physical elements are set in such a way as to respond to invariant properties of the energy array (Jacobs & Michaels, 2007; Turvey, 1992). Cognitive theories propose that memory traces represent aspects of previously encountered situations; these traces are assumed to have a basis in physiological properties of tissues (e.g., McClelland, 1998). In summary, cognitive structures are inferred from temporal regularities in behavior. The ability of the cognitive system to maintain such cohesive behavior over time is due to physical relations that obtain internally within the system. Given that cognitive structures have a physical basis, it seems evident that when a new structure emerges, the system has undergone a set of physical changes. More specifically, physical interactions within the system have caused the change in its behavior, moving it from one structure to another.
While it may be easier to imagine how this occurs for patterns of motor movement, for example, more traditional cognitive structures (e.g., categories, schemas, representations) are also assumed to have an underlying physical basis. An additional point that will be relevant to the discussion below is that while the evidence for a structure is behavior at one scale of measurement, the activity of the system across many scales supports that behavior. For example, the evidence for a child having the noun "dog" in his or her productive vocabulary is repeated utterances that an observer judges to be the word "dog." These utterances have an apparent characteristic scale; they are all within the usual range of human speech. But the physical support for this seemingly simple behavior is spread across many scales within the system, including chemical potentials across cell membranes, the collective activity of cell systems, the coordination of muscles, etc.
1.2. Physical Interactions and Information Exchange

Most approaches to cognition, while implicitly assuming a physical basis, do not explicitly address physical interactions as causal events in the system. Rather, they take the transfer of information as having causal power. For example, making an error (e.g., Kalish, Lewandowsky, & Davies, 2005), being presented with a new relation (e.g., Halford, Andrews, & Jensen, 2002), and encountering new instances of a category (e.g., Nosofsky & Zaki, 2002) are all presumed to change aspects of the cognitive system, but in each case the explanation of why change occurs is in terms of the informational content of the event. To take one example, an event is an error because the system has information about what is expected, and can evaluate the informational content of the current event relative to that expectation. Naturally, all the activity that instantiates information transfer must be physical, but standard approaches make the strong assumption that the physical properties can be neglected. While this may be appropriate in some situations, clearly it is not always the case that causal physical interactions in a system have a sensible interpretation in terms of the transfer of information. Physical causality is often a uniquely useful level of analysis that cannot be recast in terms of information exchange among agents. For example, consider the formation of distinct layers of tissue during early embryonic development. During this process, cells sort themselves spatially and then differentiate, resulting in a heterogeneous structure. One might approach this problem from the perspective of information transfer among agents: the cells communicate with one another to specify where each cell should go. It turns out, however, that the formation of tissue layers depends on a variety of factors, including the differential adhesive properties of the cells, literally the strength with which cells bind to their surroundings.
Different subpopulations of cells tend to adhere more than others, resulting in a heterogeneous, layered outcome. In this case, the physical property, adhesion, determines the extent of cell motion, which affects the observed structure (Forgacs & Newman, 2005). To understand how cells sort themselves into layers, one must account for the physical properties of the cells themselves. Under what conditions is it possible to address causal effects in a physical system as a matter of information exchange? We suggest that at least two conditions must hold. First, the system must have elements or components that are functioning in a nearly independent manner. Information exchange only makes sense if there are separable entities in the system; it is not clear what it would mean to exchange information otherwise. Second, one component must be poised such that the physical action of the other has a particular, macroscopic effect. A relatively small physical event creates a larger-scale change in function. The small physical event looks like a signal
being passed from one component to another. This relationship between components makes an informational analysis useful, because the action of one component, the agent, can be used to predict that of the other component without reference to the underlying physical forces. It looks as if one entity is sending a signal to another. Of course, it is important to keep in mind that entities’ passing information is only a metaphor. We do not wish to endow the mind with intelligent entities that cogitate, deliberate, and essentially perform the functions we hoped to explain initially. Cognitive psychology has often focused on situations in which these conditions seem to be met, making an analysis at the level of information exchange very useful. For example, research on spoken-word recognition has made progress by positing that the presented acoustic (and sometimes optic) array of energy triggers components to act in a way that can be modeled as an information-exchange system (e.g., Dahan, Magnuson, & Tanenhaus, 2001; Magnuson, Mirman, & Harris, 2009). A very small perturbation in the acoustic energy array creates a macroscopic change in the system, such as moving one’s eyes or hand, or entering into a new cognitive state. While such impressive successes are exciting to the field, they do not imply that all phenomena in psychology can be addressed without reference to physical forces in the system. Nor do they imply that an analysis based on information exchange could not be supplemented by considering the underlying physical activity (see Stephen, Mirman, Magnuson, & Dixon, 2009 for an example of how a physical account can add to an information-exchange account in spoken-word recognition). In summary, physical systems only behave in a way that can be characterized as transferring information under a limited set of conditions. When these conditions do not hold, conceptualizing the problem as one of information exchange will not be useful. 
Rather, these situations require an account developed from the perspective of physical activity. Given the broad range of issues with which psychology is concerned, it is perhaps not surprising that some phenomena are outside the bounds of an information-exchange analysis. Our purpose in discussing these assumptions is to make explicit the connection between the dominant conceptualization in the field, information transfer, and the physical approach.
2. The Emergence of New Structure During Problem Solving

Fairly recently, we began researching a phenomenon that led us to reexamine the scope of information transfer as an explanatory framework (Dixon & Bangert, 2002; Dixon & Dohn, 2003). As will become clear, this
work ultimately forced us to consider a physical account of cognition, grounded in thermodynamics. The basic phenomenon is quite simple; after repeatedly solving a problem with a lower-order strategy, children and adults spontaneously discover a new way to represent the problem. The crucial finding, however, was that participants’ actions during problem solving were intimately related to the emergence of the new representation and to properties of that representation subsequently. When we attempted to develop a theoretical account of these results within the information-transfer paradigm, we found that it could not accommodate this phenomenon in a principled way. In Section 2, we review evidence for the link between action and a new representation, and then summarize the theoretical problem that arises when one tries to explain new structure via information transfer in Section 3. The problem domain with which we have been concerned involves a set of interconnected gears as in Figure 1. The participant is asked to predict the turning direction of the final gear in the series, given the turning direction of the driving gear (i.e., the gear that turns initially). The display of interlocking gears is entirely static; the gears are never shown to actually turn or push on each other. The turning direction of the final gear is shown, but only after the other gears have been covered by a virtual screen. Initially, participants solve the problem by simulating the turning of the gears and the pushing of the interconnecting teeth. In this way, they trace the force from the driving gear to the final gear. We call this strategy force tracing. After performing force tracing for some number of problems, many participants suddenly shift to a new strategy in which they classify the gears in an alternating sequence (i.e., ‘‘clockwise,’’ ‘‘counterclockwise,’’ ‘‘clockwise,’’ and so on) without any reference to forces or simulation of the movement of the gears. 
Participants often mention that they have discovered a ‘‘pattern’’ or a ‘‘rule.’’ It is worth noting that the pattern they have discovered here is not present in the display. The gears do not alternate in any way; recall that they do not actually move. The only alternating pattern is created by the participants themselves—their movements alternate turning direction as a function of tracing the force. Interestingly, the force-tracing strategy results in very accurate performance. In fact, we found that concentrated, accurate performance with force tracing predicted the transition to alternation. Thus, errors do not appear to be driving the participants’ cognitive system toward this transition. Instead, it seems that repeatedly performing an appropriate strategy drives the cognitive system to a new structure (see Dixon & Bangert, 2005; Dixon & Kelley, 2006, 2007). Originally, we demonstrated this phenomenon with participants in three age groups: approximately 8, 13, and 20 years of age (Dixon & Bangert, 2002). Although older participants were more likely to discover alternation, we did not find any age differences in the effects of the predictors.
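The contrast between the two strategies can be stated in a few lines of code. This sketch is ours, not part of the experimental materials, and it covers only single-pathway systems (the two-pathway "jams" case shown in Figure 1 is omitted): force tracing propagates the turning direction gear by gear, whereas the alternation shortcut reads the answer off the parity of the target gear's position in the chain.

```python
def force_trace(driving_direction, n_gears):
    """The lower-order strategy: simulate the chain, flipping the
    turning direction at each pair of meshed gears."""
    flip = {'clockwise': 'counterclockwise',
            'counterclockwise': 'clockwise'}
    direction = driving_direction
    for _ in range(n_gears - 1):   # one flip per meshed pair
        direction = flip[direction]
    return direction

def alternation(driving_direction, n_gears):
    """The discovered shortcut: odd-numbered gears share the driving
    gear's direction, even-numbered gears turn the opposite way."""
    flip = {'clockwise': 'counterclockwise',
            'counterclockwise': 'clockwise'}
    return driving_direction if n_gears % 2 == 1 else flip[driving_direction]
```

For any single chain the two strategies agree, which is consistent with the finding that force tracing is highly accurate; the shortcut simply eliminates the step-by-step simulation.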
[Figure 1 panels: small and large gear systems with one or two pathways, with and without extraneous gears; each display shows the driving gear, the target gear, the chutes, and a "Jams!!" button.]
Figure 1 Examples of different types of gear-system problems. The green driving gear was always presented as turning clockwise. Participants were asked to predict the turning direction of the red target gear. The "chutes" were part of the story in which the game was presented, a train race against the computer. The black fuel, located on the target gear, slid off its shelf and down one of the chutes. Participants indicated which way the target gear would turn by selecting a chute. For two-pathway systems, it was possible for the gears to "jam" (i.e., not turn in either direction) because of opposing forces. The button labeled "Jams!" was a third response option.
Regardless of age, the concentrated, accurate use of force tracing predicted the onset of the alternation strategy. The results led us to hypothesize that actions were driving the formation of a new cognitive structure.
2.1. Manipulating Force-Tracing Actions

To test the hypothesized relationship between action and new cognitive structure, we manipulated whether participants discovered alternation via the force-tracing strategy (Dixon & Dohn, 2003). We predicted that the quality of the participants' cognitive structure would depend on whether they had discovered alternation through force tracing versus either direct instruction (experiment 1) or perceptual information embedded in the display (experiment 2). We assessed the quality of the resulting structure by examining how quickly participants transferred the new relation and how stable it was after transfer. For these experiments, we constructed a second problem domain that was structurally analogous to the gear-system domain. The problems involved a series of balance beams connected end-to-end by a flexible joint (see Figure 2). Participants were told that the first beam in the system (leftmost in the top panel of Figure 2) would move such that one end went up (or down). The participants' task was to figure out whether the end of the final beam would move up or down. These problems were presented as simple line drawings that did not move. Subsequently, participants were given the gear-system task as in previous work. In the first experiment, we instructed participants in one condition on the alternation strategy. They were explicitly shown how to solve the problem by classifying the balance beams in an alternating sequence (i.e., "up," "down," "up," and so on) and asked to solve each problem that way. Participants in the other condition were asked to solve the problem any way they chose. The majority of participants (83%) in the uninstructed condition initially solved the balance-beam problems with a force-tracing strategy. They made sinusoidal motions along the series of beams, simulating the movement across the system, and often describing the pushing of one beam on another.
Many of these participants subsequently discovered the alternation strategy during the balance-beam task. The question of interest was whether acquiring alternation through force-tracing actions would lead to better transfer of alternation than direct instruction. We found that participants who had discovered alternation via the force-tracing strategy on the balance-beam task used alternation on the gear-system problems earlier than their counterparts in the instructed condition. Recall that participants in the instructed condition had been explicitly taught to use alternation and had solved all the balance-beam problems with alternation.

Figure 2 An example of a connected balance-beam problem. Three beams are presented, connected by a flexible joint. The right side of the leftmost beam in the series goes "down" in this example problem. The participant was asked to predict the movement of the beam on the far right.
The Self-Organization of Cognitive Structure
In addition, we found that the use of alternation was more stable for participants in the uninstructed condition. That is, after they first used alternation on the gear-system task, they rarely fell back to the force-tracing strategy, compared to their counterparts in the instructed condition. Although these results were consistent with the hypothesis that force-tracing actions contribute to the creation of a new structure (i.e., alternation), we were concerned that the observed differences might be attributable to the fact that the uninstructed participants "discovered" alternation, as opposed to being instructed on it. Perhaps any set of circumstances that led to discovery would show similar effects, regardless of whether force-tracing actions were involved.

To test this alternative hypothesis, and to replicate the original effects, we again asked participants to first solve connected balance-beam problems. This time we did not instruct either group. Rather, we changed the balance-beam displays such that their fulcrums were either highly saturated or not saturated (i.e., dark or light). Half the participants received balance beams in which the saturation of the fulcrums alternated systematically (i.e., light, dark, light). The remaining participants received balance beams in which the saturation of the fulcrums changed randomly (e.g., light, light, dark). Figure 3 shows examples of the stimuli for these two conditions. We expected that participants in the alternating-fulcrum condition would discover alternation through the perceptual support provided in the display. Participants in the random-fulcrum condition were expected to discover alternation through their own force-tracing actions (analogous to the uninstructed group in the previous experiment).
Therefore, we predicted that participants in the random-fulcrum condition would transfer alternation more quickly and use it more stably than their counterparts in the alternating-fulcrum condition. Consistent with our expectations, participants in the random-fulcrum condition used more force tracing on the balance-beam problems prior to discovering alternation than did participants in the alternating-fulcrum condition. Thus, the manipulation of the display affected performance on the balance-beam task as we had hoped. On the gear-system task, we found that participants in the random-fulcrum condition used alternation more

Figure 3 The top panel shows an example of a problem from the alternating-fulcrum condition. The fulcrums of the beams alternated systematically (e.g., dark, light, dark). The bottom panel shows an example problem from the random-fulcrum condition.
quickly than those in the alternating-fulcrum condition. We also found that their use of alternation was more stable than that of participants in the alternating-fulcrum condition.
2.2. Force-Tracing Actions Predict Generalization of Alternation

In subsequent work (Trudeau & Dixon, 2007), we showed that the number of force-tracing actions a participant made prior to discovering alternation predicted the generalization of alternation to different types of problems within the gear-system task. Generalization is defined here as using alternation on a problem that differed from one's discovery problem on at least one dimension. Consistent with the idea that force tracing is fundamental to the emergence of alternation, we found that the probability of generalization depended on how many force-tracing actions one made prior to discovering alternation.

Although our results suggest that force tracing is a precursor to alternation, we were concerned that grade-school children and college students might already have an understanding of alternation available from other domains. To the degree that participants had a relevant and accessible understanding of alternation, the first use of alternation on the gear-system task would reflect a more delimited example of new structure, that is, new relative to this domain.
2.3. Force Tracing and the Discovery of Alternation in Preschoolers

To gain some leverage on this issue, we adapted the gear-system task for preschool children (Boncoddo, Dixon, & Kelley, 2009). Preschoolers are, obviously, much less experienced at structured problem solving and, more crucially, have less experience with systems that alternate. Thus, we would expect them to have less conceptual knowledge of alternation. The best evidence in the literature regarding their understanding of alternation comes from turn-taking in play and communication (Black & Logan, 1995; Tremblay-Leveau & Nadel, 1995). In these domains, it is clear that an understanding of alternation is still developing across the preschool years. Further, preschoolers are notoriously poor at transfer (see Chen & Klahr, 2008 for a recent review). Therefore, even if they had the relevant understanding of alternation, it is unlikely that they would be able to apply it to a novel domain with very different surface features.

We altered the gear-system task for preschool children by including a short training session at the beginning in which we introduced two simple physical properties of the gears. The purpose of introducing these properties was to allow young children to construct the force-tracing strategy. Using
toy gears, we demonstrated that "gears turn" and that their interconnected teeth "push" on one another. Throughout this brief introduction to gears, any time two gears were presented they were in orthogonal planes of motion (at right angles) and, therefore, the gears did not alternate turning direction. Figure 4 shows an example of the gear systems used to explain "turning" and "pushing."

Children participated in two sessions. We coded the number of force-tracing motions, alternating motions, and alternating verbalizations on each trial. A force-tracing motion was defined as a pair of clockwise and counterclockwise motions on adjacent gears. Alternating motions and verbalizations were defined as pointing to and labeling, respectively, the turning direction of the gears without tracing the force or referring to it. To facilitate comparison across gear systems of different sizes, the numbers of force-tracing motions, alternating motions, and alternating verbalizations were scaled by the number of gears in the system.

We found that the proportion of force-tracing motions started at its highest value and then decreased steadily across trials, as can be seen in Figure 5. Alternation behaviors, particularly motions, increased over trials. We posited that if force tracing was foundational to the emergence of alternation, then the growth of alternation behaviors should be predicted from prior force-tracing motions. To test this hypothesis, we used the number of prealternation force-tracing behaviors as a predictor of the rate of growth in alternation. We found that participants who made more force-tracing motions had faster rates of growth in alternation on later trials. Consistent with the idea that force
Figure 4 A preschool child receiving initial familiarization with the physical gears. Note that the gears are in orthogonal planes of motion, and thus do not alternate turning direction.
Figure 5 The mean proportion of force-tracing motions, alternating motions, and alternating verbalizations per gear is shown as a function of trial for both sessions (Session 1, top panel; Session 2, bottom panel). The number of motions (or verbalizations) made on each trial is scaled by the number of gears presented on that trial.
tracing provides the basis for the emergence of alternation, the use of force-tracing actions predicts future use of alternation.
3. Explaining the Relationship Between Action and New Cognitive Structure

Taken together, these studies provide experimental and correlational evidence for the following proposition: performing force-tracing actions drives the cognitive system through a transition and into a new structure, alternation. While this relationship between action and cognitive structure is interesting, it is quite difficult to explain from the perspective of an approach whose currency is information exchange. To illustrate the
general problem, consider one example of how such an explanation might proceed. Given the task constraints (e.g., gears lying in a single plane, force generated from one gear) and one's current conceptualization of the problem (e.g., "gears turn," "teeth push"), the cognitive system generates a set of actions. These actions simulate the "turning" and "pushing" of the gears. Turning and pushing are relatively local events, at the level of single gears and gear-to-gear junctions, respectively, but they create a pattern on a longer time scale as they are performed over a set of gears. Thus, information about the alternation relation is created by performing force tracing. One might suggest that now the encoding system just needs to detect this new relation and the new structure will follow.

Unfortunately, this seemingly reasonable move is not sufficient to create a new behavioral structure (Bickhard & Terveen, 1995; Fodor, 2000). To see why, assume temporarily that an encoding system continuously sweeps the environment, looking for new relations as well as detecting known relations (leaving aside for the moment the considerable challenges such a system would face). On finding a new relation, the encoding system could, at best, create a new symbol for it. Passing this symbol on to the next component in the cognitive system (e.g., working memory) would signal that the relation had been encountered. The obvious problem is that only the encoding system "knows" what the signal means. Thus, even if we allow one component in the system to create new signals or symbols, we still need a way for that component to pass on the knowledge about what the signal means. Although this obviously has implications for theories of meaning (e.g., Harnad, 1990; Searle, 1980), one does not need to engage the issue that deeply to feel the force of the problem.
For example, the meaning of the signal aside, a newly forged signal from the encoding system would be insufficient to tell any other component what to do; thus, no actions would be specified. The problem here comes, in part, from extending the metaphor of information exchange beyond its useful limits. Interactions in physical systems can be reduced to information exchange only if the downstream component is poised to respond to some physical event from its upstream counterpart. When the downstream component is not poised to respond in a particular way, as must be the case if something new has been sent from upstream, then the interactions in the system have to be addressed from a physical perspective. Put differently, when interactions with the environment create novel activity among the components in the system, the information-exchange metaphor is lost. The reason information exchange can no longer be used to characterize the activity of the system is that the codes are no longer shared by the components. The novel activity from one component to another has an effect, but understanding why it has an effect requires moving to a physical account.
4. A Physical Approach to Cognition

We realize that the idea of addressing cognition from a physical perspective may seem like a tall order initially. However, there is a firmly established literature in physics and related disciplines that provides methodological and theoretical tools for investigating complex systems (Haken, 1983; Nicolis & Prigogine, 1977; Skar, 2003; Soodak & Iberall, 1987; Yates, 2008). Research in this area is concerned with understanding the behavior of systems with many interacting parts that function over a broad range of spatial and temporal scales. While complex systems research is an active area in its own right, a set of theoretically important relations that hold across a large number of systems has been identified. We discuss some of these relations in the next section.

It is perhaps worth stating at the outset that the physical approach we are pursuing here is not an account in terms of classical mechanics. That is, we do not deal directly with the position and mass of particular objects under particular forces. Thus, we are not proposing that a small number of components, functioning like a mechanical system, constitutes cognition. Rather, work in this area is based largely on statistical mechanics and kinetics, in which properties of microscopic elements are related to macroscopic properties of the system.
4.1. A Brief Account of Self-Organization

In our work, we have been concerned with the self-organization of new structure, a phenomenon that occurs in open physical systems. Open systems, such as organisms, exchange energy and matter with the external environment. These systems have structures that must dissipate entropy to maintain their organization; systems that cannot dissipate entropy sufficiently cease to exist. Entropy here refers to the degree of disorder within the system. Put differently, entropy is inversely related to the number of constraints among the lower-order elements within the system (Kugler & Turvey, 1987). As constraints are broken, entropy increases. The second law of thermodynamics implies that all systems must inexorably tend toward a state of maximum entropy (i.e., heat death). Thus, to maintain themselves, open systems must constantly off-load entropy into the environment.

Nonlinear dynamics has shown that in open systems the process of dealing with entropy is enormously interesting (Prigogine & Stengers, 1984). New structures form within these systems to dissipate entropy. This process occurs in both inorganic systems (e.g., fluid convection) and organic ones (e.g., termite nest building). The ability to obtain new macroscopic structure from microscopic interactions offers an important means
through which a complex system with a large number of degrees of freedom can reach new organizational states (Kugler, Kelso, & Turvey, 1982). We propose that cognitive structures are new organizations of the system that emerge to dissipate entropy (see Bickhard, 2004, 2008 for a related account from philosophy). This hypothesis has two direct implications for understanding changes in cognitive structure. First, on this account, new cognitive structures are emergent from the nonlinear dynamics. That is, the lower-order elements in the system exhibit spontaneous reorganization (i.e., new structure) as entropy overwhelms the dissipative ability of the current structure. Put differently, reorganization occurs when the constraints holding the current organization become sufficiently disintegrated, thereby allowing the microelements to interact. Second, the onset of these new structures should be predicted by a decrease in entropy. As the system is driven from the old attractor (loosely, a mode of behavior) to the new one, entropy decreases. Next, we introduce a well-understood model system, the Lorenz model, to illustrate more concretely the relationship between changes in entropy and structure.
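For readers who want to experiment with the Lorenz model discussed next, it can be simulated in a few lines. This is a sketch only: the classic parameter values (sigma = 10, rho = 28, beta = 8/3) and the simple Euler integrator are our assumptions, not the settings used in the chapter's analyses.

```python
# Minimal Euler-integration sketch of the Lorenz equations.
# Parameter values are the classic chaotic setting (an assumption;
# the chapter's exact settings may differ).

def lorenz_step(x, y, z, sigma=10.0, rho=28.0, beta=8.0 / 3.0, dt=0.01):
    """Advance the Lorenz system by one Euler step."""
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dx * dt, y + dy * dt, z + dz * dt

def lorenz_trajectory(n_steps=800, start=(1.0, 1.0, 1.0)):
    """Return a list of (x, y, z) states of length n_steps."""
    x, y, z = start
    traj = [(x, y, z)]
    for _ in range(n_steps - 1):
        x, y, z = lorenz_step(x, y, z)
        traj.append((x, y, z))
    return traj
```

In practice one would use a higher-order integrator (e.g., Runge-Kutta) for quantitative work; the Euler step is kept here for transparency.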
4.2. Entropy Changes in the Lorenz Model

The Lorenz model was developed as a mathematical representation of fluid convection (i.e., Rayleigh–Bénard convection), a well-known example of self-organization in physics (Lorenz, 1963). As liquid is heated from below, complex, sustainable patterns of fluid motion occur (e.g., spirals, hexagons). Fluid convection is an everyday phenomenon; it occurs each time you heat a pan of water. What is stunning about fluid convection is that rich, complex structures emerge out of a completely homogeneous fluid that is being uniformly heated. The structure is not imposed from an external source nor derived from an internal one. Rather, macroscopic structure emerges from interactions among the microscopic elements of the system (Kugler & Turvey, 1987).

The Lorenz model consists of three interrelated differential equations. Depending on how the initial parameters are set, a wide variety of different forms can occur. For our current purposes, we focus on the model shifting from one attractor to another, a change indicative of the system reaching a new dynamic regime. As can be seen in Figure 6, the system undergoes a radical change from operating in one manner (orbiting around a fixed point, labeled "preshift attractor") to operating in a very different manner (orbiting around two points, labeled "postshift attractor"). The system shifts from one attractor to another, thereby exhibiting new structure.

To illustrate the relationship between entropy and changes in structure, we quantified the entropy of the system's behavior using a method called recurrence quantification analysis (RQA) (Webber & Zbilut, 1994, 2005). RQA provides a number of measures of dynamic organization, including
Figure 6 The figure shows the activity of the Lorenz model as a control parameter is increased. The model initially enters into a tight orbit around a single point, labeled the preshift attractor. It then leaves that attractor and eventually settles into a new regime, labeled the postshift attractor. The dimensions in the figure are the three variables that define the Lorenz model; each indexes a property of the convecting fluid.
entropy. It also measures the degree to which the system behavior is deterministic versus stochastic. (The Lorenz model is completely deterministic; it contains no noise term.) We explain RQA in more detail in a later section, but note here that the measure of entropy quantifies the amount of disorder or variability in the trajectory of the system.

Figure 7 shows how entropy changes across the major shift between attractors. We divided the time series into overlapping frames or epochs, each consisting of 800 time steps and lagged 300 time steps (a standard approach in RQA; see Webber & Zbilut, 2005). Entropy is relatively stable during the preshift attractor phase, increases dramatically as the system leaves that attractor, and then decreases as it settles into the new attractor, eventually reaching a lower steady state. Each of these four periods is shown graphically by the dark lines on the Lorenz model. This pattern of change in entropy is one hallmark of the advent of new structure in nonlinear dynamic systems. We also plot RQA's estimate of determinism (% determinism) to emphasize that these changes in entropy are not changes in the amount of noise in the model; again, the model is completely deterministic.

Given that we have introduced a variety of new concepts here, we pause briefly to clarify their relationships. Phase space is simply a spatial representation of all the possible states the system can take. In the Lorenz example, there are only three variables that determine behavior, so phase space has three dimensions. Attractors are regions of the phase space to which the
Figure 7 The two curves show changes in entropy and % determinism, as measured by RQA, across the activity of the Lorenz model. The trajectory is recreated along the bottom of the figure, once for each of four relatively distinct periods defined by the vertical gray lines. The darkened region of each trajectory shows the activity of the system during that period.
system tends to return; thus, they are indicative of structure. A system without structure would wander through its phase space at random. As the system’s structure begins to lose its integrity, the system’s trajectory through its phase space becomes more entropic or variable. Put differently, the evidence for structure is the temporal cohesiveness of the system’s behavior, that is, the degree of organization of its behavior through phase space. Measuring the entropy of that behavior (i.e., the disorder of the trajectory through phase space) quantifies the entropy of the structure. In the Lorenz example, the new structure is clearly in evidence because we can plot the system’s movement in the three relevant dimensions that specify its phase space. However, the dynamics of systems with which we are concerned, such as living organisms, presumably occur in many more dimensions, most of which are not observable. Therefore, examining dynamic organization in complex systems has been a fundamental challenge for researchers interested in dynamical approaches to the biological and cognitive sciences. Recent work in nonlinear dynamics, however, has created some powerful solutions to this problem (see Riley & Van Orden, 2005 for an introduction to some of these methods), including RQA. In Section 5, we present studies to test the hypothesis that new cognitive structures in the gear-system task emerge through self-organization.
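The overlapping-epoch windowing used for the entropy curves in Figure 7 (epochs of 800 time steps, offset by 300) can be sketched as follows; the function name is ours.

```python
# Sketch: divide a time series into overlapping epochs, as in the
# windowed RQA of the Lorenz trajectory (length 800, offset 300).

def epochs(series, length=800, offset=300):
    """Return successive overlapping windows of `series`."""
    out = []
    start = 0
    while start + length <= len(series):
        out.append(series[start:start + length])
        start += offset
    return out
```

RQA (or any other measure) is then computed within each window, yielding the entropy-over-time curves shown in Figure 7.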
5. Gear-System Reprise: The Self-Organization of Alternation

As a first step, we again asked college-age participants to solve a series of gear-system problems presented on the computer (Stephen, Dixon, & Isenhower, 2009). In the first study, 38 participants solved 36 gear-system problems in a single session. For each trial, we coded the strategy (e.g., force tracing, alternation) they used. The displays were static, as in the previous work. In addition to coding strategies based on their overt verbalizations and motions, we tracked their real-time motion via an Optotrak system. Specifically, a set of infrared sensors was attached to the tip of the index finger on each participant's dominant hand. Two arrays of cameras sampled the position of the sensor in three-dimensional space at 100 Hz. Thus, for each individual trial, we obtained a time series from which we extracted measures of dynamic organization, including entropy, using RQA.
5.1. Recurrence Quantification Analysis

We performed RQA on the angular velocity time series from the motion data for each trial. RQA has its roots in a method for visualizing the organization of a dynamic system, called a recurrence plot. To provide an example of a recurrence plot, we use one of the three variables from a portion of the "preshift" attractor in the Lorenz model (i.e., as it tightly orbits the fixed point). This variable, X, takes a wide range of values across the 800 time steps we extracted.

To make an illustrative recurrence plot, we create a two-dimensional matrix by crossing the time series with itself, resulting in an 800 × 800 matrix (see Figure 8). At each time step along the horizontal axis, the variable X takes some value. For example, at time step 100, X equals 9.5. Considering each time step on the horizontal axis as a column, we simply look at the rows (the vertical axis) and mark all the cells at which the value of X agrees with the current value (from the horizontal axis). This creates a pattern that is symmetrical around the diagonal (from the lower left corner to the upper right corner). By convention, we do not plot agreements along the diagonal because they reflect the (uninteresting) fact that when the horizontal and vertical axes point to the same time step, the value of X agrees with itself.

Diagonal lines that occur off the center diagonal are of considerably greater interest. These diagonal lines are "runs" of agreement, reflecting times during which the variable revisited previous values. Note that there are very few such lines in Figure 8. RQA extracts information about the organizational properties of the underlying attractor from the distribution of
Figure 8 An example of a recurrence plot. A single time series, X, from the Lorenz model is used. Consider the horizontal axis as a set of columns. Each column is defined by the value of X at that moment in time. The recurrence plot simply tells us when the same value was obtained again at another time (i.e., on the y-axis or rows). For example, at time point 100, the value of X was 9.5. Every time X equals 9.5 again, it is plotted (in bright blue). The approach can be made more powerful by relaxing the definition of "equals" such that coming near the value is considered a recurrence.
diagonal lines. However, RQA operates in phase space, rather than the univariate space in the example above.
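A minimal version of the univariate recurrence plot just described can be written directly; the radius parameter implements the relaxed definition of "equals" mentioned in the Figure 8 caption. This is an illustration, not the implementation used in the studies.

```python
# Sketch: a univariate recurrence matrix. Points i and j are marked
# recurrent when |x_i - x_j| <= radius; radius = 0 recovers exact
# agreement. (Full RQA applies the same idea in reconstructed
# phase space rather than to a single variable.)

def recurrence_matrix(series, radius=0.0):
    n = len(series)
    return [[abs(series[i] - series[j]) <= radius for j in range(n)]
            for i in range(n)]
```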
5.2. Phase-Space Reconstruction

Phase space can be defined as the set of all possible states of the system. In some cases, phase space involves dimensions that are literally spatial, but more generally the dimensions represent the degrees of freedom within the system under study. An important theorem in dynamics (Takens, 1981) shows that for nonlinear systems phase space can be reconstructed by lagging a single univariate time series against itself. The lagging procedure first requires creating a new time series that we will treat as if it were an independent set of measurements of a new variable or dimension. We then remove the first n observations in this new time series (hence lagging) and shift it, such that the value at n + 1 is treated as the first observation of the
new dimension. Each dimension in the system requires another lagging of the time series. The organizational (i.e., topological) properties of the attractor can be revealed by repeatedly lagging a single time series against itself. While the mathematical proof of this theorem is quite involved, it is not difficult to grasp why this works. The nonlinear nature of the system means that a change in any variable will eventually be reflected in each of the other variables. The lagging procedure essentially unfolds the information that is embedded in a single time series. To give an example of this procedure, Figure 9 shows the actual phase space of the preshift Lorenz attractor (panel A) and the reconstructed phase space (panel B), created by lagging X against itself twice. As can be seen in the figure, the topological characteristics (akin to ordinal relations) of the attractor are recreated in the reconstructed phase plot. In a sense, the lagged variables stand in as surrogates for the unmeasured variables, Y and Z. RQA creates a recurrence matrix or plot, as described above, of the system’s movement in this phase space. The horizontal and vertical axes still represent the time series, but what recurs is the position of the point in phase space, as defined by its coordinates (three dimensions in the Lorenz example), rather than the value of a single variable. This approach can be made more powerful by relaxing the definition of ‘‘recurrence,’’ so that nearby points (defined within some set distance) are also counted as recurrent. Figure 10 shows the recurrence plot for the preshift portion of the Lorenz model. The diagonal lines show that the system revisits regions in the phase space, and does so increasingly over time. RQA is a systematic method for extracting information about the organization of the system from such recurrence plots. One measure in RQA, % determinism, assesses the degree to which the underlying system is deterministic versus stochastic. 
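The lagging (delay-embedding) procedure described above can be sketched in a few lines; `dim` and `lag` stand for the embedding dimension and the lag, which in practice are chosen from preliminary analyses of the time series.

```python
# Sketch: Takens delay embedding. Each point in the reconstructed
# phase space is (x(t), x(t + lag), ..., x(t + (dim - 1) * lag)).

def delay_embed(series, dim=3, lag=1):
    n = len(series) - (dim - 1) * lag
    return [tuple(series[t + k * lag] for k in range(dim))
            for t in range(n)]
```

For example, embedding the series 0, 1, 2, 3, 4, 5 with dim = 3 and lag = 2 yields the two phase-space points (0, 2, 4) and (1, 3, 5).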
Determinism is computed as (number of points in diagonal lines)/(total number of recurrent points).

Figure 9 (A) A detailed view of the actual phase-space trajectory of the Lorenz model (from a portion of the preshift attractor). (B) The same portion of the time series, but here the phase space has been reconstructed.

Figure 10 A recurrence plot for the preshift portion of the Lorenz model. The diagonal lines indicate runs of time during which the system is in an attractor.

Another important measure is mean line, the average length of the diagonal lines. Mean line provides information about the stability of the system (stable systems are characterized by long lines). Finally, entropy captures the degree of disorder within the system; it is computed by applying Shannon's (1948) equation to the distribution of diagonal line lengths. Thus, entropy roughly measures the variability in the amount of time spent in an attractor. A full account of RQA is beyond our current scope; it produces a variety of other measures that are largely tangential to the current discussion. In addition, RQA requires setting a small number of initial parameters based on preliminary analyses of the time series. A sophisticated literature has established guidelines for setting these parameters (e.g., Abarbanel, 1996; Marwan, Romano, Thiel, & Kurths, 2007; Riley, Balasubramaniam, & Turvey, 1999; Webber & Zbilut, 2005). We followed these guidelines throughout.

We hypothesized that participants' representational shift from force tracing to alternation would be predicted by changes in dynamic organization. To test this hypothesis, we performed RQA on each participant's motion data separately for each trial. Specifically, we analyzed the angular velocity time series created by their tracing the force across the gears on each trial. We predicted that the discovery of alternation would be preceded by a rise and then a fall in entropy.
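The three measures just described can be sketched from a boolean recurrence matrix. This follows common RQA conventions (main diagonal excluded, minimum line length 2) and is an illustration only, not the exact implementation used in the studies.

```python
import math

# Sketch: % determinism, mean line, and entropy from a recurrence
# matrix, following common RQA conventions (minimum diagonal line
# length 2; main diagonal excluded).

def diagonal_line_lengths(rm, min_len=2):
    """Lengths of diagonal runs of recurrent points (upper triangle)."""
    n = len(rm)
    lengths = []
    for d in range(1, n):              # each off-diagonal
        run = 0
        for i in range(n - d):
            if rm[i][i + d]:
                run += 1
            else:
                if run >= min_len:
                    lengths.append(run)
                run = 0
        if run >= min_len:
            lengths.append(run)
    return lengths

def rqa_measures(rm, min_len=2):
    """Return (determinism, mean line, entropy)."""
    lengths = diagonal_line_lengths(rm, min_len)
    n = len(rm)
    recurrent = sum(rm[i][j] for i in range(n) for j in range(i + 1, n))
    det = sum(lengths) / recurrent if recurrent else 0.0
    mean_line = sum(lengths) / len(lengths) if lengths else 0.0
    # Shannon entropy of the distribution of diagonal line lengths
    counts = {}
    for length in lengths:
        counts[length] = counts.get(length, 0) + 1
    total = len(lengths)
    ent = -sum((c / total) * math.log2(c / total)
               for c in counts.values()) if total else 0.0
    return det, mean_line, ent
```

A system that spends long, regular stretches in an attractor yields a few long lines (high determinism, low entropy); a system whose dwell times vary widely yields many line lengths (high entropy).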
5.3. Changes in Dynamic Organization Predict the Discovery of Alternation

We modeled the probability of discovering alternation using event history analysis. In brief, event history (or survival) analysis allows us to model the probability of a discrete event (e.g., the first use of alternation) occurring
over time as a function of a set of predictors (Singer & Willett, 2003). Event history analysis naturally integrates both time-invariant and time-varying predictors; time-varying predictors can change value across time (e.g., entropy, in the current example). Event history analysis also handles participants who do not experience the event (often called right-censored cases); these participants contribute to the analysis appropriately.

We used measures of dynamic organization, entropy and mean line length, from previous trials to predict participants' discovery of alternation on the current trial. More specifically, we used mean line length from the previous trial, t − 1, to predict discovery of alternation on the current trial, t. Mean line length indexes the amount of time spent contiguously in an attractor. To capture the potential rise-and-fall pattern in entropy, we created two predictors. The first predictor, which we call peak entropy, is the maximum value of entropy on trials t − 2 through t − 5. The second predictor, which we call prior entropy, is simply the value of entropy on the prior trial, t − 1.

As predicted, prior mean line, peak entropy, and prior entropy all had significant effects. Prior mean line had a positive effect, indicating that increases in the stability of the system precede discovery. More centrally, increases in peak entropy and decreases in prior entropy predict the discovery of alternation. To give a sense of the entropy results graphically, we aligned participants on their discovery trial and plotted the values of entropy on previous trials. As can be seen in Figure 11, entropy peaks and then drops prior to discovery. This study demonstrates that the discovery of a new cognitive structure can be predicted from measures of dynamic organization.
Entropy
1.3
1.2
1.1
1.0 −6
−5
−4 −3 Prediscovery trials
−2
−1
Figure 11 Mean entropy for trials leading up to the discovery of alternation. Trials on which discovery occurred would be aligned on the far right. ‘‘−1’’ indicates the trial immediately prior to discovery, ‘‘−2’’ indicates two trials prior, and so on.
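The event history model described above can be approximated with a discrete-time (person-period) logistic regression, in which each trial a participant remains at risk contributes one row. The sketch below is illustrative only: the data, the single time-varying predictor, and the plain gradient-descent fitter are all stand-ins for the authors' actual analysis (cf. Singer & Willett, 2003).

```python
import numpy as np

def person_period(records):
    """Expand one row per participant into one row per trial at risk.

    records: dicts with 'discovery_trial' (None if right-censored),
    'n_trials', and a per-trial list 'prior_entropy' (time-varying predictor).
    """
    rows, events = [], []
    for r in records:
        d = r["discovery_trial"]
        last = d if d is not None else r["n_trials"]
        for t in range(1, last + 1):
            # intercept, trial number, time-varying predictor on that trial
            rows.append([1.0, float(t), r["prior_entropy"][t - 1]])
            events.append(1.0 if t == d else 0.0)
    return np.array(rows), np.array(events)

def fit_logistic(X, y, lr=0.1, steps=5000):
    """Plain gradient-descent logistic regression (discrete-time hazard model)."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w += lr * X.T @ (y - p) / len(y)
    return w
```

A positive fitted weight on the entropy column would correspond to higher prior entropy raising the per-trial probability (hazard) of discovery; right-censored participants simply contribute rows with no event.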
The Self-Organization of Cognitive Structure
365
Of particular interest is the finding that a rise and fall in entropy predicts the new representation because this is a central feature of self-organization in nonlinear dynamics (see Dale, Roche, Snyder, & McCall, 2008 for a similar set of findings). A few things are worth noting here. First, the predictors are all based on performance on previous trials; the model is predicting future behavior, not current behavior. Thus, changes in dynamic organization anticipate discovery of alternation. Second, entropy does not reduce to simple performance metrics, such as response time, variability in motions, etc. The correlations between these measures and entropy are very small, explaining a trivial proportion of the variance. Finally, the measure of entropy we are using might properly be called ‘‘system entropy,’’ because it reflects the degree of disorder in the trajectory of the system through its phase space. Recall that open systems are constantly off-loading entropy into the environment. The rate at which the current structures in the system can dissipate entropy is limited. When the rate of entropy entering the system exceeds the ability of its current structures to off-load entropy, the amount of entropy in the system increases. As system entropy increases, the microelements that constitute the current structures increase their interactions, because entropy is the breaking apart of the current structure. On reaching a critical level of system entropy, these interactions potentially create new structures, structures that may better dissipate entropy.
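The entropy and mean line length measures come from recurrence quantification analysis (RQA) of the motion time series. As a rough illustration of the idea (not the authors' actual pipeline: the recurrence radius and minimum line length below are placeholder values, and real analyses operate on delay-embedded trajectories rather than a raw scalar series), the two measures might be sketched as:

```python
import numpy as np
from collections import Counter

def recurrence_matrix(x, radius=0.1):
    """Points i and j recur if they fall within `radius` of each other."""
    d = np.abs(x[:, None] - x[None, :])
    return d <= radius

def diagonal_line_lengths(R, min_length=2):
    """Lengths of diagonal line segments off the main diagonal."""
    n = len(R)
    lengths = []
    for k in range(1, n):                    # each superdiagonal; R is symmetric
        diag = np.diagonal(R, offset=k)
        run = 0
        for v in list(diag) + [False]:       # sentinel flushes the final run
            if v:
                run += 1
            else:
                if run >= min_length:
                    lengths.append(run)
                run = 0
    return lengths

def rqa_entropy_and_mean_line(lengths):
    """Shannon entropy of the line-length distribution, plus mean line length."""
    counts = Counter(lengths)
    total = sum(counts.values())
    p = np.array([c / total for c in counts.values()])
    return float(-np.sum(p * np.log(p))), float(np.mean(lengths))
```

Longer diagonal lines indicate more time spent contiguously in an attractor, and a broader distribution of line lengths yields higher entropy, matching the interpretation of the two predictors above.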
5.4. Manipulating Input Entropy

Given the importance of entropy in this account, we wanted to manipulate the amount of entropy entering the system. Injecting more entropy into the system should send it through the transition more quickly. A considerable body of literature in nonlinear dynamics has shown that injecting disorder into a system can produce new structure, because the system is pushed through its critical mode more quickly (Shinbrot & Muzzio, 2001; Wio, 2003). This phenomenon falls under a larger category of effects called stochastic resonance (see Gammaitoni, Hanggi, Jung, & Marchesoni, 2009 for a brief introduction). In this study, we introduced random motion into the display, thus injecting additional disorder into the system. Specifically, the gear system was presented in a 500 × 480 pixel window on a 23-in. monitor. In the experimental conditions, the entire window shifted, 30 pixels in the low-shift condition or 60 pixels in the high-shift condition, every 1–2 s in a randomly selected direction. The unpredictable movement of the window was intended to increase the fluctuations in the participants' force-tracing motions. In a control condition, the window was stationary (see Stephen, Dixon, et al., 2009 for details).
Previous work in nonlinear dynamics allows us to make a counterintuitive prediction here. Increasing entropy should drive the system to its new organization more quickly. By increasing entropy, the system is more rapidly forced from one attractor to another. Therefore, participants assigned to the high-shift condition should discover alternation more quickly than participants in the low-shift condition, and those in the control condition should discover most slowly. We should also replicate the results of the previous study: increases and decreases in entropy, indicative of the onset of the new structure, should predict discovery of alternation.

Figure 12 shows the effects of the window-shifting condition on the probability of discovering alternation over trials. The y-axis gives the values of the cumulative hazard function for each of the three conditions. Roughly, the cumulative hazard indexes the amount of ‘‘risk’’ that participants have been exposed to up to that point in time; here the event they are at risk for is the discovery of alternation. More specifically, the cumulative hazard contains information about the proportion of participants discovering (i.e., the hazard) on each trial, as well as how those proportions accumulate over trials. The change in the function gives the proportion of participants in the risk set (i.e., participants who have not previously used alternation) who discovered on that trial. For example, in the high-window-shift condition, from trials 14 to 15, the curve increases by approximately 0.4, indicating that about 40% of the participants who had not yet used alternation discovered it on trial 15. The function also nicely illustrates how the probability of discovery accumulates at different rates in these three conditions.

Figure 12 The curves show the cumulative hazard for the discovery of alternation for the high-, low-, and no window-shifting conditions as a function of trials. The hazard is calculated as the number of participants who discovered alternation on a particular trial divided by the number of participants who were ‘‘at risk’’ for discovery on that trial. Participants who had not previously used alternation are ‘‘at risk’’ for discovery. The hazard is an estimate of the probability of an event occurring on each trial. The cumulative hazard shows how much risk participants are subject to over trials in each condition. Because the hazard is computed per trial, the cumulative function can be greater than 1.

As predicted, the probability of discovering alternation is greatest in the high-window-shifting condition, intermediate in the low-window-shifting condition, and lowest in the no window-shifting (i.e., control) condition. An event history analysis showed that all three conditions were significantly different from each other. Further, we replicated the effects of prior mean line, peak entropy, and prior entropy. As in the previous study, a sharp rise and fall in system entropy predicted the transition to alternation. These results are consistent with the idea that new cognitive structures emerge through self-organization. Self-organization requires that a particular series of events occur. First, the system begins to take on more entropy than it can off-load. This results in the system's structures becoming more disordered. As the constraints holding a structure together are broken, the microelements that made up the structure are able to interact. These interactions will be the source of the new structure. To persist, the new structure must more efficiently off-load entropy from the system. Thus, the new structure decreases system entropy as it coalesces. The results summarized thus far primarily focused on the entropy side of the self-organization account.
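The hazard and cumulative hazard defined in the caption of Figure 12 are straightforward to compute from each participant's discovery trial. A minimal sketch (hypothetical data; this only illustrates the descriptive quantity plotted, not the authors' inferential event history model):

```python
import numpy as np

def cumulative_hazard(discovery_trials, n_trials):
    """discovery_trials: trial of first alternation per participant (None = censored).

    Hazard on trial t = discoverers on trial t / participants still at risk on t.
    """
    hazard = []
    for t in range(1, n_trials + 1):
        at_risk = sum(1 for d in discovery_trials if d is None or d >= t)
        events = sum(1 for d in discovery_trials if d == t)
        hazard.append(events / at_risk if at_risk else 0.0)
    return np.cumsum(hazard)
```

Because the per-trial hazards are summed, the cumulative function can exceed 1, as the figure caption notes.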
We showed that higher values of peak entropy, the maximum value of entropy across a long lag (2–5 trials prior to the current trial), predicted the onset of alternation. We also showed that lower values of prior entropy, entropy on the immediately prior trial, predicted the onset of alternation. Finally, we demonstrated that manipulating the amount of entropy entering the system affected the probability of discovery. The greater the degree of random perturbations, the faster the transition to new structure.
5.5. Interactions Across Scales: Power-Law Behavior

Another side of the self-organization account involves the interactions among microelements. Above, we mentioned that it is the interactions among the microelements that create the new structure. Therefore, it seems important to have a more direct assessment of these interactions. A complication here is that these microelements exist at multiple scales. This may initially seem like a radical notion, but consider that the physical architecture of biological systems is multiply nested (West, Brown, & Enquist, 1999). For example, organelles exist within cells, cells cohere to form cell systems, cell systems form organs, etc. While we typically focus on one or two such scales, the other scales are also active and contributing to behavior. Research in nonlinear dynamics has developed methods for estimating the activity across multiple, nested scales. These methods have proven to be very useful in understanding the behavior of complex systems, including transitions to new structure. Figure 13 provides a schematic of a nested structure. Each circle here represents the structure at a particular scale. Large circles are made up of smaller ones, which in turn are made up of yet smaller ones. The panels show the evolution of the system as constraints become loosened and smaller-scale elements begin to interact. The arrows between adjacent circles represent the activity of each structure; activity at each scale contributes to the behavior of the whole system. Assume, initially, that the degree of activity is always proportional to the size of the circles (i.e., larger-scale structures contribute proportionally more to system behavior than smaller scales). This nested architecture, coupled with the homogeneity of activity across scales (i.e., the proportionality assumption above), creates a power-law relationship (Stanley et al., 2000). As one goes from coarser scales down to finer scales, the activity contributed by each scale decreases. The rate of decrease follows a power law. The power law emerges because the ratio between activity at one scale and activity at the next smaller scale is constant.
Figure 13 An abstract example of nested structure. (A) Three large structures are shown. (B)–(D) The constraints among these structures begin to break apart, allowing their constituent elements to become active. (E, F) This process continues further such that these elements also become less constrained, freeing their even smaller constituent parts.
The power-law relationship is shown in the top panel of Figure 14. For reasons that will be explained below, activity is quantified by squared amplitude. A convenient way to express the power-law relationship is to logarithmically transform both axes, creating a double-log plot. The bottom panel of Figure 14 shows the power law from the top panel in double-log form. The slope of the function in the double-log plot gives the exponent of the original power law in the top panel. The power-law or Hurst exponent, H, quantifies the nested activity of the system. For example, if the activity of the system changed across scales, the exponent would quantify that change. Power-law behavior has proven to be an important index of self-organization in many domains. The power-law exponent increases as a system approaches a phase transition (i.e., a new organization). In fact, one of the major ways of classifying dynamic systems is by their critical exponent, the value of H at which the phase transition occurs (Hilborn, 1994).

Figure 14 The top panel shows an example of a power-law relationship in original scales. Squared amplitude is plotted as a function of frequency. The lower panel shows the same relationship plotted in log–log scales.

To understand why the power-law exponent is an important index of change in multiscale systems, consider that an existing structure comprises multiscale elements adhering (temporarily) in some configuration. As the bonds that hold these elements together begin to loosen, the interactions among the elements increase. The power-law exponent quantifies these interactions and, therefore, provides an index of the degree to which the system is poised for a phase transition. As the system settles into a new configuration (i.e., new bonds form and constrain the elements), the power-law exponent decreases. A standard approach to estimating the power-law exponent is to conduct spectral analysis on the time series (Aks, 2005; Holden, 2005). Spectral analysis decomposes the time series into sine waves, ranging from low frequency to high frequency. The amplitude at each frequency can be considered as measuring the activity at a different scale. The top panel of Figure 14 shows a power-law relationship between frequency and squared amplitude. The bottom panel shows the same relationship plotted on axes of log squared amplitude against log frequency (i.e., a double-log plot). The slope of a linear regression on this log–log plot is an estimate of the power-law exponent present in the spectrum. (A potential source of confusion is that this analysis is often conducted on the squared amplitude, which is unfortunately called ‘‘power.’’ The squared-amplitude-by-frequency plot is called the ‘‘power spectrum.’’) The log–log plot of the squared-amplitude spectrum usually has a negative slope; conventionally, the power-law exponent is reported without the negative sign. Power-law exponents are an important indicator of the potential for a phase transition, sometimes called ‘‘criticality.’’ As a system approaches a phase transition, its power-law exponent will approach a critical value.
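The spectral estimate just described can be sketched in a few lines: synthesize noise with a known 1/f-type spectrum, then recover the exponent as the negated slope of the regression of log power on log frequency. The parameter choices here are illustrative, not those used in the studies.

```python
import numpy as np

def synth_powerlaw_noise(n, alpha, seed=0):
    """Synthesize noise whose power spectrum falls off as 1/f**alpha."""
    rng = np.random.default_rng(seed)
    freqs = np.fft.rfftfreq(n, d=1.0)
    amp = np.zeros_like(freqs)
    amp[1:] = freqs[1:] ** (-alpha / 2.0)    # power ~ amp**2 ~ f**-alpha
    phases = rng.uniform(0.0, 2 * np.pi, len(freqs))
    spectrum = amp * np.exp(1j * phases)
    spectrum[-1] = amp[-1]                   # Nyquist bin must be real for a real signal
    return np.fft.irfft(spectrum, n)

def spectral_exponent(x):
    """Slope of log power vs. log frequency, returned without the negative sign."""
    n = len(x)
    freqs = np.fft.rfftfreq(n, d=1.0)[1:]
    power = np.abs(np.fft.rfft(x))[1:] ** 2  # squared amplitude ("power")
    slope, _ = np.polyfit(np.log(freqs), np.log(power), 1)
    return -slope
```

On real, noisy motion data the recovered exponent is an estimate rather than an exact value; here the synthetic spectrum makes the log–log relationship exactly linear.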
The critical value of this exponent is the threshold beyond which the system undergoes a phase transition. We used spectral analysis to estimate the power-law exponent for each trial for each participant in the two motion-tracking studies described above. We predicted that the power-law exponent would increase as participants used force tracing while they remained at risk for discovering alternation (Stephen & Dixon, 2009). At the same time, we predicted that the power-law exponent would decrease just prior to discovery. Figure 15 shows the results graphically. The top panel shows the mean power-law exponents for all trials on which participants were at risk for discovery (i.e., the trial on which a participant discovered alternation, and all later trials, do not contribute to the mean), aligned by trial number. The dark curve shows the power-law exponents for participants who discovered during the experiment; the lighter curve shows the exponents for participants who did not discover. As can be seen in the figure, the mean power-law exponent increases for participants at risk for discovery, and does so more dramatically for participants who actually discover.
Figure 15 The top panel shows the mean power-law exponents as a function of trials with separate curves for participants who discovered alternation versus those who did not discover. The lower panel shows the mean power-law exponents (for discoverers only) aligned relative to their ‘‘discovery trial,’’ the trial on which they first used alternation.
The bottom panel shows the mean power-law exponents for participants who discovered alternation, now aligned relative to the trial on which they discovered. The trial on which they discovered would be at the very far right of the horizontal axis; the trial immediately prior to discovery is labeled −1, two trials before discovery is −2, etc. As can be seen in the figure, the power-law exponent decreases prior to discovery, as predicted. Note that the dark curve in the top panel and the curve in the bottom panel show the same data. Their very different forms result from averaging relative to different landmarks, the beginning of the session and the discovery of alternation, respectively. In summary, we showed that the power-law exponent increased as participants repeatedly used force tracing prior to discovery of alternation. This increase was greater for participants who actually discovered at some time during the session. Further, the power-law exponent decreased just prior to discovery. This is an important result, because the power-law exponent is a measure of the physical interactions in nested, multiscale systems; biological entities have this type of multiscale architecture. Research in self-organization has shown that these interactions are the source of new structure. Quite literally, the physical activity across the many scales of the system changes the relationships among the microelements. When these new relationships result in a structure that better dissipates entropy, the structure persists. From the macroscopic perspective of an observer watching the participant, the shift to these new relationships appears as a new structure. The trial-to-trial changes in multiscale activity that are the source of these relationships have evolved more continuously. Because most of this activity is at smaller behavioral scales, it is not easily detected by an observer.
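The two averaging schemes behind Figure 15 (aligning at the start of the session versus at the discovery trial) amount to a re-indexing step before the mean is taken. A sketch with fabricated values:

```python
import numpy as np

def mean_aligned_at_discovery(series_list, discovery_trials, n_back):
    """Average each participant's series over the n_back trials before discovery.

    series_list: per-participant lists of per-trial values (e.g., power-law exponents)
    discovery_trials: 1-based trial of first alternation for each participant
    Returns means for positions -n_back ... -1 relative to discovery; participants
    with shorter histories are padded with NaN and ignored by the NaN-aware mean.
    """
    aligned = []
    for series, d in zip(series_list, discovery_trials):
        window = series[max(0, d - 1 - n_back):d - 1]   # trials just before discovery
        aligned.append([np.nan] * (n_back - len(window)) + list(window))
    return np.nanmean(np.array(aligned), axis=0)
```

Averaging the same per-participant series by trial number instead (a plain column mean) produces the other curve in the figure; the different landmarks, not different data, account for the different shapes.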
6. Dynamics of Induction in Card Sorting

While there is much to be said for thoroughly researching a phenomenon (e.g., discovery in the gear-system task), we also wish to show that the approach we are developing has generality. To that end, we have recently begun investigating the dynamics of a classic rule-induction task, card sorting. Card-sorting tasks have been an important tool in research on cognitive control and executive function (Grant & Berg, 1948; Somsen, 2007; Zelazo, 2006; Zelazo, Muller, Frye, & Marcovitch, 2003). Our interest stemmed, in part, from the fact that in many of these tasks participants are asked to induce the rule based on the feedback they receive during sorting. We expected that inducing a rule would show the same signatures of self-organization that we found in the gear-system paradigm. Below, after presenting the methods, we report the results for the predicted changes in the power-law exponents. We asked participants to sort cards in one of two conditions. In the rule-induction condition, participants were asked to sort a set of cards into piles, based on an unknown rule. In the rule-given condition, participants
were told the rule and asked to sort the cards into piles according to that rule (e.g., ‘‘Put the same types of animals together’’). The set of cards varied on three dimensions: the type of animal on the card, the color of the animal, and the item it was wearing (e.g., hat, tie, etc.). There were 64 cards in all. Examples of the cards are shown in Figure 16. In both conditions, each trial continued until 10 cards had been placed correctly. A new trial was then started, with a new rule. The fact that the rule was going to change was known to participants in both conditions (those in the rule-given condition received a new rule; those in the rule-induction condition were told that a new rule was in place). Feedback was provided for each card placement. Twenty-six college students completed
Figure 16 Examples of the cards used in the card-sorting task.
the experiment, 13 in each condition. Participants wore a motion-tracking device on the hand with which they sorted the cards. We continuously sampled their motion (at 60 Hz) during each trial. Because we were interested in the changing dynamic organization during each trial, we divided the time series from each trial into epochs. (Recall that participants sorted according to a single rule within each trial, and, thus, inducing a rule happens within a single trial.) Each epoch was 800 samples long, and epochs overlapped by 300 samples. This is a standard ‘‘moving window’’ approach for nonstationary time series. (Of course, the nonstationarity is of principal interest here.) In the current context, the method provides an estimate of the power-law exponent for each 800-sample window (≈13 s). We predicted that for participants in the rule-induction condition, the power-law exponent would first increase, indicating that the interactions across scales were increasing, but then decrease, as the new structure emerged. For the participants in the rule-given condition, we did not expect to see this dramatic rise and fall in the power-law exponent. Figure 17 shows the mean power-law exponent aligned at the end of trials (in both conditions a trial ended when 10 cards had been correctly placed in succession).
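The moving-window segmentation just described, with window length 800 and overlap 300 (i.e., a step of 500 samples), can be sketched as:

```python
import numpy as np

def moving_windows(x, length=800, overlap=300):
    """Split a time series into overlapping epochs for windowed estimates."""
    step = length - overlap
    return [x[i:i + length] for i in range(0, len(x) - length + 1, step)]
```

At a 60 Hz sampling rate each 800-sample epoch spans roughly 13 s; an estimator such as the spectral power-law exponent is then applied to each window in turn, yielding a within-trial trajectory of exponents.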
Figure 17 The power-law exponents for participants in the rule-induction condition (red curve) and the rule-given condition (black curve) as a function of prediscovery epochs. That is, the epochs are aligned relative to the end of a trial; each trial ended when 10 cards had been sorted correctly.
The Self-Organization of Cognitive Structure
375
As can be seen in the figure, the mean power-law exponents for the rule-induction condition increased and then decreased over the course of the trial. The rule-given condition shows a much more modest pattern of changes, including a slight increase and decrease. There are fewer epochs in the rule-given condition, because participants in this condition reached the 10-card criterion more quickly. A growth-curve model confirmed the differences apparent in the figure. Most importantly, the quadratic effect was greater for the rule-induction condition. Interestingly, the rule-given condition had a steep negative slope. One interpretation of this change in the power-law exponents is that over the course of sorting, the structure that supports the rule becomes increasingly organized. In summary, this simple experiment shows that inducing a rule during card sorting exhibits one of the signatures of self-organization, an increase and decrease in the power-law exponent. Participants who were explicitly given the rule show a different pattern, in which the power-law exponents tend to decrease as a function of sorting. Importantly, the only difference between these two conditions was whether the rule was stated explicitly; all other aspects of the task were identical. These different initial starting conditions create very different patterns of dynamic organization. The induction condition shows the sharp initial increase in activity across scales, followed by a decrease as the new structure coalesces. In the rule-given condition, the power-law exponent decreases steadily, after a modest initial rise, indicating that the system is settling into a particular organization. Despite the fact that the criterion for ending a trial was identical in both conditions (i.e., 10 correctly sorted cards), the mean power-law exponent at the end of the trials is much lower for participants who were given the rule.
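The growth-curve comparison reported above amounts to asking whether the quadratic component of each condition's exponent trajectory differs. A toy sketch with fabricated trajectories (a rise-and-fall curve versus a steady decline; the actual analysis modeled individual participants, not condition means):

```python
import numpy as np

def quadratic_coefficient(trajectory):
    """Coefficient of the squared term from a quadratic fit over epochs."""
    epochs = np.arange(len(trajectory))
    return np.polyfit(epochs, trajectory, 2)[0]

# A rise-and-fall trajectory has a clearly negative quadratic coefficient;
# a steadily declining one is essentially linear (coefficient near zero).
induction = [1.00, 1.08, 1.14, 1.15, 1.10, 1.02]
rule_given = [1.12, 1.09, 1.06, 1.03, 1.00, 0.97]
```

A more negative quadratic coefficient for the induction trajectory corresponds to the stronger rise-and-fall pattern described in the text.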
Thus, although macroscopic performance in these two conditions is equivalent by the end of each trial, the underlying dynamics are quite different. Finally, we note that the pattern of changes in the power-law exponents is consonant with the pattern that precedes the discovery of alternation in the gear-system task. The current study extends this approach to a different paradigm, and also shows that we can experimentally manipulate the dynamics that evolve over the course of the task by changing the initial conditions.
7. General Discussion

We reviewed a number of lines of evidence showing that actions were linked to the discovery of a new cognitive structure during problem solving. One line of evidence showed that the concentrated use of force-tracing actions predicted the discovery (i.e., first use) of alternation, the new cognitive structure. In a second line of evidence, we experimentally
manipulated whether participants solved a set of base problems using force-tracing actions. These experiments showed that participants assigned to the force-tracing conditions for the base problems discovered alternation more quickly on a set of target problems, and that their use of alternation showed greater stability. A third line of evidence showed that the number of force-tracing actions a participant made prior to discovering alternation predicted the generalization of alternation to problems with different features. For example, a participant who initially used alternation on a ‘‘small’’ gear system would be considered to generalize if they used it on a ‘‘large’’ system. Finally, we also showed that preschool children (aged 3.5–5 years) who had been familiarized with two properties of gear systems (i.e., gears turn and teeth push) often used force-tracing actions. Consistent with previous work with adults and older children, we found that the number of force-tracing actions predicted the rate at which the use of alternation increased over subsequent trials. Preschool children were of particular interest because they have limited understanding of alternating systems, and are extremely poor at transferring relations they do understand. Thus, it seems quite likely that preschoolers' discovery of alternation reflects the creation of a new relation, rather than transfer.
7.1. The Problem of New Structure for Information-Exchange Approaches

Although we were excited to establish a link between action and the emergence of new cognitive structure, developing a theory of how new structure might arise within the standard formulation of cognition proved to be an instructive challenge. Regardless of whether the new relation is first present in action or perception, it is not clear how, within the boundaries of information exchange, the rest of the system comes to be altered. How could one component sending a message to another component cause it to take new form? Or, even more problematically, how could the whole system take new form, given only signaling among its component parts? Perhaps a way to see the problem more clearly is to consider the allowable moves in an information-exchange system. First, signals are passed from one component or level in the system to another. Designating these between-component interactions as instances of information exchange (i.e., signals) means, at a minimum, that the receiving component is functionally poised to behave as if it has a catalog of messages it might receive from the ‘‘sender,’’ and a corresponding action that is executed for each message. The corresponding action may be to send a further signal to the next component in the system or to activate or inhibit some portion of the current component (e.g., the representation of a particular relation). Within a component, computation over these activations can be used to transform information or
integrate multiple signals. The problem with regard to explaining the emergence of new structure is that the set of signals must be limited to those already known to the components in the system. There is no way to make new signals or distribute instructions about what the various components should do should they receive a new signal. An information-exchange approach may work well for systems that are functioning in a stable configuration, but it offers little traction on the problem of new structure. Of course, one might try to circumvent this problem by proposing a special set of processes whose sole purpose is to change the system at the appropriate moments. The problem here is that to change the system correctly, these processes must know a huge amount, more than the system knows itself. Consider what such a specialized process is being asked to do. It must somehow infer the need for a new structure and the appropriate form for that structure based on the internal activity of the system. Because the internal components of the system can only respond to the set of defined signals (which by definition do not include the new relation), the specialized change process would have to make its inferences without data about the new relation. Alternatively, one might suggest that the change process has direct access to the environment. In this way, the specialized process might detect the new relation and rearrange the system as needed. However, if the process works via information exchange, it would be subject to the same problems outlined above. Our goal in discussing these issues is not to cast aspersions on any individual research program, but rather to outline the dead ends and quagmires one encounters when approaching the problem of new structure from the perspective of information exchange. After trying in earnest to work within the information-exchange paradigm and overcome these problems, we became convinced that a different approach was necessary.
A natural starting point was to consider how related fields have dealt with the problem of new structure in many-bodied, multiscale systems (e.g., Skar, 2003).
7.2. A Physical Approach to New Structures in Cognition

An extensive, and sometimes daunting, literature has addressed the emergence of structure in organic and inorganic systems. Considerable progress has been made in this area over the last 30 years (e.g., Haken, 1983; Kelso, 1995; Kugler & Turvey, 1987; Prigogine & Stengers, 1984; Schertzer, Lovejoy, Schmitt, Chigirinskaya, & Marsan, 1997; Yates, 2008). Theoretically, this work is grounded in the relationship between physical structures and energy, to put it broadly. A central goal is to understand how changes in the organization of physical materials emerge from the exchange of energy with the environment. For relatively homogeneous, inorganic materials, very precise theoretical accounts have been developed to describe phase transformations from one state to another (e.g., liquid → gas) (Balluffi, Allen, & Carter, 2005). More heterogeneous structures, such as those generally found in biological systems, tend to require a more empirical approach, in which quantities derived from the theoretical principles are measured from the activity of the system. Given the emphasis on the relation between structure and energy, it is not surprising that entropy and its reciprocal property, free energy, are core constructs in this work. As the entropy within a system increases, the constraints that maintain its current structure begin to break apart. These constraints are physical bonds among the microelements that constitute the macroscopic structure. At a critical point, the constraints are insufficient to maintain the current structure, and the interactions among the microelements begin to dominate the behavior of the system. If these interactions drive the system into a new structure that better off-loads entropy, that structure will persist. Alternatively, the system may just continue on toward equilibrium, and ultimately cease to have any structure. From the perspective of researchers in psychology, some of the most important recent advances have been methodological. A number of new methods for analyzing time series data have been developed (see Riley & Van Orden, 2005). These methods estimate theoretically important quantities, including entropy, power-law exponents, Lyapunov exponents, attractor strength, etc. A major advance is that most of the methods work on relatively short time series, such as one might be able to collect during an experimental trial. We have employed some of these measures in our recent work. As reviewed in the chapter, we densely sampled the motion of each participant's hand during problem solving to obtain a time series that contained information about the dynamic organization of the system.
While it may initially seem surprising that the motions of one's hand would contain information about the whole system, consider that unless the effects in a system are additive, any of the system's variables will contain information about the entire system's dynamics (Abarbanel, 1996; Takens, 1981). In biological systems, additive effects would be a very rare exception. For example, interactions among neurons depend upon the nonadditive nature of their firing thresholds. The dynamic measures we obtained from RQA, including entropy, are not restricted to the hand, despite the fact that we measured hand movements. Rather, they reflect the changing organization of the system in which the hand is embedded. We demonstrated that a pattern of increase and decrease in the entropy, as measured by RQA, predicts the onset of new structure. One implication of the claim that any variable should contain information about the global dynamics of the system is that we should be able to show similar effects from a time series collected from another part of the body. Pragmatically, the time series should be from a part of the body that moves often during the task; the mathematical guarantee of reconstructing
The Self-Organization of Cognitive Structure
379
phase space from any variable holds only in the limit, and in the absence of white noise. In recent work, not reviewed above, we sampled eye position during the gear-system task (Stephen, Boncoddo, Magnuson, & Dixon, 2009). We analyzed the angular-change time series on each trial with RQA. Consistent with the self-organization account, the rise-and-fall pattern in entropy predicted the transition to a new structure, the discovery of the parity rule. Specifically, the turning direction is determined by whether the number of gears is odd or even (Dixon & Bangert, 2004). These results bolster the claim that other variables in the system also carry information about its dynamic organization. Minimally, they show that these effects are not specific to manual motion. Eyes and hands move very differently. (We also replicated the power-law effects in this study. Power-law behavior increases as the transition becomes imminent, and decreases just prior to discovery of parity.) We also reviewed recent evidence showing that injecting disorder into the system accelerates the transition to new structure. By varying the degree to which the window containing the gear problem moved unpredictably (on the computer screen), we manipulated the entropy entering the system. Participants transitioned to the new structure most quickly in the high-shift condition, somewhat less quickly in the low-shift condition, and least quickly in the control (i.e., no-shift) condition. These experimental effects showed that entropy plays a causal role in moving the system into a new structure. A large body of work has demonstrated similar effects in other domains (Shinbrot & Muzzio, 2001). We also replicated the predictive effects of system entropy; entropy increased and decreased prior to the transition. Because the activity across scales plays an important role in the transition to new structure, we used another measure, the power-law exponent, to assess the activity across the scales of the system.
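To make these measures concrete, the sketch below (plain NumPy) reconstructs a phase space from a single time series via time-delay embedding (Takens, 1981) and computes the RQA entropy measure: the Shannon entropy of the distribution of diagonal-line lengths in the recurrence plot (Webber & Zbilut, 1994). The embedding dimension, delay, radius, and minimum line length used here are illustrative placeholders, not the parameters from the studies reviewed above.

```python
import numpy as np

def delay_embed(x, dim=3, tau=2):
    """Time-delay embedding: rebuild a phase space from one observed variable."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

def rqa_entropy(x, dim=3, tau=2, radius=0.5, lmin=2):
    """Shannon entropy (bits) of the diagonal-line-length distribution
    of the recurrence plot: the RQA 'entropy' measure."""
    pts = delay_embed(np.asarray(x, dtype=float), dim, tau)
    # Recurrence matrix: True where two reconstructed states lie within radius
    dists = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    rec = dists < radius
    # Collect lengths of diagonal line segments (upper triangle only: the
    # matrix is symmetric, and the main diagonal is trivially recurrent)
    lengths = []
    for k in range(1, len(pts)):
        run = 0
        for hit in list(np.diagonal(rec, offset=k)) + [False]:
            if hit:
                run += 1
            else:
                if run >= lmin:
                    lengths.append(run)
                run = 0
    if not lengths:
        return 0.0
    _, counts = np.unique(lengths, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())
```

Tracking this entropy in a sliding window over a densely sampled movement series would yield the kind of rise-and-fall profile described in the text.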
The power-law exponent is a well-established measure, typically used for systems with fractal architectures; that is, systems that have multiscale, nested structures. As systems approach a phase transition, the power-law exponent increases to a critical value (often called the critical exponent) and then decreases as the new structure coalesces (Grebogi, Ott, Romeiras, & Yorke, 1987). We showed that for the gear-system task the power-law exponent increased and decreased systematically as the transition to new structure became imminent. We extended these effects using a separate paradigm, in which participants were asked to sort cards by a rule while we tracked the motion of their hand. Participants who had to induce the rule showed significantly greater rise and fall in their power-law exponents than those who were given the rule. These effects show that the induction of a rule, like the discovery of the alternation relation, is predicted by changes in the across-scale activity of the system.
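For illustration, one generic way to estimate such an exponent is to fit a line to the log-log power spectrum of the time series: for 1/f^alpha-type fluctuations, the negated slope estimates alpha. The NumPy sketch below is a minimal version of this standard spectral estimator; it is not necessarily the procedure used in the studies reviewed above, and the signals are synthetic stand-ins.

```python
import numpy as np

def spectral_exponent(x):
    """Estimate alpha from the slope of the log-log power spectrum,
    assuming S(f) is proportional to 1 / f**alpha."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()                      # remove the mean (DC component)
    freqs = np.fft.rfftfreq(len(x))
    power = np.abs(np.fft.rfft(x)) ** 2
    keep = freqs > 0                      # drop the zero-frequency bin
    slope, _ = np.polyfit(np.log(freqs[keep]), np.log(power[keep]), 1)
    return -slope

# Sanity checks on synthetic signals: white noise should give alpha near 0,
# while integrated noise (a random walk) should give a clearly larger alpha.
rng = np.random.default_rng(0)
alpha_white = spectral_exponent(rng.standard_normal(4096))
alpha_walk = spectral_exponent(np.cumsum(rng.standard_normal(4096)))
```

Computed over successive windows of a movement series, a rise and fall in the estimated alpha is the kind of across-scale signature discussed in the text.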
380
James A. Dixon et al.
In our view, approaching the problem of new cognitive structure from this perspective has a number of unique strengths. First, the approach rests on an established theoretical basis from which to make predictions about the relationship between measured quantities and macroscopic behavior, including but not limited to the transition to new structure. To be more explicit, we take as a starting point the assertion that cognition has a physical basis, a position that we believe is noncontroversial (and to the best of our knowledge is the only alternative to dualism). As such, it seems unavoidable that the physics that underlies the structuring and restructuring of material things would also be at work in cognition. Second, the approach naturally addresses both the stability and flexibility of cognitive structures. It is not necessary to propose special processes that detect the need for change and engineer an appropriate reconfiguration of the system. Finally, because a physical approach to cognition takes energy as the fundamental driving force in the system, it has a natural basis for motivation. That is, the system does not need a separate account of why it acts. Rather, the management of energy and entropy by the multiscale structures, in accord with thermodynamic laws, is what makes the system move.
7.3. Information Exchange as a Type of Physical Regime
While it may seem as if a physical approach to cognition and the information-exchange approach are fundamentally at odds, we suggest that information exchange usefully characterizes a particular dynamic regime. Specifically, physical interactions can be usefully characterized in terms of information exchange when components in a system are poised such that changes in one component trigger predictable, macroscale changes in other components. This special type of organization runs on physical interactions, of course, but because the system is prepared to react in a particular way (e.g., release a chemoattractant, close a receptor site, depolarize, execute a saccade, say "Blimey!"), the physical details can be neglected without much loss of explanatory power. Interestingly, a very similar type of dynamic configuration has been the focus of considerable recent work in physics and related fields. This class of systems is said to exhibit "self-organized criticality" (SOC) (Bak, 1996; Jensen, 1998). SOC systems continually reset themselves to a critical point, the point just prior to a phase transition. Put differently, these systems self-organize in a way that poises them to react to a particular physical event; the onset of that event sends them through a macroscale transition. Thus, SOC systems act in a way that can be usefully reduced to an information-exchange system (see Gilden, 2001; Van Orden, Holden, & Turvey, 2003, 2005 for evidence of SOC dynamics in human cognition; Farrell, Wagenmakers, & Ratcliff, 2006; Wagenmakers, Farrell, & Ratcliff, 2004, 2005 for an alternative view). Such a reduction is not without its costs, of course. The SOC properties of the system tell us
something fundamental about the nature of cognition when it is set up to accomplish a particular task (Van Orden et al., 2003, 2005). Our point here is that information exchange can sit naturally within a physical approach. If the SOC regime underlies the performance of systems that are set up in an information-exchange manner, a key question is how those dynamics emerge from the system. SOC systems must develop and potentially change; they do not emerge full-blown from nothing. The changes that eventually result in an SOC regime have their own dynamics, which do not exhibit SOC properties. In summary, the work presented here shows a series of initial steps toward a physical account of cognition, specifically focused on the problem of new structure. We are keenly aware of the many challenges that such an approach faces, such as adequate accounts of language, intentionality, and meaning, to name just a few. Of course, these challenges are not unique to a physical approach; all theoretical approaches to cognition must eventually address these issues. We would add that all approaches to cognition must also address the very difficult problem of new structure.
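The canonical toy example of the SOC regime discussed in Section 7.3 is the sandpile model (Bak, 1996): grains are dropped one at a time, any cell that reaches a threshold topples one grain onto each of its neighbors, and the system tunes itself to a critical state in which avalanches of widely varying sizes occur. The Python sketch below implements the standard Bak-Tang-Wiesenfeld rules on a small grid; the grid size and number of grains are arbitrary illustrative choices, not values from any study cited here.

```python
import numpy as np

def sandpile(size=20, grains=5000, seed=0):
    """Bak-Tang-Wiesenfeld sandpile. Drop grains at random cells; any cell
    holding 4 or more grains topples, sending one grain to each of its four
    neighbors (grains toppled off the edge are lost). Returns the avalanche
    size (number of topplings) triggered by each dropped grain."""
    rng = np.random.default_rng(seed)
    grid = np.zeros((size, size), dtype=int)
    sizes = []
    for _ in range(grains):
        grid[rng.integers(size), rng.integers(size)] += 1
        topples = 0
        while True:
            unstable = np.argwhere(grid >= 4)
            if len(unstable) == 0:
                break
            for i, j in unstable:
                grid[i, j] -= 4
                topples += 1
                for ni, nj in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
                    if 0 <= ni < size and 0 <= nj < size:
                        grid[ni, nj] += 1
        sizes.append(topples)
    return np.array(sizes)
```

After an initial transient, avalanche sizes span a very wide range and their distribution approaches a power law, which is the sense in which the system keeps itself poised at the critical point.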
REFERENCES Abarbanel, H. D. I. (1996). Analysis of observed chaotic data. New York, NY: Springer-Verlag. Adolph, K. E., & Berger, S. A. (2006). Motor development. In W. Damon & R. Lerner (Series Eds.) & D. Kuhn & R. S. Siegler (Vol. Eds.), Handbook of child psychology, Vol. 2: Cognition, perception, and language (6th ed., pp. 161–213). New York, NY: Wiley. Aks, D. J. (2005). 1/f dynamic in complex visual search: Evidence for self-organized criticality in human perception. In M. A. Riley & G. C. Van Orden (Eds.), Tutorials in contemporary nonlinear methods for the behavioral sciences (pp. 326–359). Retrieved February 23, 2006 from http://www.nsf.gov/sbe/bcs/pac/nmbs/nmbs.pdf. Bak, P. (1996). How nature works. New York, NY: Springer-Verlag. Balluffi, R. W., Allen, S. M., & Carter, W. C. (2005). Kinetics of materials. Hoboken, NJ: Wiley. Bickhard, M. H. (2004). The dynamic emergence of representation. In H. Clapin, P. Staines, & P. Slezak (Eds.), Representation in mind: New approaches to mental representation (pp. 71–90). Amsterdam: Elsevier. Bickhard, M. H. (2008). Issues in process metaphysics. Ecological Psychology, 20, 252–256. Bickhard, M. H., & Terveen, L. (1995). Foundational issues in artificial intelligence and cognitive science: Impasse and solution. Amsterdam: Elsevier. Black, B., & Logan, A. (1995). Links between communication patterns in mother–child, father–child, and child–peer interactions and children’s social status. Child Development, 66, 255–271. Boncoddo, R., Dixon, J. A., & Kelley, E. (2009). The emergence of a novel representation from action: Evidence from preschoolers. Developmental Science, (in press). Chen, Z., & Klahr, D. (2008). Remote transfer of scientific-reasoning and problem-solving strategies in children. In R. V. Kail (Ed.), Advances in child development and behavior, Vol. 36 (pp. 419–470). San Diego, CA: Elsevier Academic Press. Clark, P., Britland, S., & Connolly, P. (1993). 
Growth cone guidance and neuron morphology on micropatterned laminin surfaces. Journal of Cell Science, 105, 203–212.
Dahan, D., Magnuson, J. S., & Tanenhaus, M. K. (2001). Time course of frequency effects in spoken-word recognition: Evidence from eye movements. Cognitive Psychology, 42, 317–367. Dale, R., Roche, J., Snyder, K., & McCall, R. (2008). Exploring action dynamics as an index of paired-associate learning. PLoS One, 3, e1728. doi:10.1371/journal.pone.0001728. Dixon, J. A., & Bangert, A. S. (2002). The prehistory of discovery: Precursors of representational change in solving gear-system problems. Developmental Psychology, 38, 918–933. Dixon, J. A., & Bangert, A. S. (2004). On the spontaneous discovery of a mathematical relation during problem solving. Cognitive Science, 28, 433–449. Dixon, J. A., & Bangert, A. S. (2005). From regularities to concepts: The development of children's understanding of a mathematical relation. Cognitive Development, 20, 65–86. Dixon, J. A., & Dohn, M. C. (2003). Redescription disembeds relations: Evidence from relational transfer and use in problem solving. Memory & Cognition, 31, 1082–1093. Dixon, J. A., & Kelley, E. (2006). The probabilistic epigenesis of knowledge. In R. V. Kail (Ed.), Advances in child development and behavior, Vol. 34 (pp. 323–361). New York, NY: Academic Press. Dixon, J. A., & Kelley, E. (2007). Theory revision and redescription: Complementary processes in knowledge acquisition. Current Directions in Psychological Science, 16, 111–115. Farrell, S., Wagenmakers, E.-J., & Ratcliff, R. (2006). 1/f noise in human cognition: Is it ubiquitous, and what does it mean? Psychonomic Bulletin & Review, 13, 737–741. Fodor, J. A. (2000). The mind doesn't work that way: The scope and limits of computational psychology. Cambridge, MA: MIT Press. Forgacs, G., & Newman, S. A. (2005). Biological physics of the developing embryo. Cambridge: Cambridge University Press. Gammaitoni, L., Hänggi, P., Jung, P., & Marchesoni, F. (2009). Stochastic resonance: A remarkable idea that changed our perception of noise.
European Physical Journal B, 69, 1–3. Gilden, D. L. (2001). Cognitive emissions of 1/f noise. Psychological Review, 108, 33–56. Grant, D. A., & Berg, E. A. (1948). A behavioral analysis of reinforcement and ease of shifting to new responses in a Weigl-type card-sorting problem. Journal of Experimental Psychology, 38, 401–411. Grebogi, C., Ott, E., Romeiras, F., & Yorke, J. A. (1987). Critical exponents for crisis-induced intermittency. Physical Review A, 36, 5365–5380. Haken, H. (1983). Synergetics. Berlin: Springer-Verlag. Halford, G. S., Andrews, G., & Jensen, I. (2002). Integration of category induction and hierarchical classification: One paradigm at two levels of complexity. Journal of Cognition and Development, 3, 143–177. Harnad, S. (1990). The symbol grounding problem. Physica D, 42, 335–346. Hilborn, R. C. (1994). Chaos and nonlinear dynamics: An introduction for scientists and engineers. New York, NY: Oxford University Press. Holden, J. G. (2005). Gauging the fractal dimension of response times from cognitive tasks. In M. A. Riley & G. C. Van Orden (Eds.), Tutorials in contemporary nonlinear methods for the behavioral sciences (pp. 267–318). Retrieved February 23, 2006 from http://www.nsf.gov/sbe/bcs/pac/nmbs/nmbs.pdf. Jacobs, D. M., & Michaels, C. F. (2007). Direct learning. Ecological Psychology, 19, 321–349. Jensen, H. J. (1998). Self-organized criticality: Emergent complex behavior in physical and biological systems. Cambridge: Cambridge University Press. Kalish, M. L., Lewandowsky, S., & Davies, M. (2005). Error-driven knowledge restructuring in categorization. Journal of Experimental Psychology: Learning, Memory, and Cognition, 31, 846–861. Kelso, J. A. S. (1995). Dynamic patterns: The self-organization of brain and behavior. Cambridge, MA: MIT Press.
Kugler, P. N., Kelso, J. A. S., & Turvey, M. T. (1982). On the control and coordination of naturally developing systems. In J. A. S. Kelso & J. E. Clark (Eds.), The development of movement control and coordination (pp. 5–78). Chichester: Wiley. Kugler, P. N., & Turvey, M. T. (1987). Information, natural law, and the self-assembly of rhythmic movement. Hillsdale, NJ: Lawrence Erlbaum Associates. Lorenz, E. N. (1963). Deterministic nonperiodic flow. Journal of the Atmospheric Sciences, 20, 130–141. Magnuson, J. S., Mirman, D., & Harris, H. D. (2009). Computational models of spoken word recognition. In M. Spivey, K. McRae & M. Joanisse (Eds.), The Cambridge handbook of psycholinguistics. Cambridge, MA: Cambridge University Press (in press). Marwan, N., Romano, M. C., Thiel, M., & Kurths, J. (2007). Recurrence plots for the analysis of complex systems. Physics Reports, 438, 237–329. McClelland, J. L. (1998). Role of the hippocampus in learning and memory: A computational analysis. In K. H. Pribram (Ed.), Brain and values: Is a biological science of values possible (pp. 535–547). Mahwah, NJ: Lawrence Erlbaum Associates. McClelland, J. L., & Vallabha, G. (2009). Connectionist models of development: Mechanistic dynamical models with emergent dynamical properties. In J. P. Spencer, M. S. C. Thomas & J. L. McClelland (Eds.), Toward a unified theory of development: Connectionism and dynamic systems theory re-considered. New York, NY: Oxford University Press. Nicolis, G., & Prigogine, I. (1977). Self-organization in nonequilibrium systems: From dissipative structures to order through fluctuations. New York, NY: Wiley. Nosofsky, R. M., & Zaki, S. R. (2002). Exemplar and prototype models revisited: Response strategies, selective attention, and stimulus generalization. Journal of Experimental Psychology: Learning, Memory, and Cognition, 28, 924–940. Prigogine, I., & Stengers, I. (1984). Order out of chaos. New York, NY: Bantam. Riley, M. A., Balasubramaniam, R., & Turvey, M. T. 
(1999). Recurrence quantification analysis of postural fluctuations. Gait & Posture, 9, 65–78. Riley, M. A., & Van Orden, G. C. (2005). Tutorials in contemporary nonlinear methods for the behavioral sciences. Retrieved February 23, 2006 from http://www.nsf.gov/sbe/bcs/pac/nmbs/nmbs.pdf. Schertzer, D., Lovejoy, S., Schmitt, F., Chigirinskaya, Y., & Marsan, D. (1997). Multifractal cascade dynamics and turbulent intermittency. Fractals, 5, 427–471. Searle, J. R. (1980). Minds, brains, and programs. Behavioral and Brain Sciences, 3, 417–457. Shannon, C. E. (1948). A mathematical theory of communication. Bell System Technical Journal, 27, 379–423, 623–656. Shinbrot, T., & Muzzio, F. J. (2001). Noise to order. Nature, 410, 251–258. Singer, J. D., & Willett, J. B. (2003). Applied longitudinal data analysis: Modeling change and event occurrence. New York, NY: Oxford University Press. Skar, J. (2003). Introduction: Self-organization as an actual theme. Philosophical Transactions of the Royal Society A, 361, 1049–1056. Smith, L. B. (2005). Emerging ideas about categories. In L. Gershkoff-Stowe & D. H. Rakison (Eds.), Building object categories in developmental time (pp. 159–173). Mahwah, NJ: Lawrence Erlbaum Associates. Somsen, R. J. M. (2007). The development of attention regulation in the Wisconsin Card Sorting Task. Developmental Science, 10, 664–680. Soodak, H., & Iberall, A. S. (1987). Thermodynamics and complex systems. In F. E. Yates (Ed.), Self-organizing systems: The emergence of order (pp. 459–469). New York, NY: Plenum. Stanley, H. E., Amaral, L. A. N., Gopikrishnan, P., Ivanov, P. Ch., Keitt, T. H., & Plerou, V. (2000). Scale invariance and universality: Organizing principles in complex systems. Physica A, 281, 60–68.
Stephen, D. G., Boncoddo, R., Magnuson, J. S., & Dixon, J. A. (2009). The dynamics of insight: Mathematical discovery as a phase transition. Memory & Cognition, 37, 1132–1149. Stephen, D. G., & Dixon, J. A. (2009). The self-organization of insight: Entropy and power laws in problem solving. Journal of Problem Solving, 2, 72–101. Stephen, D. G., Dixon, J. A., & Isenhower, R. W. (2009). Dynamics of representational change: Entropy, action, and cognition. Journal of Experimental Psychology: Human Perception and Performance, (in press). Stephen, D. G., Mirman, D., Magnuson, J. S., & Dixon, J. A. (2009). Lévy-like diffusion in eye movements during spoken-language comprehension. Physical Review E, 79, 056114. Takens, F. (1981). Detecting strange attractors in turbulence. Lecture Notes in Mathematics, 898, 366–381. Tremblay-Leveau, H., & Nadel, J. (1995). Young children's communication skills in triads. International Journal of Behavioral Development, 18, 227–242. Trudeau, J. T., & Dixon, J. A. (2007). Embodiment and abstraction: Actions create relational representations. Psychonomic Bulletin & Review, 14, 994–1000. Turvey, M. T. (1992). Affordances and prospective control: An outline of the ontology. Ecological Psychology, 4, 173–187. Van Orden, G. C., Holden, J. G., & Turvey, M. T. (2003). Self-organization of cognitive performance. Journal of Experimental Psychology: General, 132, 331–350. Van Orden, G. C., Holden, J. G., & Turvey, M. T. (2005). Human cognition and 1/f scaling. Journal of Experimental Psychology: General, 134, 117–123. Wagenmakers, E.-J., Farrell, S., & Ratcliff, R. (2004). Estimation and interpretation of 1/f noise in human cognition. Psychonomic Bulletin & Review, 11, 579–615. Wagenmakers, E.-J., Farrell, S., & Ratcliff, R. (2005). Human cognition and a pile of sand: A discussion on serial correlations and self-organized criticality. Journal of Experimental Psychology: General, 134, 108–116. Webber, C. L., Jr., & Zbilut, J. P. (1994).
Dynamical assessment of physiological systems and states using recurrence plot strategies. Journal of Applied Physiology, 76, 965–973. Webber, C. L., Jr., & Zbilut, J. P. (2005). Recurrence quantification analysis of nonlinear dynamical systems. In M. A. Riley & G. C. Van Orden (Eds.), Tutorials in contemporary nonlinear methods for the behavioral sciences (pp. 26–94). Retrieved February 23, 2006 from http://www.nsf.gov/sbe/bcs/pac/nmbs/nmbs.pdf. Webster, G., & Goodwin, B. (1996). Form and transformation: Generative and relational principles in biology. Cambridge: Cambridge University Press. West, G. B., Brown, J. H., & Enquist, B. J. (1999). The fourth dimension of life: Fractal geometry and allometric scaling of organisms. Science, 284, 1677–1679. Wio, H. (2003). On the role of non-Gaussian noises on noise-induced phenomena. In M. Gell-Mann & C. Tsallis (Eds.), Nonextensive entropy: Interdisciplinary applications (pp. 177–193). Oxford: Oxford University Press. Yates, F. E. (2008). Homeokinetics/homeodynamics: A physical heuristic for life and complexity. Ecological Psychology, 20, 148–179. Zelazo, P. D. (2006). The dimensional change card sort (DCCS): A method of assessing executive function in children. Nature Protocols, 1, 297–301. Zelazo, P. D., Müller, U., Frye, D., & Marcovitch, S. (2003). The development of executive function in early childhood. Monographs of the Society for Research in Child Development, 68(3), Serial No. 274.
Subject Index
A Ability test strategies paper folding test, 271, 272 Vandenberg mental rotation test, 273, 274 Algorithm efficiency theories item-general practice effects, 190–191 item-specific practice effects, 191 Algorithmic analysis conditional probability, 307 consistency-checking process, 309 distribution shape effect, 310 episodic memory, 307 hypothesis generation activation values, 313 consistency-checking process, 314 termination rule, 313 information search and hypothesis testing Bayesian diagnosticity, 324–325 diagnostic vs. pseudodiagnostic search, 322 dissimilarity heuristic, 322–323 empirical findings, 320 HyGene’s information search behavior, 325–331 memory strength, 322 MSdiff heuristic, 323 sampling biases, 331–334 leading-contender hypotheses, 309 observed data, 307 probability judgments alternative-outcomes effect, 316 behavioral data, 317 comparison process, 315 encoding quality, 317 HyGene predictions, 318 long-term memory, 319 memory-theoretic basis, 315 support theory, 314 prototype-extraction process, 307 representational instantiation Bayesian inference, 312 cubing function, 311 echo content vector, 312 mini-vectors, 310 representational systems, 310 SOC, 306 unspecified probe, 308 WM capacity, 306 Amodal perception, 239
Analysis of variance (ANOVA), 45–46, 50, 83 Anatomy learning, spatial intelligence, 284, 285 Artifact categorization fluidity and flexibility, 23 measures, 22 naming and nonlinguistic understanding, 23 nonlinguistic categories, 23–24 participants and judgment, 24 Artifact names application to objects diachronic and synchronic processes, 11–12 family resemblance, 8–10 social processes, 12–15 Wittgenstein's analysis, 9–10 homonymy, 3 implications across languages, 25–26 for bilingualism, 28–30 categorization, 22–25 for developmental trajectory, 26–28 for Whorfian hypothesis, 30–31 interpretation issues application constraints, 16–18 conceptual combination, 19–21 morphemes, 21 polysemy, 18–19 observed instances and patterns form and function, 6–8 form-based extension, 5–6 function-based extension, 4–5 ATRIUM model, 127 Attention-based theories, 189–190 Automaticity algorithm efficiency theories, 190–191 attention-based theories, 189–190 memory-based theories algorithm complexity effects, 193–194 CMPL theory, 191 cognitive tasks, 222 conceptual combination, 197–206 EBRW theory, 192 forgetting, 220–221 generality of, 215–218 individual differences in, 218–220 item-specific practice effects, 194 lexical ambiguity, 212–215 object-relative vs. subject-relative clauses, 208–212
Automaticity (cont.) supralexical and superordinate instances, 224 syntactic ambiguity, 195–197 verb bias effects, 206–208 property-list accounts, 187–188 B Backward blocking, 104 Backward reasoning. See Diagnostic reasoning Bayes’ theorem, 302 Behavioral picture studies scene-selective regions, 247–248 view-boundaries importance blank background vs. meaningful background, 245 object boundary, 246 Bilingualism monolingual Belgian speakers, 30 second-language learners, 29–30 strategy reports, 29 Blicket detector, 104 Blind observers, 251–255 Boundary extension, scene representation cross-modal, 255–256 divided attention effects, 242–244 monocular tunnel vision, 256–258 scene-selective regions, 247–248 source monitoring error, 240–242 stimulus duration effects, 242–244 British National Corpus, 211–212 C Category construction, 111 Category learning analytic and holistic styles, 123–125 behavioral evidence, 118 category representations, 120 comparison across species criterion block, 141 FR category, 139 information-processing demands, 138 nonhuman primates, 137 stimulus generalization, 140 developmental effects criterial attribute, 146 FR structure, 148 optimal verbal rule, 141 transfer stimuli, 146 earlier research, 121–122 indirect learning COVIS, 150 transfer stimuli, 151 interference effects rule-defined categories, 148 verbal working memory, 150
multiple-systems theory dopamine-mediated learning, 125 procedural learning system, 126 rule-based categories, 127 neuroimaging data Gaussian blur stimuli, 128 information-integration stimuli, 129 occipital cortex, 128 rule-based category learning, 129 neuroscience, 119 predictions, 152–153 relationship between single-system models, 155–157 verbal, nonverbal learning and COVIS, 154–155 research, 119 rules and similarity categorization decisions, 122 exemplar-similarity learning, 122 verbal and nonverbal theory dot-pattern categories, 132 hypothesis testing ability, 131 incidental learning, 132 non-rule-based category, 131 parallel operation, 135–136 working memory, 132 Causal-based categorization additional dependent variables, 111 assessment, classification effects interpretation, classification analysis, 44–51 natural vs. novel categories, 42–44 terminology, 51 causal status effect background causes, 63–64 causal link strength, 59–63 functional feature, 71–72 number of dependents, 68–70 test items ratings, 70–71 theoretical implications, 72–75 unobserved ‘‘essential’’ features, 64–68 causal structures and uncertain models, 105 classification as diagnostic reasoning, 94–97 as prospective reasoning, 97–99 theoretical implications, 99–100 coherence effect background causes, 86–87 causal link strength, 83–86 higher order effects, 87–90 robustness, 90–91 theoretical implications, 92–93 computational models dependency model, 51–53 generative model, 53–57 developmental questions, 110 developmental studies
children’s categorization, 104 feature weights and interactions, 100–104 empirical and causal information, 110 hidden causal structure, categories, 106 knowledge structures, 41 natural categories and additional tests, 108–109 processing issues, 109 reasoning and boundaries, causal models, 107 relational centrality and multiple-cause effects evidence, for and against, 77–79 theoretical implications, 79–83 Causal link strength, 59–63, 83–86. See also Causal status effect; Coherence effect Causal status effect background causes, 63–64 causal link strength chain-100 vs. chain-75 condition, 59–60 feature likelihood ratings, 62–63 regression analyses results, 60–62 empirical phenomenon, 58 functional feature, 71–72 number of dependents category membership, 68 indirect dependents, 69 participants learned categories, 58–59 test items ratings, 70–71 theoretical implications category membership, 73 dependency model, 72–73 generative perspective, 72 probabilistic causal links, 73 subjective category validity, 74 unobserved ‘‘essential’’ features, 64–68 Classification analysis, interpretation assessment, feature weights, 45 category membership estimation, 48 item ratings, 49–50 missing feature method, 46–48 regression and statistical analysis, 46 regression vs. missing feature method, 50–51 Cognition aspects, artifacts across languages, 25–26 artifact categorization, 22–25 bilingualism, 28–30 developmental trajectory, 26–28 Whorfian hypothesis, 30–31 Cognitive structure, self-organization card-sorting power-law exponents, 372 rule-given condition, 373 rule-induction task, 372 encoding system, 355 evidence, 344 force-tracing actions balance-beam problems, 350 random-fulcrum condition, 351 generalization of alternation, 352 information exchange, 346–347, 376–377
387 Lorenz model, 357 lower-order elements, 357 nonlinear dynamics, 344 physical approach across-scale activity, 379 biological systems, 378 multiscale structures, 380 phase transformations, 377 phase transition, 379 thermodynamic laws, 380 physical interactions, 345 preschoolers alternating verbalizations, 353 force-tracing motions, 354 physical properties, gears, 352 problem solving alternation strategy, 349 force tracing, 348 gear-system, 349 information transfer, 347 properties, 344 thermodynamics, 356 Coherence effect background causes, 86–87 causal link strength control conditions ratings, 85–86 predictions, generative model, 85 regression weights, 84 two-way interactions terms, 83–84 higher order effects common-cause vs. common-effect network, 87–88 interaction weights and feature weights, 89–90 regression weights, 88 robustness, 90–91 theoretical implications, 92–93 Common-cause and common-effect network, 75–76, 87–88 Comparison task, meta-representational competence, 290 Compensatory-encoding model, 188 Competition of Verbal and Implicit Systems (COVIS), 127 Complementary approach, 268 Complex spatial task strategies rule-based reasoning, 278–281 task decomposition mechanical reasoning, 276 verbal protocols, 278, 279 Component power laws (CMPL) theory, 191 Computational analysis Bayes’ theorem, 302 cognitive resources, 304 comparison process, 305 cued recall, 304 data, 303 decision maker, 304
Computational analysis (cont.) decision-making research, 302 focal hypothesis, 303 hypothesis-guided information search, 305 normative models, 302 top-down visual search, 306 visual attention processes, 305 working memory, 304 Computational models. See Dependency model; Generative model Conceptual centrality. See Mutability Conceptual combination compound nouns, 19–20 modifier, 20 modifier-noun combinations, 21 morphemes, 21 noun phrases, 20 reading comprehension EBRW, 205 practice effects, 200 subordinate vs. dominant condition, 202, 203 transfer effect, 201 Cross-linguistic variability, 21–22, 25–26, 32 Cross-modal boundary extension, 255–256 Cross section test, 277 D Deaf observers, 251–255 Dependency model, 60, 63–64, 66, 69–70, 72, 92, 100, 105–106 applications, 52–53 category features, 52 principle, 51 Developmental trajectory Dutch-speaking Belgian children, 26–27 regression analyses, 27 Russian vs. English naming, 27 Diagnostic reasoning causal category structures, 95–96 transformation experiments, 96–97 underlying and observable features, 94–95 Direct object/sentence complement (DO/SC) ambiguities, 206–208 E Egocentric reference frame, 239 Empirical–statistical information, 43–44, 58, 110 Exemplar-based random walk (EBRW) theory higher-skill readers, 219 prediction account of variability, 217 advantage of frequency, 216 algorithm speed, 217 prior interpretation, 192 relative clause, 209 reordered access model, 214–215
retrieval speed, 216 supralexical and superordinate instances, 224 Expertise examination, 268–270 Explicit causal reasoning children’s categorization, 104 diagnostic reasoning causal category structures, 95–96 transformation experiments, 96–97 underlying and observable features, 94–95 prospective reasoning artifact categories, 98–99 causal reasoning and observable features, 97 follow-up experiment, 97–98 pseudoessential underlying features, 97 F Feature weights, causal-based effects causal status and isolated features effect, 102–103 children’s classification, 100 coherence effect, 100–101 deterministic links, 103 logistic regression, 103–104 promicin, 100 test pairs, 101 Flexible strategy choice, spatial intelligence ability test strategies, 271–275 rule-based reasoning, 278–281 task decomposition, 275–278 Form and function, artifact names, 4–5 Forward reasoning. See Prospective reasoning G Gear system problems alternation discovery event history analysis, 363 mean entropy, 364 system entropy, 365 input entropy cumulative hazard function, 366 self-organization, 367 stochastic resonance, 365 phase space determinism, 362 initial parameters, 363 lagging procedure, 361 mean line, 363 recurrence matrix, 362 power-law behavior interactions, 367 multiscale architecture, 372 nested structure, 368 phase transition, 369 spectral analysis, 370 trial-to-trial changes, 372 recurrence quantification analysis diagonal lines, 361 Lorenz model, 360
389
Subject Index
Generalized Context Model (GCM), 155 Generative model, 99, 105–106, 108–110 background strengths, 86–87 category’s statistical distribution, 54–56 causal Markov condition, 57 cause and effect, probabilistic contrast, 56 coherence effect, 80–85 quantitative predictions and probability, 53 Graph comprehension, meta-representational competence, 286, 287 H Hidden causal structure, 106–107 High-spatial visualizers, 281 Hypothesis generation (HyGene) information search behavior baseline ecology, 325 complex ecologies, 330 ecological structure effects, 325 generation-strength manipulation, 327 metalytis, 327 nondiagnostic test, 328 positive-test balance, 329 probability distribution, 325 proportion of times, 328 predecision processes, 301 principles, 304–305 sampling biases cognitive system, 331 conditional probabilities, 332 memory-retrieval processes, 334 model’s test-selection behavior, 332 prespecified cue level, 331 selection criteria, 332 statistical relationships, 331 structure, 308 ubiquitous process, 300 I Independent vs. interactive cue models, 44 Inference tasks, meta-representational competence, 291 Information exchange physical interactions, 380 SOC regime, 381 SOC systems, 380 Information reduction hypothesis, 189 Instance theory component power laws theory, 191 exemplar-based random walk theory, 192 Integrative theory categorization decision, 338 hypothesis generation, 300 hypothesis testing option generation, 300, 301 top-down guided search, 306
information search, 300 judgment and decision-making, 339 memory theory, 338 theoretical framework, 301 Interpretation issues, artifact names application constrained agentive nouns, 17–18 inanimate objects, 17 nouns, 16–17 conceptual combination, 19–21 polysemy, 18–19 Intuition task, meta-representational competence, 290, 291 Isolated feature effect, 78–79 L Laboratory categorization models, 12 Lateral occipital cortex (LOC), 248 Lexical ambiguity, reading comprehension biased and nonbiased words, 213 reordered access model, 214–215 Linear regression analyses, 45, 47, 50, 103 Logistic regression analysis, 50, 101–103 Lorenz model deterministic vs. stochastic, 358 fluid convection, 357 phase space, 358 recurrence quantification analysis, 357 system’s behavior, 359 Low-spatial visualizers, 281 M Main verb/reduced relative (MV/RR) ambiguities, 195 Memory-based automaticity CMPL theory, 191 cognitive tasks, 222 direct tests algorithm complexity effects, 193–194 conceptual combination, 197–206 item-specific practice effects, 194 syntactic ambiguity, 195–197 EBRW theory, 192 forgetting, significant factor, 220–221 generality of, 215–218 indirect evidence lexical ambiguity, 212–215 object-relative vs. subject-relative clauses, 208–212 verb bias effects, 206–208 individual differences in, 218–220 supralexical and superordinate instances, 224 Meta-representational competence anatomy learning, 284, 285 cross section trials, 283 graph comprehension, 286, 287
Meta-representational competence (cont.) mean scores, students’ evaluations, 288 orientation references, 285 responding pattern, 287 weather forecasting, 289, 290 Meteorological studies, meta-representational competence, 289–291 Missing feature method, 45–51, 58, 106, 108 Monocular tunnel vision, 256–258 Monolingual speakers, American English, 25–26 Multiple-cause effects, 75–83, 99–100, 105, 109–110. See also Relational centrality effects Multiple systems, category learning analytic and holistic styles, 123–125 earlier research family resemblance (FR), 121 prototype theory, 122 rules and similarity categorization decisions, 122 exemplar-similarity learning, 122 Multisource model amodal perception, 239 boundary extension divided attention effects, 242–244 and scene-selective regions of brain, 247–248 source monitoring error, 240–242 stimulus duration effects, 242–244 categorization, 240 peripersonal space clinical implications, 258–259 cross-modal boundary extension, 255–256 haptic exploration, 251–255 monocular tunnel vision, 256–258 proxy view, 239–240 terminologies, 239 view-boundaries importance, 244–247 Mutability, 43, 51, 74 Myastars, 42–43 N Naming patterns, artifacts. See Artifact names Natural vs. novel categories advantages, novel categories, 43–44 causal-based effects, 42 three-element causal chain, 42–43 Neuroimaging picture studies scene-selective regions, 247–248 view-boundaries importance blank background vs. meaningful background, 245 object boundary, 246 Nonverbal system category membership, 133 image-based code, 135
procedural learning, 134 visual interference, 135 visuo-spatial working memory, 134 O Object-relative (OR) clauses corpus analyses, 209 mean counts, 211 Orientation references, meta-representational competence, 285 P Paper folding test, 267, 272 Parahippocampal place area (PPA), 247, 248 Polysemy, 18–19, 207 Probability judgment alternative-outcomes effect, 316 behavioral data, 317 comparison process, 305, 315 consequences, 301 encoding quality, 317 HyGene predictions, 318 long-term memory, 319 subadditivity, 315 support theory, 314 unspecified probe, 313 Process-based theories algorithm efficiency theories item-general practice effects, 190–191 item-specific practice effects, 191 attention-based theories, 189–190 hypothetical response time, 188, 189 memory-based theories CMPL theory, 191 EBRW theory, 192 Prominence acoustic correlates, in comprehension, 176–180 metalinguistic judgments, 179 syllabic prominence, determining factors, 177 words duration and intensity, 178–180 acoustic correlates, in production, 170–176 information signal, 172 lengthening word, 170 Multiple Source view, 172 predictability, 173–175 tone and break indices (ToBI) labeling system, 171–172 using Tic Tac Toe game, 172–173 word repetition, 171 continuous representations, 166–170 accessibility, 167 conditions and display, 167–168 discourse structure and, 167 ratings, 169 foregrounding a word, 164
information theoretic approach, 165, 172 Multiple Source view of prominence, 166 prosody, 164 Property-list accounts, 187–188 Prosody, 164 Prospective reasoning artifact categories, 98–99 causal reasoning and observable features, 97 follow-up experiment, 97–98 pseudoessential underlying features, 97 R Reading comprehension defining automaticity in algorithm efficiency theories, 190–191 attention-based theories, 189–190 CMPL theory, 191 EBRW theory, 192 property-list accounts, 187–188 memory-based automatization algorithm complexity effects, 193–194 cognitive tasks, 222 conceptual combination, 197–206 forgetting, significant factor, 220–221 generality of, 215–218 individual differences in, 218–220 item-specific practice effects, 194 lexical ambiguity, 212–215 object-relative vs. subject-relative clauses, 208–212 supralexical and superordinate instances, 224 syntactic ambiguity, 195–197 verb bias effects, 206–208 redefining automaticity, 225–226 Recurrence quantification analysis (RQA), 357 Relational centrality effects evidence, for and against, 77–78 isolated feature effect, 78–79 theoretical implications common-cause and common-effect feature, 79–82 multiple-cause effect, 79 structure-mapping theory, 82–83 support theory, 82 Reordered access model, 214–215 Retrosplenial cortex (RSC), 247, 248 Rule-based reasoning Pylyshyn’s argument, 281 visualization vs. analytical strategies, 280, 281 RULEX model, 127 S Scene perception multisource model amodal perception, 239 categorization, 240
clinical implications, 258–259 cross-modal, 255–256 dual-task condition, 243 haptic exploration, 251–255 lateral occipital cortex, 248, 249 memory-only condition, 243 monocular tunnel vision, 256–258 parahippocampal place area, 247–249 proxy view, 239–240 retrosplenial cortex, 247–249 search only condition, 243 source monitoring error, 240–242 terminologies, 239 view-boundaries importance, 244–247 spatial cognition anecdote, 237–238 definition, 235–237 Self-organized criticality (SOC), 380 Sighted observers, 251–255 Social processes, naming artifacts goals and referent, 13 languages, 12–13 speakers and addressees, 14–15 Source monitoring, 240–242 Spatial intelligence expertise examination, 268–270 flexible strategy choice ability test strategies, 271–275 rule-based reasoning, 278–281 task decomposition, 275–278 meta-representational competence anatomy learning, 284, 285 cross section trials, 283 graph comprehension, 286, 287 mean scores, students’ evaluations, 288 orientation references, 285 responding pattern, 287 weather forecasting, 289, 290 paper folding task, 267, 272 Vandenberg mental rotation test, 267, 274 Specific language impairment (SLI), 153 Subjective category validity, 74 Subject-relative clauses corpus analyses, 209 mean counts, 211 Support theory, 82 SUSTAIN model, 156 Syntactic ambiguity direct object/sentence complement, 206–208 main verb/reduced relative, 195 T Task decomposition, spatial intelligence mechanical reasoning, 276 verbal protocols, 278, 279 Theory-drawing task, 42–43, 106, 108
Tones and Break Indices (ToBI) labeling system, 171–172 V Vandenberg mental rotation test, 267, 274 Verbal protocols, 278, 279 Verbal system, 133
Verb bias effects, 206–208 Visual exploration, 253, 254 Visual–spatial working memory, 276 W Whorfian hypothesis, 30–31 Wide-angle views, 258
CONTENTS OF RECENT VOLUMES
Volume 40 Different Organization of Concepts and Meaning Systems in the Two Cerebral Hemispheres Dahlia W. Zaidel The Causal Status Effect in Categorization: An Overview Woo-kyoung Ahn and Nancy S. Kim Remembering as a Social Process Mary Susan Weldon Neurocognitive Foundations of Human Memory Ken A. Paller Structural Influences on Implicit and Explicit Sequence Learning Tim Curran, Michael D. Smith, Joseph M. DiFranco, and Aaron T. Daggy Recall Processes in Recognition Memory Caren M. Rotello Reward Learning: Reinforcement, Incentives, and Expectations Kent C. Berridge Spatial Diagrams: Key Instruments in the Toolbox for Thought Laura R. Novick Reinforcement and Punishment in the Prisoner’s Dilemma Game Howard Rachlin, Jay Brown, and Forest Baker Index
Volume 41 Categorization and Reasoning in Relation to Culture and Expertise Douglas L. Medin, Norbert Ross, Scott Atran, Russell C. Burnett, and Sergey V. Blok On the Computational Basis of Learning and Cognition: Arguments from LSA Thomas K. Landauer Multimedia Learning Richard E. Mayer Memory Systems and Perceptual Categorization Thomas J. Palmeri and Marci A. Flanery
Conscious Intentions in the Control of Skilled Mental Activity Richard A. Carlson Brain Imaging Autobiographical Memory Martin A. Conway, Christopher W. Pleydell-Pearce, Sharon Whitecross, and Helen Sharpe The Continued Influence of Misinformation in Memory: What Makes Corrections Effective? Colleen M. Seifert Making Sense and Nonsense of Experience: Attributions in Memory and Judgment Colleen M. Kelley and Matthew G. Rhodes Real-World Estimation: Estimation Modes and Seeding Effects Norman R. Brown Index
Volume 42 Memory and Learning in Figure–Ground Perception Mary A. Peterson and Emily Skow-Grant Spatial and Visual Working Memory: A Mental Workspace Robert H. Logie Scene Perception and Memory Marvin M. Chun Spatial Representations and Spatial Updating Ranxiao Frances Wang Selective Visual Attention and Visual Search: Behavioral and Neural Mechanisms Joy J. Geng and Marlene Behrmann Categorizing and Perceiving Objects: Exploring a Continuum of Information Use Philippe G. Schyns From Vision to Action and Action to Vision: A Convergent Route Approach to Vision, Action, and Attention Glyn W. Humphreys and M. Jane Riddoch Eye Movements and Visual Cognitive Suppression David E. Irwin
What Makes Change Blindness Interesting? Daniel J. Simons and Daniel T. Levin Index
Volume 43 Ecological Validity and the Study of Concepts Gregory L. Murphy Social Embodiment Lawrence W. Barsalou, Paula M. Niedenthal, Aron K. Barbey, and Jennifer A. Ruppert The Body’s Contribution to Language Arthur M. Glenberg and Michael P. Kaschak Using Spatial Language Laura A. Carlson In Opposition to Inhibition Colin M. MacLeod, Michael D. Dodd, Erin D. Sheard, Daryl E. Wilson, and Uri Bibi Evolution of Human Cognitive Architecture John Sweller Cognitive Plasticity and Aging Arthur F. Kramer and Sherry L. Willis Index
Volume 44 Goal-Based Accessibility of Entities within Situation Models Mike Rinck and Gordon H. Bower The Immersed Experiencer: Toward an Embodied Theory of Language Comprehension Rolf A. Zwaan Speech Errors and Language Production: Neuropsychological and Connectionist Perspectives Gary S. Dell and Jason M. Sullivan Psycholinguistically Speaking: Some Matters of Meaning, Marking, and Morphing Kathryn Bock Executive Attention, Working Memory Capacity, and a Two-Factor Theory of Cognitive Control Randall W. Engle and Michael J. Kane Relational Perception and Cognition: Implications for Cognitive Architecture and the Perceptual-Cognitive Interface Collin Green and John E. Hummel An Exemplar Model for Perceptual Categorization of Events Koen Lamberts
On the Perception of Consistency Yaakov Kareev Causal Invariance in Reasoning and Learning Steven Sloman and David A. Lagnado Index
Volume 45 Exemplar Models in the Study of Natural Language Concepts Gert Storms Semantic Memory: Some Insights From Feature-Based Connectionist Attractor Networks Ken McRae On the Continuity of Mind: Toward a Dynamical Account of Cognition Michael J. Spivey and Rick Dale Action and Memory Peter Dixon and Scott Glover Self-Generation and Memory Neil W. Mulligan and Jeffrey P. Lozito Aging, Metacognition, and Cognitive Control Christopher Hertzog and John Dunlosky The Psychopharmacology of Memory and Cognition: Promises, Pitfalls, and a Methodological Framework Elliot Hirshman Index
Volume 46 The Role of the Basal Ganglia in Category Learning F. Gregory Ashby and John M. Ennis Knowledge, Development, and Category Learning Brett K. Hayes Concepts as Prototypes James A. Hampton An Analysis of Prospective Memory Richard L. Marsh, Gabriel I. Cook, and Jason L. Hicks Accessing Recent Events Brian McElree SIMPLE: Further Applications of a Local Distinctiveness Model of Memory Ian Neath and Gordon D. A. Brown What is Musical Prosody? Caroline Palmer and Sean Hutchins Index
Volume 47 Relations and Categories Viviana A. Zelizer and Charles Tilly Learning Linguistic Patterns Adele E. Goldberg Understanding the Art of Design: Tools for the Next Edisonian Innovators Kristin L. Wood and Julie S. Linsey Categorizing the Social World: Affect, Motivation, and Self-Regulation Galen V. Bodenhausen, Andrew R. Todd, and Andrew P. Becker Reconsidering the Role of Structure in Vision Elan Barenholtz and Michael J. Tarr Conversation as a Site of Category Learning and Category Use Dale J. Barr and Edmundo Kronmüller Using Classification to Understand the Motivation-Learning Interface W. Todd Maddox, Arthur B. Markman, and Grant C. Baldwin Index
Volume 48 The Strategic Regulation of Memory Accuracy and Informativeness Morris Goldsmith and Asher Koriat Response Bias in Recognition Memory Caren M. Rotello and Neil A. Macmillan What Constitutes a Model of Item-Based Memory Decisions? Ian G. Dobbins and Sanghoon Han Prospective Memory and Metamemory: The Skilled Use of Basic Attentional and Memory Processes Gilles O. Einstein and Mark A. McDaniel Memory is More Than Just Remembering: Strategic Control of Encoding, Accessing Memory, and Making Decisions Aaron S. Benjamin The Adaptive and Strategic Use of Memory by Older Adults: Evaluative Processing and Value-Directed Remembering Alan D. Castel Experience is a Double-Edged Sword: A Computational Model of the Encoding/Retrieval Trade-Off With Familiarity
Lynne M. Reder, Christopher Paynter, Rachel A. Diana, Jiquan Ngiam, and Daniel Dickison Toward an Understanding of Individual Differences in Episodic Memory: Modeling the Dynamics of Recognition Memory Kenneth J. Malmberg Memory as a Fully Integrated Aspect of Skilled and Expert Performance K. Anders Ericsson and Roy W. Roring Index
Volume 49 Short-term Memory: New Data and a Model Stephan Lewandowsky and Simon Farrell Theory and Measurement of Working Memory Capacity Limits Nelson Cowan, Candice C. Morey, Zhijian Chen, Amanda L. Gilchrist, and J. Scott Saults What Goes with What? Development of Perceptual Grouping in Infancy Paul C. Quinn, Ramesh S. Bhatt, and Angela Hayden Co-Constructing Conceptual Domains Through Family Conversations and Activities Maureen Callanan and Araceli Valle The Concrete Substrates of Abstract Rule Use Bradley C. Love, Marc Tomlinson, and Todd M. Gureckis Ambiguity, Accessibility, and a Division of Labor for Communicative Success Victor S. Ferreira Lexical Expertise and Reading Skill Sally Andrews Index
Volume 50 Causal Models: The Representational Infrastructure for Moral Judgment Steven A. Sloman, Philip M. Fernbach, and Scott Ewing Moral Grammar and Intuitive Jurisprudence: A Formal Model of Unconscious Moral and Legal Knowledge John Mikhail Law, Psychology, and Morality Kenworthey Bilz and Janice Nadler
Protected Values and Omission Bias as Deontological Judgments Jonathan Baron and Ilana Ritov Attending to Moral Values Rumen Iliev, Sonya Sachdeva, Daniel M. Bartels, Craig Joseph, Satoru Suzuki, and Douglas L. Medin Noninstrumental Reasoning over Sacred Values: An Indonesian Case Study Jeremy Ginges and Scott Atran Development and Dual Processes in Moral Reasoning: A Fuzzy-Trace Theory Approach Valerie F. Reyna and Wanda Casillas Moral Identity, Moral Functioning, and the Development of Moral Character Darcia Narvaez and Daniel K. Lapsley “Fools Rush In”: A JDM Perspective on the Role of Emotions in Decisions, Moral and Otherwise Terry Connolly and David Hardman Motivated Moral Reasoning Peter H. Ditto, David A. Pizarro, and David Tannenbaum In the Mind of the Perceiver: Psychological Implications of Moral Conviction Christopher W. Bauman and Linda J. Skitka Index
Volume 51 Time for Meaning: Electrophysiology Provides Insights into the Dynamics of Representation and Processing in Semantic Memory Kara D. Federmeier and Sarah Laszlo
Design for a Working Memory Klaus Oberauer When Emotion Intensifies Memory Interference Mara Mather Mathematical Cognition and the Problem Size Effect Mark H. Ashcraft and Michelle M. Guillaume Highlighting: A Canonical Experiment John K. Kruschke The Emergence of Intention Attribution in Infancy Amanda L. Woodward, Jessica A. Sommerville, Sarah Gerson, Annette M. E. Henderson, and Jennifer Buresh Reader Participation in the Experience of Narrative Richard J. Gerrig and Matthew E. Jacovina Aging, Self-Regulation, and Learning from Text Elizabeth A. L. Stine-Morrow and Lisa M. S. Miller Toward a Comprehensive Model of Comprehension Danielle S. McNamara and Joe Magliano Index