The NEURAL BASES of MULTISENSORY PROCESSES
Edited by Micah M. Murray and Mark T. Wallace
FRONTIERS IN NEUROSCIENCE
FRONTIERS IN NEUROSCIENCE
Series Editors
Sidney A. Simon, Ph.D.
Miguel A.L. Nicolelis, M.D., Ph.D.
Published Titles

Apoptosis in Neurobiology
Yusuf A. Hannun, M.D., Professor of Biomedical Research and Chairman, Department of Biochemistry and Molecular Biology, Medical University of South Carolina, Charleston, South Carolina
Rose-Mary Boustany, M.D., tenured Associate Professor of Pediatrics and Neurobiology, Duke University Medical Center, Durham, North Carolina

Neural Prostheses for Restoration of Sensory and Motor Function
John K. Chapin, Ph.D., Professor of Physiology and Pharmacology, State University of New York Health Science Center, Brooklyn, New York
Karen A. Moxon, Ph.D., Assistant Professor, School of Biomedical Engineering, Science, and Health Systems, Drexel University, Philadelphia, Pennsylvania

Computational Neuroscience: Realistic Modeling for Experimentalists
Eric DeSchutter, M.D., Ph.D., Professor, Department of Medicine, University of Antwerp, Antwerp, Belgium

Methods in Pain Research
Lawrence Kruger, Ph.D., Professor of Neurobiology (Emeritus), UCLA School of Medicine and Brain Research Institute, Los Angeles, California

Motor Neurobiology of the Spinal Cord
Timothy C. Cope, Ph.D., Professor of Physiology, Wright State University, Dayton, Ohio

Nicotinic Receptors in the Nervous System
Edward D. Levin, Ph.D., Associate Professor, Department of Psychiatry and Pharmacology and Molecular Cancer Biology and Department of Psychiatry and Behavioral Sciences, Duke University School of Medicine, Durham, North Carolina

Methods in Genomic Neuroscience
Helmin R. Chin, Ph.D., Genetics Research Branch, NIMH, NIH, Bethesda, Maryland
Steven O. Moldin, Ph.D., University of Southern California, Washington, D.C.

Methods in Chemosensory Research
Sidney A. Simon, Ph.D., Professor of Neurobiology, Biomedical Engineering, and Anesthesiology, Duke University, Durham, North Carolina
Miguel A.L. Nicolelis, M.D., Ph.D., Professor of Neurobiology and Biomedical Engineering, Duke University, Durham, North Carolina

The Somatosensory System: Deciphering the Brain’s Own Body Image
Randall J. Nelson, Ph.D., Professor of Anatomy and Neurobiology, University of Tennessee Health Sciences Center, Memphis, Tennessee

The Superior Colliculus: New Approaches for Studying Sensorimotor Integration
William C. Hall, Ph.D., Department of Neuroscience, Duke University, Durham, North Carolina
Adonis Moschovakis, Ph.D., Department of Basic Sciences, University of Crete, Heraklion, Greece
New Concepts in Cerebral Ischemia
Rick C. S. Lin, Ph.D., Professor of Anatomy, University of Mississippi Medical Center, Jackson, Mississippi

DNA Arrays: Technologies and Experimental Strategies
Elena Grigorenko, Ph.D., Technology Development Group, Millennium Pharmaceuticals, Cambridge, Massachusetts

Methods for Alcohol-Related Neuroscience Research
Yuan Liu, Ph.D., National Institute of Neurological Disorders and Stroke, National Institutes of Health, Bethesda, Maryland
David M. Lovinger, Ph.D., Laboratory of Integrative Neuroscience, NIAAA, Nashville, Tennessee

Primate Audition: Behavior and Neurobiology
Asif A. Ghazanfar, Ph.D., Princeton University, Princeton, New Jersey

Methods in Drug Abuse Research: Cellular and Circuit Level Analyses
Barry D. Waterhouse, Ph.D., MCP-Hahnemann University, Philadelphia, Pennsylvania

Functional and Neural Mechanisms of Interval Timing
Warren H. Meck, Ph.D., Professor of Psychology, Duke University, Durham, North Carolina

Biomedical Imaging in Experimental Neuroscience
Nick Van Bruggen, Ph.D., Department of Neuroscience, Genentech, Inc.
Timothy P.L. Roberts, Ph.D., Associate Professor, University of Toronto, Canada

The Primate Visual System
John H. Kaas, Department of Psychology, Vanderbilt University, Nashville, Tennessee
Christine Collins, Department of Psychology, Vanderbilt University, Nashville, Tennessee

Neurosteroid Effects in the Central Nervous System
Sheryl S. Smith, Ph.D., Department of Physiology, SUNY Health Science Center, Brooklyn, New York

Modern Neurosurgery: Clinical Translation of Neuroscience Advances
Dennis A. Turner, Department of Surgery, Division of Neurosurgery, Duke University Medical Center, Durham, North Carolina

Sleep: Circuits and Functions
Pierre-Hervé Luppi, Université Claude Bernard, Lyon, France

Methods in Insect Sensory Neuroscience
Thomas A. Christensen, Arizona Research Laboratories, Division of Neurobiology, University of Arizona, Tucson, Arizona

Motor Cortex in Voluntary Movements
Alexa Riehle, INCM-CNRS, Marseille, France
Eilon Vaadia, The Hebrew University, Jerusalem, Israel

Neural Plasticity in Adult Somatic Sensory-Motor Systems
Ford F. Ebner, Vanderbilt University, Nashville, Tennessee

Advances in Vagal Afferent Neurobiology
Bradley J. Undem, Johns Hopkins Asthma Center, Baltimore, Maryland
Daniel Weinreich, University of Maryland, Baltimore, Maryland

The Dynamic Synapse: Molecular Methods in Ionotropic Receptor Biology
Josef T. Kittler, University College, London, England
Stephen J. Moss, University College, London, England
Animal Models of Cognitive Impairment
Edward D. Levin, Duke University Medical Center, Durham, North Carolina
Jerry J. Buccafusco, Medical College of Georgia, Augusta, Georgia

The Role of the Nucleus of the Solitary Tract in Gustatory Processing
Robert M. Bradley, University of Michigan, Ann Arbor, Michigan

Brain Aging: Models, Methods, and Mechanisms
David R. Riddle, Wake Forest University, Winston-Salem, North Carolina

Neural Plasticity and Memory: From Genes to Brain Imaging
Frederico Bermudez-Rattoni, National University of Mexico, Mexico City, Mexico

Serotonin Receptors in Neurobiology
Amitabha Chattopadhyay, Center for Cellular and Molecular Biology, Hyderabad, India

TRP Ion Channel Function in Sensory Transduction and Cellular Signaling Cascades
Wolfgang B. Liedtke, M.D., Ph.D., Duke University Medical Center, Durham, North Carolina
Stefan Heller, Ph.D., Stanford University School of Medicine, Stanford, California

Methods for Neural Ensemble Recordings, Second Edition
Miguel A.L. Nicolelis, M.D., Ph.D., Professor of Neurobiology and Biomedical Engineering, Duke University Medical Center, Durham, North Carolina

Biology of the NMDA Receptor
Antonius M. VanDongen, Duke University Medical Center, Durham, North Carolina

Methods of Behavioral Analysis in Neuroscience
Jerry J. Buccafusco, Ph.D., Alzheimer’s Research Center, Professor of Pharmacology and Toxicology, Professor of Psychiatry and Health Behavior, Medical College of Georgia, Augusta, Georgia

In Vivo Optical Imaging of Brain Function, Second Edition
Ron Frostig, Ph.D., Professor, Department of Neurobiology, University of California, Irvine, California

Fat Detection: Taste, Texture, and Post Ingestive Effects
Jean-Pierre Montmayeur, Ph.D., Centre National de la Recherche Scientifique, Dijon, France
Johannes le Coutre, Ph.D., Nestlé Research Center, Lausanne, Switzerland

The Neurobiology of Olfaction
Anna Menini, Ph.D., Neurobiology Sector, International School for Advanced Studies (S.I.S.S.A.), Trieste, Italy

Neuroproteomics
Oscar Alzate, Ph.D., Department of Cell and Developmental Biology, University of North Carolina, Chapel Hill, North Carolina

Translational Pain Research: From Mouse to Man
Lawrence Kruger, Ph.D., Department of Neurobiology, UCLA School of Medicine, Los Angeles, California
Alan R. Light, Ph.D., Department of Anesthesiology, University of Utah, Salt Lake City, Utah

Advances in the Neuroscience of Addiction
Cynthia M. Kuhn, Duke University Medical Center, Durham, North Carolina
George F. Koob, The Scripps Research Institute, La Jolla, California
Neurobiology of Huntington’s Disease: Applications to Drug Discovery
Donald C. Lo, Duke University Medical Center, Durham, North Carolina
Robert E. Hughes, Buck Institute for Age Research, Novato, California

Neurobiology of Sensation and Reward
Jay A. Gottfried, Northwestern University, Chicago, Illinois

The Neural Bases of Multisensory Processes
Micah M. Murray, CIBM, Lausanne, Switzerland
Mark T. Wallace, Vanderbilt Brain Institute, Nashville, Tennessee
The NEURAL BASES of MULTISENSORY PROCESSES

Edited by

Micah M. Murray
Center for Biomedical Imaging
Lausanne, Switzerland

Mark T. Wallace
Vanderbilt University
Nashville, Tennessee
Boca Raton London New York
CRC Press is an imprint of the Taylor & Francis Group, an informa business
CRC Press
Taylor & Francis Group
6000 Broken Sound Parkway NW, Suite 300
Boca Raton, FL 33487-2742

© 2012 by Taylor & Francis Group, LLC
CRC Press is an imprint of Taylor & Francis Group, an Informa business

No claim to original U.S. Government works
International Standard Book Number-13: 978-1-4398-1219-8 (eBook - PDF)

This book contains information obtained from authentic and highly regarded sources. Reasonable efforts have been made to publish reliable data and information, but the author and publisher cannot assume responsibility for the validity of all materials or the consequences of their use. The authors and publishers have attempted to trace the copyright holders of all material reproduced in this publication and apologize to copyright holders if permission to publish in this form has not been obtained. If any copyright material has not been acknowledged please write and let us know so we may rectify in any future reprint.

Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced, transmitted, or utilized in any form by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying, microfilming, and recording, or in any information storage or retrieval system, without written permission from the publishers.

For permission to photocopy or use material electronically from this work, please access www.copyright.com (http://www.copyright.com/) or contact the Copyright Clearance Center, Inc. (CCC), 222 Rosewood Drive, Danvers, MA 01923, 978-750-8400. CCC is a not-for-profit organization that provides licenses and registration for a variety of users. For organizations that have been granted a photocopy license by the CCC, a separate system of payment has been arranged.

Trademark Notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation without intent to infringe.

Visit the Taylor & Francis Web site at http://www.taylorandfrancis.com and the CRC Press Web site at http://www.crcpress.com
Contents

Series Preface .......... xiii
Introduction .......... xv
Editors .......... xix
Contributors .......... xxi
Section I  Anatomy

Chapter 1  Structural Basis of Multisensory Processing: Convergence .......... 3
H. Ruth Clemo, Leslie P. Keniston, and M. Alex Meredith

Chapter 2  Cortical and Thalamic Pathways for Multisensory and Sensorimotor Interplay .......... 15
Céline Cappe, Eric M. Rouiller, and Pascal Barone

Chapter 3  What Can Multisensory Processing Tell Us about the Functional Organization of Auditory Cortex? .......... 31
Jennifer K. Bizley and Andrew J. King
Section II  Neurophysiological Bases

Chapter 4  Are Bimodal Neurons the Same throughout the Brain? .......... 51
M. Alex Meredith, Brian L. Allman, Leslie P. Keniston, and H. Ruth Clemo

Chapter 5  Audiovisual Integration in Nonhuman Primates: A Window into the Anatomy and Physiology of Cognition .......... 65
Yoshinao Kajikawa, Arnaud Falchier, Gabriella Musacchia, Peter Lakatos, and Charles E. Schroeder

Chapter 6  Multisensory Influences on Auditory Processing: Perspectives from fMRI and Electrophysiology .......... 99
Christoph Kayser, Christopher I. Petkov, Ryan Remedios, and Nikos K. Logothetis

Chapter 7  Multisensory Integration through Neural Coherence .......... 115
Andreas K. Engel, Daniel Senkowski, and Till R. Schneider

Chapter 8  The Use of fMRI to Assess Multisensory Integration .......... 131
Thomas W. James and Ryan A. Stevenson
Chapter 9  Perception of Synchrony between the Senses .......... 147
Mirjam Keetels and Jean Vroomen

Chapter 10  Representation of Object Form in Vision and Touch .......... 179
Simon Lacey and Krish Sathian
Section III  Combinatorial Principles and Modeling

Chapter 11  Spatial and Temporal Features of Multisensory Processes: Bridging Animal and Human Studies .......... 191
Diana K. Sarko, Aaron R. Nidiffer, Albert R. Powers III, Dipanwita Ghose, Andrea Hillock-Dunn, Matthew C. Fister, Juliane Krueger, and Mark T. Wallace

Chapter 12  Early Integration and Bayesian Causal Inference in Multisensory Perception .......... 217
Ladan Shams

Chapter 13  Characterization of Multisensory Integration with fMRI: Experimental Design, Statistical Analysis, and Interpretation .......... 233
Uta Noppeney

Chapter 14  Modeling Multisensory Processes in Saccadic Responses: Time-Window-of-Integration Model .......... 253
Adele Diederich and Hans Colonius
Section IV  Development and Plasticity

Chapter 15  The Organization and Plasticity of Multisensory Integration in the Midbrain .......... 279
Thomas J. Perrault Jr., Benjamin A. Rowland, and Barry E. Stein

Chapter 16  Effects of Prolonged Exposure to Audiovisual Stimuli with Fixed Stimulus Onset Asynchrony on Interaction Dynamics between Primary Auditory and Primary Visual Cortex .......... 301
Antje Fillbrandt and Frank W. Ohl

Chapter 17  Development of Multisensory Temporal Perception .......... 325
David J. Lewkowicz

Chapter 18  Multisensory Integration Develops Late in Humans .......... 345
David Burr and Monica Gori
Chapter 19  Phonetic Recalibration in Audiovisual Speech .......... 363
Jean Vroomen and Martijn Baart

Chapter 20  Multisensory Integration and Aging .......... 381
Jennifer L. Mozolic, Christina E. Hugenschmidt, Ann M. Peiffer, and Paul J. Laurienti
Section V  Clinical Manifestations

Chapter 21  Neurophysiological Mechanisms Underlying Plastic Changes and Rehabilitation following Sensory Loss in Blindness and Deafness .......... 395
Ella Striem-Amit, Andreja Bubic, and Amir Amedi

Chapter 22  Visual Abilities in Individuals with Profound Deafness: A Critical Review .......... 423
Francesco Pavani and Davide Bottari

Chapter 23  Peripersonal Space: A Multisensory Interface for Body–Object Interactions .......... 449
Claudio Brozzoli, Tamar R. Makin, Lucilla Cardinali, Nicholas P. Holmes, and Alessandro Farnè

Chapter 24  Multisensory Perception and Bodily Self-Consciousness: From Out-of-Body to Inside-Body Experience .......... 467
Jane E. Aspell, Bigna Lenggenhager, and Olaf Blanke
Section VI  Attention and Spatial Representations

Chapter 25  Spatial Constraints in Multisensory Attention .......... 485
Emiliano Macaluso

Chapter 26  Cross-Modal Spatial Cueing of Attention Influences Visual Perception .......... 509
John J. McDonald, Jessica J. Green, Viola S. Störmer, and Steven A. Hillyard

Chapter 27  The Colavita Visual Dominance Effect .......... 529
Charles Spence, Cesare Parise, and Yi-Chuan Chen

Chapter 28  The Body in a Multisensory World .......... 557
Tobias Heed and Brigitte Röder
Section VII  Naturalistic Multisensory Processes: Motion Signals

Chapter 29  Multisensory Interactions during Motion Perception: From Basic Principles to Media Applications .......... 583
Salvador Soto-Faraco and Aleksander Väljamäe

Chapter 30  Multimodal Integration during Self-Motion in Virtual Reality .......... 603
Jennifer L. Campos and Heinrich H. Bülthoff

Chapter 31  Visual–Vestibular Integration for Self-Motion Perception .......... 629
Gregory C. DeAngelis and Dora E. Angelaki
Section VIII  Naturalistic Multisensory Processes: Communication Signals

Chapter 32  Unity of the Senses for Primate Vocal Communication .......... 653
Asif A. Ghazanfar

Chapter 33  Convergence of Auditory, Visual, and Somatosensory Information in Ventral Prefrontal Cortex .......... 667
Lizabeth M. Romanski

Chapter 34  A Multisensory Perspective on Human Auditory Communication .......... 683
Katharina von Kriegstein
Section IX  Naturalistic Multisensory Processes: Flavor

Chapter 35  Multimodal Chemosensory Interactions and Perception of Flavor .......... 703
John Prescott

Chapter 36  A Proposed Model of a Flavor Modality .......... 717
Dana M. Small and Barry G. Green

Chapter 37  Assessing the Role of Visual and Auditory Cues in Multisensory Perception of Flavor .......... 739
Massimiliano Zampini and Charles Spence
Series Preface

FRONTIERS IN NEUROSCIENCE

The Frontiers in Neuroscience Series presents the insights of experts on emerging experimental technologies and theoretical concepts that are or will be at the vanguard of neuroscience. The books cover new and exciting multidisciplinary areas of brain research and describe breakthroughs in fields such as insect sensory neuroscience, primate audition, and biomedical imaging. The most recent books cover the rapidly evolving fields of multisensory processing and reward. Each book is edited by experts and consists of chapters written by leaders in a particular field. Books are richly illustrated and contain comprehensive bibliographies. Chapters provide substantial background material relevant to the particular subject. The goal is for these books to be the references neuroscientists use in order to acquaint themselves with new methodologies in brain research.

We view our task as series editors to produce outstanding products and to contribute to the field of neuroscience. We hope that, as the volumes become available, the effort put in by us, the publisher, the book editors, and individual authors will contribute to further development of brain research. To the extent that you learn from these books, we will have succeeded.

Sidney A. Simon, PhD
Miguel A.L. Nicolelis, MD, PhD
Introduction

The field of multisensory research continues to grow at a dizzying rate. Although for those of us working in the field this is extraordinarily gratifying, it is also a bit challenging to keep up with all of the exciting new developments in such a multidisciplinary topic at such a burgeoning stage. For those a bit peripheral to the field, but with an inherent interest in the magic of multisensory interactions to shape our view of the world, the task is even more daunting. Our objectives for this book are straightforward—to provide those working within the area a strong overview of the current state of the field, while at the same time providing those a bit outside of the field with a solid introduction to multisensory processes. We feel that the current volume meets these objectives, largely through a choice of topics that span the single cell to the clinic and through the expertise of our authors, each of whom has done an exceptional job explaining their research to an interdisciplinary audience.

The book is organized thematically, with the themes generally building from the more basic to the more applied. Hence, a reader interested in the progression of ideas and approaches can start at the beginning and see how the basic science informs the clinical and more applied sciences by reading each chapter in sequence. Alternatively, one can choose to learn more about a specific theme and delve directly into that section. Regardless of your approach, we hope that this book will serve as an important reference related to your interests in multisensory processes. The following narrative provides a bit of an overview to each of the sections and the chapters contained within them.

Section I (Anatomy) focuses on the essential building blocks for any understanding of the neural substrates of multisensory processing. In Chapter 1, Clemo and colleagues describe how neural convergence and synaptology in multisensory domains might account for the diversity of physiological response properties, and provide elegant examples of structure/function relationships. Chapter 2, from Cappe and colleagues, details the anatomical substrates supporting the growing functional evidence for multisensory interactions in classical areas of unisensory cortex, and highlights the possible thalamic contributions to these processes. In Chapter 3, Bizley and King focus on the unisensory cortical domain that has been best studied for these multisensory influences—auditory cortex. They highlight how visual inputs into the auditory cortex are organized, and detail the possible functional role(s) of these inputs.

Section II, organized around Neurophysiological Bases, provides an overview of how multisensory stimuli can dramatically change the encoding processes for sensory information. Chapter 4, by Meredith and colleagues, addresses whether bimodal neurons throughout the brain share the same integrative characteristics, and shows marked differences in these properties between subcortex and cortex. Chapter 5, from Kajikawa and colleagues, focuses on the nonhuman primate model and bridges what is known about the neural integration of auditory–visual information in monkey cortex with the evidence for changes in multisensory-mediated behavior and perception.
In Chapter 6, Kayser and colleagues also focus on the monkey model, with an emphasis now on auditory cortex and the merging of classical neurophysiological analyses with neuroimaging methods used in human subjects (i.e., functional magnetic resonance imaging (fMRI)). This chapter emphasizes not only early multisensory interactions, but also the transformations that take place as one ascends the processing hierarchy as well as the distributed nature of multisensory encoding. The final four chapters in this section then examine evidence from humans. In Chapter 7, Engel and colleagues present compelling evidence for a role of coherent oscillatory activity in linking unisensory and multisensory brain regions and improving multisensory encoding processes. This is followed by a contribution from James and Stevenson (Chapter 8), which focuses on fMRI measures of multisensory integration and which proposes a new criterion based on inverse effectiveness in evaluating and
interpreting the BOLD signal. Chapter 9, by Keetels and Vroomen, reviews the psychophysical and neuroimaging evidence associated with the perception of the temporal relationships (i.e., synchrony and asynchrony) between multisensory cues. Finally, this section closes with a chapter from Lacey and Sathian (Chapter 10), which reviews our current neuroimaging knowledge concerning the mental representations of objects across vision and touch.

Section III, Combinatorial Principles and Modeling, focuses on efforts to gain a better mechanistic handle on multisensory operations and their network dynamics. In Chapter 11, Sarko and colleagues focus on spatiotemporal analyses of multisensory neurons and networks as well as commonalities across both animal and human model studies. This is followed by a contribution from Shams, who reviews the psychophysical evidence for multisensory interactions and who argues that these processes can be well described by causal inference and Bayesian modeling approaches. In Chapter 13, Noppeney returns to fMRI and illustrates the multiple methods of analyses of fMRI datasets, the interpretational caveats associated with these approaches, and how the combined use of methods can greatly strengthen the conclusions that can be drawn. The final contribution (Chapter 14), from Diederich and Colonius, returns to modeling and describes the time-window-of-integration (TWIN) model, which provides an excellent framework within which to interpret the speeding of saccadic reaction times seen under multisensory conditions.

Section IV encompasses the area of Development and Plasticity. Chapter 15, from Perrault and colleagues, describes the classic model for multisensory neural studies, the superior colliculus, and highlights the developmental events leading up to the mature state. In Chapter 16, Fillbrandt and Ohl explore temporal plasticity in multisensory networks and show changes in the dynamics of interactions between auditory and visual cortices following prolonged exposure to fixed auditory–visual delays. The next two contributions focus on human multisensory development. In Chapter 17, Lewkowicz details the development of multisensory temporal processes, highlighting the increasing sophistication in these processes as infants grow and gain experience with the world. Chapter 18, by Burr and Gori, reviews the neurophysiological, behavioral, and imaging evidence that illustrates the surprisingly late development of human multisensory capabilities, a finding that they posit is a result of the continual need for cross-modal recalibration during development. In Chapter 19, Vroomen and Baart also discuss recalibration, this time in the context of language acquisition. They argue that in the process of phonetic recalibration, the visual system instructs the auditory system to build phonetic boundaries in the presence of ambiguous sound sources. Finally, Chapter 20 focuses on what can be considered the far end of the developmental process—normal aging. Here, Mozolic and colleagues review the intriguing literature suggesting enhanced multisensory processing in aging adults, and highlight a number of possible reasons for these apparent improvements in sensory function.

Section V, Clinical Manifestations, addresses how perception and action are affected by altered sensory experience.
In Chapter 21, Striem-Amit and colleagues focus on sensory loss, placing particular emphasis on plasticity following blindness and on efforts to introduce low-cost sensory substitution devices as rehabilitation tools. The functional imaging evidence they review provides a striking example of training-induced plasticity. In Chapter 22, Pavani and Bottari likewise consider sensory loss, focusing on visual abilities in profoundly deaf individuals. One contention in their chapter is that deafness results in enhanced speed of reactivity to visual stimuli, rather than enhanced visual perceptual abilities. In Chapter 23, Brozzoli and colleagues use the case of visuotactile interactions as an example of how multisensory brain mechanisms can be rendered plastic in terms of both sensory and motor processes. This plasticity is supported by the continuous and active monitoring of peripersonal space, including both one’s own body and the objects in its vicinity. In Chapter 24, Aspell and colleagues address the topic of bodily self-consciousness in both neurological patients and healthy participants, showing how the perception of one’s “self” can be distorted by multisensory conflicts.

Section VI encompasses the topic of Attention and Spatial Representations. A contribution from Macaluso opens this section by reviewing putative neural mechanisms for multisensory links in the
control of spatial attention as revealed by functional neuroimaging in humans. He puts particular emphasis on there likely being multiple functional–anatomic routes for these links, which in turn can provide a degree of flexibility in the manner by which sensory information at a given location is selected and processed. In Chapter 26, McDonald and colleagues follow this with a review of studies showing how nonvisual cues impact the subsequent processing (i.e., sensitivity, perceptual awareness, and subjective experiences) of visual stimuli, demonstrating how such effects can manifest within the first 200 ms of visual processing. Chapter 27, by Spence and colleagues, provides a review of the Colavita visual dominance effect, including the proposition of an account for this effect based on biased competition. Finally, in Chapter 28 Heed and Röder conclude this section with a consideration of how the body schema is established and how an established body schema in turn impacts the manner in which multisensory stimuli are treated.

Section VII focuses on Naturalistic Multisensory Processes in the context of motion signals. In Chapter 29, Soto-Faraco and Väljamäe open this section with a consideration of how motion information conveyed by audition and vision is integrated. First, they address the basic phenomenology and behavioral principles. They then review studies examining the neurophysiologic bases for the integration of multisensory motion signals. Finally, they discuss how laboratory findings can be extended to media applications. In Chapter 30, Campos and Bülthoff address the topic of self-motion perception. They describe and evaluate experimental settings and technologies for studying self-motion, including the empirical findings that these methods and paradigms have produced. The section concludes with a contribution from DeAngelis and Angelaki (Chapter 31), who review their studies of visual–vestibular interactions in the dorsal medial superior temporal area (MSTd) of macaque monkeys. Their review progresses from the characterization of heading-sensitive multisensory neurons, to a mathematical description of the visual–vestibular integration within MSTd neurons, and finally to describing the links between neuronal and behavioral processes.

Section VIII continues the focus on Naturalistic Multisensory Processes, now with a particular concentration on multisensory contributions to the perception and generation of communication signals. In Chapter 32, Ghazanfar challenges Geschwind’s proposition that speech functions in humans are intrinsically linked to the unique ability of humans to form multisensory associations. He reviews the multisensory contributions to communication signals in nonhuman primates as well as the role of auditory cortex in processing such signals. In Chapter 33, Romanski details the auditory, visual, and somatosensory anatomical projections to the ventrolateral prefrontal cortex (VLPFC) as well as neuronal responsiveness within this region with respect to communication signals and object processing. The section closes with Chapter 34, by von Kriegstein, which considers how unisensory auditory communication is impacted by previous multisensory auditory–visual encoding as well as by auditory-driven activity within nominally visual brain regions. One implication is that the processing of auditory communication signals is achieved using not only auditory but also visual brain areas.
The final section, Section IX, Naturalistic Multisensory Processes, concentrates on how the perception of flavor is generated. In a pair of complementary chapters, psychophysical and neural models of flavor perception are reviewed. In Chapter 35, Prescott focuses on psychophysical findings and covers processes ranging from basic sensation through learned olfactory–taste associations, as well as the roles of synthetic versus fused perceptions, attention, and hedonics. Chapter 36, by Small and Green, focuses largely on evidence from functional brain imaging. They propose that a distributed network of regions is responsible for generating the perceived flavors of objects. Finally, in Chapter 37, Zampini and Spence conclude with a review of evidence for the impact of visual and acoustic features on the perception of flavor. They distinguish between preingestive effects of vision, which are more likely linked to expectancy, and effects of audition that coincide with ingestion. In parallel, they discuss how auditory and visual influences can occur without awareness, highlighting the necessity for increased neuroscientific investigation of these processes.

We hope that the reader enjoys this book as much as we have enjoyed assembling it. We have both learned much during this endeavor, and have gained an even deeper fascination and appreciation for
our chosen field of inquiry. We are delighted by the diversity of experimental models, methodological approaches, and conceptual frameworks that are used in the study of multisensory processes, and that are reflected in the current volume. Indeed, in our opinion, the success of our field and its rapid growth are attributable to this highly multidisciplinary philosophy, and bode well for the future of multisensory science.

Micah M. Murray
Lausanne, Switzerland

Mark T. Wallace
Nashville, Tennessee
Editors

Micah M. Murray earned a double BA in psychology and English from The Johns Hopkins University. In 2001, he received his PhD with honors from the Neuroscience Department, Albert Einstein College of Medicine of Yeshiva University. He worked as a postdoctoral scientist in the Neurology Clinic and Rehabilitation Department, University Hospital of Geneva, Switzerland. Since 2003 he has held a position within the Department of Clinical Neurosciences and Department of Radiology at the University Hospital of Lausanne, Switzerland. Currently, he is an associate professor within these departments, adjunct associate professor at Vanderbilt University, as well as associate director of the EEG Brain Mapping Core of the Center for Biomedical Imaging in Lausanne, Switzerland.

Dr. Murray has a continuous record of grant support from the Swiss National Science Foundation. He has received awards for his research from the Leenaards Foundation (2005 Prize for the Promotion of Scientific Research), the faculty of Biology and Medicine at the University of Lausanne (2008 Young Investigator Prize), and from the Swiss National Science Foundation (bonus of excellence in research). His research has been widely covered by the national and international media. He currently holds editorial board positions at Brain Topography (editor-in-chief), Journal of Neuroscience (associate editor), Frontiers in Integrative Neuroscience (associate editor), Frontiers in Auditory Cognitive Neuroscience (associate editor), and the Scientific World Journal. Dr. Murray has authored more than 80 articles and book chapters. His group’s research primarily focuses on multisensory interactions, object recognition, learning and plasticity, electroencephalogram-correlated functional MRI (EEG/fMRI) methodological developments, and systems/cognitive neuroscience in general. Research in his group combines psychophysics, EEG, fMRI, and transcranial magnetic stimulation in healthy and clinical populations.

Mark T. Wallace received his BS in biology from Temple University in 1985, and his PhD in neuroscience from Temple University in 1990, where he was the recipient of the Russell Conwell Presidential Fellowship. He did a postdoctoral fellowship with Dr. Barry Stein at the Medical College of Virginia, where he began his research looking at the neural mechanisms of multisensory integration. Dr. Wallace moved to the Wake Forest University School of Medicine in 1995. In 2006, Dr. Wallace came to Vanderbilt University, and was named the director of the Vanderbilt Brain Institute in 2008. He is professor of hearing and speech sciences, psychology, and psychiatry, and the associate director of the Vanderbilt Silvio O. Conte Center for Basic Neuroscience Research. He is a member of the Center for Integrative and Cognitive Neuroscience, the Center for Molecular Neuroscience, the Vanderbilt Kennedy Center, and the Vanderbilt Vision Research Center.

Dr. Wallace has received a number of awards for both research and teaching, including the Faculty Excellence Award of Wake Forest University and being named the Outstanding Young Investigator in the Basic Sciences. Dr. Wallace has an established record of research funding from the National Institutes of Health, and is the author of more than 125 research presentations and publications. He currently serves on the editorial board of several journals including Brain Topography, Cognitive Processes, and Frontiers in Integrative Neuroscience.
His work has employed a multidisciplinary approach to examining multisensory processing, and focuses upon the neural architecture of multisensory integration, its development, and its role in guiding human perception and performance.
Contributors

Brian L. Allman Department of Anatomy and Neurobiology Virginia Commonwealth University School of Medicine Richmond, Virginia
Amir Amedi Department of Medical Neurobiology, Institute for Medical Research Israel–Canada Hebrew University–Hadassah Medical School Jerusalem, Israel
Dora E. Angelaki Department of Anatomy and Neurobiology Washington University School of Medicine St. Louis, Missouri
Jane E. Aspell Laboratory of Cognitive Neuroscience Ecole Polytechnique Fédérale de Lausanne Lausanne, Switzerland
Martijn Baart Department of Medical Psychology and Neuropsychology Tilburg University Tilburg, the Netherlands
Pascal Barone Centre de Recherche Cerveau et Cognition (UMR 5549) CNRS, Faculté de Médecine de Rangueil Université Paul Sabatier Toulouse 3 Toulouse, France
Jennifer K. Bizley Department of Physiology, Anatomy, and Genetics University of Oxford Oxford, United Kingdom
Olaf Blanke Laboratory of Cognitive Neuroscience Ecole Polytechnique Fédérale de Lausanne Lausanne, Switzerland
Davide Bottari Center for Mind/Brain Sciences University of Trento Rovereto, Italy
Claudio Brozzoli Institut National de la Santé et de la Recherche Médicale Bron, France
Andreja Bubic Department of Medical Neurobiology, Institute for Medical Research Israel–Canada Hebrew University–Hadassah Medical School Jerusalem, Israel
Heinrich H. Bülthoff Department of Human Perception, Cognition, and Action Max Planck Institute for Biological Cybernetics Tübingen, Germany
David Burr Dipartimento di Psicologia Università Degli Studi di Firenze Florence, Italy
Jennifer L. Campos Department of Psychology Toronto Rehabilitation Institute University of Toronto Toronto, Ontario, Canada
Céline Cappe Laboratory of Psychophysics Ecole Polytechnique Fédérale de Lausanne Lausanne, Switzerland
Lucilla Cardinali Institut National de la Santé et de la Recherche Médicale Bron, France
Yi-Chuan Chen Crossmodal Research Laboratory Department of Experimental Psychology University of Oxford Oxford, United Kingdom
H. Ruth Clemo Department of Anatomy and Neurobiology Virginia Commonwealth University School of Medicine Richmond, Virginia
Hans Colonius Department of Psychology Oldenburg University Oldenburg, Germany
Gregory C. DeAngelis Department of Brain and Cognitive Sciences Center for Visual Science University of Rochester Rochester, New York
Adele Diederich School of Humanities and Social Sciences Jacobs University Bremen, Germany
Andreas K. Engel Department of Neurophysiology and Pathophysiology University Medical Center Hamburg–Eppendorf Hamburg, Germany
Arnaud Falchier Nathan S. Kline Institute for Psychiatric Research Orangeburg, New York
Alessandro Farnè Institut National de la Santé et de la Recherche Médicale Bron, France
Antje Fillbrandt Leibniz Institute for Neurobiology Magdeburg, Germany
Matthew C. Fister Vanderbilt Kennedy Center Vanderbilt University Nashville, Tennessee
Asif A. Ghazanfar Departments of Psychology and Ecology and Evolutionary Biology Neuroscience Institute Princeton University Princeton, New Jersey
Dipanwita Ghose Department of Psychology Vanderbilt University Nashville, Tennessee
Monica Gori Department of Robotics Brain and Cognitive Science Italian Institute of Technology Genoa, Italy
Barry G. Green The John B. Pierce Laboratory and Yale University New Haven, Connecticut
Jessica J. Green Duke University Durham, North Carolina
Tobias Heed Biological Psychology and Neuropsychology University of Hamburg Hamburg, Germany
Andrea Hillock-Dunn Department of Hearing and Speech Sciences Vanderbilt University Nashville, Tennessee
Steven A. Hillyard University of California San Diego San Diego, California
Nicholas P. Holmes Institut National de la Santé et de la Recherche Médicale Bron, France
Simon Lacey Department of Neurology Emory University Atlanta, Georgia
Christina E. Hugenschmidt Center for Diabetes Research Wake Forest University School of Medicine Winston-Salem, North Carolina
Peter Lakatos Nathan S. Kline Institute for Psychiatric Research Orangeburg, New York
Thomas W. James Department of Psychological and Brain Sciences Indiana University Bloomington, Indiana
Paul J. Laurienti Department of Radiology Wake Forest University School of Medicine Winston-Salem, North Carolina
Yoshinao Kajikawa Nathan S. Kline Institute for Psychiatric Research Orangeburg, New York
Christoph Kayser Max Planck Institute for Biological Cybernetics Tübingen, Germany
Mirjam Keetels Department of Medical Psychology and Neuropsychology Tilburg University Tilburg, The Netherlands
Leslie P. Keniston Department of Anatomy and Neurobiology Virginia Commonwealth University School of Medicine Richmond, Virginia
Andrew J. King Department of Physiology, Anatomy and Genetics University of Oxford Oxford, United Kingdom
Katharina von Kriegstein Max Planck Institute for Human Cognitive and Brain Sciences Leipzig, Germany
Juliane Krueger Neuroscience Graduate Program Vanderbilt University Nashville, Tennessee
Bigna Lenggenhager Laboratory of Cognitive Neuroscience Ecole Polytechnique Fédérale de Lausanne Lausanne, Switzerland
David J. Lewkowicz Department of Psychology Florida Atlantic University Boca Raton, Florida
Nikos K. Logothetis Max Planck Institute for Biological Cybernetics Tübingen, Germany
Emiliano Macaluso Neuroimaging Laboratory Santa Lucia Foundation Rome, Italy
Tamar R. Makin Institut National de la Santé et de la Recherche Médicale Bron, France
John J. McDonald Simon Fraser University Burnaby, British Columbia, Canada
M. Alex Meredith Department of Anatomy and Neurobiology Virginia Commonwealth University School of Medicine Richmond, Virginia
Jennifer L. Mozolic Department of Psychology Warren Wilson College Asheville, North Carolina
Albert R. Powers III Neuroscience Graduate Program Vanderbilt University Nashville, Tennessee
Gabriella Musacchia Nathan S. Kline Institute for Psychiatric Research Orangeburg, New York
John Prescott School of Psychology University of Newcastle Ourimbah, Australia
Aaron R. Nidiffer Department of Hearing and Speech Sciences Vanderbilt University Nashville, Tennessee
Uta Noppeney Max Planck Institute for Biological Cybernetics Tübingen, Germany
Frank W. Ohl Leibniz Institute for Neurobiology Magdeburg, Germany
Cesare Parise Department of Experimental Psychology Crossmodal Research Laboratory University of Oxford Oxford, United Kingdom
Francesco Pavani Department of Cognitive Sciences and Education Center for Mind/Brain Sciences University of Trento Rovereto, Italy
Ann M. Peiffer Department of Radiology Wake Forest University School of Medicine Winston-Salem, North Carolina
Thomas J. Perrault Jr. Department of Neurobiology and Anatomy Wake Forest School of Medicine Winston-Salem, North Carolina
Christopher I. Petkov Institute of Neuroscience University of Newcastle Newcastle upon Tyne, United Kingdom
Ryan Remedios Max Planck Institute for Biological Cybernetics Tübingen, Germany
Brigitte Röder Biological Psychology and Neuropsychology University of Hamburg Hamburg, Germany
Lizabeth M. Romanski Department of Neurobiology and Anatomy University of Rochester Rochester, New York
Eric M. Rouiller Unit of Physiology and Program in Neurosciences Department of Medicine, Faculty of Sciences University of Fribourg Fribourg, Switzerland
Benjamin A. Rowland Department of Neurobiology and Anatomy Wake Forest School of Medicine Winston-Salem, North Carolina
Diana K. Sarko Department of Hearing and Speech Sciences Vanderbilt University Nashville, Tennessee
Krish Sathian Department of Neurology Emory University Atlanta, Georgia
Till R. Schneider Department of Neurophysiology and Pathophysiology University Medical Center Hamburg–Eppendorf Hamburg, Germany
Charles E. Schroeder Nathan S. Kline Institute for Psychiatric Research Orangeburg, New York
Daniel Senkowski Department of Neurophysiology and Pathophysiology University Medical Center Hamburg–Eppendorf Hamburg, Germany
Ladan Shams Department of Psychology University of California, Los Angeles Los Angeles, California
Dana M. Small The John B. Pierce Laboratory and Department of Psychiatry Yale University School of Medicine New Haven, Connecticut
Salvador Soto-Faraco Departament de Tecnologies de la Informació i les Comunicacions Institució Catalana de Recerca i Estudis Avançats Universitat Pompeu Fabra Barcelona, Spain
Charles Spence Department of Experimental Psychology Crossmodal Research Laboratory University of Oxford Oxford, United Kingdom
Barry E. Stein Department of Neurobiology and Anatomy Wake Forest School of Medicine Winston-Salem, North Carolina
Ryan A. Stevenson Department of Psychological and Brain Sciences Indiana University Bloomington, Indiana
Viola S. Störmer Max Planck Institute of Human Development Berlin, Germany
Ella Striem-Amit Department of Medical Neurobiology, Institute for Medical Research Israel–Canada Hebrew University–Hadassah Medical School Jerusalem, Israel
Aleksander Väljamäe Institute of Audiovisual Studies Universitat Pompeu Fabra Barcelona, Spain
Jean Vroomen Department of Medical Psychology and Neuropsychology Tilburg University Tilburg, The Netherlands
Mark T. Wallace Vanderbilt Brain Institute Vanderbilt University Nashville, Tennessee
Massimiliano Zampini Centre for Mind/Brain Sciences University of Trento Rovereto, Italy
Section I Anatomy
1  Structural Basis of Multisensory Processing: Convergence

H. Ruth Clemo, Leslie P. Keniston, and M. Alex Meredith
CONTENTS
1.1 Introduction .......... 3
1.2 Multiple Sensory Projections: Sources .......... 3
    1.2.1 Multiple Sensory Projections: Termination Patterns .......... 6
    1.2.2 Supragranular Termination of Cross-Modal Projections .......... 7
1.3 Do All Cross-Modal Projections Generate Multisensory Integration? .......... 9
1.4 Synaptic Architecture of Multisensory Convergence .......... 10
1.5 Summary and Conclusions .......... 11
Acknowledgments .......... 12
References .......... 12
1.1 INTRODUCTION

For multisensory processing, the requisite, defining step is the convergence of inputs from different sensory modalities onto individual neurons. This arrangement allows postsynaptic currents evoked by different modalities access to the same membrane, to collide and integrate there on the common ground of an excitable bilayer. Naturally, one would expect a host of biophysical and architectural features to play a role in shaping those postsynaptic events as they spread across the membrane, but much more can be written about what is unknown of the structural basis for multisensory integration than of what is known. Historically, however, anatomical investigations of multisensory processing have focused primarily on identifying the sources of inputs that converge in multisensory regions. Although a few recent studies have begun to assess the features of convergence (see below), most of what is known about the structural basis of multisensory processing lies in the sources and pathways essentially before convergence.
1.2 MULTIPLE SENSORY PROJECTIONS: SOURCES

Multisensory processing is defined as the influence of one sensory modality on activity generated by another modality. However, for most of its history, the term “multisensory” had been synonymous with the term “bimodal” (describing a neuron that can be activated by the independent presentation of stimuli from more than one modality). Hence, studies of multisensory connections first identified areas that were bimodal, either as individual neurons (Horn and Hill 1966) or as areal responses to different sensory stimuli (e.g., Toldi et al. 1984). Not surprisingly, the bimodal (and trimodal) areas of the superior temporal sulcus (STS) in monkeys (e.g., Benevento et al. 1977; Bruce et al. 1981; Hikosaka et al. 1988) were readily identified. Among the first comprehensive
assessments of multisensory pathways were those that injected tracers into the STS and identified the different cortical sources of inputs to that region. With tracer injections into the upper “polysensory” STS bank, retrogradely labeled neurons were identified in adjoining auditory areas of the STS, superior temporal gyrus, and supratemporal plane, and in visual areas of the inferior parietal lobule and the lateral intraparietal sulcus, with a somewhat more restricted projection from the parahippocampal gyrus and the inferotemporal visual area, as illustrated in Figure 1.1 (Seltzer and Pandya 1994; Saleem et al. 2000). Although inconclusive about potential somatosensory inputs to the STS, this study did mention the presence of retrogradely labeled neurons in the inferior parietal lobule, an area that processes both visual and somatosensory information (e.g., Seltzer and Pandya 1980). Like the STS, the feline anterior ectosylvian sulcus (AES) is located at the intersection of the temporal, parietal, and frontal lobes, contains multisensory neurons (e.g., Rauschecker and Korte 1993; Wallace et al. 1992; Jiang et al. 1994), and exhibits a higher-order visual area within its lower (ventral) bank (Mucke et al. 1982; Olson and Graybiel 1987). This has led to some speculation that these regions might be homologous. However, a fourth somatosensory area (SIV) representation (Clemo and Stein 1983) is found anterior along the AES, whereas somatosensory neurons are predominantly found in the posterior STS (Seltzer and Pandya 1994). The AES also contains distinct modality-specific regions (somatosensory SIV, visual AEV, and auditory FAES) with multisensory neurons found primarily at the intersection between these different representations (Meredith 2004; Wallace et al. 2004; Carriere et al. 2007; Meredith and Allman 2009), whereas the subdivisions of the upper STS bank are largely characterized by multisensory neurons (e.g., Benevento et al. 1977; Bruce et al. 1981; Hikosaka et al. 1988). Further distinctions between the STS and the AES reside in the cortical connectivity of the latter, as depicted in Figure 1.2. Robust somatosensory inputs reach the AES from somatosensory areas SI–SIII (Burton and Kopf 1984; Reinoso-Suarez and Roda 1985) and SV (Mori et al. 1996; Clemo and Meredith 2004); inputs to AEV arrive from the extrastriate visual area posterolateral lateral suprasylvian (PLLS), with smaller contributions from the anterolateral lateral suprasylvian (ALLS) and the posteromedial lateral suprasylvian (PMLS) visual areas (Olson and Graybiel 1987); auditory inputs to the FAES project from the rostral suprasylvian sulcus (RSS), second auditory area (AII), and posterior auditory field (PAF) (Clemo et al. 2007; Lee and Winer 2008). The laminar origin of these projections is provided in only a few of these reports.
FIGURE 1.1 Cortical afferents to monkey STS. On this lateral view of monkey brain, the entire extent of STS is opened (dashed lines) to reveal upper and lower banks. On upper bank, multisensory regions TP0–4 are located (not depicted). Auditory inputs (black arrows) from adjoining superior temporal gyrus, planum temporale, preferentially target anterior portions of upper bank. Visual inputs, primarily from parahippocampal gyrus (medium gray arrow) but also from inferior parietal lobule (light gray arrow), also target upper STS bank. Somatosensory inputs were comparatively sparse, limited to posterior aspects of STS, and may arise from part of inferior parietal lobule (light gray arrow). Note that inputs intermingle within their areas of termination.
FIGURE 1.2 Cortical afferents to cat AES. On this lateral view of cat cortex, the AES is opened (dashed lines) to reveal dorsal and ventral banks. The somatosensory representation SIV on the anterior dorsal bank receives inputs (light gray arrow) from somatosensory areas SI, SII, SII and SV. The auditory field of the AES (FAES) in the posterior end of the sulcus receives inputs (black arrows) primarily from the rostral suprasylvian auditory field, and sulcal portion of the anterior auditory field as well as portions of dorsal zone of the auditory cortex, AII, and PAF. The ectosylvian visual (AEV) area in the ventral bank receives visual inputs (dark gray arrow) primarily from PLLS and, to a lesser extent, from adjacent ALLS and PMLS visual areas. Note that the SIV, FAES, and AEV domains, as well as their inputs, are largely segregated from one another.
The AES is not alone as a cortical site of convergence of inputs from representations of different sensory modalities, as the posterior ectosylvian gyrus (an auditory–visual area; Bowman and Olson 1988), PLLS visual area (an auditory–visual area; Yaka et al. 2002; Allman and Meredith 2007), and the rostral suprasylvian sulcus (an auditory–somatosensory area; Clemo et al. 2007) have had their multiple sensory sources examined.

Perhaps the most functionally and anatomically studied multisensory structure is not in the cortex but in the midbrain. This six-layered region contains spatiotopic representations of visual, auditory, and somatosensory modalities within its intermediate and deep layers (for review, see Stein and Meredith 1993). Although unisensory, bimodal, and trimodal neurons are intermingled with one another in this region, the multisensory neurons predominate (63%; Wallace and Stein 1997). Despite their numbers, structure–function relationships have been determined for only a few multisensory neurons. The largest, often most readily identifiable on cross section (or via recording), are the tectospinal and tectoreticulospinal neurons, with somata averaging 35 to 40 µm in diameter, whose dendritic arbors can extend up to 1.4 mm (Moschovakis and Karabelas 1985; Behan et al. 1988). These large multipolar neurons have a high incidence of multisensory properties, usually as visual–auditory or visual–somatosensory bimodal neurons (Meredith and Stein 1986). Another form of morphologically distinct superior colliculus (SC) neuron also shows multisensory properties: the nitric oxide synthase (NOS)-positive interneuron. These excitatory local circuit neurons have been shown to receive bimodal inputs largely from the visual and auditory modalities (Fuentes-Santamaria et al. 2008). Thus, unlike most other structures identified as multisensory, the SC contains morphological classes of neurons that highly correlate with multisensory activity. Ultimately, this could contribute to understanding how multisensory circuits are formed and their relation to particular features of multisensory processing.

Because the SC is a multisensory structure, anatomical tracers injected into it have identified numerous cortical and subcortical areas representing different sensory modalities that supply its inputs. However, identification of the sources of multiple sensory inputs to this, or any, area provides little more than anatomical confirmation that projections from different sensory modalities were involved. More pertinent is the information relating to the other end of the projection, the
axon terminals, whose influence is responsible for the generation of multisensory effects on the postsynaptic membrane. Despite the fact that axon terminals are at the physical point of multisensory convergence, few studies of multisensory regions outside of the SC have addressed this specific issue.
1.2.1 Multiple Sensory Projections: Termination Patterns
Unlike much of the multisensory cortex, the pattern of terminal projections to the SC is well described, largely through the efforts of Harting's group (Harting and Van Lieshout 1991; Harting et al. 1992, 1997). Historically, this work represented a conceptual leap from the identification of multisensory sources to the convergent arrangement of those inputs that potentially generate multisensory effects. These and other orthograde studies (e.g., Illing and Graybiel 1986) identified a characteristic, patchy arrangement of input terminals that occupied specific domains within the SC. Somatosensory inputs, whether from the somatosensory cortex or the trigeminal nucleus, terminated in an interrupted series of puffs across the mediolateral extent of the middle portion of the intermediate layers (Harting and Van Lieshout 1991; Harting et al. 1992, 1997). On the other hand, visual inputs from, for example, the AEV, avoided the central aspects of the intermediate layers while occupying patches above and below. These relationships among the distributions of axon terminals from different sensory modalities are depicted in Figure 1.3. This patchy, discontinuous pattern of termination characterized most projections to the deeper SC layers and was so consistent that some investigators came to regard these patches as a device by which the different inputs were compartmentalized within individually distinct functional domains (Illing and Graybiel 1986). Although this interpretation has some validity, it is also true (as mentioned above) that some of the multisensory neurons exhibit dendritic arbors of up to 1.4 mm. With this extensive branching pattern (as illustrated in Figure 1.3), it would be difficult for a neuron to avoid contacting the domains of different sensory inputs to the SC. In fact, it would appear that a multisensory tectoreticulospinal neuron would likely sample repeated input domains from several modalities, and it is difficult to imagine why there are not more SC trimodal neurons (9%; Wallace and Stein 1997). Ultimately,
FIGURE 1.3 Sensory segregation and multisensory convergence in SC. This coronal section through cat SC shows alternating cellular and fibrous layers (SO, stratum opticum; SGI, stratum griseum intermediale). Terminal boutons form a discontinuous, patchy distribution across multisensory layers, with somatosensory (dark gray, from SIV) and visual (light gray, from AEV) inputs largely occupying distinct, nonoverlapping domains. (Redrawn from Harting, J.K. et al., J. Comp. Neurol., 324, 379–414, 1992.) A tectoreticulospinal neuron (redrawn from Behan, M. et al., J. Comp. Neurol., 270, 171–184, 1988) is shown, to scale, repeating across the intermediate layer, where its dendritic arbor virtually cannot avoid contacting multiple input domains from different modalities. Accordingly, tectoreticulospinal neurons are known for their multisensory properties.
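As a purely illustrative aside (not part of the studies cited above), the short Python sketch below makes the geometric point explicit: given the 1.4 mm dendritic extent quoted in the text, an arbor placed almost anywhere along the mediolateral axis would intersect terminal patches of more than one modality. The patch width, spacing, and axis length used here are invented placeholders, not measurements from Harting et al. (1992) or Illing and Graybiel (1986).

```python
"""Toy estimate of how often a 1.4 mm dendritic arbor would span input
patches from two modalities in the intermediate SC layers. Only the arbor
extent comes from the chapter; all other dimensions are assumptions."""
import random

ARBOR_MM = 1.4    # dendritic arbor extent cited in the text
PATCH_MM = 0.3    # assumed width of a terminal patch
PERIOD_MM = 0.6   # assumed center-to-center spacing of patches
AXIS_MM = 6.0     # assumed mediolateral extent sampled

def patch_centers(offset_mm):
    """Centers of one modality's patches along the axis, starting at offset."""
    centers, c = [], offset_mm
    while c < AXIS_MM:
        centers.append(c)
        c += PERIOD_MM
    return centers

# Somatosensory and visual patches are assumed to interdigitate.
somato = patch_centers(0.0)
visual = patch_centers(PERIOD_MM / 2)

def overlaps(arbor_center, centers):
    """True if the arbor interval touches any patch interval."""
    lo, hi = arbor_center - ARBOR_MM / 2, arbor_center + ARBOR_MM / 2
    return any(lo <= c + PATCH_MM / 2 and hi >= c - PATCH_MM / 2 for c in centers)

random.seed(1)
trials = 10_000
both = sum(
    overlaps(x, somato) and overlaps(x, visual)
    for x in (random.uniform(ARBOR_MM / 2, AXIS_MM - ARBOR_MM / 2) for _ in range(trials))
)
print(f"Arbors contacting patches of both modalities: {100 * both / trials:.1f}%")
```

With these assumed dimensions, essentially every placement of the arbor contacts patches of both modalities, which is the intuition behind the argument in the text.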
FIGURE 1.4 Supragranular cross-modal projections from auditory FAES (black injection site) to somatosensory SIV. Coronal sections through SIV correspond to the levels shown on the lateral diagram of cat cortex; the location of the AES is indicated by an arrow. On each coronal section, the SIV region is denoted by dashed lines roughly perpendicular to the pial surface, and the location of layer IV (granular layer) is indicated by a dashed line essentially parallel to the gray–white border. Each dot is equivalent to one bouton labeled from FAES; note that a preponderance of labeled axon terminals is found in the supragranular layers. (Redrawn from Dehner, L.R. et al., Cereb. Cortex, 14, 387–403, 2004.)
these different input patterns suggest a complex spatial relationship with the recipient neurons and may provide a useful testing ground on which to determine the synaptic architecture underlying multisensory processing. With regard to cortical multisensory areas, only relatively recent studies have examined the termination patterns of multiple sensory projections (e.g., projections from auditory and visual sources to a target area) or cross-modal projections (e.g., projections from an auditory source to a visual target area). It had been observed that tracer injections into the anterior dorsal bank of the AES, where the somatosensory area SIV is located, produced retrograde labeling in the posterior aspects of the AES, where the auditory field of the AES (FAES) is found (Reinoso-Suarez and Roda 1985). This potential cross-modal projection was further examined by Dehner et al. (2004), who injected tracers in auditory FAES and identified orthograde projection terminals in SIV (see Figure 1.4). These experiments were repeated with the tracer systematically placed in different portions of the FAES, showing the constancy of the projection's preference for terminating in the upper, supragranular layers of SIV (Dehner et al. 2004). Functionally, such a cross-modal projection between auditory and somatosensory areas would be expected to generate bimodal auditory–somatosensory neurons. However, such bimodal neurons have rarely been observed in SIV (Clemo and Stein 1983; Rauschecker and Korte 1993; Dehner et al. 2004), and stimulation of FAES (through indwelling electrodes) failed to elicit a single example of orthodromic activation via this cross-modal pathway (Dehner et al. 2004). Eventually, single- and combined-modality stimulation revealed that somatosensory SIV neurons received subthreshold influences from auditory inputs, which was described as a "new" form of multisensory convergence distinct from the well-known bimodal patterns identified in the SC and elsewhere (Dehner et al. 2004). These functional distinctions are depicted in Figure 1.5, where hypothetical circuits that produce different multisensory effects are illustrated. Ultimately, these experiments (Dehner et al. 2004) indicate that bimodal neurons are not the only form of multisensory neuron.
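The distinction between bimodal, subthreshold, and unisensory convergence described above (and drawn schematically in Figure 1.5) can be captured in a minimal threshold-summation sketch. The Python example below is only a conceptual illustration; the synaptic weights and the spiking threshold are arbitrary values chosen to reproduce the qualitative response patterns, not parameters estimated from the recordings of Dehner et al. (2004).

```python
"""Minimal threshold-summation sketch of the three convergence patterns
described in the text (bimodal, subthreshold, unisensory). Weights and
threshold are arbitrary illustrative values, not fitted to data."""

THRESHOLD = 1.0  # spiking threshold (arbitrary units)

# Net synaptic drive from modality A and modality B for each pattern.
PATTERNS = {
    "bimodal":      {"A": 1.2, "B": 1.1},   # both inputs suprathreshold
    "subthreshold": {"A": 1.2, "B": 0.4},   # B alone too weak to drive spikes
    "unisensory":   {"A": 1.2, "B": 0.0},   # no input from B at all
}

def response(drive):
    """Spike output for a given total drive (0 if below threshold)."""
    return max(0.0, drive - THRESHOLD)

for name, w in PATTERNS.items():
    r_a = response(w["A"])
    r_b = response(w["B"])
    r_ab = response(w["A"] + w["B"])
    facilitated = r_ab > r_a and r_b == 0.0
    print(f"{name:>12}: A={r_a:.2f}  B={r_b:.2f}  A+B={r_ab:.2f}"
          f"  (B alone silent but facilitates A: {facilitated})")
```

Under this toy arrangement, only the subthreshold case yields a neuron that is silent to "B" alone yet responds more strongly to the combined "A + B" stimulus, mirroring the functional signature reported for SIV.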
1.2.2 Supragranular Termination of Cross-Modal Projections
The possibility that cross-modal projections underlying subthreshold multisensory processing might be generalizable to brain regions other than the SIV was examined in several subsequent investigations. Somatosensory area SIV was found to exhibit a reciprocal cross-modal projection to auditory FAES, where subthreshold somatosensory effects were observed in approximately 25%
FIGURE 1.5 Different patterns of sensory convergence result in different forms of processing. In each panel, neuron (gray) receives inputs (black) from sensory modalities “A” and/or “B.” In bimodal condition (left), neuron receives multiple inputs from both modalities, such that it can be activated by stimulus “A” alone or by stimulus “B” alone. Furthermore, when both “A + B” are stimulated together, inputs converge on the same neuron and their responses integrate. In subthreshold condition (center), neuron still receives inputs from both modalities, but inputs from modality “B” are so reduced and occur at low-priority locations that stimulation of “B” alone fails to activate the neuron. However, when “B” is combined with “A,” activity is modulated (facilitation or suppression). In contrast, unisensory neurons (right) receive inputs from only a single modality “A” and stimulation of “B” has no effect alone or in combination with “A.”
of the samples (Meredith et al. 2006). These projections also showed a preference for supragranular termination, as illustrated in Figure 1.6. In another study (Clemo et al. 2008), several auditory corticocortical projections were demonstrated to terminate in the visual PLLS area, but only those projections from FAES were present within the entire extent of the PLLS corresponding with the distribution of subthreshold multisensory neurons (Allman and Meredith 2007). These projections from FAES to PLLS showed an overwhelming preference for termination in the supragranular
[Figure 1.6 panels: (a) boutons in RSS labeled from AI, AAF, AII, SIV, SV, PLLS, PAF, PMLS, FAES, and AEV; (b) boutons in PLLS labeled from A1, sAAF, PAF, and FAES; (c) boutons in FAES labeled from SIV and RSS.]
FIGURE 1.6 Corticocortical projections to multisensory areas preferentially terminate in supragranular layers. In “A,” all panels represent coronal sections through RSS with layer IV approximated by dashed line. For each area injected (e.g., AI, SIV, AEV, etc.), each dot represents one labeled axon terminal (bouton). (Redrawn from Clemo, H.R. et al., J. Comp. Neurol., 503, 110–127, 2007; Clemo, H.R. et al., Exp. Brain Res., 191, 37–47, 2008; Meredith, M.A. et al., Exp. Brain Res., 172:472–484, 2006.)
layers (see Figure 1.6). Thus, it might seem that cross-modal projections that have supragranular terminations underlie a specific form of multisensory processing. However, in the auditory field of the rostral suprasylvian sulcus (which is part of the rostral suprasylvian sulcal cortex; Clemo et al. 2007), projections from somatosensory area SIV have a similar supragranular distribution, but both subthreshold and bimodal forms of multisensory neurons are present. Therefore, it is not conclusive that the supragranular projections and subthreshold multisensory processing correlate. It is clear, however, that cross-modal corticocortical projections are strongly characterized by supragranular patterns of termination.
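For readers who wish to see how a supragranular preference of the kind shown in Figure 1.6 might be quantified, the sketch below bins labeled boutons by normalized cortical depth relative to layer IV. The depth values and laminar boundaries are hypothetical examples, not data from Clemo et al. (2007, 2008) or Meredith et al. (2006).

```python
"""Sketch of quantifying the laminar preference of labeled boutons, in the
spirit of the bouton plots in Figure 1.6. Bouton depths and the layer IV
boundaries are invented example values (fraction of cortical depth,
0 = pial surface, 1 = white matter), not data from the cited studies."""

# Assumed laminar boundaries: layer IV occupies ~0.50-0.65 of cortical depth.
LAYER4_TOP, LAYER4_BOTTOM = 0.50, 0.65

# Hypothetical normalized depths of labeled boutons in the target area.
bouton_depths = [0.12, 0.18, 0.22, 0.25, 0.31, 0.33, 0.38, 0.41, 0.44,
                 0.47, 0.55, 0.58, 0.70, 0.82]

def laminar_bin(depth):
    if depth < LAYER4_TOP:
        return "supragranular"
    if depth <= LAYER4_BOTTOM:
        return "granular"
    return "infragranular"

counts = {"supragranular": 0, "granular": 0, "infragranular": 0}
for d in bouton_depths:
    counts[laminar_bin(d)] += 1

total = len(bouton_depths)
for layer, n in counts.items():
    print(f"{layer:>13}: {n:2d} boutons ({100 * n / total:.0f}%)")
```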
1.3 DO ALL CROSS-MODAL PROJECTIONS GENERATE MULTISENSORY INTEGRATION?
Some of the cross-modal projections illustrated in the previous section would be described as modest, at best, in their density of termination in the target region. In fact, it has been suggested that this comparative reduction in projection strength may be one feature of convergence that underlies subthreshold multisensory effects (Allman et al. 2009). Other reports of cortical cross-modal projections, specifically those between the auditory and visual cortex in monkeys (Falchier et al. 2002; Rockland and Ojima 2003), have also been characterized by the same sparseness of projection. Nevertheless, in these cases, it seems to be broadly accepted that such sparse projections would not only underlie overt auditory activity in the visual cortex, but would lead to multisensory integration there as well (Falchier et al. 2002). Data from unanesthetized, paralyzed animals have been cited in support of such interpretations (Murata et al. 1965; Bental et al. 1968; Spinelli et al. 1968; Morrell 1972; Fishman and Michael 1973), but it has been argued that these results are inconsistent with auditory sensory activity (Allman et al. 2008). Thus, although the functional effects of such sparse cross-modal projections are under dispute, the presence of these projections among the repertoire of corticocortical connections now seems well established. Therefore, a recent study (Allman et al. 2008) was initiated to examine the functional effects of a modest cross-modal projection from auditory to visual cortices in ferrets. Tracer injections centered on A1 of ferret cortex were shown to label terminal projections in the supragranular layers of visual area 21. However, single-unit recordings were unable to identify the result of that cross-modal convergence in area 21: no bimodal neurons were observed. Furthermore, tests to reveal subthreshold multisensory influences were also unsuccessful. Ultimately, only when local inhibition was pharmacologically blocked (via iontophoresis of bicuculline methiodide, an antagonist of the gamma-aminobutyric acid type A (GABA-A) receptor) was there a statistically significant indication of cross-modal influence on visual processing. These results support the notion that multisensory convergence does lead to multisensory processing effects, but those effects may be subtle and manifest themselves in nontraditional forms (e.g., nonbimodal; Allman et al. 2008). In fact, this interpretation is consistent with the results of a recent study of the effects of auditory stimulation on visual processing in V1 of awake, behaving monkeys (Wang et al. 2008): no bimodal neurons were observed, but responses to visual–auditory stimuli were significantly shorter in latency when compared with those elicited by visual stimuli alone. From another perspective, these data provide additional support to the notion that multisensory convergence is not restricted to bimodal neurons. The well-known pattern of convergence underlying bimodal neurons has already been modified, as shown in Figure 1.5, to include subthreshold multisensory neurons whose functional behavior might be defined by an imbalance of inputs from the two different modalities. When considering the result of multisensory convergence in area 21, it is not much of a design modification to reduce those subthreshold inputs even further, such that they might be effective only under specific contexts or conditions.
Moreover, reducing the second set of inputs further toward zero essentially converts a multisensory circuit (albeit a weak one) into a unisensory circuit. Thus, it seems logical to propose that patterns of connectivity that produce multisensory properties span a continuum from, at one end, the profuse levels of inputs from different modalities
FIGURE 1.7 Patterns of sensory convergence (black; from modality "A" or "B") onto individual neurons (gray) result in different forms of processing (similar to Figure 1.5). The synaptic arrangement depicted in the middle panels is adjusted such that inputs from modality "B" are light (left center) or very sparse (right center), suggesting a slight difference in the effect of modality "B" on responses elicited by "A." In addition, because each of these effects results from simple yet systematic changes in synaptic arrangement, these patterns suggest that multisensory convergence occurs over a continuum of synaptic arrangements that, at one end, produces bimodal multisensory properties, whereas at the other, it underlies only unisensory processing.
that produce bimodal neurons to, at the other end, the complete lack of inputs from a second modality that defines unisensory neurons (see Figure 1.7).
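The proposed continuum can be made concrete by extending the threshold sketch given earlier: holding the drive from modality "A" fixed and sweeping the strength of the modality "B" input traces a path from unisensory, through subthreshold, to bimodal behavior. The classification rule and the assumed detection floor below are illustrative choices, not criteria taken from the cited studies.

```python
"""Sweep of modality-B input strength illustrating the proposed multisensory
continuum (Figure 1.7). The threshold model and classification rules are
illustrative only; the weight values are not derived from the cited data."""

THRESHOLD = 1.0
W_A = 1.2          # fixed suprathreshold drive from modality A
MIN_EFFECT = 0.05  # assumed smallest modulation detectable experimentally

def response(drive):
    return max(0.0, drive - THRESHOLD)

def classify(w_b):
    r_a, r_b, r_ab = response(W_A), response(w_b), response(W_A + w_b)
    if r_b > 0:
        return "bimodal"        # B alone drives spikes
    if r_ab - r_a >= MIN_EFFECT:
        return "subthreshold"   # B alone silent, but measurably modulates A
    return "unisensory"         # any effect of B is below detection

for w_b in [0.0, 0.02, 0.1, 0.4, 0.8, 1.05, 1.3]:
    print(f"w_B = {w_b:4.2f} -> {classify(w_b)}")
```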
1.4 SYNAPTIC ARCHITECTURE OF MULTISENSORY CONVERGENCE
Implicit in the conclusions derived from the studies cited above is the notion that heavy cross-modal projections underlie bimodal multisensory processing at the target site, whereas modest projections subserve subthreshold multisensory processing. Although this general notion correlating projection strength with specific forms of multisensory effects awaits quantification, it is consistent with the overarching neurophysiological principle that different patterns of connectivity underlie different circuits and behaviors. Another basic feature of neuronal connectivity is the priority of the location at which synapses occur. It is well accepted that synapses located on a neuron's soma are more likely to influence its spiking activity than synapses occurring out on the dendrites, and that synapses on proximal dendrites have a higher probability of affecting activity than those occurring at more distal sites. Therefore, the synaptic architecture of multisensory processing should also be considered when assessing the functional effects of cross-modal (and multisensory) projections. However, virtually nothing is known about the structure of multisensory convergence at the neuronal level. In fact, the only electron micrographic documentation of multisensory convergence comes not from the cortex, but from brainstem studies of somatosensory inputs to the dorsal cochlear nucleus (Shore et al. 2000). Although the significance of this observation of multisensory convergence at the first synapse in the auditory projection stream cannot be overstated, the technique of electron microscopy is poorly adapted for making comparisons of multiple synaptic contacts along the same neuron. Confocal laser microscopy, coupled with multiple-fluorescent labeling techniques, can visualize entire neurons as well as magnify areas of synaptic contact to submicron resolution (e.g., see Vinkenoog et al. 2005). This technique was used in a recent study of auditory FAES cross-modal
FIGURE 1.8 (See color insert.) Confocal images of a somatosensory SIV neuron (red) contacted by boutons that originated in auditory FAES (green). A three-dimensional rendering of a trimmed confocal stack containing a calretinin-positive SIV neuron (red; scale bar, 10 μm) that was contacted by two axons (green) labeled from auditory area FAES. Each of the axo-dendritic points of contact is enlarged on the right (white arrows; scale bar, 1.0 μm) to reveal the putative bouton swelling. (From Keniston, L.P. et al., Exp. Brain Res., 202, 725–731, 2010. With permission.)
projections to somatosensory area SIV (Keniston et al. 2010). First, a tracer (fluoroemerald, linked to biotinylated dextran amine) was injected into the auditory FAES and allowed to transport to SIV. Next, because inhibitory interneurons represent only about 20% of cortical neurons, immunofluorescent tags of specific subclasses of interneurons would make them stand out against the neuropil. Therefore, immunocytochemical techniques were used to rhodamine-label SIV interneurons containing a calcium-binding protein (e.g., parvalbumin, calbindin, calretinin). Double-labeled tissue sections were examined by a laser-scanning confocal microscope (TCS SP2 AOBS, Leica Microsystems), and high-magnification image stacks were collected, imported into Volocity (Improvision, Lexington, Massachusetts), and deconvolved (AutoQuant, Media Cybernetics). A synaptic contact was defined as an axon swelling that showed no gap between it and the immunopositive neuron. Of the 33 immunopositive neurons identified, a total of 59 contacts were observed with axon terminals labeled from the FAES, two of which are illustrated in Figure 1.8. Sixty-four percent (21 of 33) of interneurons showed one or more contacts; the average was 2.81 (±1.4), with a maximum of 5 found on one neuron. Thus, the anatomical techniques used here visualized cross-modal convergence at the neuronal level and provided some of the first insights into the synaptic architecture of multisensory connections.
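The per-neuron tally described above lends itself to a simple summary computation. The sketch below shows the form of that summary (percentage of neurons contacted, mean and spread of contacts per contacted neuron, and the maximum); the contact counts themselves are invented placeholders rather than the actual data of Keniston et al. (2010).

```python
"""Sketch of summarizing confocal contact counts per immunopositive neuron,
in the form reported in the text (percent contacted, mean, maximum). The
counts below are invented placeholders, not data from the cited study."""
from statistics import mean, stdev

# Hypothetical number of FAES-labeled contacts found on each of 33 neurons.
contacts = [0, 2, 3, 1, 0, 4, 2, 0, 5, 3, 2, 0, 1, 3, 0, 2, 4, 0, 1, 2,
            3, 0, 0, 2, 1, 0, 3, 5, 0, 2, 4, 0, 1]

contacted = [c for c in contacts if c > 0]
print(f"total contacts: {sum(contacts)}")
print(f"neurons with >= 1 contact: {len(contacted)}/{len(contacts)} "
      f"({100 * len(contacted) / len(contacts):.0f}%)")
print(f"contacts per contacted neuron: {mean(contacted):.2f} "
      f"(+/- {stdev(contacted):.1f}), max {max(contacted)}")
```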
1.5 SUMMARY AND CONCLUSIONS
Historically, anatomical studies of multisensory processing focused primarily on the source of inputs to structures that showed responses to more than one sensory modality. However, because convergence is the defining step in multisensory processing, it would seem most important to understand
how the terminations of those inputs generate multisensory effects. Furthermore, because multisensory processing is not restricted to only bimodal (or trimodal) neurons, the synaptic architecture of multisensory convergence may be revealed to be as distinct and varied as the perceptions and behaviors these multisensory circuits subserve.
ACKNOWLEDGMENTS
This study was supported by NIH grant NS039460.
REFERENCES
Allman, B.L., and M.A. Meredith. 2007. Multisensory processing in "unimodal" neurons: Cross-modal subthreshold auditory effects in cat extrastriate visual cortex. Journal of Neurophysiology 98:545–549.
Allman, B.L., R.E. Bittencourt-Navarrete, L.P. Keniston, A.E. Medina, M.Y. Wang, and M.A. Meredith. 2008. Do cross-modal projections always result in multisensory integration? Cerebral Cortex 18:2066–2076.
Allman, B.L., L.P. Keniston, and M.A. Meredith. 2009. Not just for bimodal neurons anymore: The contribution of unimodal neurons to cortical multisensory processing. Brain Topography 21:157–167.
Behan, M., P.P. Appell, and M.J. Graper. 1988. Ultrastructural study of large efferent neurons in the superior colliculus of the cat after retrograde labeling with horseradish peroxidase. Journal of Comparative Neurology 270:171–184.
Benevento, L.A., J.H. Fallon, B. Davis, and M. Rezak. 1977. Auditory–visual interaction in single cells in the cortex of the superior temporal sulcus and the orbital frontal cortex of the macaque monkey. Experimental Neurology 57:849–872.
Bental, E., N. Dafny, and S. Feldman. 1968. Convergence of auditory and visual stimuli on single cells in the primary visual cortex of unanesthetized unrestrained cats. Experimental Neurology 20:341–351.
Bowman, E.M., and C.R. Olson. 1988. Visual and auditory association areas of the cat's posterior ectosylvian gyrus: Cortical afferents. Journal of Comparative Neurology 272:30–42.
Bruce, C., R. Desimone, and C.G. Gross. 1981. Visual properties of neurons in a polysensory area in superior temporal sulcus of the macaque. Journal of Neurophysiology 46:369–384.
Burton, H., and E.M. Kopf. 1984. Ipsilateral cortical connections from the second and fourth somatic sensory areas in the cat. Journal of Comparative Neurology 225:527–553.
Carriere, B.N., D.W. Royal, T.J. Perrault, S.P. Morrison, J.W. Vaughan, B.E. Stein, and M.T. Wallace. 2007. Visual deprivation alters the development of cortical multisensory integration. Journal of Neurophysiology 98:2858–2867.
Clemo, H.R., and M.A. Meredith. 2004. Cortico-cortical relations of cat somatosensory areas SIV and SV. Somatosensory & Motor Research 21:199–209.
Clemo, H.R., and B.E. Stein. 1983. Organization of a fourth somatosensory area of cortex in cat. Journal of Neurophysiology 50:910–925.
Clemo, H.R., B.L. Allman, M.A. Donlan, and M.A. Meredith. 2007. Sensory and multisensory representations within the cat rostral suprasylvian cortex. Journal of Comparative Neurology 503:110–127.
Clemo, H.R., G.K. Sharma, B.L. Allman, and M.A. Meredith. 2008. Auditory projections to extrastriate visual cortex: Connectional basis for multisensory processing in 'unimodal' visual neurons. Experimental Brain Research 191:37–47.
Dehner, L.R., L.P. Keniston, H.R. Clemo, and M.A. Meredith. 2004. Cross-modal circuitry between auditory and somatosensory areas of the cat anterior ectosylvian sulcal cortex: A 'new' inhibitory form of multisensory convergence. Cerebral Cortex 14:387–403.
Falchier, A., C. Clavagnier, P. Barone, and H. Kennedy. 2002. Anatomical evidence of multimodal integration in primate striate cortex. Journal of Neuroscience 22:5749–5759.
Fishman, M.C., and P. Michael. 1973. Integration of auditory information in the cat's visual cortex. Vision Research 13:1415–1419.
Fuentes-Santamaria, V., J.C. Alvarado, B.E. Stein, and J.G. McHaffie. 2008. Cortex contacts both output neurons and nitrergic interneurons in the superior colliculus: Direct and indirect routes for multisensory integration. Cerebral Cortex 18:1640–1652.
Harting, J.K., and D.P. Van Lieshout. 1991. Spatial relationships of axons arising from the substantia nigra, spinal trigeminal nucleus, and the pedunculopontine tegmental nucleus within the intermediate gray of the cat superior colliculus. Journal of Comparative Neurology 305:543–558.
Harting, J.K., B.V. Updyke, and D.P. Van Lieshout. 1992. Corticotectal projections in the cat: Anterograde transport studies of twenty-five cortical areas. Journal of Comparative Neurology 324:379–414.
Harting, J.K., S. Feig, and D.P. Van Lieshout. 1997. Cortical somatosensory and trigeminal inputs to the cat superior colliculus: Light and electron microscopic analyses. Journal of Comparative Neurology 388:313–326.
Hikosaka, K., E. Iwai, H. Saito, and K. Tanaka. 1988. Polysensory properties of neurons in the anterior bank of the caudal superior temporal sulcus of the macaque monkey. Journal of Neurophysiology 60:1615–1637.
Horn, G., and R.M. Hill. 1966. Responsiveness to sensory stimulation of units in the superior colliculus and subjacent tectotegmental regions of the rabbit. Experimental Neurology 14:199–223.
Illing, R.-B., and A.M. Graybiel. 1986. Complementary and non-matching afferent compartments in the cat's superior colliculus: Innervation of the acetylcholinesterase-poor domain of the intermediate gray layer. Neuroscience 18:373–394.
Jiang, H., F. Lepore, M. Ptito, and J.P. Guillemot. 1994. Sensory interactions in the anterior ectosylvian cortex of cats. Experimental Brain Research 101:385–396.
Keniston, L.P., S.C. Henderson, and M.A. Meredith. 2010. Neuroanatomical identification of crossmodal auditory inputs to interneurons in somatosensory cortex. Experimental Brain Research 202:725–731.
Lee, C.C., and J.A. Winer. 2008. Connections of cat auditory cortex: III. Corticocortical system. Journal of Comparative Neurology 507:1920–1943.
Meredith, M.A. 2004. Cortico-cortical connectivity and the architecture of cross-modal circuits. In Handbook of Multisensory Processes, C. Spence, G. Calvert, and B. Stein, eds., 343–355. Cambridge, MA: MIT Press.
Meredith, M.A., and B.L. Allman. 2009. Subthreshold multisensory processing in cat auditory cortex. NeuroReport 20:126–131.
Meredith, M.A., and B.E. Stein. 1986. Visual, auditory, and somatosensory convergence on cells in superior colliculus results in multisensory integration. Journal of Neurophysiology 56:640–662.
Meredith, M.A., L.P. Keniston, L.R. Dehner, and H.R. Clemo. 2006. Crossmodal projections from somatosensory area SIV to the auditory field of the anterior ectosylvian sulcus (FAES) in cat: Further evidence for subthreshold forms of multisensory processing. Experimental Brain Research 172:472–484.
Monteiro, G., H.R. Clemo, and M.A. Meredith. 2003. Auditory cortical projections to the rostral suprasylvian sulcal cortex in the cat: Implications for its sensory and multisensory organization. NeuroReport 14:2139–2145.
Mori, A., T. Fuwa, A. Kawai et al. 1996. The ipsilateral and contralateral connections of the fifth somatosensory area (SV) in the cat cerebral cortex. NeuroReport 7:2385–2387.
Morrell, F. 1972. Visual system's view of acoustic space. Nature 238:44–46.
Moschovakis, A.K., and A.B. Karabelas. 1985. Observations on the somatodendritic morphology and axonal trajectory of intracellularly HRP-labeled efferent neurons located in the deeper layers of the superior colliculus of the cat. Journal of Comparative Neurology 239:276–308.
Mucke, L., M. Norita, G. Benedek, and O. Creutzfeldt. 1982. Physiologic and anatomic investigation of a visual cortical area situated in the ventral bank of the anterior ectosylvian sulcus of the cat. Experimental Brain Research 46:1–11.
Murata, K., H. Cramer, and P. Bach-y-Rita. 1965. Neuronal convergence of noxious, acoustic, and visual stimuli in the visual cortex of the cat. Journal of Neurophysiology 28:1223–1239.
Olson, C.R., and A.M. Graybiel. 1987. Ectosylvian visual area of the cat: Location, retinotopic organization, and connections. Journal of Comparative Neurology 261:277–294.
Rauschecker, J.P., and M. Korte. 1993. Auditory compensation for early blindness in cat cerebral cortex. Journal of Neuroscience 13:4538–4548.
Reinoso-Suarez, F., and J.M. Roda. 1985. Topographical organization of the cortical afferent connections to the cortex of the anterior ectosylvian sulcus in the cat. Experimental Brain Research 59:313–324.
Rockland, K.S., and H. Ojima. 2003. Multisensory convergence in calcarine visual areas in macaque monkey. International Journal of Psychophysiology 50:19–26.
Saleem, K.S., W. Suzuki, K. Tanaka, and T. Hashikawa. 2000. Connections between anterior inferotemporal cortex and superior temporal sulcus regions in the macaque monkey. Journal of Neuroscience 20:5083–5101.
Seltzer, B., and D.N. Pandya. 1980. Converging visual and somatic sensory input to the intraparietal sulcus of the rhesus monkey. Brain Research 192:339–351.
Seltzer, B., and D.N. Pandya. 1994. Parietal, temporal, and occipital projections to cortex of the superior temporal sulcus in the rhesus monkey: A retrograde tracer study. Journal of Comparative Neurology 343:445–463.
Shore, S.E., Z. Vass, N.L. Wys, and R.A. Altschuler. 2000. Trigeminal ganglion innervates the auditory brainstem. Journal of Comparative Neurology 419:271–285.
Spinelli, D.N., A. Starr, and T.W. Barrett. 1968. Auditory specificity in unit recordings from cat's visual cortex. Experimental Neurology 22:75–84.
Stein, B.E., and M.A. Meredith. 1993. Merging of the Senses. Cambridge, MA: MIT Press.
Toldi, J., O. Feher, and L. Feuer. 1984. Dynamic interactions of evoked potentials in a polysensory cortex of the cat. Neuroscience 13:945–952.
Vinkenoog, M., M.C. van den Oever, H.B. Uylings, and F.G. Wouterlood. 2005. Random or selective neuroanatomical connectivity. Study of the distribution of fibers over two populations of identified interneurons in cerebral cortex. Brain Research. Brain Research Protocols 14:67–76.
Wallace, M.T., and B.E. Stein. 1997. Development of multisensory neurons and multisensory integration in cat superior colliculus. Journal of Neuroscience 17:2429–2444.
Wallace, M.T., M.A. Meredith, and B.E. Stein. 1992. The integration of multiple sensory inputs in cat cortex. Experimental Brain Research 91:484–488.
Wallace, M.T., R. Ramachandran, and B.E. Stein. 2004. A revised view of sensory cortical parcellation. Proceedings of the National Academy of Sciences 101:2167–2172.
Wang, Y., S. Celebrini, Y. Trotter, and P. Barone. 2008. Visuo–auditory interactions in the primary visual cortex of the behaving monkey: Electrophysiological evidence. BMC Neuroscience 9:79.
Yaka, R., N. Notkin, U. Yinon, and Z. Wollberg. 2002. Visual, auditory and bimodal activity in the banks of the lateral suprasylvian sulcus in the cat. Neuroscience and Behavioral Physiology 32:103–108.
2 Cortical and Thalamic Pathways for Multisensory and Sensorimotor Interplay
Céline Cappe, Eric M. Rouiller, and Pascal Barone
CONTENTS
2.1 Introduction
2.2 Cortical Areas in Multisensory Processes
2.2.1 Multisensory Association Cortices
2.2.1.1 Superior Temporal Sulcus
2.2.1.2 Intraparietal Sulcus
2.2.1.3 Frontal and Prefrontal Cortex
2.2.2 Low-Level Sensory Cortical Areas
2.2.2.1 Auditory and Visual Connections and Interactions
2.2.2.2 Auditory and Somatosensory Connections and Interactions
2.2.2.3 Visual and Somatosensory Connections and Interactions
2.2.2.4 Heteromodal Projections and Sensory Representation
2.3 Thalamus in Multisensory Processes
2.3.1 Thalamocortical and Corticothalamic Connections
2.3.2 Role of Thalamus in Multisensory Integration
2.4 Higher-Order, Lower-Order Cortical Areas and/or Thalamus?
2.5 Conclusions
Acknowledgments
References
2.1 INTRODUCTION
Numerous studies in both monkey and human have provided evidence for multisensory integration in high-level and low-level cortical areas. This chapter focuses on the anatomical pathways contributing to multisensory integration. We first describe the anatomical connections existing between different sensory cortical areas, briefly covering the well-known connections between associative cortical areas and the more recently described connections targeting low-level sensory cortical areas. We then focus on the connections of the thalamus with different sensory and motor areas and their potential role in multisensory and sensorimotor integration. Finally, we discuss the several ways in which the brain may integrate information about the environment across the different senses.
2.2 CORTICAL AREAS IN MULTISENSORY PROCESSES
2.2.1 Multisensory Association Cortices
Parietal, temporal, and frontal cortical regions of primates have been reported to be polysensory cortical areas, i.e., related to more than a single sensory modality. We describe here several important
features about these regions, focusing on the superior temporal sulcus (STS), the intraparietal sulcus, and the frontal cortex.
2.2.1.1 Superior Temporal Sulcus
Desimone and Gross (1979) found neurons responsive to visual, auditory, and somatosensory stimuli in a temporal region of the STS referred to as the superior temporal polysensory area (STP) (see also Bruce et al. 1981; Baylis et al. 1987; Hikosaka et al. 1988). The rostral part of the STS (Bruce et al. 1981; Benevento et al. 1977) appears to contain more neurons with multisensory properties than the caudal part (Hikosaka et al. 1988). The connections of the STP include higher-order visual cortical areas such as posterior parietal visual areas (Seltzer and Pandya 1994; Cusick et al. 1995) and temporal lobe visual areas (Kaas and Morel 1993), auditory cortical areas (Pandya and Seltzer 1982), and the posterior parietal cortex (Seltzer and Pandya 1994; Lewis and Van Essen 2000). The STS region also has various connections with the prefrontal cortex (Cusick et al. 1995). In humans, numerous neuroimaging studies have shown multisensory convergence in the STS region (see Barraclough et al. 2005 for a review). Recently, studies have focused on the role of the polysensory areas of the STS and their interactions with the auditory cortex in processing primate communications (Ghazanfar 2009). The STS is probably one of the origins of visual inputs to the auditory cortex (Kayser and Logothetis 2009; Budinger and Scheich 2009; Cappe et al. 2009a; Smiley and Falchier 2009) and thus participates in the multisensory integration of conspecific faces and vocalizations (Ghazanfar et al. 2008) that occurs in the auditory belt areas (Ghazanfar et al. 2005; Poremba et al. 2003). These findings support the hypothesis of general roles for the STS region in synthesizing the perception of speech and general biological motion (Calvert 2001).
2.2.1.2 Intraparietal Sulcus
The posterior parietal cortex contains a number of different areas, including the lateral intraparietal (LIP) and ventral intraparietal (VIP) areas, located in the intraparietal sulcus. These areas seem to be functionally related and appear to encode the location of objects of interest (Colby and Goldberg 1999). These areas are thought to transform sensory information into signals related to the control of hand and eye movements via projections to the prefrontal, premotor, and visuomotor areas of the frontal lobe (Rizzolatti et al. 1997). Neurons of the LIP area present multisensory properties (Cohen et al. 2005; Russ et al. 2006; Gottlieb 2007). Similarly, neurons recorded in the VIP area exhibit typical multisensory responses (Duhamel et al. 1998; Bremmer et al. 2002; Schlack et al. 2005; Avillac et al. 2007). Anatomically, LIP and VIP are connected with cortical areas of different sensory modalities (Lewis and Van Essen 2000). In particular, VIP receives inputs from posterior parietal areas 5 and 7 and the insular cortex in the region of S2, and a few inputs from visual regions such as PO and MST (Lewis and Van Essen 2000). Although it is uncertain whether neurons in VIP are responsive to auditory stimuli, auditory inputs may originate from the dorsolateral auditory belt and parabelt (Hackett et al. 1998). The connectivity pattern of LIP (Andersen et al. 1990; Blatt et al. 1990; Lewis and Van Essen 2000) is consistent with neuronal responses related to eye position and visual inputs. Auditory and somatosensory influences appear to be very indirect, and visuomotor functions dominate, as the connection pattern suggests.
In particular, the ventral part of the LIP is connected with areas dealing with spatial information (Andersen et al. 1997) as well as with the frontal eye field (Schall et al. 1995), whereas the dorsal part of the LIP is connected with areas responsible for the processing of visual information related to the form of objects in the inferotemporal cortex (the ventral "what" visual pathway). Both LIP and VIP neurons exhibit task-dependent responses (Linden et al. 1999; Gifford and Cohen 2004), although the strength of this dependence and its rules remain to be determined.
2.2.1.3 Frontal and Prefrontal Cortex
The premotor cortex, located in the frontal lobe, contains neurons with responses to somatosensory, auditory, and visual signals, especially in its ventral part, as shown in monkeys (Fogassi et al. 1996;
Graziano et al. 1994, 1999). Somatosensory responses may be mediated by connections with somatosensory area S2 and the parietal ventral (PV) somatosensory area (Disbrow et al. 2003) and with the posterior parietal cortex, such as areas 5, 7a, 7b, the anterior intraparietal area (AIP), and VIP (see Kaas and Collins 2004). Visual inputs could also come from the posterior parietal region. The belt and parabelt auditory areas project to regions rostral to the premotor cortex (Hackett et al. 1999; Romanski et al. 1999) and may contribute to auditory activation, as may connections from the trimodal portion of area 7b to the premotor cortex (Graziano et al. 1999). Anterior to the premotor cortex, the prefrontal cortex plays a key role in temporal integration and is related to evaluative and cognitive functions (Milner et al. 1985; Fuster 2001). Much of this cortex has long been considered to be multisensory (Bignall 1970), but some regions are characterized by a predominance of one sensory modality, such as an auditory domain in the ventral prefrontal region (Suzuki 1985; Romanski and Goldman-Rakic 2002; Romanski 2004). This region receives projections from auditory, visual, and multisensory cortical regions (e.g., Gaffan and Harrison 1991; Barbas 1986; Romanski et al. 1999; Fuster et al. 2000), which are mediated through different functional streams ending separately in the dorsal and ventral prefrontal regions (Barbas and Pandya 1987; Kaas and Hackett 2000; Romanski et al. 1999). This cortical input arising from different modalities confers on the prefrontal cortex a role in cross-modal association (see Petrides and Iversen 1976; Joseph and Barone 1987; Barone and Joseph 1989; Ettlinger and Wilson 1990) as well as in merging sensory information, especially in processing conspecific auditory and visual communication stimuli (Romanski 2007; Cohen et al. 2007).
2.2.2 Low-Level Sensory Cortical Areas
Several studies provide evidence that anatomical pathways between low-level sensory cortical areas may represent the anatomical support for early multisensory integration. We detail these patterns of connections in this section according to the sensory interactions involved.
2.2.2.1 Auditory and Visual Connections and Interactions
Recently, the use of anterograde and retrograde tracers in the monkey brain made it possible to highlight direct projections from the primary auditory cortex (A1), the caudal auditory belt and parabelt, and the polysensory area of the temporal lobe (STP) to the periphery of the primary visual cortex (V1, area 17 of Brodmann) (Falchier et al. 2002), as well as from the associative auditory cortex to the primary and secondary visual areas (Rockland and Ojima 2003). These direct projections from the auditory cortex toward the primary visual areas would bring into play connections of the feedback type and may play a role in the "foveation" of a peripheral auditory sound source (Heffner and Heffner 1992). The reciprocity of these connections, from visual areas to auditory areas, was also tested in a recent study (Falchier et al. 2010) that revealed the existence of projections from visual areas V2 and prostriata to auditory areas, including the caudal medial and lateral belt areas, the caudal parabelt area, and the temporoparietal area. Furthermore, in the marmoset, a projection from high-level visual areas to the auditory cortex was also reported (Cappe and Barone 2005). More precisely, an area anterior to the STS (corresponding to the STP) sends connections toward the auditory core with a pattern of feedback connections. Thus, multiple sources can provide visual input to the auditory cortex in monkeys (see also Smiley and Falchier 2009; Cappe et al. 2009a). Direct connections between the primary visual and auditory areas have been found in rodents, such as the gerbil (Budinger et al. 2006) or the prairie vole (Campi et al. 2010), as well as in carnivores. For example, the primary auditory cortex of the ferret receives a sparse projection from visual areas including the primary visual cortex (Bizley et al. 2007). Similarly, in the adult cat, visual and auditory cortices are interconnected, but the primary sensory fields are not the main areas involved. Only a minor projection is observed from A1 toward visual areas 17/18 (Innocenti et al. 1988), the main component arising from the posterior auditory field (Hall and Lomber 2008). It
is important to note that there is probably a tendency for a decrease in the density of these auditory– visual interconnections when going from rodents to carnivore to primates. This probably means a higher incidence of cross-modal responses in unisensory areas of the rodents (Wallace et al. 2004), whereas such responses are not present in the primary visual or auditory cortex of the monkey (Lakatos et al. 2007; Kayser et al. 2008; Wang et al. 2008). On the behavioral side, in experiments conducted in animals, multisensory integration dealt in most cases with spatial cues, for instance, the correspondence between the auditory space and the visual space. These experiments were mainly conducted in cats (Stein et al. 1989; Stein and Meredith 1993; Gingras et al. 2009). For example, Stein and collaborators (1989) trained cats to move toward visual or auditory targets with weak salience, resulting in poor performance that did not exceed 25% on average. When the same stimuli were presented in spatial and temporal congruence, the percentage of correct detections increased up to nearly 100%. In monkeys, only few experiments have been conducted on behavioral facilitation induced by multimodal stimulation (Frens and Van Opstal 1998; Bell et al. 2005). In line with human studies, simultaneous presentation in monkeys of a sound during a visually guided saccade induced a reduction of about 10% to 15% of saccade latency depending on the visual stimulus contrast level (Wang et al. 2008). Recently, we have shown behavioral evidence for multisensory facilitation between vision and hearing in macaque monkeys (Cappe et al. 2010). Monkeys were trained to perform a simple detection task to stimuli, which were auditory (noise), visual (flash), or auditory–visual (noise and flash) at different intensities. By varying the intensity of individual auditory and visual stimuli, we observed that, when the stimuli are of weak saliency, the multisensory condition had a significant facilitatory effect on reaction times, which disappeared at higher intensities (Cappe et al. 2010). We applied to the behavioral data the “race model” (Raab 1962) that supposes that the faster unimodal modality should be responsible for the shortening in reaction time (“the faster the winner”), which would correspond to a separate activation model (Miller 1982). It turns out that the multisensory benefit at low intensity derives from a coactivation mechanism (Miller 1982) that implies a convergence of hearing and vision to produce multisensory interactions and a reduction in reaction time. The anatomical studies previously described suggest that such a convergence may take place at the lower levels of cortical sensory processing. In humans, numerous behavioral studies, using a large panel of different paradigms and various types of stimuli, showed the benefits of auditory–visual combination stimuli compared to unisensory stimuli (see Calvert et al. 2004 for a review; Romei et al. 2007; Cappe et al. 2009b as recent examples). From a functional point of view, many studies have shown multisensory interactions early in time and in different sensory areas with neuroimaging and electrophysiological methods. Auditory– visual interactions have been revealed in the auditory cortex or visual cortex using electrophysiological or neuroimaging methods in cats and monkeys (Ghazanfar et al. 2005; Bizley et al. 2007; Bizley and King 2008; Cappe et al. 2007; Kayser et al. 2007, 2008; Lakatos et al. 2007; Wang et al. 2008). 
More specifically, electrophysiological studies in monkeys, revealing multisensory interactions in primary sensory areas such as V1 or A1, showed that cross-modal stimuli (i.e., auditory or visual stimuli, respectively) are rather modulatory on the non-“sensory-specific” response, and/ or acting on the oscillatory activity (Lakatos et al. 2007; Kayser et al. 2008) or on the latency of the neuronal responses (Wang et al. 2008). These mechanisms can enhance the speed of sensory processing and induce a reduction of the reaction times (RTs) during a multisensory stimulation. Neurons recorded in the primary visual cortex showed a significant reduction in visual response latencies, specifically in suboptimal conditions (Wang et al. 2008). It is important to mention that, in the primary sensory areas of the primate, authors have reported the absence of nonspecific sensory responses at the spiking level (Wang et al. 2008; Lakatos et al. 2007; Kayser et al. 2008). These kinds of interactions between hearing and vision were also reported in humans using neuroimaging techniques (Giard and Peronnet 1999; Molholm et al. 2002; Lovelace et al. 2003; Laurienti et al. 2004; Martuzzi et al. 2007).
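The separate-activation (race) versus coactivation distinction invoked above is conventionally tested with the race model inequality of Miller (1982), which bounds the cumulative distribution of bimodal reaction times by the sum of the unimodal distributions. The Python sketch below illustrates that test on invented reaction times; it is not the analysis pipeline used in Cappe et al. (2010).

```python
"""Minimal sketch of a race-model-inequality check (Miller 1982): at each
time t, P(RT_AV <= t) should not exceed P(RT_A <= t) + P(RT_V <= t) under a
race (separate-activation) account; violations suggest coactivation. The
reaction times below are invented for illustration."""

def ecdf(sample, t):
    """Empirical probability that a reaction time is <= t ms."""
    return sum(rt <= t for rt in sample) / len(sample)

# Hypothetical reaction times (ms) for auditory, visual, and bimodal trials.
rt_a  = [310, 325, 340, 355, 370, 390, 410, 430]
rt_v  = [300, 315, 330, 350, 365, 385, 405, 425]
rt_av = [245, 255, 265, 280, 295, 310, 330, 350]

for t in range(250, 401, 25):
    bound = min(1.0, ecdf(rt_a, t) + ecdf(rt_v, t))  # race-model upper bound
    p_av = ecdf(rt_av, t)
    flag = "violation" if p_av > bound else "ok"
    print(f"t = {t} ms: P(AV) = {p_av:.2f}, race bound = {bound:.2f} -> {flag}")
```

In this fabricated example, the bimodal distribution exceeds the race-model bound at the fastest reaction times, which is the pattern taken as evidence for a coactivation account.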
2.2.2.2 Auditory and Somatosensory Connections and Interactions
The ability to use a number of distinct tracers in the same animal allows connections between several cortical areas to be identified. Indeed, we made injections of retrograde tracers into early visual (V2 and MT), somatosensory (1/3b), and auditory (core) cortical areas in marmosets (Cappe and Barone 2005), allowing us to reveal connections between cortical areas considered as unisensory areas. Projections from visual areas, such as the STP, to the core auditory cortex have been found (Cappe and Barone 2005), as described in Section 2.2.2. Other corticocortical projections, in particular from somatosensory to auditory cortex, were found, supporting the view that inputs from different modalities are sent to cortical areas that are classically considered to be unimodal (Cappe and Barone 2005). More precisely, our study revealed projections from somatosensory areas S2/PV to the primary auditory cortex. Another study conducted in gerbils also showed connections between the primary somatosensory cortex and the primary auditory cortex (Budinger et al. 2006). In marmosets and macaques, projections from the retroinsular area of the somatosensory cortex to the caudomedial belt auditory area were also reported (de la Mothe et al. 2006a; Smiley et al. 2007). Intracranial recordings in the auditory cortex of monkeys have shown the modulation of auditory responses by somatosensory stimuli, consistent with early multisensory convergence (Schroeder et al. 2001; Schroeder and Foxe 2002; Fu et al. 2003). These findings have been extended by a functional magnetic resonance imaging (fMRI) study in anesthetized monkeys, which showed auditory–somatosensory interactions in the caudal lateral belt area (Kayser et al. 2005). In humans, there have been previous demonstrations of a redundant signal effect between auditory and tactile stimuli (Murray et al. 2005; Zampini et al. 2007; Hecht et al. 2008). Functional evidence was mainly found with EEG and fMRI techniques (Foxe et al. 2000, 2002; Murray et al. 2005). In particular, Murray and collaborators (2005) reported in humans that neural responses showed an initial auditory–somatosensory interaction in auditory association areas.
2.2.2.3 Visual and Somatosensory Connections and Interactions
Limited research has focused on interactions between vision and touch. In our experiments, using multiple tracing methods in marmoset monkeys (Cappe and Barone 2005), we found direct projections from visual cortical areas to somatosensory cortical areas. More precisely, after an injection of retrograde tracer in the primary somatosensory cortex (areas 1/3b), we observed projections originating from visual areas (the ventral and dorsal fundus of the superior temporal area, and the middle temporal crescent). From a functional point of view, electrophysiological recordings in the somatosensory cortex of macaque monkeys showed modulations of responses by auditory and visual stimuli (Schroeder and Foxe 2002). Behavioral results in humans demonstrated a gain in performance when visual and tactile stimuli were combined (Forster et al. 2002; Hecht et al. 2008). Evidence of functional interactions between vision and touch was observed with neuroimaging techniques in humans (Amedi et al. 2002, 2007; James et al. 2002). In particular, it has been shown that the perception of motion could activate the MT complex in humans (Hagen et al. 2002).
It has also been demonstrated that extrastriate visual cortex area 19 is activated during tactile perception (see Sathian and Zangaladze 2002 for a review).
2.2.2.4 Heteromodal Projections and Sensory Representation
In the somatosensory (Krubitzer and Kaas 1990; Huffman and Krubitzer 2001) and visual systems (Kaas and Morel 1993; Schall et al. 1995; Galletti et al. 2001; Palmer and Rosa 2006), there is evidence for the existence of different connectivity patterns according to sensory representation, especially in terms of the density of connections between areas. This observation also applies to heteromodal connections. We found that the visual projections to areas 1/3b are restricted to the representation of certain body parts (Cappe and Barone 2005). Some visual projections selectively target the face (middle temporal crescent) or the arm (dorsal fundus of the superior temporal area)
representations in areas 1/3b. Similarly, auditory and multimodal projections to area V1 are prominent toward the representation of the peripheral visual field (Falchier et al. 2002, 2010; Hall and Lomber 2008), and only scattered neurons in the auditory cortex send a projection to foveal V1. The fact that heteromodal connections are coupling specific sensory representations across modalities probably reflects an adaptive process for behavioral specialization. This is in agreement with human and monkey data showing that the neuronal network involved in multisensory integration, as well as its expression at the level of the neuronal activity, is highly dependent on the perceptual task in which the subject is engaged. In humans, the detection or discrimination of bimodal objects, as well as the perceptual expertise of subjects, differentially affect both the temporal aspects and the cortical areas at which multisensory interactions occur (Giard and Peronnet 1999; Fort et al. 2002). Similarly, we have shown that the visuo–auditory interactions observed at the level of V1 neurons are observed only in behavioral situations during which the monkey has to interact with the stimuli (Wang et al. 2008). Such an influence of the perceptual context on the neuronal expression of multisensory interaction is also present when analyzing the phenomena of cross-modal compensation after sensory deprivation in human. In blind subjects (Sadato et al. 1996), the efficiency of somatosensory stimulation on the activation of the visual cortex is at maximum during an active discrimination task (Braille reading). This suggests that the mechanisms of multisensory interaction, at early stages of sensory processing and the cross-modal compensatory mechanisms, are probably mediated through common neuronal pathways involving the heteromodal connections described previously.
2.3 THALAMUS IN MULTISENSORY PROCESSES
2.3.1 Thalamocortical and Corticothalamic Connections
Although the cerebral cortex and the superior colliculus (Stein and Meredith 1993) have been shown to be key structures for multisensory interactions, the idea that the thalamus could play a relay role in multisensory processing has been frequently proposed (see Ghazanfar and Schroeder 2006 for a review; Hackett et al. 2007; Cappe et al. 2009c; see also Cappe et al. 2009a for a review). Using multiple anatomical tracing methods in the macaque monkey, we recently tested this hypothesis by examining the relationship and the distribution of the thalamocortical and corticothalamic (CT) connections between different sensory and motor cortical areas and thalamic nuclei (Cappe et al. 2009c). In this study, we provided evidence for the convergence of different sensory modalities in the thalamus. Based on different injections in somatosensory [the posterior parietal somatosensory cortex (PE/PEa in area 5)], auditory [the rostral (RAC) and caudal auditory cortex (CAC)], and premotor cortical areas [dorsal and ventral premotor cortical areas (PMd and PMv)] in the same animal, we were able to assess how connections between the cortex and the different thalamic nuclei are organized. We demonstrated for the first time the existence of overlapping territories of thalamic projections to different sensory and motor areas. We focus here on thalamic nuclei that project to more than two areas of different attributes rather than on sensory-specific thalamocortical projections. Thalamocortical projections were found from the central lateral (CL) nucleus and the mediodorsal (MD) nucleus to RAC, CAC, PEa, PE, PMd, and PMv. Common territories of projection were observed from the lateral posterior (LP) nucleus to PMd, PMv, PEa, and PE. The ventroanterior nucleus (VA), known as a motor thalamic nucleus, sends projections to PE and to PEa. Interestingly, projections distinct from the ones arising from specific unimodal sensory nuclei were observed from auditory thalamic nuclei, such as projections from the medial geniculate nucleus to the parietal cortex (PE in particular) and the premotor cortex (PMd/PMv). Last but not least, the medial pulvinar nucleus (PuM) exhibits the most significant overlap across modalities, with projections from superimposed territories to all six cortical areas injected with tracers. Projections from PuM to the auditory cortex were also described by de la Mothe and colleagues (2006b). Hackett and collaborators (2007)
showed that somatosensory inputs may reach the auditory cortex (CM and CL) through connections coming from the medial part of the medial geniculate nucleus (MGm) or the multisensory nuclei [posterior, suprageniculate, limitans, and medial pulvinar (PuM)]. All these thalamocortical projections are consistent with the presence of thalamic territories possibly integrating different sensory modalities with motor attributes. We calculated the degree of overlap between thalamocortical and CT connections in the thalamus to determine the projections to areas of a same modality, as previously described (Tanné-Gariépy et al. 2002; Morel et al. 2005; Cappe et al. 2009c). The degree of overlap may range between 0% when two thalamic territories projecting to two distinct cortical areas are spatially completely segregated and 100% when the two thalamic territories fully overlap (considering a spatial resolution of 0.5 mm, further details in Cappe et al. 2009c). Thalamic nuclei with spatially intermixed thalamocortical cells projecting to auditory or premotor cortices were located mainly in the PuM, VA, and CL nuclei. The overlap between the projections to the auditory and parietal cortical areas concerned different thalamic nuclei such as PuM, CL, and to a lesser extent, LP and PuL. The projections to the premotor and posterior parietal cortex overlapped primarily in PuM, LP, MD, and also in VA, VLpd, and CL. Quantitatively, we found that projections from the thalamus to the auditory and motor cortical areas overlapped to an extent ranging from 4% to 12% through the rostral thalamus and increased up to 30% in the caudal part of the thalamus. In PuM, the degree of overlap between thalamocortical projections to auditory and premotor cortex ranged from 14% to 20%. PuM is the thalamic nucleus where the maximum of overlap between thalamocortical projections was found. Aside from the thalamocortical connections, CT connections were also investigated in the same study, concerning, in particular, the parietal areas PE and PEa injected with a tracer with anterograde properties (biotinylated dextran amine; Cappe et al. 2007). Indeed, areas PE and PEa send CT projections to the thalamic nuclei PuM, LP, and to a lesser extent, VPL, CM, CL, and MD (PEa only for MD). These thalamic nuclei contained both the small and giant CT endings. The existence of these two different types of CT endings reflect the possibility for CT connections to represent either feedback or feedforward projections (for review, see Rouiller and Welker 2000; Sherman and Guillery 2002, 2005; Sherman 2007). In contrast to the feedback CT projection originating from cortical layer VI, the feedforward CT projection originates from layer V and terminates in the thalamus in the form of giant endings, which can ensure highly secure and rapid synaptic transmission (Rouiller and Welker 2000). Considering the TC and CT projections, some thalamic nuclei (PuM, LP, VPL, CM, CL, and MD) could play a role in the integration of different sensory information with or without motor attributes (Cappe et al. 2007, 2009c). Moreover, parietal areas PE and PEa may send, via the giant endings, feedforward CT projection and transthalamic projections to remote cortical areas in the parietal, temporal, and frontal lobes contributing to polysensory and sensorimotor integration (Cappe et al. 2007, 2009c).
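As an illustration of the overlap measure described above, the sketch below bins labeled cells into 0.5 mm voxels and expresses the shared thalamic territory as a percentage. The cell coordinates are invented, and the normalization (here, relative to the smaller territory) is an assumption; the exact formula used by Cappe et al. (2009c) may differ.

```python
"""Sketch of a territorial-overlap measure of the kind described in the text:
thalamic space is binned at 0.5 mm resolution and the voxels containing cells
labeled from each cortical injection are compared. The voxel sets below are
invented, and the normalization is an assumption."""

RESOLUTION_MM = 0.5

def to_voxel(xyz):
    """Bin a coordinate (mm) into a voxel index at the chosen resolution."""
    return tuple(int(c // RESOLUTION_MM) for c in xyz)

# Hypothetical coordinates (mm) of cells labeled from two cortical injections.
cells_auditory = [(1.1, 0.4, 2.2), (1.3, 0.6, 2.3), (2.0, 0.5, 2.8),
                  (2.6, 1.1, 3.0), (3.2, 1.4, 3.1)]
cells_premotor = [(1.2, 0.5, 2.1), (2.1, 0.4, 2.9), (3.8, 1.9, 3.6),
                  (4.1, 2.0, 3.7)]

vox_a = {to_voxel(c) for c in cells_auditory}
vox_p = {to_voxel(c) for c in cells_premotor}

shared = vox_a & vox_p
overlap_pct = 100 * len(shared) / min(len(vox_a), len(vox_p))
print(f"auditory territory: {len(vox_a)} voxels, premotor: {len(vox_p)} voxels")
print(f"shared voxels: {len(shared)} -> overlap = {overlap_pct:.0f}%")
```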
2.3.2 Role of Thalamus in Multisensory Integration
The interconnections between the thalamus and the cortex described in the preceding section suggest that the thalamus could act as an early sensory integrator. An additional role for the thalamus in multisensory interplay may derive from the organization of its CT and TC connections and loops, as outlined in Section 2.3.1 (see also Crick and Koch 1998). Indeed, the thalamus could also serve as a relay between different sensory and/or premotor cortical areas. In particular, the pulvinar, mainly its medial part, contains neurons that project to the auditory cortex, the somatosensory cortex, the visual cortex, and the premotor cortex (Romanski et al. 1997; Hackett et al. 1998; Gutierrez et al. 2000; Cappe et al. 2009c; see also Cappe et al. 2009a for a review). The feedforward CT projections originating from different sensory or motor cortical areas, combined with subsequent TC projections, may allow a transfer of information between remote cortical areas through a "cortico–thalamo–cortical" route (see, e.g., Guillery 1995; Rouiller and Welker 2000; Sherman and Guillery 2002, 2005; Sherman 2007; Cappe et al. 2009c). As described in
Section 2.3.1, the medial part of the pulvinar nucleus is the main candidate (although other thalamic nuclei such as LP, VPL, MD, or CL may also play a role) for providing an alternative to corticocortical loops, that is, a route by which information can be transferred between cortical areas belonging to different sensory and sensorimotor modalities (see also Shipp 2003). From a functional point of view, neurons in PuM respond to visual stimuli (Gattass et al. 1979) and to auditory stimuli (Yirmiya and Hocherman 1987), which is consistent with our hypothesis. Moreover, as our injections in the different sensory and motor areas included cortical layer I (Cappe et al. 2009c), it is likely that some of the projections providing multimodal information to the cortex originate from the so-called "matrix" of calbindin-immunoreactive neurons, which are distributed in all thalamic nuclei and project diffusely and relatively widely to the cortex (Jones 1998).
Four different mechanisms of multisensory and sensorimotor interplay can be proposed based on the pattern of convergence and divergence of TC and CT connections (Cappe et al. 2009c). First, restricted thalamic territories sending divergent projections to several cortical areas could deliver different sensory and/or motor inputs simultaneously. Although such multimodal integration in the temporal domain cannot be excluded (in case the inputs reach the cerebral cortex at exactly the same time), it is less likely to provide massive multimodal interplay than an actual spatial convergence of projections. More convincingly, this pattern could support a temporal coincidence mechanism acting as a synchronizer between remote cortical areas, allowing a higher perceptual saliency of multimodal stimuli (Fries et al. 2001). Second, thalamic nuclei could act as integrators of multisensory information, rapidly relaying the integrated information to the cortex through their multiple TC connections. In PuM, the considerable intermixing of territories projecting to cortical areas of several modalities is in line with previously reported connections with several cortical domains, including visual, auditory, somatosensory, prefrontal, and motor areas, and electrophysiological recordings have shown visual and auditory responses in this thalamic nucleus (see Cappe et al. 2009c for an extensive description). According to our analysis, PuM, LP, MD, MGm, and MGd could play this integrator role (Cappe et al. 2009c). Third, the spatial convergence at the cortical level of different sensory and motor inputs arising from TC connections of distinct thalamic territories could support fast multisensory interplay. In our experiments (Cappe et al. 2009c), the widespread distribution of TC inputs to the different cortical areas injected suggests that this mechanism of convergence plays an important role in multisensory and motor integration. By virtue of their cortical connection patterns, the thalamic nuclei PuM and LP, for instance, could play this role for auditory–somatosensory interplay in area 5 (Cappe et al. 2009c). Fourth, the cortico–thalamo–cortical route can support rapid and secure transfer from area 5 (PE/PEa; Cappe et al. 2007) to the premotor cortex via the giant terminals of these CT connections (Guillery 1995; Rouiller and Welker 2000; Sherman and Guillery 2002, 2005; Sherman 2007). Giant CT endings, consistent with this principle of a transthalamic loop, have been shown to be present in several thalamic nuclei (e.g., Schwartz et al.
1991; Rockland 1996; Darian-Smith et al. 1999; Rouiller et al. 1998, 2003; Taktakishvili et al. 2002; Rouiller and Durif 2004), and this may well also apply to PuM, as demonstrated by the overlap between its connections with the auditory cortex and with the premotor cortex, allowing auditory–motor integration (Cappe et al. 2009c). Thus, recent findings at the thalamic level (Komura et al. 2005; de la Mothe et al. 2006b; Hackett et al. 2007; Cappe et al. 2007, 2009c) may provide the anatomical support for multisensory behavioral phenomena as well as for multisensory integration at the functional level. Indeed, some thalamic nuclei, such as the medial pulvinar, receive either mixed sensory inputs or projections from different sensory cortical areas, and project to both sensory and premotor areas (Cappe et al. 2009c). Sensory modalities may thus already be fused at the thalamic level before being conveyed directly to the premotor cortex, thereby contributing to the redundant signal effect expressed as faster reaction times in response to auditory–visual stimulation (Cappe et al. 2010).
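The redundant signal effect mentioned above is conventionally evaluated against the race model inequality (Raab 1962; Miller 1982), which is taken up again in Section 2.4. The sketch below shows how the inequality can be checked on reaction-time data; it is a simplified illustration rather than the analysis used in the studies cited here, and the simulated reaction times and the function name race_model_violation are assumptions.

```python
import numpy as np

def race_model_violation(rt_a, rt_v, rt_av, n_points=50):
    """Test Miller's (1982) race model inequality:
    P(RT <= t | AV) <= P(RT <= t | A) + P(RT <= t | V) for all t.
    Returns the largest positive violation (in probability units);
    values > 0 suggest coactivation rather than mere statistical facilitation."""
    ts = np.linspace(min(map(np.min, (rt_a, rt_v, rt_av))),
                     max(map(np.max, (rt_a, rt_v, rt_av))), n_points)
    cdf = lambda rts, t: np.mean(rts <= t)   # empirical cumulative distribution
    violations = [cdf(rt_av, t) - min(1.0, cdf(rt_a, t) + cdf(rt_v, t)) for t in ts]
    return max(violations)

# Illustrative simulated reaction times (ms): bimodal RTs faster than the race prediction
rng = np.random.default_rng(1)
rt_a = rng.normal(310, 40, 500)    # auditory-only trials
rt_v = rng.normal(330, 40, 500)    # visual-only trials
rt_av = rng.normal(255, 30, 500)   # auditory-visual trials
print(f"max race-model violation: {race_model_violation(rt_a, rt_v, rt_av):.3f}")
```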
2.4 HIGHER-ORDER, LOWER-ORDER CORTICAL AREAS AND/OR THALAMUS?
When the race model is applied to behavioral performance in multisensory tasks, it cannot account for the shorter reaction times observed in auditory–visual conditions (see Cappe et al. 2010 for data in monkeys), a result that calls for a "coactivation" model and implies a convergence of the sensory channels (Miller 1982). The anatomical level at which this coactivation occurs is still under debate (Miller et al. 2001): it has been suggested to occur early, at the sensory level (Miller et al. 2001; Gondan et al. 2005), or late, at the motor stage (Giray and Ulrich 1993). In humans, however, analyses of the relationships between behavioral and neuronal indices (Molholm et al. 2002; Sperdin et al. 2009; Jepma et al. 2009) suggest that this convergence of the sensory channels occurs early in sensory processing, before the decision and motor stages (Mordkoff et al. 1996; Gondan et al. 2005), as has also been shown in monkeys (Lamarre et al. 1983; Miller et al. 2001; Wang et al. 2008). Determining the links between anatomical, neurophysiological, and behavioral indices of multisensory processes is necessary to understand the conditions under which a redundant signal effect is observable.
The existence of direct connections from a cortical area considered unisensory to another area of a different modality poses a paradox for hierarchical models of sensory processing (Maunsell and Van Essen 1983; Felleman and Van Essen 1991). The most recent findings provide evidence that multisensory interactions can occur shortly after response onset and at the lowest processing stages (see previous sections), and these new elements have to be incorporated into our view of sensory system organization. It remains possible, of course, that some connections mediating early-stage multisensory interactions have not yet been identified by anatomical methods. Within a sensory system, the hierarchical relationships between cortical areas have been defined by the feedforward or feedback nature of their connections, although the role of these connections is only partially understood (Salin and Bullier 1995; Bullier 2006). Recent results suggest that multisensory convergence in unisensory areas can intervene at low-level stages of information processing, through both feedback and feedforward circuits (Schroeder et al. 2001; Schroeder and Foxe 2002; Fu et al. 2003; Cappe and Barone 2005). Accordingly, anatomical methods alone are not sufficient to determine definitively the feedforward or feedback nature of a given connection, and they cannot be used to establish a hierarchy between functional areas of different systems.
This review highlights that both higher-order association areas and lower-order cortical areas are multisensory in nature and that the thalamus could also play a role in multisensory processing. Figure 2.1 schematically summarizes the possible scenarios for multisensory integration through anatomical pathways. First, as traditionally proposed, information is processed from the primary "unisensory" cortical areas to "multisensory" association cortical areas and, finally, to the premotor and motor cortical areas, in a hierarchical manner (Figure 2.1a). In these multisensory association areas, the strength and the latencies of neuronal responses are affected by the nature of the stimuli (e.g., Avillac et al. 2007; Romanski 2007; Bizley et al. 2007).
Second, recent evidence has demonstrated the existence of multisensory interactions at the first levels of cortical processing (Figure 2.1b). Third, as described in this review, the thalamus, through its numerous connections, could also play a role in this processing (Figure 2.1c). Altogether, this model encompasses the different alternative pathways for multisensory integration. These multiple pathways coexist (Figure 2.1d) and may allow different routes to be used according to the task and/or may mediate information of different kinds (see Wang et al. 2008 for recent evidence of the influence of a perceptual task on neuronal responses). Taken together, the data reviewed here provide evidence for anatomical pathways possibly involved in multisensory integration at low levels of information processing in the primate, and they argue against a strict hierarchical model. The thalamus appears to offer a further route for multisensory integration: as demonstrated in this chapter, its multiple connections place it within cortico–thalamo–cortical loops, suggesting that it may have a key role in multisensory integration.
FIGURE 2.1 Hypothetical scenarios for multisensory and motor integration through anatomically identified pathways. (a) High-level cortical areas as a pathway for multisensory and motor integration. (b) Low-level cortical areas as a pathway for multisensory integration. (c) Thalamus as a pathway for multisensory and motor integration. (d) Combined cortical and thalamic connections as a pathway for multisensory and motor integration. A, auditory cortex; V, visual cortex; S, somatosensory cortex; M, premotor and motor cortex; H, higher-order multisensory regions; T, "non-specific" thalamic nuclei (PuM, LP, VPL, CM, CL, and MD as examples for connections with auditory and somatosensory cortical areas; PuM as an example for connections with A, V, and S cortex).
Finally, higher-order association cortical areas, lower-order cortical areas, as well as the thalamus have now all been shown to take part in multisensory integration. The question now is to determine how this system of multisensory integration is organized and how its different parts communicate to allow a unified perception of the world.
2.5 CONCLUSIONS
Clearly, we are only beginning to understand the complexity of the interactions within the sensory systems and between the sensory and motor systems. More work is needed in both the neural and perceptual domains. At the neural level, additional studies are needed to understand the extent and hierarchical organization of multisensory interactions. At the perceptual level, further experiments should explore the conditions necessary for cross-modal binding and plasticity, and investigate the nature of the information transfer between sensory systems. Such studies will form the basis for a new understanding of how the different sensory and/or motor systems function together.
ACKNOWLEDGMENTS
This study was supported by the CNRS ATIP program (to P.B.), Swiss National Science Foundation grants 31-61857.00 and 310000-110005 (to E.M.R.), and the Swiss National Science Foundation Center of Competence in Research "Neural Plasticity and Repair" (to E.M.R.).
REFERENCES Amedi, A., G. Jacobson, T. Hendler, R. Malach, and E. Zohary. 2002. Convergence of visual and tactile shape processing in the human lateral occipital complex. Cerebral Cortex 12:1202–12. Amedi, A., W.M., Stern, J.A. Camprodon et al. 2007. Shape conveyed by visual-to-auditory sensory substitution activates the lateral occipital complex. Nature Neuroscience 10:687–9.
Andersen, R.A., C., Asanuma, G. Essick, and R.M. Siegel. 1990. Corticocortical connections of anatomically and physiologically defined subdivisions within the inferior parietal lobule. Journal of Comparative Neurology 296:65–113. Andersen, R.A., L.H. Snyder, D.C. Bradley, and J. Xing. 1997. Multimodal representation of space in the posterior parietal cortex and its use in planning movements. Annual Review of Neuroscience 20: 303–30. Avillac, M., S. Ben Hamed, and J.R. Duhamel. 2007. Multisensory integration in the ventral intraparietal area of the macaque monkey. Journal of Neuroscience 27:1922–32. Barbas, H. 1986. Pattern in the laminar origin of corticocortical connections. Journal of Comparative Neurology 252:415–22. Barbas, H., and D.N. Pandya. 1987. Architecture and frontal cortical connections of the premotor cortex (area 6) in the rhesus monkey. Journal of Comparative Neurology 256:211–28. Barone, P., and J.P. Joseph. 1989. Role of the dorsolateral prefrontal cortex in organizing visually guided behavior. Brain, Behavior and Evolution 33:132–5. Barraclough, N.E., D. Xiao, C.I., Baker, M.W. Oram, and D.I. Perrett. 2005. Integration of visual and auditory information by superior temporal sulcus neurons responsive to the sight of actions. Journal of Cognitive Neuroscience 17:377–91. Baylis, G.C., E.T. Rolls, and C.M. Leonard. 1987. Functional subdivisions of the temporal lobe neocortex. Journal of Neuroscience 7:330–42. Bell, A.H., M.A. Meredith, A.J. Van Opstal, and D.P. Munoz. 2005. Crossmodal integration in the primate superior colliculus underlying the preparation and initiation of saccadic eye movements. Journal of Neurophysiology 93:3659–73. Benevento, L.A., J., Fallon, B.J. Davis, and M. Rezak. 1977. Auditory–visual interaction in single cells in the cortex of the superior temporal sulcus and the orbital frontal cortex of the macaque monkey. Experimental Neurology 57:849–72. Bignall, K.E. 1970. Auditory input to frontal polysensory cortex of the squirrel monkey: Possible pathways. Brain Research 19:77–86. Bizley, J.K., and A.J. King. 2008. Visual–auditory spatial processing in auditory cortical neurons. Brain Research 1242:24–36. Bizley, J.K., F.R. Nodal, V.M. Bajo, I. Nelken, and A.J. King. 2007. Physiological and anatomical evidence for multisensory interactions in auditory cortex. Cerebral Cortex 17:2172–89. Blatt, G.J., R.A. Andersen, and G.R. Stoner. 1990. Visual receptive field organization and cortico-cortical connections of the lateral intraparietal area (area LIP) in the macaque. Journal of Comparative Neurology 299:421–45. Bremmer, F., F. Klam, J.R. Duhamel, S. Ben Hamed, and W. Graf. 2002. Visual-vestibular interactive responses in the macaque ventral intraparietal area (VIP). European Journal of Neuroscience 16:1569–86. Bruce, C., R. Desimone, and C.G. Gross. 1981. Visual properties of neurons in a polysensory area in superior temporal sulcus of the macaque. Journal of Neurophysiology 46:369–84. Budinger, E., and H. Scheich. 2009. Anatomical connections suitable for the direct processing of neuronal information of different modalities via the rodent primary auditory cortex (review). Hearing Research 258:16–27. Budinger, E., P. Heil, A. Hess, and H. Scheich. 2006. Multisensory processing via early cortical stages: Connec tions of the primary auditory cortical field with other sensory systems. Neuroscience 143:1065–83. Bullier, J. 2006. What is feed back? In 23 Problems in Systems Neuroscience, ed. J.L. van Hemmen and T.J. Sejnowski, 103–132. 
New York: Oxford University Press. Calvert, G.A. 2001. Crossmodal processing in the human brain: Insights from functional neuroimaging studies (review). Cerebral Cortex 11:1110–23. Calvert, G., C. Spence, and B.E. Stein, eds. 2004. The Handbook of Multisensory Processes. Cambridge, MA: MIT Press. Campi, K.L., K.L. Bales, R. Grunewald, and L. Krubitzer. 2010. Connections of auditory and visual cortex in the prairie vole (Microtus ochrogaster): Evidence for multisensory processing in primary sensory areas. Cerebral Cortex 20:89–108. Cappe, C., and P. Barone. 2005. Heteromodal connections supporting multisensory integration at low levels of cortical processing in the monkey. European Journal of Neuroscience 22:2886–902. Cappe, C., A. Morel, and E.M. Rouiller. 2007. Thalamocortical and the dual pattern of corticothalamic projections of the posterior parietal cortex in macaque monkeys. Neuroscience 146:1371–87. Cappe, C., E.M. Rouiller, and P. Barone. 2009a. Multisensory anatomic pathway (review). Hearing Research 258:28–36.
Cappe, C., G. Thut, V. Romei, and M.M. Murray. 2009b. Selective integration of auditory-visual looming cues by humans. Neuropsychologia 47:1045–52. Cappe, C., A. Morel, P. Barone, and E.M. Rouiller. 2009c. The thalamocortical projection systems in primate: An anatomical support for multisensory and sensorimotor integrations. Cerebral Cortex 19:2025–37. Cappe, C., M.M. Murray, P. Barone, and E.M. Rouiller. 2010. Multisensory facilitation of behavior in monkeys: Effects of stimulus intensity. Journal of Cognitive Neuroscience 22:2850–63. Cohen, Y.E., B.E. Russ, and G.W. Gifford 3rd. 2005. Auditory processing in the posterior parietal cortex (review). Behavioral and Cognitive Neuroscience Reviews 4:218–31. Cohen, Y.E., F. Theunissen, B.E. Russ, and P. Gill. 2007. Acoustic features of rhesus vocalizations and their representation in the ventrolateral prefrontal cortex. Journal of Neurophysiology 97:1470–84. Colby, C.L., and M.E. Goldberg. 1999. Space and attention in parietal cortex (review). Annual Review of Neuroscience 22:319–49. Crick, F., and C. Koch. 1998. Constraints on cortical and thalamic projections: The no-strong-loops hypothesis. Nature 391:245–50. Cusick, C.G., B. Seltzer, M. Cola, and E. Griggs. 1995. Chemoarchitectonics and corticocortical terminations within the superior temporal sulcus of the rhesus monkey: Evidence for subdivisions of superior temporal polysensory cortex. Journal of Comparative Neurology 360:513–35. Darian-Smith, C., A. Tan, and S. Edwards. 1999. Comparing thalamocortical and corticothalamic microstructure and spatial reciprocity in the macaque ventral posterolateral nucleus (VPLc) and medial pulvinar. Journal of Comparative Neurology 410:211–34. de la Mothe, L.A., S. Blumell, Y. Kajikawa, and T.A. Hackett. 2006a. Cortical connections of the auditory cortex in marmoset monkeys: Core and medial belt regions. Journal of Comparative Neurology 496:27–71. de la Mothe, L.A., S. Blumell, Y. Kajikawa, and T.A. Hackett. 2006b. Thalamic connections of the auditory cortex in marmoset monkeys: Core and medial belt regions. Journal of Comparative Neurology 496:72–96. Desimone, R., and C.G. Gross. 1979. Visual areas in the temporal cortex of the macaque. Brain Research 178:363–80. Disbrow, E., E. Litinas, G.H. Recanzo, J. Padberg, and L. Krubitzer. 2003. Cortical connections of the second somatosensory area and the parietal ventral area in macaque monkeys. Journal of Comparative Neurology 462:382–99. Duhamel, J.R., C.L. Colby, and M.E. Goldberg. 1998. Ventral intraparietal area of the macaque: Congruent visual and somatic response properties. Journal of Neurophysiology 79:126–36. Ettlinger, G., and W.A. Wilson. 1990. Cross-modal performance: Behavioural processes, phylogenetic considerations and neural mechanisms (review). Behavioural Brain Research 40:169–92. Falchier, A., S. Clavagnier, P. Barone, and H. Kennedy. 2002. Anatomical evidence of multimodal integration in primate striate cortex. Journal of Neuroscience 22:5749–59. Falchier, A., C.E. Schroeder, T.A. Hackett et al. 2010. Low level intersensory connectivity as a fundamental feature of neocortex. Cerebral Cortex 20:1529–38. Felleman, D.J., and D.C. Van Essen. 1991. Distributed hierarchical processing in the primate cerebral cortex. Cerebral Cortex 1:1–47. Fogassi, L., V. Gallese, L. Fadiga, G. Luppino, M. Matelli, and G. Rizzolatti. 1996. Coding of peripersonal space in inferior premotor cortex (area F4). Journal of Neurophysiology 76:141–57. Fort, A., C. Delpuech, J. Pernier, and M.H. Giard. 2002. 
Dynamics of corticosubcortical cross-modal operations involved in audio-visual object detection in humans. Cerebral Cortex 12:1031–39. Forster, B., C. Cavina-Pratesi, S.M. Aglioti, and G. Berlucchi. 2002. Redundant target effect and intersensory facilitation from visual-tactile interactions in simple reaction time. Experimental Brain Research 143:480–487. Foxe, J.J., I.A. Morocz, M.M. Murray, B.A. Higgins, D.C. Javitt, and C.E. Schroeder. 2000. Multisensory auditory–somatosensory interactions in early cortical processing revealed by high-density electrical mapping. Brain Research. Cognitive Brain Research 10:77–83. Foxe, J.J., G.R. Wylie, A. Martinez et al. 2002. Auditory–somatosensory multisensory processing in auditory association cortex: An fMRI study. Journal of Neurophysiology 88:540–3. Frens, M.A., and A.J. Van Opstal. 1998. Visual–auditory interactions modulate saccade-related activity in monkey superior colliculus. Brain Research Bulletin 46:211–24. Fries, P., S. Neuenschwander, A.K. Engel, R. Goebel, and W. Singer. 2001. Rapid feature selective neuronal synchronization through correlated latency shifting. Nature Neuroscience 4:194–200. Fu, K.M., T.A. Johnston, A.S. Shah et al. 2003. Auditory cortical neurons respond to somatosensory stimulation. Journal of Neuroscience 23:7510–5. Fuster, J.M. 2001. The prefrontal cortex—an update: Time is of the essence (review). Neuron 30:319–33.
Fuster, J.M., M. Bodner, and J.K. Kroger. 2000. Cross-modal and cross-temporal association in neurons of frontal cortex. Nature 405:347–51. Gaffan, D., and S. Harrison. 1991. Auditory–visual associations, hemispheric specialization and temporal– frontal interaction in the rhesus monkey. Brain 114:2133–44. Galletti, C., M. Gamberini, D.F. Kutz, P. Fattori, G. Luppino, M. Matelli. 2001. The cortical connections of area V6: An occipito-parietal network processing visual information. European Journal of Neuroscience 13:1572–88. Gattass, R., E. Oswaldo-Cruz, and A.P. Sousa. 1979. Visual receptive fields of units in the pulvinar of cebus monkey. Brain Research 160:413–30. Ghazanfar, A.A. 2009. The multisensory roles for auditory cortex in primate vocal communication (review). Hearing Research 258:113–20. Ghazanfar, A.A., and C.E. Schroeder. 2006. Is neocortex essentially multisensory? (review). Trends in Cognitive Sciences 10:278–85. Ghazanfar, A.A., J.X. Maier, K.L. Hoffman, and N.K. Logothetis. 2005. Multisensory integration of dynamic faces and voices in rhesus monkey auditory cortex. Journal of Neuroscience 25:5004–12. Ghazanfar, A.A., C. Chandrasekaran, and N.K. Logothetis. 2008. Interactions between the superior temporal sulcus and auditory cortex mediate dynamic face/voice integration in rhesus monkeys. Journal of Neuroscience 28:4457–69. Giard, M.H., and F. Peronnet. 1999. Auditory–visual integration during multimodal object recognition in humans: A behavioral and electrophysiological study. Journal of Cognitive Neuroscience 11:473–90. Gifford 3rd, G.W., and Y.E. Cohen. 2004. Effect of a central fixation light on auditory spatial responses in area LIP. Journal of Neurophysiology 91:2929–33. Gingras, G., B.A. Rowland, and B.E. Stein. 2009. The differing impact of multisensory and unisensory integration on behavior. Journal of Neuroscience 29:4897–902. Giray, M., and R. Ulrich. 1993. Motor coactivation revealed by response force in divided and focused attention. Journal of Experimental Psychology. Human Perception and Performance 19:1278–91. Gondan, M., B. Niederhaus, F. Rösler, and B. Röder. 2005. Multisensory processing in the redundant-target effect: A behavioral and event-related potential study. Perception & Psychophysics 67:713–26. Gottlieb, J. 2007. From thought to action: The parietal cortex as a bridge between perception, action, and cognition (review). Neuron 53:9–16. Graziano, M.S., G.S. Yap, and C.G. Gross. 1994. Coding of visual space by premotor neurons. Science 266:1054–7. Graziano, M.S., L.A. Reiss, and C.G. Gross. 1999. A neuronal representation of the location of nearby sounds. Nature 397:428–30. Guillery, R.W. 1995. Anatomical evidence concerning the role of the thalamus in corticocortical communication: A brief review. Journal of Anatomy 187:583–92. Gutierrez, C., M.G. Cola, B. Seltzer, and C. Cusick. 2000. Neurochemical and connectional organization of the dorsal pulvinar complex in monkeys. Journal of Comparative Neurology 419:61–86. Hackett, T.A., I. Stepniewska, and J.H. Kaas. 1998. Thalamocortical connections of the parabelt auditory cortex in macaque monkeys. Journal of Comparative Neurology 400:271–86. Hackett, T.A., I. Stepniewska, and J.H. Kaas. 1999. Prefrontal connections of the parabelt auditory cortex in macaque monkeys. Brain Research 817:45–58. Hackett, T.A., L.A. de La Mothe, I. Ulbert, G. Karmos, J. Smiley, and C.E. Schroeder. 2007. Multisensory convergence in auditory cortex: II. Thalamocortical connections of the caudal superior temporal plane. 
Journal of Comparative Neurology 502:924–52. Hagen, M.C., O. Franzén, F. McGlone, G. Essick, C. Dancer, and J.V. Pardo. 2002. Tactile motion activates the human middle temporal/V5 (MT/V5) complex. European Journal of Neuroscience 16:957–64. Hall, A.J., and S.G. Lomber. 2008. Auditory cortex projections target the peripheral field representation of primary visual cortex. Experimental Brain Research 190:413–30. Hecht, D., M. Reiner, and A. Karni. 2008. Enhancement of response times to bi- and tri-modal sensory stimuli during active movements. Experimental Brain Research 185:655–65. Heffner, R.S., and H.E. Heffner. 1992. Visual factors in sound localization in mammals. Journal of Comparative Neurology 317:219–32. Hikosaka, K., E. Iwai, H. Saito, and K. Tanaka. 1988. Polysensory properties of neurons in the anterior bank of the caudal superior temporal sulcus of the macaque monkey. Journal of Neurophysiology 60:1615–37. Huffman, K.J., and L. Krubitzer. 2001. Area 3a: topographic organization and cortical connections in marmoset monkeys. Cerebral Cortex 11:849–67.
Innocenti, G.M., P. Berbel, and S. Clarke. 1988. Development of projections from auditory to visual areas in the cat. Journal of Comparative Neurology 272:242–59. James, T.W., G.K. Humphrey, J.S. Gati, P. Servos, R.S. Menon, and M.A. Goodale. 2002. Haptic study of threedimensional objects activates extrastriate visual areas. Neuropsychologia 40:1706–14. Jepma, M., E.J. Wagenmakers, G.P. Band, and S. Nieuwenhuis. 2009. The effects of accessory stimuli on information processing: Evidence from electrophysiology and a diffusion model analysis. Journal of Cognitive Neuroscience 21:847–64. Jones, E.G. 1998. Viewpoint: The core and matrix of thalamic organization. Neuroscience 85:331–45. Joseph, J.P., and P. Barone. 1987. Prefrontal unit activity during a delayed oculomotor task in the monkey. Experimental Brain Research 67:460–8. Kaas, J.H., and C.E. Collins. 2001. Evolving ideas of brain evolution. Nature 411:141–2. Kaas, J., and C.E. Collins. 2004. The resurrection of multisensory cortex in primates: connection patterns that integrates modalities. In The Handbook of Multisensory Processes, ed. G. Calvert, C. Spence, and B.E. Stein, 285–93. Cambridge, MA: MIT Press. Kaas, J.H., and T.A. Hackett. 2000. Subdivisions of auditory cortex and processing streams in primates. Proceedings of the National Academy of Sciences of the United States of America 97:11793–9. Kaas, J.H., and A. Morel. 1993. Connections of visual areas of the upper temporal lobe of owl monkeys: The MT crescent and dorsal and ventral subdivisions of FST. Journal of Neuroscience 13:534–46. Kayser, C., and N.K. Logothetis. 2009. Directed interactions between auditory and superior temporal cortices and their role in sensory integration (review). Frontiers in Integrative Neuroscience 3:7. doi: 10.3389/ neuro.07.007.2009. Kayser, C., C.I. Petkov, M. Augath, and N.K. Logothetis. 2005. Integration of touch and sound in auditory cortex. Neuron 48:373–84. Kayser, C., C.I. Petkov, M. Augath, and N.K. Logothetis. 2007. Functional imaging reveals visual modulation of specific fields in auditory cortex. Journal of Neuroscience 27:1824–35. Kayser, C., C.I. Petkov, and N.K. Logothetis. 2008. Visual modulation of neurons in auditory cortex. Cerebral Cortex 18:1560–74. Komura, Y., R. Tamura, T. Uwano, H. Nishijo, and T. Ono. 2005. Auditory thalamus integrates visual inputs into behavioral gains. Nature Neuroscience 8:1203–9. Krubitzer, L.A., and J.H. Kaas. 1990. The organization and connections of somatosensory cortex in marmosets. Journal of Neuroscience 10:952–74. Lakatos, P., C.M. Chen, M.N. O’Connell, A. Mills, and C.E. Schroeder. 2007. Neuronal oscillations and multisensory interaction in primary auditory cortex. Neuron 53:279–92. Lamarre, Y., L. Busby, and G. Spidalieri. 1983. Fast ballistic arm movements triggered by visual, auditory, and somesthetic stimuli in the monkey: I. Activity of precentral cortical neurons. Journal of Neurophysiology 50:1343–58. Laurienti, P.J., R.A. Kraft, J.A. Maldjian, J.H. Burdette, and M.T. Wallace. 2004. Semantic congruence is a critical factor in multisensory behavioral performance. Experimental Brain Research 158:405–14. Lewis, J.W., and D.C. Van Essen. 2000. Corticocortical connections of visual, sensorimotor, and multimodal processing areas in the parietal lobe of the macaque monkey. Journal of Comparative Neurology 428:112–37. Linden, J.F., A. Grunewald, and R.A. Andersen. 1999. Responses to auditory stimuli in macaque lateral intraparietal area: II. Behavioral modulation. 
Journal of Neurophysiology 82:343–58. Lovelace, C.T., B.E. Stein, and M.T. Wallace. 2003. An irrelevant light enhances auditory detection in humans: A psychophysical analysis of multisensory integration in stimulus detection. Brain Research. Cognitive Brain Research 17:447–53. Martuzzi, R., M.M. Murray, C.M. Michel et al. 2007. Multisensory interactions within human primary cortices revealed by BOLD dynamics. Cerebral Cortex 17:1672–9. Maunsell, J.H., and D.C. Van Essen. 1983. The connections of the middle temporal visual area (MT) and their relationship to a cortical hierarchy in the macaque monkey. Journal of Neuroscience 3:2563–86. Miller, J. 1982. Divided attention: Evidence for coactivation with redundant signals. Cognitive Psychology 14:247–79. Miller, J., R. Ulrich, and Y. Lamarre. 2001. Locus of the redundant-signals effect in bimodal divided attention: A neurophysiological analysis. Perception & Psychophysics 63:555–62. Milner, B., M. Petrides, and M.L. Smith. 1985. Frontal lobes and the temporal organization of memory. Human Neurobiology 4:137–42. Molholm, S., W. Ritter, M.M. Murray, D.C. Javitt, C.E. Schroeder, and J.J. Foxe. 2002. Multisensory auditory– visual interactions during early sensory processing in humans: A high-density electrical mapping study. Brain Research. Cognitive Brain Research 14:115–28.
Mordkoff, J.T., J. Miller, and A.C. Roch. 1996. Absence of coactivation in the motor component: Evidence from psychophysiological measures of target detection. Journal of Experimental Psychology. Human Perception and Performance 22:25–41. Morel, A., J. Liu, T. Wannier, D. Jeanmonod, and E.M. Rouiller. 2005. Divergence and convergence of thalamocortical projections to premotor and supplementary motor cortex: A multiple tracing study in macaque monkey. European Journal of Neuroscience 21:1007–29. Murray, M.M., S. Molholm, C.M. Michel et al. 2005. Grabbing your ear: Rapid auditory–somatosensory multi sensory interactions in low-level sensory cortices are not constrained by stimulus alignment. Cerebral Cortex 15:963–74. Palmer, S.M., and M.G. Rosa. 2006. A distinct anatomical network of cortical areas for analysis of motion in far peripheral vision. European Journal of Neuroscience 24:2389–405. Pandya, D.N., and B. Seltzer. 1982. Intrinsic connections and architectonics of posterior parietal cortex in the rhesus monkey. Journal of Comparative Neurology 204:196–210. Petrides, M., and S.D. Iversen. 1976. Cross-modal matching and the primate frontal cortex. Science 192:1023–4. Poremba, A., R.C. Saunders, A.M. Crane, M. Cook, L. Sokoloff, and M. Mishkin. 2003. Functional mapping of the primate auditory system. Science 299:568–72. Raab, D.H. 1962. Statistical facilitation of simple reaction times. Transactions of the New York Academy of Sciences 24:574–90. Rizzolatti, G., L. Fogassi, and V. Gallese. 1997. Parietal cortex: From sight to action (review). Current Opinion in Neurobiology 7:562–7. Rockland, K.S. 1996. Two types of corticopulvinar terminations: Round (type 2) and elongate (type 1). Journal of Comparative Neurology 368:57–87. Rockland, K.S., and H. Ojima. 2003. Multisensory convergence in calcarine visual areas in macaque monkey. International Journal of Psychophysiology 50:19–26. Romanski, L.M. 2004. Domain specificity in the primate prefrontal cortex (review). Cognitive, Affective & Behavioral Neuroscience 4:421–9. Romanski, L.M. 2007. Representation and integration of auditory and visual stimuli in the primate ventral lateral prefrontal cortex. Cerebral Cortex 17 Suppl. no. 1, i61–9. Romanski, L.M., M. Giguere, J.F. Bates, and P.S. Goldman-Rakic. 1997. Topographic organization of medial pulvinar connections with the prefrontal cortex in the rhesus monkey. Journal of Comparative Neurology 379:313–32. Romanski, L.M., J.F. Bates, and P.S. Goldman-Rakic. 1999. Auditory belt and parabelt projections to the prefrontal cortex in the rhesus monkey. Journal of Comparative Neurology 403:141–57. Romanski, L.M., and P.S. Goldman-Rakic. 2002. An auditory domain in primate prefrontal cortex. Nature Neuroscience 5:15–6. Romei, V., M.M. Murray, L.B. Merabet, and G. Thut. 2007. Occipital transcranial magnetic stimulation has opposing effects on visual and auditory stimulus detection: Implications for multisensory interactions. Journal of Neuroscience 27:11465–72. Rouiller, E.M., and C. Durif. 2004. The dual pattern of corticothalamic projection of the primary auditory cortex in macaque monkey. Neuroscience Letters 358:49–52. Rouiller, E.M., J. Tanné, V. Moret, I. Kermadi, D. Boussaoud, and E. Welker. 1998. Dual morphology and topography of the corticothalamic terminals originating from the primary, supplementary motor, and dorsal premotor cortical areas in macaque monkeys. Journal of Comparative Neurology 396:169–85. Rouiller, E.M., and E. Welker. 2000. 
A comparative analysis of the morphology of corticothalamic projections in mammals. Brain Research Bulletin 53:727–41. Rouiller, E.M., T. Wannier, and A. Morel. 2003. The dual pattern of corticothalamic projection of the premotor cortex in macaque monkeys. Thalamus & Related Systems 2:189–97. Russ, B.E., A.M. Kim, K.L. Abrahamsen, R. Kiringoda, and Y.E. Cohen. 2006. Responses of neurons in the lateral intraparietal area to central visual cues. Experimental Brain Research 174:712–27. Sadato, N., A. Pascual-Leone, J. Grafman et al. 1996. Activation of the primary visual cortex by Braille reading in blind subjects. Nature 380:526–8. Salin, P.A., and J. Bullier. 1995. Corticocortical connections in the visual system: Structure and function. Physiological Reviews 75:107–54. Sathian, K., and A. Zangaladze. 2002. Feeling with the mind’s eye: Contribution of visual cortex to tactile perception (review). Behavioural Brain Research 135:127–32. Schall, J.D., A. Morel, D.J. King, and J. Bullier. 1995. Topography of visual cortex connections with frontal eye field in macaque: Convergence and segregation of processing streams. Journal of Neuroscience 15:4464–87.
Schlack, A., S.J. Sterbing-D’Angelo, K. Hartung, K.P. Hoffmann, and F. Bremmer. 2005. Multisensory space representations in the macaque ventral intraparietal area. Journal of Neuroscience 25:4616–25. Schroeder, C.E., and J.J. Foxe. 2002. The timing and laminar profile of converging inputs to multisensory areas of the macaque neocortex. Cognitive Brain Research 14:187–98. Schroeder, C.E., R.W. Lindsley, C. Specht, A. Marcovici, J.F. Smiley, and D.C. Javitt. 2001. Somatosensory input to auditory association cortex in the macaque monkey. Journal of Neurophysiology 85:1322–7. Schwartz, M.L., J.J. Dekker, and P.S. Goldman-Rakic. 1991. Dual mode of corticothalamic synaptic termination in the mediodorsal nucleus of the rhesus monkey. Journal of Comparative Neurology 309:289–304. Seltzer, B., and D.N. Pandya. 1994. Parietal, temporal, and occipital projections to cortex of the superior temporal sulcus in the rhesus monkey: A retrograde tracer study. Journal of Comparative Neurology 343:445–63. Sherman, S.M. 2007. The thalamus is more than just a relay. Current Opinion in Neurobiology 17:417–22. Sherman, S.M., and R.W. Guillery. 2002. The role of the thalamus in the flow of information to the cortex. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences 357:1695–708. Sherman, S.M., and R.W. Guillery. 2005. Exploring the Thalamus and Its Role in Cortical Function. Cambridge: MIT Press. Shipp, S. 2003. The functional logic of cortico-pulvinar connections. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences 358:1605–24. Smiley, J.F., T.A. Hackett, I. Ulbert, G. Karmas, P. Lakatos, D.C. Javitt, and C.E. Schroeder. 2007. Multisensory convergence in auditory cortex, I. Cortical connections of the caudal superior temporal plane in macaque monkeys. Journal of Comparative Neurology 502:894–923. Smiley, J.F., and A. Falchier. 2009. Multisensory connections of monkey auditory cerebral cortex. Hearing Research 258:37–46. Sperdin, H., C. Cappe, J.J. Foxe, and M.M. Murray. 2009. Early, low-level auditory–somatosensory multisensory interactions impact reaction time speed. Frontiers in Integrative Neuroscience 3:2. doi:10.3389/ neuro.07.002.2009. Stein, B.E., and M.A. Meredith. 1993. The Merging of the Senses. Cambridge, MA: MIT Press. Stein, B.E., M.A. Meredith, W.S. Huneycutt, and L. Mcdade. 1989. Behavioral indices of multisensory integration: Orientation to visual cues is affected by auditory stimuli. Journal of Cognitive Neuroscience 1:12–24. Suzuki, H. 1985. Distribution and organization of visual and auditory neurons in the monkey prefrontal cortex. Vision Research 25:465–9. Tanné-Gariépy, J., E.M. Rouiller, and D. Boussaoud. 2002. Parietal inputs to dorsal versus ventral premotor areas in the macaque monkey: Evidence for largely segregated visuomotor pathways. Experimental Brain Research 145:91–103. Taktakishvili, O., E. Sivan-Loukianova, K. Kultas-Ilinsky, and I.A. Ilinsky. 2002. Posterior parietal cortex projections to the ventral lateral and some association thalamic nuclei in Macaca mulatta. Brain Research Bulletin 59:135–50. Wallace, M.T., R. Ramachandran, and B.E. Stein. 2004. A revised view of sensory cortical parcellation. Proceedings of the National Academy of Sciences of the United States of America 101:2167–72. Wang, Y., S. Celebrini, Y. Trotter, and P. Barone. 2008. Visuo–auditory interactions in the primary visual cortex of the behaving monkey: Electrophysiological evidence. BMC Neuroscience 9:79. Yirmiya, R., and S. 
Hocherman. 1987. Auditory- and movement-related neural activity interact in the pulvinar of the behaving rhesus monkey. Brain Research 402:93–102. Zampini, M., D. Torresan, C. Spence, and M.M. Murray. 2007. Auditory–somatosensory multisensory interactions in front and rear space. Neuropsychologia 45:1869–77.
3 What Can Multisensory Processing Tell Us about the Functional Organization of Auditory Cortex?
Jennifer K. Bizley and Andrew J. King
CONTENTS
3.1 Introduction
3.2 Functional Specialization within Auditory Cortex?
3.3 Ferret Auditory Cortex: A Model for Multisensory Processing
    3.3.1 Organization of Ferret Auditory Cortex
    3.3.2 Surrounding Cortical Fields
    3.3.3 Sensitivity to Complex Sounds
    3.3.4 Visual Sensitivity in Auditory Cortex
    3.3.5 Visual Inputs Enhance Processing in Auditory Cortex
3.4 Where Do Visual Inputs to Auditory Cortex Come From?
3.5 What Are the Perceptual Consequences of Multisensory Integration in the Auditory Cortex?
    3.5.1 Combining Auditory and Visual Spatial Representations in the Brain
    3.5.2 A Role for Auditory Cortex in Spatial Recalibration?
3.6 Concluding Remarks
References
3.1 INTRODUCTION
The traditional view of sensory processing is that the pooling and integration of information across different modalities takes place in specific areas of the brain only after extensive processing within modality-specific subcortical and cortical regions. This seems like a logical arrangement because our various senses are responsible for transducing different forms of energy into neural activity and give rise to quite distinct perceptions. To a large extent, each of the sensory systems can operate independently. We can, after all, understand someone speaking by telephone or read a book perfectly well without recourse to cues provided by other modalities. It is now clear, however, that multisensory convergence is considerably more widespread in the brain, and particularly the cerebral cortex, than was once thought. Indeed, even the primary cortical areas in each of the main senses have been claimed as part of the growing network of multisensory regions (Ghazanfar and Schroeder 2006). It is clearly beneficial to be able to combine information from the different senses. Although the perception of speech is based on the processing of sound, what we actually hear can be influenced by visual cues provided by lip movements. This can result in an improvement in speech intelligibility
in the presence of other distracting sounds (Sumby and Pollack 1954) or even a subjective change in the speech sounds that are perceived (McGurk and MacDonald 1976). Similarly, the accuracy with which the source of a sound can be localized is affected by the availability of both spatially congruent (Shelton and Searle 1980; Stein et al. 1989) and conflicting (Bertelson and Radeau 1981) visual stimuli. With countless other examples of cross-modal interactions at the perceptual level (Calvert and Thesen 2004), it is perhaps not surprising that multisensory convergence is so widely found throughout the cerebral cortex. The major challenge that we are now faced with is to identify the function of multisensory integration in different cortical circuits, and particularly at early levels of the cortical hierarchy—the primary and secondary sensory areas—which are more likely to be involved in general-purpose processing relating to multiple sound parameters than in task-specific computational operations (Griffiths et al. 2004; King and Nelken 2009). In doing so, we have to try and understand how other modalities influence the sensitivity or selectivity of cortical neurons in those areas while retaining the modality specificity of the percepts to which the activity of the neurons contributes. By investigating the sources of origin of these inputs and the way in which they interact with the dominant input modality for a given cortical area, we can begin to constrain our ideas about the potential functions of multisensory integration in early sensory cortex. In this article, we focus on the organization and putative functions of visual inputs to the auditory cortex. Although anatomical and physiological studies have revealed multisensory interactions in visual and somatosensory areas, it is arguably the auditory cortex where most attention has been paid and where we may be closest to answering these questions.
3.2 FUNCTIONAL SPECIALIZATION WITHIN AUDITORY CORTEX? A common feature of all sensory systems is that they comprise multiple cortical areas that can be defined both physiologically and anatomically, and which are collectively involved in the processing of the world around us. Although most studies on the cortical auditory system have focused on the primary area, A1, there is considerable interest in the extent to which different sound features are represented in parallel in distinct functional streams that extend beyond A1 (Griffiths et al. 2004). Research on this question has been heavily influenced by studies of the visual cortex and, in particular, by the proposal that a division of function exists, with separate dorsal and ventral pathways involved in visuomotor control and object identification, respectively. The dorsal processing stream, specialized for detecting object motion and discriminating spatial relationships, includes the middle temporal (MT) and medial superior temporal (MST) areas, whereas the ventral stream comprises areas responsible for color, form, and pattern discrimination. Although the notion of strict parallel processing of information, originating subcortically in the p and m pathways and terminating in temporal and parietal cortical areas, is certainly an oversimplification (Merigan and Maunsell 1993), the perception–action hypothesis is supported by neuroimaging, human neuropsychology, monkey neurophysiology, and human psychophysical experiments (reviewed by Goodale and Westwood 2004). A popular, if controversial, theory seeks to impose a similar organizational structure onto the auditory cortex. Within this framework, Rauschecker and Tian (2000) proposed that the auditory cortex can be divided into a rostral processing stream, responsible for sound identification, and a caudal processing stream, involved in sound localization. Human functional imaging data provide support for this idea (Alain et al. 2001; Barrett and Hall 2006; Maeder et al. 2001; Warren and Griffiths 2003), and there is evidence for regional differentiation based on the physiological response properties of single neurons recorded in the auditory cortex of nonhuman primates (Tian et al. 2001; Recanzone 2000; Woods et al. 2006; Bendor and Wang 2005). However, the most compelling evidence for a division of labor has been provided by the specific auditory deficits induced by transiently deactivating different cortical areas in cats. Thus, normal sound localization in this species requires the activation of A1, the posterior auditory field (PAF), the anterior ectosylvian sulcus and the dorsal zone of the auditory cortex, whereas other areas, notably the anterior auditory
field (AAF), ventral PAF (VPAF), and secondary auditory cortex (A2) do not appear to contribute to this task (Malhotra and Lomber 2007). Moreover, a double dissociation between PAF and AAF in the same animals has been demonstrated, with impaired sound localization produced by cooling of PAF but not AAF, and impaired temporal pattern discrimination resulting from inactivation of AAF but not PAF (Lomber and Malhotra 2008). Lastly, anatomical projection patterns in nonhuman primates support differential roles for rostral and caudal auditory cortex, with each of those areas having distinct prefrontal targets (Hackett et al. 1999; Romanski et al. 1999). Despite this apparent wealth of data in support of functional specialization within the auditory cortex, there are a number of studies that indicate that sensitivity to both spatial and nonspatial sound attributes is widely distributed across different cortical fields (Harrington et al. 2008; Stecker et al. 2003; Las et al. 2008; Hall and Plack 2009; Recanzone 2008; Nelken et al. 2008; Bizley et al. 2009). Moreover, in humans, circumscribed lesions within the putative "what" and "where" pathways do not always result in the predicted deficits in sound recognition and localization (Adriani et al. 2003). Clearly defined output pathways from auditory cortex to prefrontal cortex certainly seem to exist, but what the behavioral deficits observed following localized deactivation or damage imply about the functional organization of the auditory cortex itself is less clear-cut. Loss of activity in any one part of the network will, after all, affect both upstream cortical areas and potentially the responses of subcortical neurons that receive descending projections from that region of the cortex (Nakamoto et al. 2008). Thus, a behavioral deficit does not necessarily reflect the specialized properties of the neurons within the silenced cortical area per se, but rather the contribution of the processing pathways that the area is integral to. Can the distribution and nature of multisensory processing in the auditory cortex help reconcile the apparently contrasting findings outlined above? If multisensory interactions in the cortex are to play a meaningful role in perception and behavior, it is essential that the neurons can integrate the corresponding multisensory features of individual objects or events, such as vocalizations and their associated lip movements or the visual and auditory cues originating from the same location in space. Consequently, the extent to which spatial and nonspatial sound features are processed in parallel in the auditory cortex should also be apparent in both the multisensory response properties of the neurons found there and the sources of origin of its visual inputs. Indeed, evidence for task-specific activation of higher cortical areas by different stimulus modalities has recently been provided in humans (Renier et al. 2009). In the next section, we focus on the extent to which anatomical and physiological studies of multisensory convergence and processing in the auditory cortex of the ferret have shed light on this issue. In recent years, this species has gained popularity for studies of auditory cortical processing, in part because of its particular suitability for behavioral studies.
3.3 FERRET AUDITORY CORTEX: A MODEL FOR MULTISENSORY PROCESSING
3.3.1 Organization of Ferret Auditory Cortex
Ferret auditory cortex consists of at least six acoustically responsive areas: two core fields, A1 and AAF, which occupy the middle ectosylvian gyrus; two belt areas on the posterior ectosylvian gyrus, the posterior pseudosylvian field (PPF) and posterior suprasylvian field (PSF); plus two areas on the anterior ectosylvian gyrus, the anterior dorsal field (ADF) and the anterior ventral field (AVF) (Bizley et al. 2005; Figure 3.1a). A1, AAF, PPF, and PSF are all tonotopically organized: the neurons found there respond to pure tones and are most sensitive to particular sound frequencies, which vary systematically in value with neuron location within each cortical area. There is little doubt that an equivalent area to the region designated as A1 is found in many different mammalian species, including humans. AAF also appears to be homologous to AAF in other species including the gerbil (Thomas et al. 1993) and the cat (Imaizumi et al. 2004), and is characterized by an underrepresentation of neurons preferring middle frequencies and having shorter response latencies compared to A1.
FIGURE 3.1 Visual inputs to ferret auditory cortex. (a) Ferret sensory cortex. Visual (areas 17–20, PS, SSY, AMLS), posterior parietal (PPr, PPc), somatosensory (S1, SIII, MRSS), and auditory areas (A1, AAF, PPF, PSF, and ADF) have been identified. In addition, LRSS and AVF are multisensory regions, although many of the areas classified as modality specific also contain some multisensory neurons. (b) Location of neurons in visual cortex that project to auditory cortex. Tracer injections made into core auditory cortex (A1: BDA, shown in black, and AAF: CTβ, shown in gray) result in retrograde labeling in early visual areas. Every fifth section (50 µm thick) was examined, but for the purpose of illustration, labeling from four sections was collapsed onto single sections. Dotted lines mark the limit between cortical layers IV and V; dashed lines delimit the white matter (wm). (c) Tracer injections made into belt auditory cortex. Retrograde labeling after an injection of CTβ into the anterior fields (on the borders of ADF and AVF) is shown in gray, and retrograde labeling resulting from a BDA injection into the posterior fields PPF and PSF is shown in black. Note the difference in the extent and distribution of labeling after injections into the core and belt areas of auditory cortex. Scale bars in (b) and (c), 1 mm. (d) Summary of sources of visual cortical input to auditory cortex. (Anatomical data adapted with permission from Bizley, J.K. et al., Cereb. Cortex, 17, 2172–89, 2007.)
Neurons in the posterior fields can be distinguished from those in the primary areas by the temporal characteristics of their responses; discharges are often sustained and they vary in latency and firing pattern in a stimulus-dependent manner. The frequency response areas of posterior field neurons are often circumscribed, exhibiting tuning for sound level as well as frequency. As such, the posterior fields in the ferret resemble PAF and VPAF in the cat (Stecker et al. 2003; Phillips and Orman 1984; Loftus and Sutter 2001) and cortical areas R and RT in the marmoset monkey (Bizley et al. 2005; Bendor and Wang 2008), although whether PPF and PSF actually correspond to these fields is uncertain. Neurons in ADF also respond to pure tones, but are not tonotopically organized (Bizley et al. 2005). The lack of tonotopicity and the broad, high-threshold frequency response areas that characterize this field are also properties of cat A2 (Schreiner and Cynader 1984). However, given that ferret ADF neurons seem to show relatively greater spatial sensitivity than those in surrounding cortical fields (see following sections), which is not a feature of cat A2, it seems unlikely that these areas are homologous. Ventral to ADF lies AVF. Although many of the neurons that have been recorded there are driven by sound, the high incidence of visually responsive neurons (see Section 3.3.4) makes it likely that AVF should be regarded as a parabelt or higher multisensory field. Given its proximity to the somatosensory area on the medial bank of the rostral suprasylvian sulcus (MRSS) (Keniston et al. 2009), it is possible that AVF neurons might also be influenced by tactile stimuli, but this remains to be determined. Other studies have also highlighted the multisensory nature of the anterior ectosylvian gyrus. For example, Ramsay and Meredith (2004) described an area surrounding the pseudosylvian sulcus that receives largely segregated inputs from the primary visual and somatosensory cortices, which they termed the pseudosylvian sulcal cortex. Manger et al. (2005) reported that a visually responsive area lies parallel to the pseudosylvian sulcus on the posterolateral half of the anterior ectosylvian gyrus, which also contains bisensory neurons that respond either to both visual and tactile or to visual and auditory stimulation. They termed this area AEV, following the terminology used for the visual region within the cat’s anterior ectosylvian sulcus. Because this region overlaps in part with the acoustically responsive areas that we refer to as ADF and AVF, further research using a range of stimuli will be needed to fully characterize this part of the ferret’s cortex. However, the presence of a robust projection from AVF to the superior colliculus (Bajo et al. 2010) makes it likely that this is equivalent to the anterior ectosylvian sulcus in the cat.
3.3.2 Surrounding Cortical Fields
The different auditory cortical areas described in the previous section are all found on the ectosylvian gyrus (EG), which is enclosed by the suprasylvian sulcus (Figure 3.1a). The somatosensory cortex lies rostral to the EG (Rice et al. 1993; McLaughlin et al. 1998), extrastriate visual areas are located caudally (Redies et al. 1990), and the parietal cortex is found dorsal to the EG (Manger et al. 2002). The suprasylvian sulcus therefore separates the different auditory fields from functionally distinct parts of the cerebral cortex. Within the suprasylvian sulcus itself, several additional cortical fields have been characterized (Philipp et al. 2006; Manger et al. 2004, 2008; Cantone et al. 2006; Keniston et al. 2008). Beginning at the rostral border between the auditory and somatosensory cortices, field MRSS (Keniston et al. 2009) and the lateral bank of the rostral suprasylvian sulcus (LRSS) (Keniston et al. 2008) form the medial and lateral sides of the suprasylvian sulcus, respectively. Field LRSS has been identified as an auditory–somatosensory area, whereas MRSS is more modality specific and is thought to be a higher somatosensory field. Field MRSS is bordered by the anteromedial lateral suprasylvian visual area (AMLS), which lines the medial or dorsal bank of the suprasylvian sulcus (Manger et al. 2008). Two more visually responsive regions, the suprasylvian visual area (SSY) (Cantone et al. 2006; Philipp et al. 2006) and the posterior suprasylvian area (PS) (Manger et al. 2004), are found on the caudal side of the sulcus. SSY corresponds in location to an area described by Philipp et al.
(2006) as the ferret homologue of primate motion-processing area MT. This region has also been described by Manger et al. (2008) as the posteromedial suprasylvian visual area, but we will stay with the terminology used in our previous articles and refer to it as SSY. PS has not been comprehensively investigated and, to our knowledge, neither of these sulcal fields has been tested with auditory or somatosensory stimuli. On the lateral banks of the suprasylvian sulcus, at the dorsal and caudal edges of the EG, there remains an area of uninvestigated cortex. On the basis of its proximity to AMLS and SSY, this region has tentatively been divided into the anterolateral lateral suprasylvian visual area (ALLS) and the posterolateral lateral suprasylvian visual area (PLLS) by Manger et al. (2008). However, because these regions of the sulcal cortex lie immediately adjacent to the primary auditory fields, it is much more likely that they are multisensory in nature.
3.3.3 Sensitivity to Complex Sounds
In an attempt to determine whether spatial and nonspatial stimulus attributes are represented within anatomically distinct regions of the ferret auditory cortex, we investigated the sensitivity of neurons in both core and belt areas to stimulus periodicity, timbre, and spatial location (Bizley et al. 2009). Artificial vowel sounds were used for this purpose, as they allowed each of these stimulus dimensions to be varied parametrically. Recordings in our laboratory have shown that ferret vocalizations cover the same frequency range as the sounds used in this study. Vowel identification involves picking out the formant peaks in the spectral envelope of the sound, and is therefore a timbre discrimination task. The periodicity of the sound corresponds to its perceived pitch and conveys information about speaker identity (males tend to have lower-pitched voices than females) and emotional state. Neuronal sensitivity to timbre and pitch should therefore be found in cortical areas concerned with stimulus identification. Neurons recorded throughout the five cortical areas examined (A1, AAF, PPF, PSF, and ADF) were found to be sensitive to the pitch, timbre, and location of the sound source, implying a distributed representation of both spatial and nonspatial sound properties. Nevertheless, significant interareal differences were observed. Sensitivity to sound pitch and timbre was most pronounced in the primary and posterior auditory fields (Bizley et al. 2009). By contrast, relatively greater sensitivity to sound-source location was found in A1 and in the areas around the pseudosylvian sulcus, which is consistent with the finding that the responses of neurons in ADF carry more information about sound azimuth than those in other auditory cortical areas (Bizley and King 2008). The variance decomposition method used in the study by Bizley et al. (2009) to quantify the effects of each stimulus parameter on the responses of the neurons was very different from the measures used to define a pitch center in marmoset auditory cortex (Bendor and Wang 2005). We did not, for example, test whether pitch sensitivity was maintained for periodic stimuli in which the fundamental frequency had been omitted. Consequently, the distributed sensitivity we observed is not incompatible with the idea that there might be a dedicated pitch-selective area. However, in a subsequent study, we did find that the spiking responses of single neurons and neural ensembles throughout the auditory cortex can account for the ability of trained ferrets to detect the direction of a pitch change (Bizley et al. 2010). Although further research is needed, particularly in awake, behaving animals, these electrophysiological data are consistent with the results of an earlier intrinsic optical imaging study (Nelken et al. 2008) in providing only limited support for a division of labor across auditory cortical areas in the ferret.
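As a rough illustration of what a variance decomposition of this kind involves, the sketch below partitions the variance of a hypothetical neuron's mean spike counts across a pitch × timbre × azimuth stimulus grid into main-effect components, in the spirit of an ANOVA. This is not the published analysis of Bizley et al. (2009); the grid size, firing rates, and injected pitch effect are all invented for illustration.

```python
# Minimal sketch: partition the variance of a neuron's mean spike counts
# across a pitch x timbre x azimuth stimulus grid into main-effect components
# (an ANOVA-style decomposition). All numbers are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical condition grid: 4 pitches x 3 timbres x 4 azimuths,
# mean spike count per condition (averaged over trials).
n_pitch, n_timbre, n_azimuth = 4, 3, 4
resp = rng.poisson(lam=5.0, size=(n_pitch, n_timbre, n_azimuth)).astype(float)
resp += 2.0 * np.arange(n_pitch)[:, None, None]      # inject some pitch sensitivity

grand_mean = resp.mean()
total_ss = ((resp - grand_mean) ** 2).sum()

def main_effect_ss(responses, axis_to_keep):
    """Sum of squares attributable to one stimulus dimension (its marginal means)."""
    other_axes = tuple(a for a in range(responses.ndim) if a != axis_to_keep)
    marginal = responses.mean(axis=other_axes)        # mean response at each level
    n_per_level = responses.size / responses.shape[axis_to_keep]
    return n_per_level * ((marginal - responses.mean()) ** 2).sum()

for name, axis in [("pitch", 0), ("timbre", 1), ("azimuth", 2)]:
    frac = main_effect_ss(resp, axis) / total_ss
    print(f"{name}: {frac:.2f} of response variance")
```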
3.3.4 Visual Sensitivity in Auditory Cortex
Visual inputs into auditory cortex have been described in several species, including humans (Calvert et al. 1999; Giard and Peronnet 1999; Molholm et al. 2002), nonhuman primates (Brosch et al. 2005; Ghazanfar et al. 2005; Schroeder and Foxe 2002; Kayser et al. 2007), ferrets (Bizley and King 2008, 2009; Bizley et al. 2007), gerbils (Cahill et al. 1996), and rats (Wallace et al. 2004). In our studies on the ferret, we recorded the responses of single neurons and multineuron clusters to simplistic artificial stimuli presented under anesthesia.
Sensitivity to visual stimulation was defined either as a statistically significant change in spiking activity following the presentation of light flashes from a light-emitting diode (LED) positioned in the contralateral hemifield, or as a significant modulation of the response to auditory stimulation, even if the LED by itself was apparently ineffective in driving the neuron. Although the majority of neurons recorded in the auditory cortex were classified as auditory alone, the activity of more than one quarter was found to be influenced by visual stimulation. Figure 3.2a shows the relative proportion of different response types observed in the auditory cortex as a whole.
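The sketch below illustrates one way the logic of this classification could be implemented from trial-by-trial spike counts. It assumes a nonparametric two-sample test and an arbitrary significance criterion purely for illustration; it is not the analysis used in the original study, and the example firing rates are invented.

```python
# Illustrative sketch only (not the published analysis): classify a unit as
# auditory, visual, bisensory, or "AV modulated" from trial-by-trial spike
# counts, using simple two-sample tests against a no-stimulus baseline.
import numpy as np
from scipy.stats import mannwhitneyu

def responds(stim_counts, baseline_counts, alpha=0.05):
    """True if spike counts under stimulation differ significantly from baseline."""
    return mannwhitneyu(stim_counts, baseline_counts).pvalue < alpha

def classify_unit(baseline, auditory, visual, audiovisual, alpha=0.05):
    aud = responds(auditory, baseline, alpha)
    vis = responds(visual, baseline, alpha)
    if aud and vis:
        return "bisensory (AV)"
    if aud and responds(audiovisual, auditory, alpha):
        return "AV modulated"   # sound-driven, but modulated by an otherwise ineffective LED
    if aud:
        return "auditory"
    if vis:
        return "visual"
    return "unresponsive"

# Hypothetical example unit: driven by sound, not by light, but AV > A alone.
rng = np.random.default_rng(1)
baseline = rng.poisson(2, 50)
auditory = rng.poisson(8, 50)
visual = rng.poisson(2, 50)
audiovisual = rng.poisson(11, 50)
print(classify_unit(baseline, auditory, visual, audiovisual))
```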
FIGURE 3.2 Visual–auditory interactions in ferret auditory cortex. (a) Proportion of neurons (n = 716) that responded to contralaterally presented noise bursts (auditory), to light flashes from an LED positioned in the contralateral visual field (visual), to both of these stimuli (AV), or whose responses to the auditory stimulus were modulated by the presentation of the visual stimulus, which did not itself elicit a response (AVmod). (b) Bar graph showing the relative proportions of unisensory auditory (white), unisensory visual (black), and bisensory (gray) neurons recorded in each auditory field. The actual numbers of neurons recorded are given at the top of each column. (c) Proportion of neurons whose spike rates in response to combined visual–auditory stimulation were enhanced or suppressed. Total number of bisensory neurons in each field: A1, n = 9; AAF, n = 16; PPF, n = 13; PSF, n = 32; ADF, n = 32; AVF, n = 24. (d) Distribution of mutual information (MI) values obtained when two reduced spike statistics were used: spike count and mean spike latency. Points above the unity line indicate that mean response latency was more informative about the stimulus than spike count. This was increasingly the case for all three stimulus conditions when the spike counts were low. (Data adapted from Bizley, J.K. et al., Cereb. Cortex, 17, 2172–89, 2007, and Bizley, J.K., and King, A.J., Hearing Res., 258, 55–63, 2009.)
Bisensory neurons comprised both those neurons whose spiking responses were altered by auditory and by visual stimuli and those whose auditory response was modulated by the simultaneously presented visual stimulus. Visual stimuli have also been shown to drive spiking activity in the auditory cortex of highly trained monkeys (Brosch et al. 2005). Nevertheless, this finding is unusual, as most reports emphasize the modulatory nature of nonauditory inputs on the cortical responses to sound (Ghazanfar 2009; Musacchia and Schroeder 2009). At least part of the explanation for this is likely to be that we analyzed our data by calculating the mutual information between the neural responses and the stimuli that elicited them. Information (in bits) was estimated by taking into account the temporal pattern of the response rather than simply the overall spike count. This method proved to be substantially more sensitive than a simple spike count measure, and allowed us to detect subtle, but nonetheless significant, changes in the neural response produced by the presence of the visual stimulus. Although neurons exhibiting visual–auditory interactions are found in all six areas of the ferret auditory cortex, the proportion of such neurons varies in different cortical areas (Figure 3.2b). Perhaps not surprisingly, visual influences are least common in the primary areas, A1 and AAF. Nevertheless, approximately 20% of the neurons recorded in those regions were found to be sensitive to visual stimulation, and these even included some neurons with unisensory visual responses. In the fields on the posterior ectosylvian gyrus and ADF, 40% to 50% of the neurons were found to be sensitive to visual stimuli. This rose to 75% in AVF, which, as described in Section 3.3.1, should probably be regarded as a multisensory rather than as a predominantly auditory area. We found that visual stimulation could either enhance or suppress the neurons’ response to sound and, in some cases, increased the precision of their spike timing without changing the overall firing rate (Bizley et al. 2007). Analysis of all bisensory neurons, including both neurons in which there was a spiking response to each sensory modality and those in which concurrent auditory–visual stimulation modulated the response to sound alone, revealed that nearly two-thirds produced stronger responses to bisensory than to unisensory auditory stimulation. Figure 3.2c shows the proportion of response types in each cortical field. Although the sample size in some areas was quite small, the relative proportions of spiking responses that were either enhanced or suppressed varied across the auditory cortex. Apart from the interactions in A1, the majority of the observed interactions were facilitatory rather than suppressive. Although a similar trend for a greater proportion of sites to show enhancement as compared with suppression has been reported for local field potential data in monkey auditory cortex, analysis of spiking responses revealed that suppressive interactions are more common (Kayser et al. 2008). This trend was found across four different categories of naturalistic and artificial stimuli, so the difference in the proportion of facilitatory and suppressive interactions is unlikely to reflect the use of different stimuli in the two studies.
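To make the logic of this analysis concrete, the sketch below computes a plug-in estimate of the mutual information between stimulus identity and a response coded as a temporal pattern of binned spike counts. It is a bare-bones illustration with invented firing statistics; estimators used in practice require many trials per condition and bias corrections that are omitted here.

```python
# Plug-in estimate of the mutual information (in bits) between stimulus
# identity and a neural response. Coding the response as a temporal pattern
# (a tuple of binned spike counts) rather than a single total count is what
# makes such measures sensitive to spike timing.
import numpy as np
from collections import Counter

def mutual_information(stimuli, responses):
    """stimuli: list of stimulus labels; responses: list of hashable response codes."""
    n = len(stimuli)
    p_s = Counter(stimuli)                  # stimulus counts
    p_r = Counter(responses)                # response-pattern counts
    p_sr = Counter(zip(stimuli, responses))
    mi = 0.0
    for (s, r), c in p_sr.items():
        mi += (c / n) * np.log2(c * n / (p_s[s] * p_r[r]))
    return mi

# Hypothetical trials: two stimuli, responses coded as (early-bin, late-bin) spike counts.
rng = np.random.default_rng(2)
stims, resps = [], []
for s, (lam_early, lam_late) in enumerate([(4, 1), (1, 4)]):
    for _ in range(200):
        stims.append(s)
        resps.append((rng.poisson(lam_early), rng.poisson(lam_late)))
print(f"I(stimulus; response pattern) ~ {mutual_information(stims, resps):.2f} bits")
```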
By systematically varying onset asynchronies between the visual and auditory stimuli, we did observe in a subset of neurons that visual stimuli could have suppressive effects when presented 100 to 200 ms before the auditory stimuli, effects that were not apparent when the two modalities were presented simultaneously (Bizley et al. 2007). This finding, along with the results of several other studies (Meredith et al. 2006; Dehner et al. 2004; Allman et al. 2008), emphasizes the importance of using an appropriate combination of stimuli to reveal the presence and nature of cross-modal interactions. Examination of the magnitude of cross-modal facilitation in ferret auditory cortex showed that visual–auditory interactions are predominantly sublinear. In other words, both the mutual information values (in bits) and the spike rates in response to combined auditory–visual stimulation are generally less than the linear sum of the responses to the auditory and visual stimuli presented in isolation, although some notable exceptions to this have been found (e.g., Figure 2E, F of Bizley et al. 2007). This is unsurprising, as the stimulus levels used in that study were well above threshold and, according to the “inverse effectiveness principle” (Stein et al. 1988), were unlikely to produce supralinear responses to combined visual–auditory stimulation. Consistent with this is the observation of Kayser et al. (2008), showing that, across stimulus types, multisensory facilitation is more common for those stimuli that are least effective in driving the neurons.
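Two simple indices are commonly used to summarize such interactions from mean evoked responses: a percent-enhancement measure relative to the strongest unisensory response, and an additivity ratio relative to the sum of the unisensory responses. The sketch below works through both with invented numbers; it illustrates the general idea rather than the exact metrics used in the studies cited above.

```python
# Sketch of two common ways to quantify multisensory interactions from mean
# evoked spike counts (assumed to be baseline-corrected); illustrative only.
def enhancement_index(av, a, v):
    """Percent change of the bisensory response relative to the best unisensory one."""
    best = max(a, v)
    return 100.0 * (av - best) / best

def additivity(av, a, v):
    """> 1: supralinear, < 1: sublinear relative to the sum of unisensory responses."""
    return av / (a + v)

# Hypothetical neuron: strong auditory drive, weak visual drive, modest facilitation.
a, v, av = 10.0, 2.0, 13.0
print(f"enhancement: {enhancement_index(av, a, v):.0f}%   additivity: {additivity(av, a, v):.2f}")
# With well-above-threshold stimuli, this pattern (enhanced yet sublinear,
# additivity < 1) is the typical outcome described in the text.
```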
As mentioned above, estimates of the mutual information between the neural responses and each of the stimuli that produce them take into account the full spike discharge pattern. It is then possible to isolate the relative contributions of spike number and spike timing to the neurons’ sensitivity to multisensory stimulation. It has previously been demonstrated in both ferret and cat auditory cortex that the stimulus information contained in the complete spike pattern is conveyed by a combination of spike count and mean spike latency (Nelken et al. 2005). By carrying out a similar analysis of the responses to the brief stimuli used to characterize visual–auditory interactions in ferret auditory cortex, we found that more than half the neurons transmitted more information in the timing of their responses than in their spike counts (Bizley et al. 2007). This is in agreement with the results of Nelken et al. (2005) for different types of auditory stimuli. We found that this was equally the case for unisensory auditory or visual stimuli and for combined visual–auditory stimulation (Figure 3.2d).
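As a concrete illustration of comparing these two reduced spike statistics, the sketch below estimates how much information about stimulus identity is carried by the total spike count versus the mean spike latency for a made-up neuron. The scikit-learn plug-in estimator is used for brevity, and the firing statistics are invented; real analyses of this kind require bias corrections that are omitted here.

```python
# Sketch: compare the stimulus information carried by two reduced response
# statistics -- total spike count versus mean spike latency (discretized).
import numpy as np
from sklearn.metrics import mutual_info_score

rng = np.random.default_rng(3)
n_trials = 400
stimuli = np.repeat([0, 1], n_trials // 2)

# Hypothetical neuron: both stimuli evoke ~3 spikes on average,
# but with clearly different mean spike latencies.
counts = rng.poisson(3, n_trials)
latency_ms = np.where(stimuli == 0,
                      rng.normal(20, 3, n_trials),
                      rng.normal(35, 3, n_trials))

def mi_bits(labels, statistic, bins=8):
    """Plug-in MI (bits) between stimulus labels and a discretized response statistic."""
    binned = np.digitize(statistic, np.histogram_bin_edges(statistic, bins=bins))
    return mutual_info_score(labels, binned) / np.log(2)

print(f"MI from spike count:  {mi_bits(stimuli, counts):.2f} bits")
print(f"MI from mean latency: {mi_bits(stimuli, latency_ms):.2f} bits")
```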
3.3.5 Visual Inputs Enhance Processing in Auditory Cortex
To probe the functional significance of the multisensory interactions observed in the auditory cortex, we systematically varied the spatial location of the stimuli and calculated the mutual information between the neural responses and the location of unisensory visual, unisensory auditory, and spatially and temporally coincident auditory–visual stimuli (Bizley and King 2008). The majority of the visual responses were found to be spatially restricted, and usually carried more location-related information than was the case for the auditory responses. The amount of spatial information available in the neural responses varied across the auditory cortex (Figure 3.3).
FIGURE 3.3 Box plots displaying the amount of information transmitted by neurons in each of five ferret cortical fields about LED location (a), sound-source location (b), or the location of temporally and spatially congruent auditory–visual stimuli (c). Only neurons for which there was a significant unisensory visual or auditory response are plotted in (a) and (b), respectively, whereas (c) shows the multisensory mutual information values for all neurons recorded, irrespective of their response to unisensory stimulation. The box plots show the median (horizontal bar), interquartile range (boxes), spread of data (tails), and outliers (cross symbols). The notch indicates the distribution of data about the median. There were significant differences in the mutual information values in different cortical fields (Kruskal–Wallis test; LED location, p = .0001; auditory location, p = .0035; bisensory stimulus location, p < .0001). Significant post hoc pairwise differences (Tukey–Kramer test, p < .05) between individual cortical fields are shown by the lines above each box plot. Note that neurons in ADF transmitted the most spatial information irrespective of stimulus modality. (Adapted with permission from Bizley, J.K., and King, A.J., Brain Res., 1242, 24–36, 2008.)
For all three stimulus conditions, spatial sensitivity was found to be highest in ADF, supporting the notion that there is some functional segregation across the auditory cortex, with the anterior fields more involved in spatial processing. Relative to the responses to sound alone, the provision of spatially coincident visual cues frequently altered the amount of information conveyed by the neurons about stimulus location. Bisensory stimulation reduced the spatial information in the response in one third of these cases, but increased it in the remaining two thirds. Thus, overall, visual inputs to the auditory cortex appear to enhance spatial processing. Because of the simple stimuli that were used in these studies, it was not possible to determine whether or how visual inputs might affect the processing of nonspatial information in ferret auditory cortex. However, a number of studies in primates have emphasized the benefits of visual influences on auditory cortex in terms of the improved perception of vocalizations. In humans, lip reading has been shown to activate the auditory cortex (Molholm et al. 2002; Giard and Peronnet 1999; Calvert et al. 1999), and a related study in macaques has shown that presenting a movie of a monkey vocalizing can modulate the auditory cortical responses to that vocalization (Ghazanfar et al. 2005). These effects were compared to a visual control condition in which the monkey viewed a disk that was flashed on and off to approximate the movements of the animal’s mouth. In that study, the integration of face and voice stimuli was found to be widespread in both core and belt areas of the auditory cortex. However, to generate response enhancement, a greater proportion of recording sites in the belt areas required the use of a real monkey face, whereas nonselective modulation of auditory cortical responses was more common in the core areas. Because a number of cortical areas have now been shown to exhibit comparable sensitivity to monkey calls (Recanzone 2008), it would be of considerable interest to compare the degree to which face and non-face visual stimuli can modulate the activity of the neurons found there. This should help us determine the relative extent to which each area might be specialized for processing communication signals.
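The population-level comparisons described here reduce to two questions: do the cortical fields differ in how much spatial information their neurons carry, and does adding a spatially congruent visual stimulus increase that information neuron by neuron? The sketch below runs both comparisons on invented per-neuron mutual-information values; the sample sizes, the simulated field differences, and the choice of a paired Wilcoxon test are assumptions made purely for illustration (the original analyses used Kruskal–Wallis and Tukey–Kramer tests, as noted in the figure caption).

```python
# Sketch of the population comparisons, assuming per-neuron mutual-information
# values (bits) about stimulus location have already been computed for
# auditory-alone and spatially congruent audiovisual trials. Invented data.
import numpy as np
from scipy.stats import kruskal, wilcoxon

rng = np.random.default_rng(4)
fields = ["A1", "AAF", "PPF", "PSF", "ADF"]
mi_by_field = {f: rng.gamma(shape=2.0, scale=0.2 + 0.1 * i, size=40)
               for i, f in enumerate(fields)}       # ADF gets the largest values

# Do the fields differ in the spatial information their neurons transmit?
h, p = kruskal(*mi_by_field.values())
print(f"Kruskal-Wallis across fields: H = {h:.1f}, p = {p:.3g}")

# Does a congruent visual stimulus change each neuron's spatial information?
bisensory = np.concatenate([mi_by_field[f] for f in fields])
auditory_only = bisensory * rng.uniform(0.5, 1.2, bisensory.size)
increased = np.mean(bisensory > auditory_only)
print(f"fraction of neurons more informative under bisensory stimulation: "
      f"{increased:.2f} (paired Wilcoxon p = {wilcoxon(bisensory, auditory_only).pvalue:.3g})")
```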
3.4 WHERE DO VISUAL INPUTS TO AUDITORY CORTEX COME FROM?
Characterizing the way in which neurons are influenced by visual stimuli and their distribution within the auditory cortex is only a first step in identifying their possible functions. It is also necessary to know where those visual inputs originate. Potentially, visual information might gain access to the auditory cortex in a number of ways. These influences could arise from direct projections from the visual cortex or they could be inherited from multisensory subcortical nuclei, such as nonlemniscal regions of the auditory thalamus. A third possibility is feedback connections from higher multisensory association areas in temporal, parietal, or frontal cortex. Anatomical evidence from a range of species including monkeys (Smiley et al. 2007; Hackett et al. 2007a; Cappe et al. 2009), ferrets (Bizley et al. 2007), prairie voles (Campi et al. 2010), and gerbils (Budinger et al. 2006) has shown that subcortical as well as feedforward and feedback corticocortical inputs could underpin multisensory integration in auditory cortex. To determine the most likely origins of the nonauditory responses in the auditory cortex, we therefore need to consider studies of anatomical connectivity in conjunction with information about the physiological properties of the neurons, such as tuning characteristics or response latencies. Previous studies have demonstrated direct projections from core and belt auditory cortex into visual areas V1 and V2 in nonhuman primates (Rockland and Ojima 2003; Falchier et al. 2002) and, more recently, in cats (Hall and Lomber 2008). The reciprocal projection, from V1 to A1, remains to be described in primates, although Hackett et al. (2007b) have found evidence for a pathway terminating in the caudomedial belt area of the auditory cortex from the area prostriata, adjacent to V1, which is connected with the peripheral visual field representations in V1, V2, and MT. Connections between early auditory and visual cortical fields have also been described in gerbils (Budinger et al. 2006, 2008) and prairie voles (Campi et al. 2010). By placing injections of neural tracer into physiologically identified auditory fields in the ferret, we were able to characterize the potential sources of visual input (Bizley et al. 2007; Figure 3.1b, c).
These data revealed a clear projection pattern whereby specific visual cortical fields innervate specific auditory fields. A sparse direct projection exists from V1 to the core auditory cortex (A1 and AAF), which originates from the region of V1 that represents the peripheral visual field. This finding mirrors that of the reciprocal A1 to V1 projection in monkeys and cats, which terminates in the peripheral field representation of V1 (Rockland and Ojima 2003; Falchier et al. 2002; Hall and Lomber 2008). Ferret A1 and AAF are also weakly innervated by area V2. The posterior auditory fields, PPF and PSF, are innervated principally by areas 20a and 20b, thought to be part of the visual form-processing pathway (Manger et al. 2004). In contrast, the largest inputs to the anterior fields, ADF and AVF, come from SSY, which is regarded as part of the visual “where” processing stream (Philipp et al. 2006). Interestingly, this difference in the sources of cortical visual input, which is summarized in Figure 3.1d, appears to reflect the processing characteristics of the auditory cortical fields concerned. As described above, the fields on the posterior ectosylvian gyrus are more sensitive to pitch and timbre, parameters that contribute to the identification of a sound source, whereas spatial sensitivity for auditory, visual, and multisensory stimuli is greatest in ADF (Figure 3.3). This functional distinction therefore matches the putative roles of the extrastriate areas that provide the major sources of cortical visual input to each of these regions. These studies appear to support the notion of a division of labor across the nonprimary areas of ferret auditory cortex, but it would be premature to conclude that distinct fields are responsible for the processing of spatial and nonspatial features of the world. Thus, although PSF is innervated by nonspatial visual processing areas 20a and 20b (Figure 3.1c), the responses of a particularly large number of neurons found there show an increase in transmitted spatial information when a spatially congruent visual stimulus is added to the auditory stimulus (Bizley and King 2008). This could be related to a need to integrate spatial and nonspatial cues when representing objects and events in the auditory cortex. The possibility that connections between the visual motion-sensitive area SSY and the fields on the anterior ectosylvian gyrus are involved in processing spatial information provided by different sensory modalities is supported by a magnetoencephalography study in humans showing that audio–visual motion signals are integrated in the auditory cortex (Zvyagintsev et al. 2009). However, we must not forget that visual motion also plays a key role in the perception of communication calls. By making intracranial recordings in epileptic patients, Besle et al. (2008) found that the visual cues produced by lip movements activate MT followed, approximately 10 ms later, by secondary auditory areas, where they alter the responses to sound in ways that presumably influence speech perception. Thus, although the influence of facial expressions on auditory cortical neurons is normally attributed to feedback from the superior temporal sulcus (Ghazanfar et al. 2008), the availability of lower-level visual signals that provide cues to sound onset and offset may be important as well.
3.5 WHAT ARE THE PERCEPTUAL CONSEQUENCES OF MULTISENSORY INTEGRATION IN THE AUDITORY CORTEX?
The concurrent availability of visual information presumably alters the representation in the auditory cortex of sources that can be seen as well as heard in ways that are relevant for perception and behavior. Obviously, the same argument applies to the somatosensory inputs that have also been described there (Musacchia and Schroeder 2009). By influencing early levels of cortical processing, these nonauditory inputs may play a fairly general processing role by priming the cortex to receive acoustic signals. It has, for example, been proposed that visual and somatosensory inputs can modulate the phase of oscillatory activity in the auditory cortex, potentially amplifying the response to related auditory signals (Schroeder et al. 2008). But, as we have seen, visual inputs can also have more specific effects, changing the sensitivity and even the selectivity of cortical responses to stimulus location and, at least in primates, to vocalizations where communication
relies on both vocal calls and facial gestures. The role of multisensory processing in receptive auditory communication is considered in more detail in other chapters in this volume. Here, we will focus on the consequences of merging spatial information across different sensory modalities in the auditory cortex.
3.5.1 Combining Auditory and Visual Spatial Representations in the Brain
There are fundamental differences in the ways in which source location is extracted by the visual and auditory systems. The location of visual stimuli is represented topographically, first by the distribution of activity across the retina and then at most levels of the central visual pathway. By contrast, auditory space is not encoded explicitly along the cochlea. Consequently, sound-source location has to be computed within the brain on the basis of the relative intensity and timing of sounds at each ear (“binaural cues”), coupled with the location-dependent filtering of sounds by the external ear (King et al. 2001). By tuning neurons to appropriate combinations of these cues, a “visual-like” map of auditory space is constructed in the superior colliculus, allowing spatial information from different sensory modalities to be represented in a common format (King and Hutchings 1987; Middlebrooks and Knudsen 1984). This arrangement is particularly advantageous for facilitating the integration of multisensory cues from a common source for the purpose of directing orienting behavior (Stein and Stanford 2008). However, because spatial signals provided by each sensory modality are initially encoded using different reference frames, with visual signals based on eye-centered retinal coordinates and auditory signals being head centered, information about current eye position has to be incorporated into the activity of these neurons in order to maintain map alignment (Hartline et al. 1995; Jay and Sparks 1987). In contrast to the topographic representation of auditory space in the superior colliculus, there is no space map in the auditory cortex (King and Middlebrooks 2011), posing an even greater challenge for the integration of visual and auditory spatial signals at the cortical level. The integrity of several auditory cortical areas is essential for normal sound localization (Malhotra and Lomber 2007), but we still have a very incomplete understanding of how neural activity in those regions contributes to the percept of where a sound source is located. The spatial receptive fields of individual cortical neurons are frequently very broad and, for the most part, occupy the contralateral side of space. However, several studies have emphasized that sound-source location can also be signaled by the timing of spikes (Jenison 2000; Nelken et al. 2005; Stecker et al. 2003). Our finding that the presence of spatially congruent visual stimuli leads to auditory cortical neurons becoming more informative about the source location, and that this greater spatial selectivity is based on both the timing and number of spikes evoked, is clearly consistent with this. Whatever the relative contributions of different neural coding strategies might be, it seems that sound-source location is signaled by the population response of neurons in the auditory cortex (Woods et al. 2006). The approach used by Allman and colleagues (2009) to estimate the response facilitation produced in a population of cortical neurons by combining visual and auditory stimuli might therefore be useful for characterizing the effects on spatial processing at this level. We pointed out above that meaningful interactions between different sensory modalities can take place only if the different reference frames used to encode modality-specific spatial signals are brought together.
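As a toy illustration of what bringing these reference frames together involves, the snippet below shifts a head-centered sound azimuth into eye-centered coordinates for a few eye positions. The one-line transform and the angles are invented purely to make the bookkeeping concrete; a real transformation is three-dimensional and says nothing about how neurons implement it.

```python
# Toy illustration of the reference-frame problem: a sound's head-centered
# azimuth lines up with the (eye-centered) visual direction of its source
# only once current eye position is taken into account. Angles in degrees.
def eye_centered_azimuth(head_centered_azimuth, eye_position):
    """Shift a head-centered direction into eye-centered (retinal) coordinates."""
    return head_centered_azimuth - eye_position

sound_azimuth_head = 20.0   # sound source 20 deg to the right of the head's midline
for eye_position in (0.0, 20.0, -10.0):
    print(f"eyes at {eye_position:+6.1f} deg -> sound at "
          f"{eye_centered_azimuth(sound_azimuth_head, eye_position):+6.1f} deg, eye-centered")
```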
Further evidence for the multisensory representation of spatial signals in the auditory cortex is provided by the demonstration that gaze direction can change the activity of neurons there (Fu et al. 2004; Werner-Reiss et al. 2003). A modulatory influence of eye position on auditory responses has been observed as early as the inferior colliculus (Groh et al. 2001), indicating that these effects could be inherited from the midbrain rather than created de novo in the auditory cortex. On the other hand, the timing and laminar profile of eye-position effects in the auditory cortex are more consistent with an origin from nonlemniscal regions of the thalamus or via feedback projections from the parietal or frontal cortices (Fu et al. 2004). As in the superior colliculus, varying eye position does not change auditory cortical spatial tuning in a manner consistent
with a straightforward transformation into eye-centered coordinates. Rather, spatial tuning seems to take on an intermediate form between eye-centered and head-centered coordinates (Werner-Reiss et al. 2003).
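One minimal way to express an “intermediate” frame is to let a neuron's preferred azimuth shift by only a fraction of any change in eye position, rather than not at all (head-centered) or fully (eye-centered). The gain value in the sketch below is arbitrary and purely illustrative; it is not an estimate taken from the studies cited.

```python
# Minimal model of an "intermediate" reference frame: the preferred azimuth
# (expressed in head-centered space) shifts by a fraction k of eye position.
# k = 0 -> purely head-centered; k = 1 -> purely eye-centered.
def preferred_azimuth(pref_at_straight_gaze, eye_position, k=0.5):
    return pref_at_straight_gaze + k * eye_position

for eye_position in (-15.0, 0.0, 15.0):
    peak = preferred_azimuth(30.0, eye_position)
    print(f"eyes at {eye_position:+6.1f} deg -> tuning peak at {peak:+6.1f} deg (head-centered)")
```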
3.5.2 A Role for Auditory Cortex in Spatial Recalibration?
One possibility that has attracted recent attention is that visual–auditory interactions in early sensory cortex could be involved in the visual recalibration of auditory space. The representation of auditory space in the brain is inherently plastic, even in adulthood, and there are several well-documented examples in which the perceived location of sound sources can be altered so as to conform to changes in visual inputs (King 2009; King et al. 2001). The most famous of these is the ventriloquism illusion, whereby synchronous but spatially disparate visual cues can “capture” the location of a sound source, so that it is incorrectly perceived to arise from near the seen location (Bertelson and Radeau 1981). Repeated presentation of consistently misaligned visual and auditory cues results in a shift in the perception of auditory space that can last for tens of minutes once the visual stimulus is removed. This aftereffect has been reported in humans (Recanzone 1998; Radeau and Bertelson 1974; Lewald 2002) and in nonhuman primates (Woods and Recanzone 2004). Given the widespread distribution of visual–auditory interactions in the cortex, a number of sites could potentially provide the neural substrate for this cross-modal spatial illusion. The finding that the ventriloquism aftereffect does not transfer across sound frequency (Lewald 2002; Recanzone 1998; Woods and Recanzone 2004) implies the involvement of a tonotopically organized region, i.e., early auditory cortex. On the other hand, generalization across frequencies has been observed in another study (Frissen et al. 2005), so this conclusion may not stand. However, neuroimaging results in humans have shown that activity levels in the auditory cortex vary on a trial-by-trial basis according to whether a spatially discrepant visual stimulus is presented at the same time (Bonath et al. 2007). Furthermore, the finding by Passamonti et al. (2009) that patients with unilateral lesions of the visual cortex fail to show the ventriloquism aftereffect in the affected hemifield, whereas patients with parietotemporal lesions still do, is consistent with the possibility that connections between the visual and auditory cortices are involved. On the other hand, the hemianopic patients did show improved sound localization accuracy when visual and auditory stimuli were presented at the same location in space, implying that different neural circuits may underlie these cross-modal spatial effects. Visual capture of sound-source location is thought to occur because visual cues normally provide more reliable and higher-resolution spatial information. If the visual stimuli are blurred, however, so that this is no longer the case, spatially conflicting auditory cues can then induce systematic errors in visual localization (Alais and Burr 2004). Nothing is known about the neural basis for reverse ventriloquism, but it is tempting to speculate that auditory influences on visual cortex might be involved. Indeed, the influence of sound on perceptual learning in a visual motion discrimination task has been shown to be limited to locations in visual space that match those of the sound source, implying an auditory influence on processing in a visual area that is retinotopically organized (Beer and Watanabe 2009).
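The standard computational account of these capture effects is reliability-weighted (near-optimal) cue combination, in which each cue is weighted by its inverse variance (Alais and Burr 2004). The sketch below works through that formula with invented numbers; it describes the perceptual model only, not any particular neural implementation.

```python
# Reliability-weighted (maximum-likelihood) combination of a visual and an
# auditory location estimate. Sigmas are the single-cue localization noise;
# all values are illustrative.
def combine(loc_v, sigma_v, loc_a, sigma_a):
    """Return the combined location estimate and its standard deviation."""
    w_v = sigma_a ** 2 / (sigma_v ** 2 + sigma_a ** 2)   # weight on the visual cue
    loc = w_v * loc_v + (1 - w_v) * loc_a
    sigma = (sigma_v ** -2 + sigma_a ** -2) ** -0.5
    return loc, sigma

# Sharp visual cue: the percept sits almost at the visual location (ventriloquism).
print(combine(loc_v=0.0, sigma_v=1.0, loc_a=10.0, sigma_a=8.0))
# Heavily blurred visual cue: the auditory cue now dominates ("reverse" capture).
print(combine(loc_v=0.0, sigma_v=15.0, loc_a=10.0, sigma_a=8.0))
```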
Behavioral studies have shown that adult humans and other mammals can adapt substantially to altered auditory spatial cues produced, for example, by reversibly occluding or changing the shape of the external ear (reviewed by Wright and Zhang 2006). Because visual cues provide a possible source of sensory feedback about the accuracy of acoustically guided behavior, one potential role of visual inputs to the auditory cortex is to guide the plasticity observed when localization cues are altered. However, Kacelnik et al. (2006) found that the capacity of adult ferrets to relearn to localize sound accurately after altering binaural cues by reversible occlusion of one ear is not dependent on visual feedback. It has been suggested that instead of being guided by vision, this form of adaptive plasticity could result from unsupervised sensorimotor learning, in which the dynamic acoustic inputs resulting from an animal’s own movements help stabilize the brain’s representation
of auditory space (Aytekin et al. 2008). Although vision is not essential for the recalibration of auditory space in monaurally occluded ferrets, it is certainly possible that training with congruent multisensory cues might result in faster learning than that seen with auditory cues alone, as shown in humans for a motion detection task (Kim et al. 2008).
3.6 CONCLUDING REMARKS
There is now extensive anatomical and physiological evidence from a range of species that multisensory convergence occurs at the earliest levels of auditory cortical processing. These nonauditory influences therefore have to be taken into account in any model of what the auditory cortex actually does. Indeed, one of the consequences of visual, somatosensory, and eye-position effects on the activity of neurons in core and belt areas of the auditory cortex is that those influences will be passed on to each of the brain regions to which these areas project. Multiple sources of input have been implicated in multisensory integration within auditory cortex, and a more detailed characterization of those inputs will help determine the type of information that they provide and what effect this might have on auditory processing. Some of those inputs are likely to provide low-level temporal or spatial cues that enhance auditory processing in a fairly general way, whereas others provide more complex information that is specifically related, for example, to the processing of communication signals. Revealing where those inputs come from and where they terminate will help unravel the relative contributions of different auditory cortical areas to perception. Indeed, the studies that have been carried out to date have provided additional support for the standpoint that there is some functional segregation across the different parts of the auditory cortex. In order to take this further, however, it will also be necessary to examine the behavioral and physiological effects of experimentally manipulating activity in those circuits if we are to understand how visual inputs influence auditory processing and perception.
REFERENCES Adriani, M., P. Maeder, R. Meuli et al. 2003. Sound recognition and localization in man: Specialized cortical networks and effects of acute circumscribed lesions. Experimental Brain Research 153:591–604. Alain, C., S.R. Arnott, S. Hevenor, S. Graham, and C.L. Grady. 2001. “What” and “where” in the human auditory system. Proceedings of the National Academy of Sciences of the United States of America 98:12301–6. Alais, D., and D. Burr. 2004. The ventriloquist effect results from near-optimal bimodal integration. Current Biology 14:257–62. Allman, B.L., L.P. Keniston, and M.A. Meredith. 2008. Subthreshold auditory inputs to extrastriate visual neurons are responsive to parametric changes in stimulus quality: Sensory-specific versus non-specific coding. Brain Research 1242:95–101. Allman, B.L., L.P. Keniston, and M.A. Meredith. 2009. Adult deafness induces somatosensory conversion of ferret auditory cortex. Proceedings of the National Academy of Sciences of the United States of America 106:5925–30. Aytekin, M., C.F. Moss, and J.Z. Simon. 2008. A sensorimotor approach to sound localization. Neural Computation 20:603–35. Bajo, V.M., F.R. Nodal, J.K. Bizley, and A.J. King. 2010. The non-lemniscal auditory cortex in ferrets: Convergence of corticotectal inputs in the superior colliculus. Frontiers in Neuroanatomy 4:18. Barrett, D.J., and D.A. Hall. 2006. Response preferences for “what” and “where” in human non-primary auditory cortex. NeuroImage 32:968–77. Beer, A.L., and T. Watanabe. 2009. Specificity of auditory-guided visual perceptual learning suggests crossmodal plasticity in early visual cortex. Experimental Brain Research 198:353–61. Bendor, D., and X. Wang. 2005. The neuronal representation of pitch in primate auditory cortex. Nature 436:1161–5. Bendor, D., and X. Wang. 2008. Neural response properties of primary, rostral, and rostrotemporal core fields in the auditory cortex of marmoset monkeys. Journal of Neurophysiology 100:888–906.
Bertelson, P., and M. Radeau. 1981. Cross-modal bias and perceptual fusion with auditory-visual spatial discordance. Perception & Psychophysics 29:578–84. Besle, J., C. Fischer, A. Bidet-Caulet, F. Lecaignard, O. Bertrand, and M.H. Giard. 2008. Visual activation and audiovisual interactions in the auditory cortex during speech perception: Intracranial recordings in humans. Journal of Neuroscience 28:14301–10. Bizley, J.K., and A.J. King. 2008. Visual-auditory spatial processing in auditory cortical neurons. Brain Research 1242:24–36. Bizley, J.K., and A.J. King. 2009. Visual influences on ferret auditory cortex. Hearing Research 258:55–63. Bizley, J.K., F.R. Nodal, I. Nelken, and A.J. King. 2005. Functional organization of ferret auditory cortex. Cerebral Cortex 15:1637–53. Bizley, J.K., F.R. Nodal, V.M. Bajo, I. Nelken, and A.J. King. 2007. Physiological and anatomical evidence for multisensory interactions in auditory cortex. Cerebral Cortex 17:2172–89. Bizley, J.K., K.M. Walker, B.W. Silverman, A.J. King, and J.W. Schnupp. 2009. Interdependent encoding of pitch, timbre, and spatial location in auditory cortex. Journal of Neuroscience 29:2064–75. Bizley, J.K., and K.M. Walker, A.J. King, and J.W. Schnupp. 2010. Neural ensemble codes for stimulus periodicity in auditory cortex. Journal of Neuroscience 30:5078–91. Bonath, B., T. Noesselt, A. Martinez et al. 2007. Neural basis of the ventriloquist illusion. Current Biology 17:1697–703. Brosch, M., E. Selezneva, and H. Scheich. 2005. Nonauditory events of a behavioral procedure activate auditory cortex of highly trained monkeys. Journal of Neuroscience 25:6797–806. Budinger, E., P. Heil, A. Hess, and H. Scheich. 2006. Multisensory processing via early cortical stages: Connections of the primary auditory cortical field with other sensory systems. Neuroscience 143: 1065–83. Budinger, E., A. Laszcz, H. Lison, H. Scheich, and F.W. Ohl. 2008. Non-sensory cortical and subcortical connections of the primary auditory cortex in Mongolian gerbils: Bottom-up and top-down processing of neuronal information via field AI. Brain Research 1220:2–32. Cahill, L., F. Ohl, and H. Scheich. 1996. Alteration of auditory cortex activity with a visual stimulus through conditioning: a 2-deoxyglucose analysis. Neurobiology of Learning and Memory 65:213–22. Calvert, G.A., and T. Thesen. 2004. Multisensory integration: Methodological approaches and emerging principles in the human brain. Journal of Physiology, Paris 98:191–205. Calvert, G.A., M.J. Brammer, E.T. Bullmore, R. Campbell, S.D. Iversen, and A.S. David. 1999. Response amplification in sensory-specific cortices during crossmodal binding. Neuroreport 10:2619–23. Campi, K.L., K.L. Bales, R. Grunewald, and L. Krubitzer. 2010. Connections of auditory and visual cortex in the prairie vole (Microtus ochrogaster): Evidence for multisensory processing in primary sensory areas. Cerebral Cortex 20:89–108. Cantone, G., J. Xiao, and J.B. Levitt. 2006. Retinotopic organization of ferret suprasylvian cortex. Visual Neuroscience 23:61–77. Cappe, C., A. Morel, P. Barone, and E.M. Rouiller. 2009. The thalamocortical projection systems in primate: An anatomical support for multisensory and sensorimotor interplay. Cerebral Cortex 19:2025–37. Dehner, L.R., L.P. Keniston, H.R. Clemo, and M.A. Meredith. 2004. Cross-modal circuitry between auditory and somatosensory areas of the cat anterior ectosylvian sulcal cortex: A ‘new’ inhibitory form of multisensory convergence. Cerebral Cortex 14:387–403. Falchier, A., S. Clavagnier, P. 
Barone, and H. Kennedy. 2002. Anatomical evidence of multimodal integration in primate striate cortex. Journal of Neuroscience 22:5749–59. Frissen, I., J. Vroomen, B. De Gelder, and P. Bertelson. 2005. The aftereffects of ventriloquism: Generalization across sound-frequencies. Acta Psychologica 118:93–100. Fu, K.M., A.S. Shah, M.N. O’Connell et al. 2004. Timing and laminar profile of eye-position effects on auditory responses in primate auditory cortex. Journal of Neurophysiology 92:3522–31. Ghazanfar, A.A. 2009. The multisensory roles for auditory cortex in primate vocal communication. Hearing Research 258:113–20. Ghazanfar, A.A., and C.E. Schroeder. 2006. Is neocortex essentially multisensory? Trends in Cognitive Sciences 10:278–85. Ghazanfar, A.A., J.X. Maier, K.L. Hoffman, and N.K. Logothetis. 2005. Multisensory integration of dynamic faces and voices in rhesus monkey auditory cortex. Journal of Neuroscience 25:5004–12. Ghazanfar, A.A., C. Chandrasekaran, and N.K. Logothetis. 2008. Interactions between the superior temporal sulcus and auditory cortex mediate dynamic face/voice integration in rhesus monkeys. Journal of Neuroscience 28:4457–69.
Giard, M.H., and F. Peronnet. 1999. Auditory-visual integration during multimodal object recognition in humans: A behavioral and electrophysiological study. Journal of Cognitive Neuroscience 11:473–90. Goodale, M.A., and D.A. Westwood. 2004. An evolving view of duplex vision: Separate but interacting cortical pathways for perception and action. Current Opinion in Neurobiology 14:203–11. Griffiths, T.D., J.D. Warren, S.K. Scott, I. Nelken, and A.J. King. 2004. Cortical processing of complex sound: A way forward? Trends in Neuroscience 27:181–5. Groh J.M., A.S. Trause, A.M. Underhill, K.R. Clark, and S. Inati. 2001. Eye position influences auditory responses in primate inferior colliculus. Neuron 29:509–18. Hackett, T.A., I. Stepniewska, and J.H. Kaas. 1999. Prefrontal connections of the parabelt auditory cortex in macaque monkeys. Brain Research 817:45–58. Hackett, T.A., L.A. De La Mothe, I. Ulbert, G. Karmos, J. Smiley, and C.E. Schroeder. 2007a. Multisensory convergence in auditory cortex: II. Thalamocortical connections of the caudal superior temporal plane. Journal of Comparative Neurology 502:924–52. Hackett, T.A., J.F. Smiley, I. Ulbert et al. 2007b. Sources of somatosensory input to the caudal belt areas of auditory cortex. Perception 36:1419–30. Hall, A.J., and S.G. Lomber. 2008. Auditory cortex projections target the peripheral field representation of primary visual cortex. Experimental Brain Research 190:413–30. Hall, D.A., and C.J. Plack. 2009. Pitch processing sites in the human auditory brain. Cerebral Cortex 19:576–85. Harrington, I.A., G.C. Stecker, E.A. Macpherson, and J.C. Middlebrooks. 2008. Spatial sensitivity of neurons in the anterior, posterior, and primary fields of cat auditory cortex. Hearing Research 240:22–41. Hartline, P.H., R.L. Vimal, A.J. King, D.D. Kurylo, and D.P. Northmore. 1995. Effects of eye position on auditory localization and neural representation of space in superior colliculus of cats. Experimental Brain Research 104:402–8. Imaizumi, K., N.J. Priebe, P.A. Crum, P.H. Bedenbaugh, S.W. Cheung, and C.E. Schreiner. 2004. Modular functional organization of cat anterior auditory field. Journal of Neurophysiology 92:444–57. Jay, M.F., and D.L. Sparks. 1987. Sensorimotor integration in the primate superior colliculus: II. Coordinates of auditory signals. Journal of Neurophysiology 57:35–55. Jenison, R.L. 2000. Correlated cortical populations can enhance sound localization performance. Journal of the Acoustical Society of America 107:414–21. Kacelnik, O., F.R. Nodal, C.H. Parsons, and A.J. King. 2006. Training-induced plasticity of auditory localization in adult mammals. PLoS Biology 4:627–38. Kayser, C., C.I. Petkov, M. Augath, and N.K. Logothetis. 2007. Functional imaging reveals visual modulation of specific fields in auditory cortex. Journal of Neuroscience 27:1824–35. Kayser, C., C.I. Petkov, and N.K. Logothetis. 2008. Visual modulation of neurons in auditory cortex. Cerebral Cortex 18:1560–74. Keniston, L.P., B.L. Allman, and M.A. Meredith. 2008. The rostral suprasylvian sulcus (RSSS) of the ferret: A ‘new’ multisensory area. Society for Neuroscience Abstracts 38:457.10. Keniston, L.P., B.L. Allman, M.A. Meredith, and H.R. Clemo. 2009. Somatosensory and multisensory properties of the medial bank of the ferret rostral suprasylvian sulcus. Experimental Brain Research 196:239–51. Kim, R.S., A.R. Seitz, and L. Shams. 2008. Benefits of stimulus congruency for multisensory facilitation of visual learning. PLoS ONE 3:e1532. King, A.J. 2009. 
Visual influences on auditory spatial learning. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences 364:331–9. King, A.J., and M.E. Hutchings. 1987. Spatial response properties of acoustically responsive neurons in the superior colliculus of the ferret: A map of auditory space. Journal of Neurophysiology 57:596–624. King, A.J., and I. Nelken. 2009. Unraveling the principles of auditory cortical processing: Can we learn from the visual system? Nature Neuroscience 12:698–701. King, A.J., and J.C. Middlebrooks. 2011. Cortical representation of auditory space. In The Auditory Cortex, eds. J.A. Winer and C.E. Schreiner, 329–41. New York: Springer. King, A.J., J.W. Schnupp, and T.P. Doubell. 2001. The shape of ears to come: Dynamic coding of auditory space. Trends in Cognitive Sciences 5:261–70. Las, L., A.H. Shapira, and I. Nelken. 2008. Functional gradients of auditory sensitivity along the anterior ectosylvian sulcus of the cat. Journal of Neuroscience 28:3657–67. Lewald, J. 2002. Rapid adaptation to auditory–visual spatial disparity. Learning and Memory 9:268–78. Loftus, W.C., and M.L. Sutter. 2001. Spectrotemporal organization of excitatory and inhibitory receptive fields of cat posterior auditory field neurons. Journal of Neurophysiology 86:475–91.
Lomber, S.G., and S. Malhotra. 2008. Double dissociation of ‘what’ and ‘where’ processing in auditory cortex. Nature Neuroscience 11:609–16. Maeder, P.P., R.A. Meuli, M. Adriani et al. 2001. Distinct pathways involved in sound recognition and localization: A human fMRI study. Neuroimage 14:802–16. Malhotra, S., and S.G. Lomber. 2007. Sound localization during homotopic and heterotopic bilateral cooling deactivation of primary and nonprimary auditory cortical areas in the cat. Journal of Neurophysiology 97:26–43. Manger, P.R., I. Masiello, and G.M. Innocenti. 2002. Areal organization of the posterior parietal cortex of the ferret (Mustela putorius). Cerebral Cortex 12:1280–97. Manger, P.R., H. Nakamura, S. Valentiniene, and G.M. Innocenti. 2004. Visual areas in the lateral temporal cortex of the ferret (Mustela putorius). Cerebral Cortex 14:676–89. Manger, P.R., G. Engler, C.K. Moll, and A.K. Engel. 2005. The anterior ectosylvian visual area of the ferret: A homologue for an enigmatic visual cortical area of the cat? European Journal of Neuroscience 22:706–14. Manger, P.R., G. Engler, C.K. Moll, and A.K. Engel. 2008. Location, architecture, and retinotopy of the anteromedial lateral suprasylvian visual area (AMLS) of the ferret (Mustela putorius). Visual Neuroscience 25:27–37. McGurk, H., and J. MacDonald. 1976. Hearing lips and seeing voices. Nature 264:746–8. McLaughlin, D.F., R.V. Sonty, and S.L. Juliano. 1998. Organization of the forepaw representation in ferret somatosensory cortex. Somatosensory & Motor Research 15:253–68. Meredith, M.A., L.R. Keniston, L.R. Dehner, and H.R. Clemo. 2006. Crossmodal projections from somatosensory area SIV to the auditory field of the anterior ectosylvian sulcus (FAES) in cat: Further evidence for subthreshold forms of multisensory processing. Experimental Brain Research 172:472–84. Merigan, W.H., and J.H. Maunsell. 1993. How parallel are the primate visual pathways? Annual Review of Neuroscience 16:369–402. Middlebrooks, J.C., and E.I. Knudsen. 1984. A neural code for auditory space in the cat’s superior colliculus. Journal of Neuroscience 4:2621–34. Molholm, S., W. Ritter, M.M. Murray, D.C. Javitt, C.E. Schroeder, and J.J. Foxe. 2002. Multisensory auditoryvisual interactions during early sensory processing in humans: A high-density electrical mapping study. Brain Research Cognitive Brain Research 14:115–28. Musacchia, G., and C.E. Schroeder. 2009. Neuronal mechanisms, response dynamics and perceptual functions of multisensory interactions in auditory cortex. Hearing Research 258:72–9. Nakamoto, K.T., S.J. Jones, and A.R. Palmer. 2008. Descending projections from auditory cortex modulate sensitivity in the midbrain to cues for spatial position. Journal of Neurophysiology 99:2347–56. Nelken, I., G. Chechik, T.D. Mrsic-Flogel, A.J. King, and J.W. Schnupp. 2005. Encoding stimulus information by spike numbers and mean response time in primary auditory cortex. Journal of Computational Neuroscience 19:199–221. Nelken, I., J.K. Bizley, F.R. Nodal, B. Ahmed, A.J. King, and J.W. Schnupp. 2008. Responses of auditory cortex to complex stimuli: Functional organization revealed using intrinsic optical signals. Journal of Neurophysiology 99:1928–41. Passamonti, C., C. Bertini, and E. Ladavas. 2009. Audio-visual stimulation improves oculomotor patterns in patients with hemianopia. Neuropsychologia 47:546–55. Philipp, R., C. Distler, and K.P. Hoffmann. 2006. A motion-sensitive area in ferret extrastriate visual cortex: An analysis in pigmented and albino animals. 
Cerebral Cortex 16:779–90. Phillips, D.P., and S.S. Orman. 1984. Responses of single neurons in posterior field of cat auditory cortex to tonal stimulation. Journal of Neurophysiology 51:147–63. Radeau, M., and P. Bertelson. 1974. The after-effects of ventriloquism. Quarterly Journal of Experimental Psychology 26:63–71. Ramsay, A.M., and M.A. Meredith. 2004. Multiple sensory afferents to ferret pseudosylvian sulcal cortex. Neuroreport 15:461–5. Rauschecker, J.P., and B. Tian. 2000. Mechanisms and streams for processing of “what” and “where” in auditory cortex. Proceedings of the National Academy of Sciences of the United States of America 97:11800–6. Recanzone, G.H. 1998. Rapidly induced auditory plasticity: The ventriloquism aftereffect. Proceedings of the National Academy of Sciences of the United States of America 95:869–75. Recanzone, G.H. 2000. Spatial processing in the auditory cortex of the macaque monkey. Proceedings of the National Academy of Sciences of the United States of America 97:11829–35. Recanzone, G.H. 2008. Representation of con-specific vocalizations in the core and belt areas of the auditory cortex in the alert macaque monkey. Journal of Neuroscience 28:13184–93.
Redies, C., M. Diksic, and H. Riml. 1990. Functional organization in the ferret visual cortex: A double-label 2-deoxyglucose study. Journal of Neuroscience 10:2791–803. Renier, L.A., I. Anurova, A.G. De Volder, S. Carlson, J. Vanmeter, and J.P. Rauschecker. 2009. Multisensory integration of sounds and vibrotactile stimuli in processing streams for “what” and “where.” Journal of Neuroscience 29:10950–60. Rice, F.L., C.M. Gomez, S.S. Leclerc, R.W. Dykes, J.S. Moon, and K. Pourmoghadam. 1993. Cytoarchitecture of the ferret suprasylvian gyrus correlated with areas containing multiunit responses elicited by stimulation of the face. Somatosensory & Motor Research 10:161–88. Rockland, K.S., and H. Ojima. 2003. Multisensory convergence in calcarine visual areas in macaque monkey. International Journal of Psychophysiology 50:19–26. Romanski, L.M., B. Tian, J. Fritz, M. Mishkin, P.S. Goldman-Rakic, and J.P. Rauschecker. 1999. Dual streams of auditory afferents target multiple domains in the primate prefrontal cortex. Nature Neuroscience 2:1131–6. Schreiner, C.E., and M.S. Cynader. 1984. Basic functional organization of second auditory cortical field (AII) of the cat. Journal of Neurophysiology 51:1284–305. Schroeder, C.E., and J.J. Foxe. 2002. The timing and laminar profile of converging inputs to multisensory areas of the macaque neocortex. Brain Research Cognitive Brain Research 14:187–98. Schroeder, C.E., P. Lakatos, Y. Kajikawa, S. Partan, and A. Puce. 2008. Neuronal oscillations and visual amplification of speech. Trends in Cognitive Sciences 12:106–13. Shelton, B.R., and C.L. Searle. 1980. The influence of vision on the absolute identification of sound-source position. Perception & Psychophysics 28:589–96. Smiley, J.F., T.A. Hackett, I. Ulbert et al. 2007. Multisensory convergence in auditory cortex, I. Cortical connections of the caudal superior temporal plane in macaque monkeys. Journal of Comparative Neurology 502:894–923. Stecker, G.C., B.J. Mickey, E.A. Macpherson, and J.C. Middlebrooks. 2003. Spatial sensitivity in field PAF of cat auditory cortex. Journal of Neurophysiology 89:2889–903. Stein, B.E., and T.R. Stanford. 2008. Multisensory integration: Current issues from the perspective of the single neuron. Nature Reviews. Neuroscience 9:1477–85. Stein, B.E., W.S. Huneycutt, and M.A. Meredith. 1988. Neurons and behavior: The same rules of multisensory integration apply. Brain Research 448:355–8. Stein, B.E., M.A. Meredith, W.S. Huneycutt, and L. McDade. 1989. Behavioral indices of multisensory integration: Orientation of visual cues is affected by auditory stimuli. Journal of Cognitive Neuroscience 1:12–24. Sumby, W.H., and I. Pollack. 1954. Visual contribution to speech intelligibility in noise. Journal of the Acoustical Society of America 26:212–15. Tian, B., D. Reser, A. Durham, A. Kustov, and J.P. Rauschecker. 2001. Functional specialization in rhesus monkey auditory cortex. Science 292:290–3. Thomas, H., J. Tillein, P. Heil, and H. Scheich. 1993. Functional organization of auditory cortex in the Mongolian gerbil (Meriones unguiculatus). I. Electrophysiological mapping of frequency representation and distinction of fields. European Journal of Neuroscience 5:882–97. Wallace, M.T., R. Ramachandran, and B.E. Stein. 2004. A revised view of sensory cortical parcellation. Proceedings of the National Academy of Sciences of the United States of America 101:2167–72. Warren, J.D., and T.D. Griffiths. 2003.
Distinct mechanisms for processing spatial sequences and pitch sequences in the human auditory brain. Journal of Neuroscience 23:5799–804. Werner-Reiss, U., K.A. Kelly, A.S. Trause, A.M. Underhill, and J.M. Groh. 2003. Eye position affects activity in primary auditory cortex of primates. Current Biology 13:554–62. Woods, T.M., and G.H. Recanzone. 2004. Visually induced plasticity of auditory spatial perception in macaques. Current Biology 14:1559–64. Woods, T.M., S.E. Lopez, J.H. Long, J.E. Rahman, and G.H. Recanzone. 2006. Effects of stimulus azimuth and intensity on the single-neuron activity in the auditory cortex of the alert macaque monkey. Journal of Neurophysiology 96:3323–37. Wright, B.A., and Y. Zhang. 2006. A review of learning with normal and altered sound-localization cues in human adults. International Journal of Audiology 45 Suppl 1, S92–8. Zvyagintsev, M., A.R. Nikolaev, H. Thonnessen, O. Sachs, J. Dammers, and K. Mathiak. 2009. Spatially congruent visual motion modulates activity of the primary auditory cortex. Experimental Brain Research 198:391–402.
Section II Neurophysiological Bases
4
Are Bimodal Neurons the Same throughout the Brain?
M. Alex Meredith, Brian L. Allman, Leslie P. Keniston, and H. Ruth Clemo
CONTENTS
4.1 Introduction ... 51
4.2 Methods ... 52
4.2.1 Surgical Procedures ... 52
4.2.2 Recording ... 52
4.2.3 Data Analysis ... 53
4.3 Results ... 54
4.3.1 Anterior Ectosylvian Sulcal Cortex ... 54
4.3.2 Posterolateral Lateral Suprasylvian Cortex ... 54
4.3.3 Rostral Suprasylvian Sulcal Cortex ... 59
4.3.4 Superior Colliculus ... 59
4.4 Discussion ... 60
4.4.1 Bimodal Neurons with Different Integrative Properties ... 60
4.4.2 Bimodal Neurons in SC and Cortex Differ ... 60
4.4.3 Bimodal Neurons in Different Cortical Areas Differ ... 60
4.4.4 Population Contribution to Areal Multisensory Function ... 61
4.4.5 Methodological Considerations ... 62
4.5 Conclusions ... 63
Acknowledgments ... 63
References ... 63
4.1 INTRODUCTION
It is a basic tenet of neuroscience that different neural circuits underlie different functions or behaviors. For the field of multisensory processing, however, this concept appears to be superseded by the system's requirements: convergence of inputs from different sensory modalities onto individual neurons is the requisite, defining step. This requirement is fulfilled by the bimodal neuron, which has been studied for half a century now (Horn and Hill 1966) and has come to represent the basic unit of multisensory processing (but see Allman et al. 2009). Bimodal neurons are ubiquitous: they are found throughout the neuraxis and in nervous systems across the animal kingdom (for review, see Stein and Meredith 1993). Bimodal (and trimodal) neurons exhibit suprathreshold responses to stimuli from more than one sensory modality, and often integrate (a significant response change when compared with unisensory responses) those responses when the stimuli are combined. As revealed almost exclusively by studies of the superior colliculus (SC), bimodal neurons integrate multisensory information according to the spatial, temporal, and physical parameters of the stimuli involved (for review, see Stein and Meredith 1993).
FIGURE 4.1 Lateral view of cat brain depicts multisensory recording sites in cortex and midbrain: the anterior ectosylvian, posterolateral lateral suprasylvian, and rostral suprasylvian cortices and the superior colliculus.
The generality of these principles and the broadness of their applicability appeared to be confirmed by similar findings in cortical bimodal neurons (Wallace et al. 1992) and overt multisensory behaviors (Stein et al. 1989). Although it has been generally assumed that bimodal neurons are essentially the same, an insightful study of multisensory integration in bimodal SC neurons demonstrated that bimodal neurons exhibit different functional ranges (Perrault et al. 2005). Some bimodal neurons were highly integrative and exhibited integrated, superadditive (combined response > sum of unisensory responses) responses to a variety of stimulus combinations, whereas others never produced superadditive levels despite the full range of stimuli presented. In this highly integrative structure, approximately 28% of the bimodal neurons showed multisensory integration in the superadditive range. Thus, within the SC, there was a distribution of bimodal neurons with different functional ranges. Hypothetically, if this distribution were altered, for example, in favor of low-integrating bimodal neurons, then it would be expected that the overall SC would exhibit lower levels of multisensory processing. Because many studies of cortical multisensory processing reveal few examples of superadditive levels of integration (e.g., Meredith et al. 2006; Clemo et al. 2007; Allman and Meredith 2007; Meredith and Allman 2009), it seems possible that bimodal cortical neurons also exhibit functional ranges like those observed in the SC, but do so in different proportions. Therefore, the present investigation reviewed single-unit recording data derived from several different cortical areas and the SC (as depicted in Figure 4.1) to address the possibility that bimodal neurons in different parts of the brain might exhibit different integrative properties that occur in area-specific proportions.
4.2 METHODS

4.2.1 Surgical Procedures
A two-part implantation/recording procedure was used as described in detail in previous reports (Meredith and Stein 1986; Meredith et al. 2006). First, the animals were anesthetized (pentobarbital, 40 mg/kg) and their heads were secured in a stereotaxic frame. Sterile techniques were used to perform a craniotomy that exposed the targeted recording area and a recording well was implanted over the opening. The scalp was then sutured closed around the implant and routine postoperative care was provided. Approximately 7 to 10 days elapsed before the recording experiment.
4.2.2 Recording
Recording experiments were initiated by anesthetizing the animal (ketamine, 35 mg/kg, and acepromazine, 3.5 mg/kg initial dose; 8 and 1 mg kg−1 h−1 supplements, respectively) and securing the implant to a supporting bar. A leg vein was cannulated for continuous administration of fluids and supplemental anesthetics, as well as a muscle relaxant (pancuronium bromide, 0.3 mg/kg initial dose; 0.2 mg kg−1 h−1 supplement) to prevent spontaneous movements. The animal was intubated through
the mouth and maintained on a ventilator; expired CO2 was monitored and maintained at ~4.5%. A glass-insulated tungsten electrode was used to record neuronal activity, which was digitized (25 kHz) using Spike2 (Cambridge Electronic Design) software and sorted by waveform template for analysis. Then, for each test condition (somatosensory alone, somatosensory–auditory combined, etc.), a peristimulus time histogram was generated from which the mean spike number per trial (and standard deviation) was calculated. For the SC recordings, the online spike counter displayed trial-by-trial spike counts for each of the stimulus conditions, from which these values were recorded and the mean spike number per trial (and standard deviation) was calculated. A paired, two-tailed t-test was used to statistically compare the responses to the combined stimuli with those to the most effective single stimulus, and responses that showed a significant difference (p < .05) were defined as response interactions (Meredith and Stein 1986, 1996). The magnitude of a response interaction was estimated by the following formula: (C − M)/M × 100 = % interaction, where C is the response to the combined stimulation, and M is the maximal response to the unimodal stimulation (according to the criteria of Meredith and Stein 1986). Summative responses were evaluated by comparing the responses evoked by the combined stimuli to the sum of the responses elicited by the same stimuli presented separately.
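To make these criteria concrete, the following is a minimal sketch of how per-trial spike counts could be evaluated; it is not the authors' analysis code (which is not given), and the function name, the use of SciPy, and the assumption of equal numbers of interleaved trials per condition are illustrative choices only.

```python
import numpy as np
from scipy import stats


def evaluate_bimodal_neuron(counts_a, counts_b, counts_combined, alpha=0.05):
    """Evaluate a bimodal neuron's multisensory interaction from per-trial spike counts.

    counts_a, counts_b : per-trial spike counts for the two unimodal conditions
    counts_combined    : per-trial spike counts for the combined-modality condition
    Assumes the same number of (interleaved) trials in each condition.
    """
    counts_a = np.asarray(counts_a, dtype=float)
    counts_b = np.asarray(counts_b, dtype=float)
    counts_c = np.asarray(counts_combined, dtype=float)

    # M = mean response to the most effective single stimulus; C = combined response
    best = counts_a if counts_a.mean() >= counts_b.mean() else counts_b
    M, C = best.mean(), counts_c.mean()

    interaction_pct = (C - M) / M * 100.0                # (C - M)/M x 100
    _, p = stats.ttest_rel(counts_c, best)               # paired, two-tailed t-test
    significant = p < alpha                              # "response interaction" criterion
    summative = C > counts_a.mean() + counts_b.mean()    # combined exceeds sum of unimodal means

    return interaction_pct, significant, summative
```

For example, a neuron averaging 8 spikes/trial to its best unimodal stimulus and 11 spikes/trial to the combined stimulus would yield an interaction magnitude of (11 − 8)/8 × 100 = 37.5%, which would count as a response interaction only if the paired t-test also reached p < .05.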
4.3 RESULTS

4.3.1 Anterior Ectosylvian Sulcal Cortex
The banks of the anterior ectosylvian sulcus (AES) contain auditory (field of the AES; Clarey and Irvine 1990), visual (AEV; Olson and Graybiel 1987), and somatosensory (SIV; Clemo and Stein 1983) representations. Numerous studies of this region have identified bimodal neurons (Wallace et al. 1992; Rauschecker and Korte 1993; Jiang et al. 1994a, 1994b), particularly at the intersection of the different sensory representations (Meredith 2004; Carriere et al. 2007). The bimodal neurons described in the present study were collected during the recordings reported by Meredith and Allman (2009). Neurons were identified in six penetrations in three cats, of which 24% (n = 46/193) were bimodal. These neurons exhibited suprathreshold responses to independent presentations of auditory and visual (n = 39), auditory and somatosensory (n = 6), or visual and somatosensory (n = 1) stimuli. A typical example is illustrated in Figure 4.2, where the presentation of either auditory or visual stimuli vigorously activated this neuron. Furthermore, the combination of visual and auditory stimuli induced an even stronger response, representing a significant (p < .05, paired t-test) enhancement of activity (36%) over that elicited by the most effective stimulus presented alone (see Meredith and Stein 1986 for criteria). This response increment was representative of bimodal AES neurons because the population average level of enhancement was 34% (see Figure 4.3). This modest level of multisensory integration was collectively achieved by neurons of widely different activity levels. As illustrated in Figure 4.4, responses to separate or combined-modality stimulation ranged between an average of 1 and 50 spikes/trial [response averages to the weakest (5.1 ± 4.9 standard deviation (SD)) and best (8.9 ± 7.9 SD) separate stimuli and to combined-modality stimulation (11.7 ± 9.9 SD) are also shown in Figure 4.3]. However, only a minority (46%; n = 21/46) of bimodal neurons showed significant response enhancement to the available stimuli, and most showed levels of activity that plotted close to the line of unity in Figure 4.4. Figure 4.5 shows that the highest levels of enhancement were generally achieved in those neurons with lower levels of unimodal response activity. Specifically, the neurons showing >75% response change (average 130%) exhibited responses to unimodal stimuli that averaged 6.6 spikes/trial. As illustrated in Figure 4.6, however, most (85%; n = 39/46) bimodal neurons demonstrated response enhancements of <75%.

4.3.2 Posterolateral Lateral Suprasylvian Cortex
The posterolateral lateral suprasylvian (PLLS) area is a visual region of the cat lateral suprasylvian cortex (Palmer et al. 1978) whose neurons respond to visual stimuli over large portions of the visual field, with receptive fields that can exceed 40° (Allman and Meredith 2007).
FIGURE 4.2 For each recording area [(a) anterior ectosylvian sulcal area; (b) posterolateral lateral suprasylvian area; (c) rostral suprasylvian sulcal area; (d) superior colliculus], individual bimodal neurons showed responses to both unimodal stimuli presented separately as well as to their combination, as illustrated by rasters (1 dot = 1 spike) and histograms (10 ms time bins). Waveforms above each raster/histogram indicate stimulation condition (square wave labeled "A" = auditory; ramp labeled "V" = visual; ramp labeled "S" = somatosensory; presented separately or in combination). Bar graphs depict mean (and standard deviation) of responses to different stimulus conditions; numerical percentage indicates proportional difference between the most effective unimodal stimulus and the response elicited by stimulus combination (i.e., integration). Asterisk (*) indicates that response change between these two conditions was statistically significant (p < .05 paired t-test).
FIGURE 4.3 For each recording area, average response levels (and standard error of the mean [SEM]) for population of bimodal neurons. Responses to unimodal stimuli were grouped by response level (lowest, best), not by modality. Percentage (and SEM) indicates proportional change between the best unimodal response and that elicited by combined stimulation (i.e., integration): AES, 34 ± 7%; PLLS, 24 ± 4%; RSS, 37 ± 4%; SC, 88 ± 12%. In each area, combined response was statistically greater than that evoked by the most effective unimodal stimulus (p < .05; paired t-test).
FIGURE 4.4 For neural areas sampled, response of a given bimodal neuron to the most effective unimodal stimulus (x axis) was plotted against its response to stimulus combination (y axis). For the most part, bimodal neurons in each area showed activity that almost always plotted above line of unity (dashed line).
FIGURE 4.5 For each of recording areas, response of a given bimodal neuron to the most effective unimodal stimulus (x axis) was plotted against proportional change (interaction) elicited by combined stimuli (y axis). Most bimodal neurons exhibited interactions > 0, but level of interaction generally decreased with increasing levels of spiking activity.
The bimodal neurons described in the present study were collected during PLLS recordings reported by Allman and Meredith (2007). A total of 520 neurons were identified in eight penetrations in three cats, of which 9% (n = 49/520) were visual–auditory bimodal. A typical example is illustrated in Figure 4.2, where the presentation of either auditory or visual stimuli vigorously activated the neuron. In addition, when the same visual and auditory stimuli were combined, an even stronger response was evoked. The combined response represented a significant (p < .05, paired t-test) enhancement of activity (39%) over that elicited by the most effective stimulus presented alone (see Meredith and Stein 1986 for criteria). This response increment was slightly larger than the average magnitude of integration (24%) seen in the population of bimodal PLLS neurons [response averages to the weakest (4.7 ± 5.4 SD) and best (7.1 ± 6.8 SD) separate stimuli and to combined-modality stimulation (8.8 ± 8.8 SD) are shown in Figure 4.3]. This modest response increment was generated by neurons of widely different activity levels. As illustrated in Figure 4.4, PLLS responses to separate or combined-modality stimulation produced between 1 and 50 mean spikes/trial. However, only a minority (39%; n = 19/49) of bimodal neurons showed significant response enhancement to the available stimuli and most showed levels of activity that plotted close to the line of unity in Figure 4.4. Figure 4.5 shows that levels of response interaction were generally the same across activity levels. Furthermore, no PLLS interaction magnitude exceeded 175% (Figure 4.6).
FIGURE 4.6 For each recording area, many bimodal neurons showed low levels of interaction (–25% to 25%). However, only AES and SC exhibited integrated levels in excess of 175%.
FIGURE 4.7 Multisensory interactions in bimodal neurons can be evaluated by statistical (paired t-test between best unimodal and combined responses) or by summative (combined response exceeds sum of both unimodal responses) methods. For each area, fewer combined responses met these criteria using summative rather than statistical methods. However, only in SC was integration (by either method) achieved by >50% of neurons.
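As a worked illustration of the two criteria with hypothetical numbers (not taken from the data reported here): a neuron responding with 6 spikes/trial to its best unimodal stimulus, 4 spikes/trial to the other unimodal stimulus, and 9 spikes/trial to their combination shows an interaction of (9 − 6)/6 × 100 = 50%, which could reach significance by the statistical (paired t-test) criterion, yet it fails the summative criterion because 9 < 6 + 4.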
4.3.3 Rostral Suprasylvian Sulcal Cortex
As described by Clemo et al. (2007), extracellular recordings were made in three cats in which recording penetrations (n = 27) covered the anterior–posterior extent and depth of the lateral bank of the rostral suprasylvian sulcus (RSS; see Figure 4.1 for location). A total of 946 neurons were recorded, of which 24% were identified as bimodal: either auditory–somatosensory neurons (20%; n = 193/946) or auditory–visual neurons (4%; n = 35/946). Of these, 86 were tested quantitatively for responses to separate and combined-modality stimulation, of which a representative example is provided in Figure 4.2. This neuron showed a reliable response to the auditory stimulus, and a vigorous response to the somatosensory stimulus. When the two stimuli were combined, a vigorous response was also elicited but did not significantly differ from that of the most effective (somatosensory) stimulus presented alone. In addition, nearly 20% (18/97) of the neurons showed smaller responses to the combined stimuli than to the most effective single-modality stimulus. This low level of multisensory integration was surprising, although not unusual in the RSS. In fact, the majority (66%; 64/97) of RSS bimodal neurons failed to show a significant response interaction to combined stimulation. This effect is evident in the averaged responses of the RSS population, which achieved an average 37% response increase (see Figure 4.3). Also evident from this figure are the comparatively low levels of response evoked by stimuli presented separately (least effective, 1.67 ± 1.2 SD; most effective, 2.8 ± 2.2 SD average spikes/trial) or together (3.6 ± 2.9 SD average spikes/trial). These low response levels are also apparent in Figure 4.4, where responses to best and combined stimulation are plotted for each neuron and, under no condition, was activity measured >20 spikes/trial. This low level of activity may underlie the strong inverse relationship between effectiveness and interactive level, shown in Figure 4.5, because the neurons with the lowest unimodal response values also showed the highest proportional gains. In fact, all of the neurons that showed >75% response change had an average response to the most effective unimodal stimulus of only 0.89 ± 0.5 spikes/trial. Therefore, the appearance of large proportional changes in these low-activity neurons may be the result of comparisons among low values (for example, an increase from 0.9 to 1.8 spikes/trial amounts to less than one additional spike yet registers as a 100% change). With that in mind, the proportion of RSS neurons showing response changes that were more than summative may be artificially large. As shown in Figure 4.7, the proportion of RSS bimodal neurons with significant (34%) or more than summative (20%) changes represented only a third of the sample or less. Given that only 24% of the RSS was identified as bimodal, the small amount of multisensory integration produced by less than one third of participating neurons would indicate that integrated multisensory signals are not a robust feature of this cortical region.
4.3.4 Superior Colliculus
The bimodal SC neurons described in the present study were collected from recordings reported by Meredith and Stein (1983, 1985). A total of 81 bimodal neurons that met acceptance criteria (see Methods) were identified from recordings from 20 cats. Of these SC neurons, 62% (n = 50/81) were visual–auditory, 16% (n = 13/81) were visual–somatosensory, 10% (n = 8/81) were auditory–somatosensory, and 12% (n = 10/81) were trimodal; these proportions were similar to those reported earlier (Meredith and Stein 1986). A typical example of a bimodal SC neuron is illustrated in Figure 4.2, where the presentation of either auditory or visual stimuli activated the neuron. When the same visual and auditory stimuli were combined, however, a significantly (p < .05 paired t-test) stronger response was evoked. This response to the combined stimulation represented a multisensory enhancement of activity of >300%. Most (77%; n = 62/81) bimodal SC neurons showed significant response enhancement, averaging a magnitude of 88% for the overall population [response averages to the weakest (5.9 ± 6.7 SD) and best (10.9 ± 10.4 SD) separate stimuli and to combined-modality stimulation (17.4 ± 13.5 SD) are shown in Figure 4.3]. As depicted in Figure 4.4, response enhancement was generated by neurons of widely different activity levels, ranging from 1 to 40 mean spikes/trial. However, Figure 4.5 shows that levels of response enhancement tended to be
larger for responses with lower levels of activity. Given the levels of enhancement achieved by such a large proportion of SC bimodal neurons, it did not seem surprising that >48% of neurons showed enhancement levels in excess of a 75% change (see Figure 4.6). In contrast, few SC neurons (3%; 3/97) produced combined responses that were lower than that elicited by the most effective single-modality stimulus. Analysis of the proportional change in bimodal SC neurons resulting from combined-modality stimulation revealed that a majority (56%; n = 45/81) achieved superadditive levels of activity; a large majority also demonstrated statistically significant levels of response enhancement (76%; n = 62/81). Given that bimodal neurons represent a majority of neurons in the deep layers of the SC (63%; Wallace and Stein 1997), and that significant levels of multisensory response enhancement are achieved in more than three-fourths of those, these data suggest that integrated multisensory signals are a robust component of sensory signals in the SC.
4.4 DISCUSSION

4.4.1 Bimodal Neurons with Different Integrative Properties
Bimodal neurons clearly differ from one another (Perrault et al. 2005). In the SC, some bimodal neurons are highly integrative and exhibit integrated, superadditive responses to a variety of stimulus combinations, whereas others never produce superadditive levels in spite of the full range of stimuli presented. Thus, different bimodal neurons exhibit different functional ranges. The question of whether bimodal neurons elsewhere in the brain might also exhibit integrative differences was examined in the present study. Bimodal neurons in the AES, PLLS, and RSS were tested for their responses to combined-modality stimuli, and these tests revealed that some cortical neurons generated integrated multisensory responses whereas others did not. It should be pointed out that the present study did not make an exhaustive characterization of the integrative capacity of each neuron (as done by Perrault et al. 2005). However, the present sampling methods appear to have overestimated (not underestimated) the proportion of integrative neurons because 55% (45/81) of the SC sample showed superadditive response levels, whereas fewer (28%) were identified using more intensive methods (Perrault et al. 2005). Regardless of these testing differences, these combined studies indicate that bimodal neurons from across the brain are a diverse group.
4.4.2 Bimodal Neurons in SC and Cortex Differ
The SC is well known for its highly integrative neurons, with examples of multisensory response enhancement in excess of 1200% (Meredith and Stein 1986). The present sample of bimodal SC neurons (derived from Meredith and Stein 1983, 1985) showed a range of –11% to 918% change (average 88%) with most (55%; 45/81) neurons showing superadditive responses. In contrast, cortical bimodal neurons (AES, PLLS, and RSS) generated a consistently lower range of integration (–62% to 212%; 33% overall average). In fact, only a minority (39%; 75/192) of cortical bimodal neurons exhibited significant multisensory response changes and only 17% (33/192) produced superadditive response levels. As a group, the average level of response interaction was only 17% change from the best unimodal response. In addition, instances where the combined response was less than the maximal unimodal response occurred in 16% of cortical bimodal neurons, but only in 3% of the SC neurons (no such examples were observed in SC by Perrault et al. 2005). Clearly, bimodal neurons in the cortex integrate multisensory information differently from those in the SC.
4.4.3 Bimodal Neurons in Different Cortical Areas Differ
Bimodal neurons in different cortical areas also exhibit different capacities for multisensory integration. Proportionally more bimodal AES neurons showed significant response interactions (46%;
21/46) and higher levels of integration (34% average) than those in the RSS (34%; 33/97 showed significant response change; 24% average). Furthermore, bimodal neurons in these regions showed significantly different (p < .01 t-test) spike counts in response to adequate separate and combined-modality stimuli. AES neurons averaged 8.9 ± 7.9 SD spikes/trial in response to the most effective separate-modality stimulus, and 11.7 ± 9.9 SD spikes/trial to the combined stimuli. In contrast, RSS neurons averaged 2.8 ± 2.2 SD spikes/trial in response to the most effective separate-modality stimulus, and 3.6 ± 2.9 SD spikes/trial to the combined stimuli. In addition, nearly 20% of RSS neurons showed combined responses that were less than the maximal unimodal responses, compared with 11% of AES bimodal neurons. Thus, by a variety of activity measures, the multisensory processing capacity is clearly different for bimodal neurons in different cortical areas. Measures of multisensory processing in bimodal PLLS neurons appear to fall between those obtained for AES and RSS.
4.4.4 Population Contribution to Areal Multisensory Function
The present results indicate that the range of multisensory integration is different for bimodal neurons in different neural areas. Therefore, it should be expected that the performance of different areas will differ under the same multisensory conditions. As illustrated in the left panel of Figure 4.8, some areas contain relatively few bimodal neurons, and those that are present are generally poor multisensory integrators (e.g., those observed in the RSS). In contrast, other areas (e.g., the SC) contain a high proportion of bimodal neurons of which many are strong integrators (right panel Figure 4.8). Furthermore, the data suggest that areas of intermediate multisensory properties also occur (e.g., AES), as schematized by the intermingled low- and high-integrators in the center panel of Figure 4.8. Under these conditions, it is likely that a given multisensory stimulus will simultaneously elicit widely different multisensory responses and levels of integration in these different areas. Furthermore, although the cat SC contains ~63% bimodal (and trimodal) neurons (Wallace and Stein 1997), most cortical areas exhibit bimodal populations of only between 25% and 30% (Rauschecker and Korte 1993; Jiang et al. 1994a, 1994b; Carriere et al. 2007; Meredith et al. 2006; Clemo et al. 2007; Allman and Meredith 2007).
FIGURE 4.8 Bimodal neurons with different functional modes, when distributed in different proportions, underlie regions exhibiting different multisensory properties. Each panel shows same array of neurons, except that proportions of unisensory (white), low-integrator (gray), and high-integrator (black) multisensory neurons are different. Areas in which low-integrator neurons predominate show low overall levels of multisensory integration (left), whereas those with a large proportion of high-integrators (right) exhibit high levels of multisensory integration. Intermediate proportions of low- and high-integrators collectively generate intermediate levels of multisensory integration at areal level. Ultimately, these arrangements may underlie a range of multisensory processes that occur along a continuum from one extreme (no integration, not depicted) to the other (high integration).
Therefore, from an areal level, the comparatively weak multisensory signal from a cortical area is likely to be further diluted by the fact that only a small proportion of bimodal neurons contribute to that signal. It should also be pointed out that many cortical areas have now been demonstrated to contain subthreshold multisensory (also termed "modulatory") neurons. These neurons are activated by inputs from only one modality, but that response can be subtly modulated by influences from another to show modest (but statistically significant) levels of multisensory interaction (Dehner et al. 2004; Meredith et al. 2006; Carriere et al. 2007; Allman and Meredith 2007; Meredith and Allman 2009). Collectively, these observations suggest that cortical multisensory activity is characterized by comparatively low levels of integration. In the context of the behavioral/perceptual role of cortex, these modest integrative levels may be appropriate. For example, when combining visual and auditory inputs to facilitate speech perception (e.g., the cocktail party effect), it is difficult to imagine how accurate perception would be maintained if every neuron showed a response change in excess of 1200%. On the other hand, for behaviors in which survival is involved (e.g., detection), multisensory interactions >1200% would clearly provide an adaptive advantage.
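As a rough, purely illustrative sketch of this population argument (not a model from the chapter), the following mixes hypothetical low- and high-integrator bimodal neurons in different proportions and reports the resulting area-level mean enhancement; the enhancement distributions are invented, while the bimodal and superadditive proportions are taken from the values cited above.

```python
import numpy as np

rng = np.random.default_rng(0)


def areal_enhancement(n_neurons=1000, frac_bimodal=0.25, frac_high=0.17):
    """Mean multisensory enhancement (%) across all neurons in a simulated area.

    frac_bimodal : fraction of neurons that are bimodal
    frac_high    : fraction of bimodal neurons that are high integrators
    Hypothetical enhancement values: high integrators ~N(150, 50),
    low integrators ~N(20, 15), unisensory neurons contribute 0%.
    """
    n_bimodal = int(n_neurons * frac_bimodal)
    n_high = int(n_bimodal * frac_high)
    enhancement = np.concatenate([
        rng.normal(150, 50, n_high),               # high-integrator bimodal neurons
        rng.normal(20, 15, n_bimodal - n_high),    # low-integrator bimodal neurons
        np.zeros(n_neurons - n_bimodal),           # unisensory neurons
    ])
    return enhancement.mean()


# SC-like area: ~63% bimodal, ~56% of those superadditive;
# cortex-like area: ~27% bimodal, ~17% of those superadditive.
print("SC-like area:     %.1f%%" % areal_enhancement(frac_bimodal=0.63, frac_high=0.56))
print("cortex-like area: %.1f%%" % areal_enhancement(frac_bimodal=0.27, frac_high=0.17))
```

Even with identical single-neuron repertoires, shifting these two proportions alone is enough to move the simulated area from a weak to a strong aggregate multisensory signal, which is the point schematized in Figure 4.8.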
4.4.5 Methodological Considerations
Several methodological considerations should be appreciated for these results to have their proper context. The results were obtained from cats under essentially the same experimental conditions (single-unit recording under ketamine anesthesia). Data collection for all cortical values was carried out using the same paradigms and equipment. Although the experimental design was the same, the SC data were obtained before the incorporation of computers into experimental methods. Consequently, the different sensory trials were not interleaved but taken in sequential blocks, usually with fewer repetitions (n = 10–16). This is important because the number of trials has recently been demonstrated to be a key factor in determining statistical significance among multisensory interactions (Allman et al. 2009), where the larger number of trials was correlated with more neurons meeting statistical criterion. However, the SC recordings revealed a higher proportion of significantly affected neurons than in the cortex, despite these statistical measures being based on fewer trials for SC neurons (10–16 trials) than for the cortical neurons (25 trials). All cortical sensory tests were conducted in essentially the same manner: adequate (not minimal or maximal) stimuli from each modality were used and they were not systematically manipulated to maximize their integrative product. For this reason, only SC data taken before the spatial and temporal parametric investigations (e.g., Meredith and Stein 1986; Meredith et al. 1987) were included in the present comparative study. The present results are based completely on comparisons of spike counts in response to single- and combined-modality stimulation. It is also possible (indeed likely) that other response measures, such as temporal pattern or information content, may provide reliable indicators of these different effects. In addition, each of these experiments used an anesthetized preparation and it would be expected that effects such as alertness and attention would have an influence on neuronal properties. However, the anesthetic regimen was the same for each of the experiments and the comparisons were made with respect to relative changes within the data sample. Furthermore, it would seem counterintuitive that response subtleties among bimodal neurons would be observable under anesthesia but not in alert animals. However, these issues await empirical evaluation.

In an effort to identify cortical areas capable of multisensory processing in humans, studies using noninvasive technologies have adopted the principles of multisensory integration determined at the level of the bimodal neuron in the SC into the criteria by which computational, perceptual, and cognitive multisensory effects could be measured and defined. For example, the metric of superadditivity has been used in neuroimaging studies in a conservative effort
to avoid “false positives” while identifying sites of multisensory integration within the cortex (see Laurienti et al. 2005 for review). Based on the multisensory characteristics of SC neurons (Perrault et al. 2005), however, Laurienti and colleagues cautioned that multisensory stimuli would not likely generate superadditive responses in the blood oxygenation level–dependent signal as measured by functional magnetic resonance imaging (Laurienti et al. 2005). The results of the present study further support this caution because proportionally fewer cortical neurons reveal superadditive responses than SC neurons (Figure 4.7), and the magnitude of response enhancement is considerably smaller in the cortex (Figure 4.6). On the other hand, given the tenuous relationship between single neuron discharge activity (i.e., action potentials) and brain hemodynamics underlying changes in the blood oxygenation level–dependent signal (Logothetis et al. 2001; Laurienti et al. 2005; Sirotin and Das 2009; Leopold 2009), it remains debatable whether effects identified in single-unit electrophysiological studies are appropriate to characterize/define multisensory processing in neuroimaging studies in the first place. How this issue is resolved, however, does not change the fact that electrophysiological measures of multisensory processing at the neuronal level reveal differences among bimodal neurons from different brain regions.
4.5 CONCLUSIONS
Bimodal neurons are known to differ functionally within the same structure, the SC. The present study shows that this variation also occurs within the cortex. Ultimately, by varying the proportional representation of the different types of bimodal neurons (defined by functional ranges), different neural areas can exhibit different levels of multisensory integration in response to the same multisensory stimulus.
ACKNOWLEDGMENTS
Collection of superior colliculus data was supported by grants NS019065 (to B.E. Stein) and NS06838 (to M.A. Meredith); collection of cortical data was supported by grant NS039460 (to M.A. Meredith).
REFERENCES Allman, B.L., and M.A. Meredith. 2007. Multisensory processing in ‘unimodal’ neurons: Cross-modal subthreshold auditory effects in cat extrastriate visual cortex. Journal of Neurophysiology 98:545–549. Allman, B.L., L.P. Keniston, and M.A. Meredith. 2009. Not just for bimodal neurons anymore: The contribution of unimodal neurons to cortical multisensory processing. Brain Topography 21:157–167. Carriere, B.N., D.W. Royal, T.J. Perrault, S.P. Morrison, J.W. Vaughan, B.E. Stein, and M.T. Wallace. 2007. Visual deprivation alters the development of cortical multisensory integration. Journal of Neurophysiology 98:2858–2867. Clarey, J.C., and D.R.F. Irvine. 1990. The anterior ectosylvian sulcal auditory field in the cat: I. An electrophysiological study of its relationship to surrounding auditory cortical fields. Journal of Comparative Neurology 301:289–303. Clemo, H.R., B.L. Allman, M.A. Donlan, and M.A. Meredith. 2007. Sensory and multisensory representations within the cat rostral suprasylvian cortices. Journal of Comparative Neurology 503:110–127. Clemo, H.R., and B.E. Stein. 1983. Organization of a fourth somatosensory area of cortex in cat. Journal of Neurophysiology 50:910–925. Dehner, L.R., L.P. Keniston, H.R. Clemo, and M.A. Meredith. 2004. Cross-modal circuitry between auditory and somatosensory areas of the cat anterior ectosylvian sulcal cortex: A ‘new’ inhibitory form of multisensory convergence. Cerebral Cortex 14:387–403. Horn, G., and R.M. Hill. 1966. Responsiveness to sensory stimulation of units in the superior colliculus and subjacent tectotegmental regions of the rabbit. Experimental Neurology 14:199–223. Jiang, H., F. Lepore, M. Ptito, and J.P. Guillemot. 1994a. Sensory interactions in the anterior ectosylvian cortex of cats. Experimental Brain Research 101:385–396.
Jiang, H., F. Lepore, M. Ptito, and J.P. Guillermot. 1994b. Sensory modality distribution in the anterior ectosylvian cortex (AEC) of cats. Experimental Brain Research 97:404–414. King, A.J., and A.R. Palmer. 1985. Integration of visual and auditory information in bimodal neurones in the guinea-pig superior colliculus. Experimental Brain Research 60:492–500. Laurienti, P.J., T.J. Perrault, T.F. Stanford, M.T. Wallace, and B.E. Stein. 2005. On the use of superadditivity as a metric for characterizing multisensory integration in functional neuroimaging studies. Experimental Brain Research 166:289–297. Leopold, D.A. 2009. Neuroscience: Pre-emptive blood flow. Nature 457:387–388. Logothetis, N.K., J. Pauls, M. Augath, T. Trinath, and A. Oeltermann. 2001. Neurophysiological investigation of the basis of the fMRI signal. Nature 412:150–157. Meredith, M.A. 2004. Cortico-cortical connectivity and the architecture of cross-modal circuits. In Handbook of Multisensory Processes, eds. C. Spence, G. Calvert, and B. Stein, 343–355. Cambridge, MA: MIT Press. Meredith, M.A., and B.L. Allman. 2009. Subthreshold multisensory processing in cat auditory cortex. Neuroreport 20:126–131. Meredith, M.A., and B.E. Stein. 1983. Interactions among converging sensory inputs in the superior colliculus. Science 221:389–391. Meredith, M.A., and B.E. Stein. 1985. Descending efferents of the superior colliculus relay integrated multisensory information. Science 227:657–659. Meredith, M.A., and B.E. Stein. 1986. Visual, auditory, and somatosensory convergence on cells in the superior colliculus results in multisensory integration. Journal of Neurophysiology 56:640–662. Meredith, M.A., and B.E. Stein. 1996. Spatial determinants of multisensory integration in cat superior colliculus neurons. Journal of Neurophysiology. 75:1843–1857. Meredith, M.A., L.R. Keniston, L.R. Dehner, and H.R. Clemo. 2006. Cross-modal projections from somatosensory area SIV to the auditory field of the anterior ecosylvian sulcus (FAES) in cat: Further evidence for subthreshold forms of multisensory processing. Experimental Brain Research 172:472–484. Meredith, M.A., J.W. Nemitz, and B.E. Stein. 1987. Determinants of multisensory integration in superior colliculus neurons: I. Temporal factors. Journal of Neuroscience 7:3215–3229. Olson, C.R., and A.M. Graybiel. 1987. Ectosylvian visual area of the cat: Location, retinotopic organization, and connections. Journal of Comparative Neurology 261:277–294. Palmer, L.A., A.C. Rosenquist, and R.J. Tusa. 1978. The retinotopic organization of lateral suprasylvian visual areas in the cat. Journal of Comparative Neurology 177:237–256. Perrault, T.J., J.W. Vaughan, B.E. Stein, and M.T. Wallace. 2005. Superior colliculus neurons use distinct operational modes in the integration of multisensory stimuli. Journal of Neurophysiology 93:2575–2586. Rauschecker, J.P., and M. Korte. 1993. Auditory compensation for early blindness in cat cerebral cortex. Journal of Neuroscience 13:4538–4548. Sirotin, Y.B., and A. Das. 2009. Anticipatory haemodynamic signals in sensory cortex not predicted by local neuronal activity. Nature 457:475–479. Stecker, G.C., I.A. Harrington, E.A. MacPherson, and J.C. Middlebrooks. 2005. Spatial sensitivity in the dorsal zone (area DZ) of cat auditory cortex. Journal of Neurophysiology 94:1267–1280. Stein, B.E., and M.A. Meredith. 1993. Merging of the Senses. Cambridge, MA: MIT Press. Stein, B.E., M.A. Meredith. W.S. Huneycutt, and L. McDade. 1989. 
Behavioral indices of multisensory integration: Orientation to visual cues is affected by auditory stimuli. Journal of Cognitive Neuroscience 1:12–24. Sugihara, T., M.D. Diltz, B.B. Averbeck, and L.M. Romanski. 2006. Integration of auditory and visual communication information in the primate ventrolateral prefrontal cortex. Journal of Neuroscience 26:11138–11147. Wallace, M.T., and B.E. Stein. 1997. Development of multisensory neurons and multisensory integration in cat superior colliculus. Journal of Neuroscience 17:2429–2444. Wallace, M.T., M.A. Meredith, and B.E. Stein. 1992. Integration of multiple sensory inputs in cat cortex. Experimental Brain Research 91:484–488.
5
Audiovisual Integration in Nonhuman Primates: A Window into the Anatomy and Physiology of Cognition
Yoshinao Kajikawa, Arnaud Falchier, Gabriella Musacchia, Peter Lakatos, and Charles E. Schroeder
CONTENTS
5.1 Behavioral Capacities ... 66
5.1.1 Recognition ... 66
5.1.2 Fusion and Illusions ... 66
5.1.3 Perception ... 67
5.2 Neuroanatomical and Neurophysiological Substrates ... 68
5.2.1 Prefrontal Cortex ... 69
5.2.2 Posterior Parietal Cortex ... 71
5.2.3 STP Area ... 72
5.2.4 MTL Regions ... 73
5.2.5 Auditory Cortex ... 74
5.2.6 Visual Cortex ... 75
5.2.7 Subcortical Regions ... 76
5.3 Functional Significance of Multisensory Interactions ... 77
5.3.1 Influences on Unimodal Perception ... 77
5.3.1.1 Influence on Temporal Dynamics of Visual Processing ... 77
5.3.1.2 Sound Localization ... 78
5.3.2 AV Recognition ... 79
5.4 Principles of Multisensory Interaction ... 79
5.4.1 Inverse Effectiveness ... 80
5.4.2 Temporal Contiguity ... 80
5.4.3 Spatial Contiguity ... 81
5.5 Mechanisms and Dynamics of Multisensory Interaction ... 82
5.5.1 Phase Reset: Mechanisms ... 82
5.5.2 Phase Reset: Dependence on Types of Stimuli ... 83
5.6 Importance of Salience in Low-Level Multisensory Interactions ... 83
5.6.1 Role of (Top-Down) Attention ... 84
5.6.2 Attention or Saliency of Stimuli ... 85
5.7 Conclusions, Unresolved Issues, and Questions for Future Studies ... 85
5.7.1 Complex AV Interactions ... 85
5.7.2 Anatomical Substrates of AV Interaction ... 85
5.7.3 Implication of Motor Systems in Modulation of Reaction Time ... 85
5.7.4 Facilitation or Information? ... 86
5.7.5 Inverse Effectiveness and Temporal Interaction ... 86
5.7.6 What Drives and What Is Driven by Oscillations? ... 86
5.7.7 Role of Attention ... 86
Acknowledgment ... 87
References ... 87
5.1 BEHAVIORAL CAPACITIES
Humans can associate a sound with its visual source, where it comes from, how it is produced, and what it means. This association, or audiovisual (AV) integration, also occurs in many nonhuman primate species, and may be used in kin recognition, localization, and social interaction, among other things (Cheney and Seyfarth 1990; Ghazanfar and Santos 2004). These abilities suggest that nonhuman primates integrate sight and sound as humans do: through recognition of AV vocalizations and enhanced perception of audiovisual stimuli.
5.1.1 Recognition
One of the most ubiquitous AV functions in everyday human life is recognizing and matching the sight and sounds of other familiar humans. Nonhuman primates can also recognize the sight and sound of a familiar object and can express this association behaviorally. Primates reliably associate coincident auditory and visual signals of conspecific vocalizations (Evans et al. 2005; Ghazanfar and Logothetis 2003; Jordan et al. 2005; Sliwa et al. 2009) and can match pictures to vocal sounds of both conspecifics and familiar humans (Izumi and Kojima 2004; Kojima et al. 2003; Martinez and Matsuzawa 2009). Monkeys can also identify a picture in which the number of individuals matches the number of vocal sounds (Jordan et al. 2005). Although it appears that primates recognize the AV components of a talking face much better when the individual is socially familiar, familiarity does not appear to be a critical component of audiovisual recognition; many of the studies cited above showed that primates can correctly match AV vocalizations from other primate species (Martinez and Matsuzawa 2009; Zangenehpour et al. 2009). Facial movement, on the other hand, appears to be a key component for nonhuman primates in recognizing the vocal behavior of others. When matching a visual stimulus to a vocalization, primates correctly categorized a still face as a mismatch (Izumi and Kojima 2004; Evans et al. 2005; Ghazanfar and Logothetis 2003) and performed poorly when only the back view was presented (Martinez and Matsuzawa 2009). AV matching by monkeys is not limited to facial recognition. Ghazanfar et al. (2002) showed that a rising-intensity sound attracted a monkey's attention to a similar degree as a looming visual object (Schiff et al. 1962). These auditory and visual signals are signatures of an approaching object. Monkeys preferentially look at the corresponding looming rather than receding visual signal when presented with a looming sound. This was not the case when the monkey was presented with either a receding sound or a white noise control stimulus with an amplitude envelope matching that of the looming sound (Maier et al. 2004). Therefore, monkeys presumably form single events by associating sound and visual attributes at least for signals of approaching objects. Taken together, these data indicate that the dynamic structure of the visual stimulus and the compatibility between the two modalities are vital for AV recognition in primates and suggest a common mechanistic nature across primate species.
5.1.2 Fusion and Illusions
For humans, one of the most striking aspects of AV integration is that synchronous auditory and visual speech stimuli seem fused together, and illusions relating to this phenomenon may arise. The McGurk illusion is a case of this sort. When a mismatch between certain auditory and visual syllables occurs (e.g., an auditory "ba" with a visual "ga"), humans often perceive a synthesis of those
syllables, mostly "da" (McGurk and MacDonald 1976). The illusion persists even when the listener is aware of the mismatch, which indicates that visual articulations are automatically integrated into speech perception (Green et al. 1991; Soto-Faraco and Alsius 2009). Vatakis et al. (2008) examined whether auditory and visual components of monkey vocalizations elicited a fused perception in humans. It is well known that people are less sensitive to temporal asynchrony when auditory and visual components of speech are matched compared to a mismatched condition (called the "unity effect"). Capitalizing on this phenomenon, Vatakis and colleagues used a temporal order judgment task with matched and mismatched sounds and movies of monkey vocalizations across a range of stimulus onset asynchronies (SOA). The unity effect was observed for human speech vocalization, but not when people observed monkey vocalizations. The authors also showed negative results for human vocalizations mimicking monkey vocalizations, suggesting that the fusion of face–voice components is limited to human speech for humans. This may be because monkey vocal repertoires are much more limited than those of humans and have a large dissimilarity between facial expressive components and sound (Chakladar et al. 2008; Partan 2002). Another famous AV illusion, called the "ventriloquist effect," also appears to have a corollary in nonhuman primate perception. The effect is such that under the right conditions, a sound may be perceived as originating from a visual location despite a spatial disparity. After training a monkey to identify the location of a sound source, Recanzone's group introduced a 20 to 60 min period of spatially disparate auditory (tones) and visual (dots) stimuli (Woods and Recanzone 2004). The consequence of this manipulation appeared in the subsequent sound lateralization task as a deviation of the "auditory center spot" toward where the sound had been located relative to the visual fixation spot during the prior task. The underlying neural mechanism of this effect may be similar to the realignment of visual and auditory spatial maps after adapting to an optical prism displacing the visual space (Cui et al. 2008; Knudsen and Knudsen 1989). What about perception of multisensory moving objects? Preferential looking at looming sound and visual signal suggests that monkeys associate sound and visual attributes of approaching objects (Maier et al. 2004). However, longer looking does not necessarily imply fused perception, but may instead suggest attentional attraction to moving stimuli after assessing their congruency. Fused perception of looming AV signals was supported by human studies, showing the redundant signal effect (see Section 5.1.3 for more details) in reaction time (shorter reaction time to congruent looming AV signals) under the condition of bimodal attention (Cappe et al. 2010; see also Romei et al. 2009 for data suggesting preattentive effects of looming auditory signals). Interestingly, for such an AV looming effect to happen, the spectrum of the sound has to be dynamically structured along with sound intensity. It is not known which other attributes of a visual stimulus, other than motion, could contribute to this effect. It is likely that auditory and visual stimuli must be related, not only in spatial and temporal terms, but also in dynamic spectral dimensions in both modalities in order for an attentional bias or performance enhancement to appear.
5.1.3 Perception
Visual influences on auditory perception, and vice versa, are well established in humans (Sumby and Pollack 1954; Raab 1962; Miller 1982; Welch and Warren 1986; Sams et al. 1991; Giard and Peronnet 1999; for review, see Calvert 2001; Stein and Meredith 1993) and have been examined in several studies on nonhuman primates (described below). By using simple auditory and visual stimuli, such as tones and dots, the following studies show that auditory and visual information interact with each other to modulate perception in monkeys. Barone's group trained monkeys to make a saccade to a visual target that starts to flash at the moment when the fixation point disappears (Wang et al. 2008). In half of the trials, the visual target was presented with a brief task-irrelevant noise. The result was faster saccadic reaction times when the visual target was accompanied by a sound than without it. Frens and Van Opstal (1998) also
studied the influence of auditory stimulation on saccadic responses in monkeys performing tasks similar to that of Wang et al. (2008). They showed not only a shortening of reaction time, but also that reaction time depended on the magnitude of the spatial and temporal shift between visual and auditory stimuli; smaller distance and closer timing yielded shorter reaction times. These results demonstrated a temporal effect of sound on visual localization. These results are compatible with human psychophysical studies of AV integration (Frens et al. 1995; Diederich and Colonius 2004; Perrott et al. 1990) and suggest that the underlying mechanism may be common to both human and nonhuman primates. Like humans, monkeys have also been shown to have shorter manual reaction times to bimodal targets compared with unimodal targets. In a simple detection task in which a monkey had to report the detection of a light flash (V alone), noise sound (A alone), or both (AV) stimuli by manual response, reaction times to AV stimuli were faster than V alone regardless of its brightness (Cappe et al. 2010; see also Miller et al. 2001, showing similar data for small data sets). When the sound was loud, reaction times to AV stimuli and A alone were not different. When sound intensity was low, the overall reaction time was longer and the response to AV stimuli was still faster than A alone. A study from our laboratory showed that reaction times to perceptual “oddballs,” or novel stimuli in a train of standard stimuli, were faster for AV tokens than for the visual or auditory tokens presented alone (Kajikawa and Schroeder 2008). Monkeys were presented with a series of standard AV stimuli (monkey picture and vocal sound) with an occasional oddball imbedded in the series that differed from the standard in image (V alone), sound (A alone), or both (AV) stimuli. The monkey had to manually respond upon detection of such oddballs. In that case, whereas intensity levels were fixed, reaction times to the AV oddballs were faster than either A alone or V alone oddballs. In addition, the probability of a correct response was highest for the AV oddball and lowest for the A alone condition. Therefore, not only the detection of signals, but also its categorization benefited from AV integration. This pattern of reaction times conforms to the results of human psychophysics studies showing faster reaction time to bimodal than unimodal stimuli (Frens et al. 1995; Diederich and Colonius 2004; Perrott et al. 1990). Observations of faster reaction in response to bimodal compared with unimodal stimuli in different motor systems suggest that AV integration occurs in sensory systems before the motor system is engaged to generate a behavioral response (or that a similar integration mechanism is present in several motor systems). Difference in task demands complicates the ability to define the role of attention in the effect of AV integration on reaction times. In the study conducted by Wang et al. (2008), monkeys were required to monitor only the occurrence of the visual stimulus. Therefore, task-irrelevant sound acted exogenously from outside of the attended sensory domain, that is, it likely drew the monkey’s attention, but this possibility is impossible to assess. In contrast, Cappe et al. (2010) and Kajikawa and Schroeder (2008) used monkeys that were actively paying attention to both visual and auditory modalities during every trial. It is worth noting that the sound stimuli used by Wang et al. (2008) did not act as distracters. 
Hence, it was possible that monkeys could do the task by paying attention to both task-relevant visual stimuli and task-irrelevant sound (see Section 5.6).
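The bimodal reaction time advantage discussed above is commonly evaluated against a race model, in which the faster of two independent unimodal detection processes triggers the response; bimodal reaction times faster than the race model's bound are taken as evidence of genuine coactivation rather than mere statistical facilitation. The following sketch is a purely illustrative simulation with invented reaction time distributions; it does not reproduce data from any study discussed in this chapter.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials = 10_000

# Hypothetical unimodal reaction time distributions (ms); parameters are
# invented for illustration and are not taken from the studies cited above.
rt_visual = rng.normal(loc=250, scale=40, size=n_trials)
rt_auditory = rng.normal(loc=230, scale=40, size=n_trials)

# Race model: on a bimodal trial the response is triggered by whichever
# unimodal process finishes first (statistical facilitation only).
rt_race = np.minimum(rt_visual, rt_auditory)

print(f"mean RT, visual alone   : {rt_visual.mean():6.1f} ms")
print(f"mean RT, auditory alone : {rt_auditory.mean():6.1f} ms")
print(f"mean RT, race-model AV  : {rt_race.mean():6.1f} ms")

# Race model inequality: for every latency t,
#   P(RT_AV <= t) <= P(RT_A <= t) + P(RT_V <= t).
# Observed bimodal RTs exceeding this bound imply coactivation (true AV
# integration) rather than a mere race between independent channels.
t_grid = np.linspace(100, 400, 61)

def cdf(samples, t):
    return (samples[:, None] <= t).mean(axis=0)

bound = np.clip(cdf(rt_auditory, t_grid) + cdf(rt_visual, t_grid), 0.0, 1.0)
violates = cdf(rt_race, t_grid) > bound
print("race simulation violates its own bound:", bool(violates.any()))
```

By construction, a pure race never violates the inequality; empirical bimodal reaction times that do violate it argue for integration occurring before the motor stage, in line with the interpretation offered above.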
5.2 NEUROANATOMICAL AND NEUROPHYSIOLOGICAL SUBSTRATES

In the following sections, we will describe AV interactions in numerous monkey brain regions (Figure 5.1). Investigators have identified AV substrates in broadly two ways: by showing that (1) a region responds to both auditory and visual stimuli, or (2) AV stimuli produce neural activity that differs from the responses to either unimodal stimulus presented alone. AV integration has been shown at early stages of processing, including primary sensory and subcortical areas (for review, see Ghazanfar and Schroeder 2006; Musacchia and Schroeder 2009; Schroeder and Foxe 2005; Stein and Stanford 2008). Other areas that respond to both modalities have been identified in the prefrontal cortex (PFC), the posterior parietal cortex (PPC), the superior temporal polysensory area (STP), and
FIGURE 5.1 (See color insert.) Connections mediating multisensory interactions in primate auditory cortex. Primate auditory cortices receive a variety of inputs from other sensory and multisensory areas. Somatosensory areas (PV, parietoventral area; Ri, retroinsular area; S2, secondary somatosensory cortex) and their projections to auditory cortex are shown in red. Blue areas and lines denote known visual inputs (FST, fundus of superior temporal area; Pro, prostriata; V1, primary visual cortex; V2, secondary visual cortex). Feedback inputs from higher cognitive areas (7A, Brodmann’s area 7A; 23, Brodmann’s area 23; 31, Brodmann’s area 31; DLPFC, dorsolateral prefrontal cortex; VLPFC, ventrolateral prefrontal cortex) are shown in green. Multisensory feedforward inputs from thalamic nuclei (Li, limitans; MP, medial pulvinar; MGm, medial division of medial geniculate; Po, posterior nucleus; SG, suprageniculate nucleus) are shown in purple.
medial temporal lobe (MTL). Even though most studies could not elucidate the relationship between behavior and physiology because they did not test the monkey’s behavior in conjunction with physiological measures, these studies provide promising indirect evidence that is useful in directing future behavioral/physiological studies.
5.2.1 Prefrontal Cortex

In the PFC, broad regions have been reported to be multisensory. PFC is proposed to have "what" and "where" pathways of visual object and spatial information processing segregated into dorsolateral (DLPFC) and ventrolateral (VLPFC) parts of PFC (Goldman-Rakic et al. 1996; Levy and Goldman-Rakic 2000; Ungerleider et al. 1998). Although numerous studies support the idea of segregated information processing in PFC (Wilson et al. 1993), others found that single PFC neurons integrated what and where information during a task that required monitoring of both object and location (Rao et al. 1997). It appears that auditory information processing in PFC also divides into analogous "what" (e.g., speaker specific) and "where" (e.g., location specific) domains. The proposed "what" and "where" pathways of the auditory cortical system (Kaas and Hackett 2000; Rauschecker and Tian 2000) have been shown to project to VLPFC and DLPFC, respectively (Hackett et al. 1999; Romanski et al. 1999a, 1999b). Broad areas of the DLPFC were shown to be sensitive to sound location (Artchakov
et al. 2007; Azuma and Suzuki 1984; Kikuchi-Yorioka and Sawaguchi 2000; Vaadia et al. 1986). Conversely, response selectivity to macaque vocal sounds was found in VLPFC (Cohen et al. 2009; Gifford et al. 2005; Romanski and Goldman-Rakic 2002; Romanski et al. 2005) and orbitofrontal cortex (Rolls et al. 2006). These two areas may correspond to face-selective regions of the frontal lobe in nonhuman primates (Parr et al. 2009; Tsao et al. 2008b). Taken together, these findings support the notion that, as in the visual system, sensitivities to spatial and nonspatial features of sounds are segregated in PFC. Although the dorsolateral stream in PFC has largely been shown to be sensitive to location, auditory responses to species-specific vocalizations were also found in regions of DLPFC in the squirrel monkey (Newman and Lindsley 1976; Wollberg and Sela 1980) and the macaque monkey (Bon and Lucchetti 2006). Interestingly, visual fixation diminished responses to vocal sounds in some neurons (Bon and Lucchetti 2006). Taken together with the results of Rao et al. (1997) showing that neurons of the "what" and "where" visual streams are distributed over a region spanning both the DLPFC and VLPFC, these studies suggest that the "what" auditory stream might extend outside the VLPFC. Apart from showing signs of analogous processing streams in auditory and visual pathways, PFC is anatomically primed to process multisensory stimuli. In addition to auditory cortical afferents, the DLPFC and VLPFC have reciprocal connections with rostral and caudal STP subdivisions (Seltzer and Pandya 1989). The VLPFC also receives inputs from the PPC, a presumed "where" visual region (Petrides and Pandya 2009). Within both the DLPFC and VLPFC, segregated projections of different sensory afferents exist. Area 8 receives projections from visual cortices [occipital cortex and intraparietal sulcus (IPS)] in its caudal part, and from auditory-responsive cortices [superior temporal gyrus (STG) and STP] in its rostral part (Barbas and Mesulam 1981). A similar segregation of visual [inferior temporal (IT)] and auditory (STG and STP) afferents exists within VLPFC (Petrides and Pandya 2002). Thus, DLPFC and VLPFC contain intermingled regions receiving auditory projections, visual projections, or both. Additionally, orbitofrontal cortex and medial PFC receive inputs from IT, STP, and STG (Barbas et al. 1999; Carmichael and Price 1995; Cavada et al. 2000; Kondo et al. 2003; Saleem et al. 2008), and may contribute to AV integration (see Poremba et al. 2003). Not surprisingly, bimodal properties of PFC neurons have been described in numerous studies. Some early studies described neurons responsive to both tones and visual stimuli (Kubota et al. 1980; Aou et al. 1983). However, because these studies used sound as a cue to initiate an immediate behavioral response, it is possible that the neuronal response to the sound was related to motor execution. Other studies of PFC employed tasks in which oculomotor or manual responses were delayed from the sensory cues (Artchakov et al. 2007; Ito 1982; Joseph and Barone 1987; Kikuchi-Yorioka and Sawaguchi 2000; Vaadia et al. 1986; Watanabe 1992). Despite the delayed response, populations of neurons still responded to both visual and auditory stimuli. Such responses showed spatial tuning and depended on task conditions, such as task modality and demands of discrimination, active detection, or passive reception (Vaadia et al. 1986), or on reward/no-reward contingency (Watanabe 1992).
One report shows that visuospatial and audiospatial working memory processes seem to share a common neural mechanism (Kikuchi-Yorioka and Sawaguchi 2000). The behavioral tasks used in the studies described so far did not require any comparison of visual and auditory events. Fuster et al. (2000) trained monkeys to learn pairings of tones and colors and to perform a cross-modal delayed matching task using tones as the sample cue and color signals as the target. They found that PFC neurons in those monkeys had elevated firing during the delay period that was not present on error trials. Therefore, PFC has many neurons that respond to both auditory and visual signals in a manner that depends on behavioral conditions, and that possibly associate them. Romanski's group explored multisensory responses in VLPFC (Sugihara et al. 2006), and found that this region may contain unimodal visual, unimodal auditory, or bimodal AV responsive zones (Romanski et al. 2002, 2005). Their group used movies, images, and sounds of monkeys producing vocalizations as stimuli, and presented them unimodally or bimodally while subjects fixated. Although individual neurons responded to one or both modalities, about half of the neurons
examined exhibited AV integration as either enhancement or suppression of the unimodal response. Because subjects were not required to maintain working memory or make decisions, those responses are considered to be sensory. In addition to the regions described above, premotor (PM) areas between the primary motor cortex and the arcuate sulcus contain neurons sensitive to sound and vision. Although most of the neurons in PM respond to somatosensory stimuli, there are neurons that also respond to sound and visual stimuli and have receptive fields spatially registered across modalities (Graziano et al. 1994, 1999). Those neurons are located in caudal PM, particularly coding the space proximal to the face (Fogassi et al. 1996; Graziano et al. 1997; Graziano and Gandhi 2000) as well as defensive actions (Cooke and Graziano 2004a, 2004b). Rostral PM contains audiovisual mirror neurons, whose activity is elevated not only during the execution of actions but also during the observation of the same actions performed by others. Those neurons discharge during specific manual actions and respond to the sound as well as the sight of such actions (Keysers et al. 2003; Kohler et al. 2002; Rizzolatti et al. 1996; Rizzolatti and Craighero 2004) and of the goal objects of those actions (Murata et al. 1997). Whereas AV sensitivity in caudal PM seems directly connected to the subject's own actions, that in rostral PM presumably reflects the cognitive processing of others' actions. In summary, the PFC is subdivided into various regions based on sensory, motor, and other cognitive processes. Each subdivision contains AV sensitivity that could serve to code locations or objects. There are neurons specialized in coding vocalizations, in associating auditory and visual signals, or in the representation/execution of particular motor actions.
5.2.2 Posterior Parietal Cortex

The PPC in the monkey responds to different modalities (Cohen 2009), is known to be a main station of the "where" pathway before information enters PFC (Goodale and Milner 1992; Ungerleider and Mishkin 1982), and is highly interconnected with multisensory areas (see below). PPC receives afferents from various cortices involved in visual spatial and motion processing (Baizer et al. 1991; Cavada and Goldman-Rakic 1989a; Lewis and Van Essen 2000; Neal et al. 1990). The caudal area of PPC has reciprocal connections with multisensory parts of PFC and STS, suggesting that the PPC plays a key role in multisensory integration (Cavada and Goldman-Rakic 1989b; Neal et al. 1990). The ventral intraparietal area receives input from the auditory association cortex of the temporoparietal area (Lewis and Van Essen 2000). The anterior intraparietal area also receives projections from the auditory cortex (Padberg et al. 2005). PPC receives subcortical inputs from the medial pulvinar (Baizer et al. 1993) and superior colliculus (SC; Clower et al. 2001) that may subserve multisensory responses in PPC. Several subregions of PPC are known to be bimodal. An auditory responsive zone in PPC overlaps with visually responsive areas (Poremba et al. 2003). Space-sensitive responses to sound (noise) have been observed in several areas of PPC typically thought to be primarily visual: the lateral intraparietal cortex (LIP; Stricanne et al. 1996), the ventral intraparietal area (Schlack et al. 2005), the medial intraparietal cortex, and the parietal reach region (Cohen and Andersen 2000, 2002). The auditory space-sensitive neurons in PPC also respond to visual stimulation with similar spatial tuning (Mazzoni et al. 1996; Schlack et al. 2005). Furthermore, the spatial tuning of the auditory and visual responses was sufficiently correlated to be predictive of one another, indicating a shared spatial reference frame across modalities (Mullette-Gilman et al. 2005, 2009). PPC also plays a major role in motor preparation during localization tasks (Andersen et al. 1997). Auditory responses in LIP appeared only after training on memory-guided delayed reaction tasks with auditory and visual stimuli (Grunewald et al. 1999) and disappeared when the sound cue became irrelevant for the task (Linden et al. 1999). These results suggest that auditory responses in PPC are not just sensory activity. The information encoding spatial auditory cues evolves as the task progresses through its phases but remains consistently lower than that for visual cues in LIP and the parietal reach region (Cohen et al. 2002, 2004). Thus, there is a difference in processing between modalities.
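The reference frame question raised by these PPC findings, i.e., whether a sound's location is represented relative to the head or relative to the current gaze direction, reduces in the simplest one-dimensional case to subtracting eye position from the head-centered sound azimuth. The short sketch below illustrates that relationship with invented numbers; it is a conceptual aid only, not a model of the neurons recorded in the studies cited above.

```python
# Head-centered vs. eye-centered coding of a sound source (azimuth, degrees).
# All numbers are invented for illustration.
sound_azimuth_head = 20.0                    # sound location relative to the head
for eye_position in (-15.0, 0.0, 15.0):      # gaze direction relative to the head
    sound_azimuth_eye = sound_azimuth_head - eye_position
    print(f"gaze {eye_position:+6.1f} deg | "
          f"head-centered {sound_azimuth_head:+6.1f} deg | "
          f"eye-centered {sound_azimuth_eye:+6.1f} deg")
# A neuron with a head-centered receptive field responds to the same
# head-centered azimuth regardless of gaze; for an eye-centered neuron, the
# preferred head-centered location shifts with gaze by exactly the eye position.
```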
Even though most PPC studies used simple stimuli such as LED flashes and noise bursts, one study also examined LIP responses to vocal sounds and showed that LIP neurons are capable of carrying information about the acoustic features of sounds in addition to their spatial location (Gifford and Cohen 2005). In that study, sounds were delivered passively to monkeys during visual fixation. This seems inconsistent with the previously mentioned findings that auditory responses in PPC emerge only when the sounds are behaviorally relevant (Grunewald et al. 1999; Linden et al. 1999). Nevertheless, that study suggested the possibility that auditory coding in PPC may not be limited to spatial information. Similarly, the existence of face-selective patches was shown in the PPC of the chimpanzee using PET (Parr et al. 2009). Although these studies suggest AV integration in PPC, responses to stimuli in bimodal conditions have not yet been directly examined in monkeys.
5.2.3 STP Area

The STP, located in the anterior region of the superior temporal sulcus from the fundus to the upper bank, responds to multisensory stimuli in monkeys (Bruce et al. 1981; Desimone and Gross 1979; Schroeder and Foxe 2002; Poremba et al. 2003) and is putatively a key site for AV integration in both monkeys and humans. STP is highly connected to subcortical and cortical multisensory regions. STP receives inputs from presumed multisensory thalamic structures (Yeterian and Pandya 1989) and the medial pulvinar (Burton and Jones 1976), and has reciprocal connections with the PFC and other higher-order cortical regions such as PPC, IT cortex, cingulate cortex, MTL, and auditory parabelt regions (Barnes and Pandya 1992; Cusick et al. 1995; Padberg et al. 2003; Saleem et al. 2000; Seltzer et al. 1996; Seltzer and Pandya 1978, 1994). Based on connectivity patterns, area STP can be subdivided into rostral and caudal regions. Its anterior part is connected to the ventral PFC, whereas the caudal part seems to be connected to the dorsal PFC (Seltzer and Pandya 1989). STP exhibits particular selectivity to complex objects, faces, and moving stimuli. STP was shown to respond to visual objects (Oram and Perrett 1996), and particularly to show some degree of face selectivity (Bruce et al. 1981; Baylis et al. 1987). Face selectivity was shown to exist in discrete patches in monkeys (Pinsk et al. 2005; Tsao et al. 2006, 2008a) and chimpanzees (Parr et al. 2009), although others found responses to faces over a wide area (Hoffman et al. 2007). Responses to faces are further selective to identity, gaze direction, and/or viewing angle of the presented face (De Souza et al. 2005; Eifuku et al. 2004). Regions of the caudal STS, like MT (Born and Bradley 2005; Duffy and Wurtz 1991; Felleman and Kaas 1984) and MST (Gu et al. 2008; Tanaka et al. 1986), as well as the anterior STP (Anderson and Siegel 1999, 2005; Nelissen et al. 2006; Oram et al. 1993), are sensitive to directional movement patterns. Although the caudal STS is regarded as a part of the "where" pathway, the anterior STP probably is not, because of its large spatial receptive fields (Bruce et al. 1981, 1986; Oram et al. 1993). Taken together with its face selectivity, it stands to reason that anterior STP may be important for the perception or recognition of facial gestures, such as mouth movements. In addition, STP responds to somatosensory, auditory, and visual stimulation. Multisensory responsiveness of neurons in STS was tested in anesthetized (Benevento et al. 1977; Bruce et al. 1981; Hikosaka et al. 1988) and alert monkeys (Baylis et al. 1987; Perrett et al. 1982; Watanabe and Iwai 1991). In both cases, stimuli were delivered unimodally (Baylis et al. 1987; Bruce et al. 1981; Hikosaka et al. 1988) or simple bimodal stimuli (tone and LED flash) were used (Benevento et al. 1977; Watanabe and Iwai 1991). Although auditory- and visual-selective neurons were present in STG and formed segregated clusters in STP (Dahl et al. 2009), a population of neurons responded to both visual and auditory stimuli (Baylis et al. 1987; Bruce et al. 1981; Hikosaka et al. 1988). When the response to bimodal stimuli was examined, the neural firing rate was either enhanced or reduced compared to unimodal stimuli (Benevento et al. 1977; Watanabe and Iwai 1991). The laminar profile
of current source density (CSD), which reflects the pattern of afferent termination across cortical layers, recorded in response to sounds (clicks) and lights (flashes), indicated that STP receives feedforward auditory and visual inputs to layer IV (Schroeder and Foxe 2002). Lesion studies in STP reveal that the region appears to process certain dimensions of sound and vision used for discrimination. Monkeys with lesions of STG and STP areas showed impairments of auditory, but not visual, working memory and of auditory pattern discrimination, with hearing itself spared (Iversen and Mishkin 1973; Colombo et al. 2006). Although IT lesions impair many visual tasks, combined IT and STP lesions (Aggleton and Mishkin 1990; Eacott et al. 1993) impair visual discrimination of objects more severely while sparing the performance of other visual tasks. These findings suggest that multisensory responses in STP are not simply sensory, but are involved in cognitive processing of certain aspects of sensory signals. A series of recent studies examined AV integration in STS during visual fixation using more naturalistic stimuli: the sounds and sights of conspecific vocalizations, naturally occurring scenes, and artificial movies (Barraclough et al. 2005; Dahl et al. 2009; Chandrasekaran and Ghazanfar 2009; Ghazanfar et al. 2008; Kayser and Logothetis 2009; Maier et al. 2008). As in previous studies (Benevento et al. 1977; Watanabe and Iwai 1991), neuronal firing to bimodal stimuli was found to be either stronger or weaker than to unimodal stimuli. Barraclough et al. (2005) showed that the direction of the change in response magnitude from the visual-alone to the AV condition depended on the size of the visual response. Incongruent pairs of sounds and scenes seem to evoke weaker responses (Barraclough et al. 2005; Maier et al. 2008). To our knowledge, there are no animal studies that used task conditions requiring active behavioral discrimination. Therefore, results are not conclusive as to whether the STS can associate/integrate information from different modalities to form a recognizable identity. However, the bimodal responsiveness of STP areas, their specialization for objects such as faces in the visual modality, and their sensitivity to the congruence of signals in different modalities suggest that these areas are involved in such cognitive processes and/or AV perception.
5.2.4 MTL Regions

The MTL is composed of the hippocampus and the entorhinal, perirhinal, and parahippocampal cortices. These regions are involved in declarative memory formation (Squire et al. 2004) and place coding (McNaughton et al. 2006). The amygdala plays a predominant role in emotional processes (Phelps and LeDoux 2005), some of which may be affected by multisensory conjunction (e.g., in response to "dominant" conspecifics or looming stimuli, as discussed above). The MTL receives various multisensory cortical inputs. Entorhinal cortex (EC), the cortical gate to the hippocampus, receives inputs from STG, STP, IT, and other nonprimary sensory cortices, either directly or through the parahippocampal and perirhinal cortices (Blatt et al. 2003; Mohedano-Moriano et al. 2007, 2008; Suzuki and Amaral 1994). Auditory, visual, and somatosensory association cortices also project to the nuclei of the amygdala (Kosmal et al. 1997; Turner et al. 1980). Although IT, a part of the ventral "what" pathway (Ungerleider and Mishkin 1982) and the major input stage to MTL, responds mainly to complex visual stimuli, IT can exhibit delay activity after an auditory sample during cross-modal delayed match-to-sample tasks, in which auditory sample stimuli (tones or broadband sounds) signaled the type of visual stimulus to be matched (Colombo and Gross 1994; Gibson and Maunsell 1997). During the same task, greater auditory responses and delay activity were observed in the hippocampus. Those delay activities presumably reflected the working memory of a visual object associated with the sound after learning. In a visual discrimination task that used a tone as a warning signal for the start of trials, ventral IT neurons responded to this warning sound (Ringo and O'Neill 1993). Such auditory responses did not appear when identical tones were used to signal the end of a trial, thereby indicating that the effects were context-dependent.
In the hippocampus, a small population of neurons responds to both auditory and visual cues in movement tasks in which monkeys control their own spatial translation and position (Ono et al. 1993). Even without task demands, hippocampal neurons exhibit spatial tuning to auditory and visual stimuli (Tamura et al. 1992). Neurons in the amygdala respond to passively presented faces or vocalizations of conspecifics (Brothers et al. 1990; Kuraoka and Nakamura 2007; Leonard et al. 1985). Some neurons respond selectively to emotional content (Hoffman et al. 2007; Kuraoka and Nakamura 2007). Multisensory responses to different sensory cues were also shown in the amygdala of monkeys performing several kinds of tasks requiring them to retrieve food or drink, avoid aversive stimuli, or discriminate sounds associated with reward (Nishijo et al. 1988a). These responses reflected the affective values of those stimuli rather than their sensory aspects (Nishijo et al. 1988b). These data corroborate the notion that sensory activity in MTL is less likely to contribute to detection than to sensory association, evaluation, or other cognitive processes (Murray and Richmond 2001). The integrity of these structures is presumably needed for the formation and retention of cross-modal associational memory (Murray and Gaffan 1994; Squire et al. 2004).
5.2.5 Auditory Cortex

Recent findings of multisensory sensitivity in sensory (early) cortical areas, including primary areas, have revised our understanding of cortical "AV integration" (for review, see Ghazanfar and Schroeder 2006). Before these findings came to light, it was thought that AV integration occurred in higher-order cortices during complex component processing. To date, a large body of work has focused on multisensory mechanisms in the auditory cortex (AC). As in some of the seminal findings with human subjects in this field (Sams et al. 1991; Calvert and Campbell 2003), the monkey AC appears to respond to visual stimuli presented alone. Kayser et al. (2007) measured the BOLD signal to natural unimodal and bimodal stimuli over the superior temporal plane. They observed that visual stimuli alone could induce activity in the caudal area of the auditory cortex. In this same area, the auditory-evoked signal was also modulated by cross-modal stimuli. The primate auditory cortex stretches from the fundus of the lateral sulcus (LS) medially to the STG laterally, and has more than 10 defined areas (Hackett 2002; Hackett et al. 2001; Kaas and Hackett 2000). Among auditory cortical areas, the first in which multisensory responsiveness was examined was the caudomedial area (CM; Schroeder et al. 2001). In addition to CM, other auditory areas including the primary auditory cortex (A1) were also shown to receive somatosensory inputs (Cappe and Barone 2005; Disbrow et al. 2003; de la Mothe et al. 2006a; Kayser et al. 2005; Lakatos et al. 2007; Smiley et al. 2007; for a review, see Musacchia and Schroeder 2009). Most areas also receive multisensory thalamic inputs (de la Mothe et al. 2006b; Hackett et al. 2007; Kosmal et al. 1997). Documented visual inputs to the auditory cortex have thus far originated from STP (Cappe and Barone 2005) as well as from the peripheral visual field representations of V2 and prostriata (Falchier et al. 2010). Schroeder and Foxe (2002) reported CSD responses to unimodal and bimodal combinations of auditory, visual, and somatosensory stimuli in area CM of the awake macaque. The laminar profiles of CSD activity in response to visual stimuli differed from those of auditory and somatosensory responses. Analysis of activity in different cortical layers revealed that visual inputs targeted the extragranular layers, whereas auditory and somatosensory inputs terminated in the granular layers of area CM. These two termination profiles are in accordance with the laminar patterns of visual corticocortical projections (Falchier et al. 2002; Rockland and Ojima 2003) and primary-like thalamocortical projections (Jones 1998), respectively. In contrast, A1 receives auditory and somatosensory inputs in the granular and supragranular cortical layers, respectively (Lakatos et al. 2007). This suggests that somatosensory input to A1 originates from lateral, feedback, or nonspecific thalamic connections. Our laboratory showed that attended visual stimuli presented in
isolation modulate activity in the extragranular layer of A1 (Lakatos et al. 2009), and the same pattern is observed with attended auditory stimuli in V1 (Lakatos et al. 2008). These findings strengthen the hypothesis that nonspecific thalamic projections (Sherman and Guillery 2002) or pulvinar-mediated lateral connections (Cappe et al. 2009) contribute to AV integration in A1. The groups of Ghazanfar and Logothetis have shown that concurrent visual stimuli influence auditory cortical responses systematically in A1 as well as in the lateral associative auditory cortices and STP (Ghazanfar et al. 2005; Hoffman et al. 2008; Kayser et al. 2007, 2008). These studies used complex and natural AV stimuli, which are more efficient in evoking responses in some nonprimary auditory areas (Petkov et al. 2008; Rauschecker et al. 1995; Russ et al. 2008). Their initial study (Ghazanfar et al. 2005) revealed that movies of vocalizations presented with the associated sounds could modulate local field potential (LFP) responses in A1 and the lateral belt. Kayser et al. (2008) showed visual responses in the LFP at frequency bands near 10 Hz. This frequency component responded preferentially to faces, and the preference was stronger in the lateral belt than in A1 (Hoffman et al. 2008). However, multiunit activity (MUA) barely showed a visual response that correlated in magnitude with the LFP response. AV interactions occurred as a small enhancement in the LFP and suppression in MUA (see also Kayser and Logothetis 2009). Although AV integration in areas previously thought to be unisensory is intriguing and provocative, the use of a behavioral task is imperative in order to determine the significance of this phenomenon. Brosch et al. (2005) employed a task in which an LED flash cued the beginning of an auditory sequence. Monkeys were trained to touch a bar to initiate the trial and to signal the detection of a change in the auditory sequence. They found that some neurons in AC responded to the LED, but only when the monkey touched the bar after detecting the auditory change. This response disappeared when the monkey had to perform a visual task that did not require auditory attention. Although this may be due in part to the fact that the monkeys were highly trained (or potentially overtrained) on the experimental task, these findings also point to the importance of engaging auditory attention in evoking responses to visual stimuli. Findings like these, which elucidate the integrative responses of individual neurons and small populations of neurons, can provide key substrates for understanding the effects of bimodal versus unimodal attention on cross-modal responses demonstrated in humans (Jääskeläinen et al. 2007; McDonald et al. 2003; Rahne and Böckmann-Barthel 2009; Talsma et al. 2009; von Kriegstein and Giraud 2006). The timing of cross-modal effects in primary auditory and posterior auditory association cortices in resting or anesthetized monkeys seems consistent with the cross-modal influence of touch and sight in monkeys engaged in an auditory task. In resting monkeys, the somatosensory CSD response elicited by electrical stimulation of the median nerve had an onset latency as short as 9 ms (Lakatos et al. 2007; Schroeder et al. 2001), and single neurons responded to air-puff stimulation of the dorsal hand in anesthetized monkeys with a latency of about 30 ms (Fu et al. 2003). Cutaneous sensory responses of single units in AC during an active task peaked at 20 ms (Brosch et al. 2005), later than responses to direct electrical activation of afferent fibers but earlier than those under passive conditions. Similarly, visual responses of single units in AC were observed from 60 ms and peaked at around 100 ms after the onset of the LED during an active task (Brosch et al. 2005). This was within the same range as the onset latency, about 100 ms, of neuronal firing and the peak timing of LFP responses to complex visual stimuli in AC when monkeys were simply maintaining visual fixation (Hoffman et al. 2007; Kayser et al. 2008). The effect of gaze direction and saccades will also need to be taken into account in future studies, because it has been proposed that they can considerably affect auditory processing (Fu et al. 2004; Groh et al. 2001; Werner-Reiss et al. 2006).
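Several of the studies cited in this section infer input pathways from laminar CSD profiles. CSD is estimated as the (sign-inverted) second spatial derivative of the local field potential across equally spaced laminar electrode contacts, so current sinks and sources can be assigned to particular layers. The snippet below is a minimal sketch of that computation on synthetic data; the contact spacing, the omission of the conductivity constant, and the toy LFP profile are assumptions made for illustration, not the recording parameters of any study cited here.

```python
import numpy as np

def csd_second_difference(lfp, spacing_mm=0.1):
    """One-dimensional CSD estimate from a laminar LFP profile.

    lfp : array of shape (n_channels, n_samples), ordered from superficial
          to deep contacts with equal inter-contact spacing.
    Returns an array of shape (n_channels - 2, n_samples); conductivity is
    treated as an implicit constant, so units are arbitrary.
    """
    # CSD ~ -d^2(phi)/dz^2, approximated by the discrete second difference.
    second_diff = lfp[:-2, :] - 2.0 * lfp[1:-1, :] + lfp[2:, :]
    return -second_diff / spacing_mm**2

# Toy laminar LFP: a negative deflection centered on a "granular" contact,
# flanked by passive return currents; purely synthetic data.
n_channels, n_samples = 16, 200
t = np.arange(n_samples)
depth = np.arange(n_channels)
time_course = -np.exp(-((t - 60) / 15.0) ** 2)        # transient response
depth_profile = np.exp(-((depth - 8) / 1.5) ** 2)      # focal at contact 8
lfp = np.outer(depth_profile, time_course)

csd = csd_second_difference(lfp)
# Sinks appear as negative CSD values; locate the strongest one.
sink_channel = np.unravel_index(np.argmin(csd), csd.shape)[0] + 1
print("largest current sink at contact index:", sink_channel)
```

Applied to real recordings, the laminar position of the earliest sink is what distinguishes granular (feedforward, lemniscal-like) from extragranular (feedback or nonspecific) termination patterns, as discussed above.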
5.2.6 Visual Cortex

There has been much less multisensory research done in visual cortex than in auditory cortex, although it has been shown that the peripheral visual field representations of primary visual cortex (V1) receive inputs from auditory cortical areas, A1, parabelt areas on STG, and STP (Falchier et al.
2002). The peripheral visual field representation of area V2 also receives feedback inputs from the caudal STG/auditory belt region (Rockland and Ojima 2003). A preference for vocal sounds, relative to other sounds, was found in nonprimary visual cortex using functional MRI (fMRI) in monkeys (Petkov et al. 2008). In contrast to studies of visual responses in the auditory cortex, few studies have recorded auditory responses in visual cortex during the performance of a task. Wang et al. (2008) recorded V1 single-unit firing while monkeys performed a visual detection task. Concurrent presentation of auditory and visual stimuli not only shortened saccadic reaction time, but also increased the neuronal response magnitude and reduced response latency. This effect was greatest when the intensity of the visual stimuli was low to moderate, and disappeared when the luminance of the visual stimuli was intense. When monkeys were not performing a task, no auditory effect was observed in V1 (see Section 5.6.1). In a series of studies from our laboratory, a selective attention task was employed to determine whether attention to auditory stimuli influenced neuronal activity in V1 (Lakatos et al. 2008, 2009; Mehta et al. 2000a, 2000b). In these studies, tones and flashes were presented in alternation, and monkeys had to monitor a series of either visual or auditory stimuli while ignoring the other modality. The visual response was stronger when monkeys tracked the visual series than when they tracked the auditory series. In the attend-auditory condition, a phase reset of ongoing neuronal oscillations appeared to occur earlier than the visual response (Lakatos et al. 2009). This effect disappeared when the same stimuli were ignored. Thus, auditory influences on V1 were observed only when auditory stimuli were attended. This contrasts with the findings of Wang et al. (2008), in which sound affected V1 activity in monkeys performing a visual task. As we propose later, the control of attention likely has a major role in the manifestation of auditory effects in V1 (see Section 5.6.2).
5.2.7 Subcortical Regions

The basal ganglia are composed of several nuclei, each having a distinct function, such as motor planning and execution, habitual learning, and motivation. Several studies show auditory, visual, and bimodally responsive neurons in basal ganglia nuclei. Even though multisensory responses could be observed under passive conditions (Santos-Benitez et al. 1995), many studies showed that these responses were related to reinforcement (Wilson and Rolls 1990) or sensorimotor association (Aosaki et al. 1995; Hikosaka et al. 1989; Kimura 1992). Although it is well known that the SC is a control station for orienting movements (Wurtz and Albano 1980), its multisensory properties have been a hotbed of research for decades in the monkey (Allon and Wollberg 1978; Cynader and Berman 1972; Updyke 1974; Wallace et al. 1996) and other animal models (Meredith and Stein 1983; Meredith et al. 1987; Rauschecker and Harris 1989; Stein et al. 2001, 2002). Neurons in the monkey SC adhered to well-established principles of multisensory integration, such as spatial contiguity and inverse effectiveness (for review, see Stein and Stanford 2008), whether the animals were engaged in tasks (Frens and Van Opstal 1998) or under anesthesia (Wallace et al. 1996). In the SC of awake animals, AV integration depended on task conditions, namely whether the animals fixated on visible or memory-guided spots during AV stimulation (Bell et al. 2003). The presence of a visual fixation spot decreased unimodal responses and nearly suppressed response enhancement by AV stimuli. Bell et al. (2003) attributed the weaker AV integration during visually guided fixation to fixation-mediated inhibition in SC. This is consistent with the fact that, whereas activity in SC is coupled to eye movements, fixation requires the monkey to refrain from gaze shifts. Although the inferior colliculus (IC) has generally been assumed to be a passive station for primarily auditory information, immune to nonauditory or cognitive influences, recent AV studies challenge this view. Neuronal activity in the IC has been shown to be influenced by eye position (Groh et al. 2001), saccades, and visual stimuli (Porter et al. 2007), suggesting that the IC may be
influenced by covert orienting toward concurrent visual events. This covert orienting may contribute to the visual influence observed on portions of human auditory brainstem responses that are roughly localized to the IC (Musacchia et al. 2006). Studies of thalamic projections to the primary auditory cortex show that multisensory connections are present in centers previously thought to be "unisensory" (de la Mothe et al. 2006b; Hackett et al. 2007; Jones 1998). Multiple auditory cortices also receive divergent afferents originating from common thalamic nuclei (Cappe et al. 2009; Jones 1998). In addition, the connections between thalamic nuclei and cortices are largely reciprocal. Even though the functions of those thalamic nuclei remain to be clarified, they may contribute to multisensory responsiveness in cerebral cortices. Bimodal responsiveness has been shown in a few thalamic nuclei (Matsumoto et al. 2001; Tanibuchi and Goldman-Rakic 2003).
5.3 FUNCTIONAL SIGNIFICANCE OF MULTISENSORY INTERACTIONS

It has been shown in monkeys that, under certain circumstances, audition influences vision (Wang et al. 2008), vision influences audition (Woods and Recanzone 2004), or the two senses influence each other (Cappe et al. 2010). For AV integration of any form, auditory and visual information has to converge. As described in the previous section, most brain regions have the potential to support that interaction (for review, see Ghazanfar and Schroeder 2006; Musacchia and Schroeder 2009), but the importance of that potential can only be determined by assessing the functional role that each region plays in helping to achieve perceptual integration of sight and sound. This can be achieved by observing the behavioral effects of cortical lesions or electrical stimulation in different areas, and by simultaneously measuring behavioral performance and neural activity in normally functioning and impaired populations.
5.3.1 Influences on Unimodal Perception

Neural activity in a unimodal area is thought to give rise to sensations only in the preferred modality of that area. It is not surprising, therefore, that lesions in these areas only extinguish sensations of the "primary" modality. For example, STG lesions impair auditory memory retention but leave visual memory retention intact (Colombo et al. 1996). One exception to this rule lies in cases of acquired cross-modal activity, such as auditory responses in the occipital cortex of blind people (Théoret et al. 2004). Despite this reorganization, direct cortical stimulation of the visual cortex of blind people elicits photic sensations of simple patterns, such as letters (Dobelle et al. 1974). Similar sensations of phosphenes can also be induced in sighted individuals using transcranial magnetic stimulation (TMS) (Bolognini et al. 2010; Ramos-Estebanez et al. 2007; Romei et al. 2007, 2009). But do these stimulations also induce auditory sensations? Our opinion is that auditory activity in the visual cortex does not induce visual sensations, and visual activity in the auditory cortex does not induce auditory sensations, although this may depend on the subject's experience with the stimuli (Meyer et al. 2010). In humans, cross-modal attention is known to influence the activity of sensory cortices during cross-modal stimulus presentation; for example, visual attention gates visual modulation of the auditory cortex (Ciaramitaro et al. 2007; Lehman et al. 2006; Nager et al. 2006; Teder-Sälejärvi et al. 1999). In particular, the functional role of visual information in speech perception and the underlying auditory cortical modulation are well documented (Besle et al. 2009; van Atteveldt et al. 2009; Schroeder et al. 2008). The findings described below also suggest that the functional role of cross-modal activation in early sensory cortices is likely the modulation of primitive (low-level) sensory perception/detection.

5.3.1.1 Influence on Temporal Dynamics of Visual Processing
In the sensory system, more intense stimuli generally produce higher neuronal firing rates, faster response onset latencies, and stronger sensations. AV interactions often have a facilitative
effect on the neural response, either through increased firing rate or faster responses (for review, see Stein and Stanford 2008), suggesting that AV stimuli should increase the acuity of the behavioral sensation in some fashion. In humans, AV stimuli speed reaction times during target detection (Diederich and Colonius 2004; Giard and Peronnet 1999; Molholm et al. 2002, 2007) and improve temporal order judgments (Hairston et al. 2006; Santangelo and Spence 2009). In the monkey, Wang et al. (2008) showed electrophysiological results consistent with this notion. During a visual localization task, the AV enhancement in V1 appeared as a shortening of response latency. Interestingly, no appreciable enhancement of the visual response was elicited by auditory stimuli when monkeys were not engaged in the task. The auditory stimuli by themselves did not evoke firing responses in V1. This suggests that the auditory influence on V1 activity is a subthreshold phenomenon. Suprathreshold responses in V1 begin at about 25 to 30 ms poststimulation (Chen et al. 2007; Musacchia and Schroeder 2009). To achieve auditory influences on visual responses, auditory activity must arrive within a short temporal window, a few milliseconds before the visual input arrives (Lakatos et al. 2007; Schroeder et al. 2008). Auditory responses in the auditory system generally begin much earlier than visual responses in V1. For some natural events such as speech, visible signals lead the accompanying sounds (Chandrasekaran et al. 2009; for review, see Musacchia and Schroeder 2009). For these events, precedence of visual input relative to auditory input is likely a requirement for very early AV interaction in early sensory areas.

5.3.1.2 Sound Localization
The ventriloquist aftereffect observed by Woods and Recanzone (2004) involves the alteration of auditory spatial perception by vision. This phenomenon implies the recruitment of structures whose auditory responses depend on or encode sound location. Several brain structures are sensitive to the spatial location of sound in monkeys. These include the IC (Groh et al. 2001), SC (Wallace et al. 1996), the ventral division of the medial geniculate body (Starr and Don 1972), caudal areas of the auditory cortex (Recanzone et al. 2000; Tian et al. 2001), PPC (Cohen 2009), and PFC (Artchakov et al. 2007; Kikuchi-Yorioka and Sawaguchi 2000). Woods and Recanzone (2004) used two tasks to test for bimodal interaction during sound localization: one for training to induce the ventriloquist aftereffect, and another to test spatial sound lateralization. In the latter test task, monkeys maintained fixation except when making a saccade to the target sound location. The location of the LED on which monkeys fixated during the training task differed between sessions and affected sound localization in the subsequent sound localization test task. The monkeys' "sound mislocalization" was predicted by the deviation of the LED position during the training task from the true center position on which the monkey fixated during the test task. Because monkeys always fixated on the LED, the retinotopic locus of the LED was identical across the tasks. However, there was a small difference in gaze direction that played a key role in causing the "mislocalization," presumably by inducing a plastic change in the proprioceptive alignment of gaze position with the sensed LED position.
An additional key aspect of that study was that, even though the LED positions were not identical between tasks, they were so close to each other that the monkeys presumably treated the slightly different fixation points as the same and did not notice the differences in gaze direction. Therefore, it can be inferred that plasticity of visual spatial localization affected auditory spatial localization. Although the precise substrate for the ventriloquist aftereffect in the macaque has not been established, several structures are candidates: IC (Groh et al. 2001), SC (Jay and Sparks 1984), AC (Werner-Reiss et al. 2006), and LIP and MIP (Mullette-Gilman et al. 2005). However, in all of these structures except the SC, the observed effects varied among simple gain modulation without alteration of the spatial receptive field (head-centered coordinates), systematic changes that followed gaze direction (eye-centered coordinates), and other complex changes. Plastic change in either coordinate frame, or in both, could presumably contribute to inducing the ventriloquist aftereffect.
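One simple way to conceptualize the aftereffect described above is as a partial recalibration of the auditory spatial map toward the visual (fixation) offset experienced during training. The toy model below assumes an arbitrary recalibration gain, an assumed direction of shift, and invented azimuths; it is meant only to make the logic concrete and is not a fit to the Woods and Recanzone (2004) data.

```python
import numpy as np

def aftereffect_shift(led_offset_deg, gain=0.5):
    """Assumed shift (deg) of subsequent auditory localization after training
    with a fixation LED displaced from true center; `gain` is an arbitrary
    recalibration fraction chosen for illustration."""
    return gain * led_offset_deg

true_sound_azimuths = np.array([-24.0, -12.0, 0.0, 12.0, 24.0])  # degrees
for led_offset in (-8.0, 0.0, 8.0):
    shift = aftereffect_shift(led_offset)
    reported = true_sound_azimuths + shift
    print(f"training LED offset {led_offset:+5.1f} deg -> "
          f"reported azimuths {np.round(reported, 1)}")
```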
Fixation during head restraint does not allow any eye movement. During fixation, subjects can pay visual attention to locations away from the fixated spot (covert attention) or listen carefully. Neuronal correlates of such processes were seen in PFC (Artchakov et al. 2007; Kikuchi-Yorioka and Sawaguchi 2000) and PPC (Andersen et al. 1997). Meanwhile, subjects have to keep issuing oculomotor command signals to maintain a steady eye position. Therefore, a signal that conveys the fixated location and differentiates between central and deviant positions should be present. A possible correlate of such a signal, a change in spontaneous activity dependent on gaze direction, was described in AC, whereas it was not observed in IC (Werner-Reiss et al. 2006). Even though the source of the eye position signal to AC is unknown, this finding suggests AC as one of the candidate structures for inducing the ventriloquist aftereffect. It is worth mentioning that, despite its name, the ventriloquist aftereffect is quite different from the ventriloquist effect. The ventriloquist effect occurs when auditory and visual signals arise from a shared vicinity, and it requires neither fixation on a visual spot nor a steady eye position signal. In contrast, the ventriloquist aftereffect concerns the spatial coding of purely auditory events. Hence, the study of this phenomenon may be useful for clarifying which type of neuronal coding is the main strategy for the cortical encoding of sound location.
5.3.2 AV Recognition

Identifying a previously known AV object, such as a speaker's face and voice, requires AV integration, discrimination, and retention. This process likely relies on accurate encoding of complex stimulus features in sensory cortices and more complex multiplexing in higher-order multisensory association cortices. Multisensory cortices in the "what" pathway probably function to unite these sensory attributes. In humans, audiovisual integration plays an important role in person recognition (Campanella and Belin 2007). Several studies have shown that unimodal memory retrieval of multisensory experiences activated unisensory cortices, presumably because of multisensory association (Wheeler et al. 2000; Nyberg et al. 2000; Murray et al. 2004, 2005; von Kriegstein and Giraud 2006), and that such memory depended on the meaningfulness of the combined signals (Lehmann and Murray 2005). Differential responses to vocal sounds have been observed in PFC (Gifford et al. 2005; Romanski et al. 2005), STG (Rauschecker et al. 1995; Russ et al. 2008), and AC (Ghazanfar et al. 2005). Differential responses to faces have been found in PFC (Rolls et al. 2006), temporal lobe cortices (Eifuku et al. 2004), and the amygdala (Kuraoka and Nakamura 2007). Some of these structures may possess selectivity to both vocal sounds and faces. Recognition of a previously learned object also implies that this process relies in part on working and long-term memory centers. The fact that identification of the correspondence between vocal sound and face is better when the individuals are socially familiar (Martinez and Matsuzawa 2009) supports this notion. PFC and MTL are also involved in the association of simple auditory and visual stimuli, as shown by delayed match-to-sample task studies (Colombo and Gross 1994; Fuster et al. 2000; Gibson and Maunsell 1997). Lesions in MTL (Murray and Gaffan 1994) or PFC (Gaffan and Harrison 1991) impaired performance in tasks requiring memory and AV association. These findings implicate PFC, STG, and MTL in AV recognition.
5.4 PRINCIPLES OF MULTISENSORY INTERACTION

Relationships between multisensory responses and stimulus parameters, derived primarily from single-unit studies in the cat SC, are summarized in three principles of multisensory interaction: the inverse effectiveness, temporal, and spatial principles (Stein and Meredith 1993). These organizing principles have been shown to be preserved with other sensory combinations (e.g., auditory–somatosensory; Lakatos et al. 2007) and in humans (Stevenson and James 2009); however, systematic examination of these principles for AV integration in the monkey cerebral cortex is limited to the auditory cortex.
5.4.1 Inverse Effectiveness

The inverse effectiveness principle of multisensory interaction states that the interaction of weaker unimodal inputs results in a larger gain of the multisensory response. In the case of audition, the response to a softer sound should be enhanced more by visual input than the response to a louder sound. In the case of vision, the response to a dimmer object should be enhanced more by sounds than the response to a brighter object. Cappe et al. (2010) showed a behavioral correlate of inverse effectiveness in monkeys. Manual reaction times to soft sounds were slower than to loud sounds, and only reaction times to soft sounds were shortened by simultaneous visual stimuli. Responses to AV stimuli were also more accurate than responses to sounds alone at the lowest sound intensities. The same group also showed that the effect of sound on saccades, as well as on V1 neuronal response latencies, is larger for less salient visual stimuli (Wang et al. 2008). fMRI studies show that degraded auditory and visual stimuli both evoke weaker BOLD signal responses in the macaque AC than intact stimuli (Kayser et al. 2007). When those degraded stimuli were presented simultaneously, the enhancement of BOLD signal responses was larger than for simultaneous intact stimuli. Even though the combination of degraded and intact stimuli was not tested, the results suggest synergistic inverse effectiveness between modalities. Electrophysiologically, Ghazanfar et al. (2005) showed that weaker LFP responses to vocal sounds were enhanced more by concurrent viewing of a movie clip of a vocalizing monkey than stronger responses were. Another study showed that responses to vocal stimuli were modulated by movie stimuli differentially depending on loudness: responses to loud vocal stimuli were suppressed when the movie was added, whereas responses to soft sounds were enhanced (Kayser et al. 2008). These studies are compatible with the idea that weak responses are enhanced by AV integration. Additionally, a recent study reported a small but significant increase in the information capacity of auditory cortical activity (Kayser et al. 2010). Thus, visual stimuli may not only enhance responses but also recruit more cortical neurons into the computational analysis of auditory signals, creating redundancy in the processed information to secure perception.
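Inverse effectiveness is commonly quantified with the multisensory enhancement index used in the SC literature, which compares the bimodal response with the strongest unimodal response. The sketch below applies that formula to invented firing rates to show how the proportional gain grows as the unimodal responses weaken; the numbers are illustrative only and are not drawn from the studies cited above.

```python
def enhancement_index(bimodal, unimodal_a, unimodal_v):
    """Multisensory enhancement (%) relative to the best unimodal response,
    following the convention of Stein and Meredith (1993):
        ME = 100 * (CM - SM_max) / SM_max
    where CM is the bimodal response and SM_max the larger unimodal one."""
    sm_max = max(unimodal_a, unimodal_v)
    return 100.0 * (bimodal - sm_max) / sm_max

# Invented firing rates (spikes/s) for weak, medium, and strong stimulus pairs.
# The absolute bimodal increment is similar in each case, but the proportional
# gain is largest for the weakest inputs -- the signature of inverse effectiveness.
conditions = {
    "weak":   dict(unimodal_a=4.0,  unimodal_v=5.0,  bimodal=12.0),
    "medium": dict(unimodal_a=15.0, unimodal_v=18.0, bimodal=26.0),
    "strong": dict(unimodal_a=40.0, unimodal_v=45.0, bimodal=52.0),
}
for name, rates in conditions.items():
    print(f"{name:>6}: enhancement = {enhancement_index(**rates):6.1f}%")
```

With these invented rates, the proportional enhancement falls from roughly 140% for the weak pair to about 16% for the strong pair, which is the qualitative pattern described above.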
5.4.2 Temporal Contiguity

The temporal principle of multisensory processing (Stein and Meredith 1993) predicts that integration effects will be greatest when the neuronal responses evoked by stimuli of the two modalities fall within a small temporal window. Quite a few studies have investigated the spatial and temporal contiguity principles of AV integration in nonhuman primates. Overall, results in the monkey SC and A1 conform to the principle of temporal contiguity and describe a range of enhancement and suppression effects. In the SC, Wallace et al. (1996) showed that visual stimuli preceding auditory stimuli tend to produce more interaction. This condition corresponds to the natural order of physical events in everyday stimuli, where the visual stimulus precedes the accompanying auditory one. Ghazanfar et al. (2005) described neural responses in A1 and lateral belt areas to the presentation of conspecific vocal sounds, with and without the accompanying movies, at different stimulus onset asynchronies (SOAs). In this region, bimodal stimulation can elicit suppression or enhancement, depending on the neural population. Results showed that the proportion of sites exhibiting bimodal enhancement depended on the SOA: SOAs longer than 100 ms produced enhancement in fewer regions of AC. When the auditory response was suppressed by a movie, the proportion of suppressed locations peaked at SOAs shorter than 80 ms and longer than 100 ms, interestingly sparing the peak timing of visually evoked LFPs. Kayser et al. (2008) tested responses in A1 and belt areas to systematic combinations of noise bursts and flashes in 20 ms steps. Bimodal suppression was observed only when the flash preceded the noise by 20 to 80 ms. For natural AV stimuli, bimodal enhancement was observed in some populations of auditory cortex at an SOA of 0 ms, and it was abolished by introducing a perceivable delay between stimuli (160 ms). These results suggest that AV interaction in AC can occur as either enhancement (if auditory and visual stimuli are nearly synchronized or separated by less than about 100 ms) or suppression (at delays longer than 100 ms). Interpretations of these data should be approached with some caution. In the first study, the effect of AV interaction was attributed to the interaction between movements of the mouth and the following vocal sound (Ghazanfar et al. 2005). However, because the mouth movement started immediately after the abrupt appearance of the first movie frame, the sudden change in the screen image could capture visual attention. In other studies, an abrupt visual change was shown to elicit a brief freeze of gaze position in monkeys (Cui et al. 2009) and in humans (e.g., Engbert and Kliegl 2003). Therefore, the onset of the movie itself could evoke transient activity. This would suggest that the observed effects were related simply to a visual response or to a transient change in covert visual attention. Because LFPs capture the responses of a large population of neurons, such activity generated in non-AC structures may be superimposed. Further studies are necessary to dissociate the AV interaction into mouth-movement-related and other components.
5.4.3 Spatial Contiguity

The spatial principle of multisensory integration states that multisensory integration is greatest when the loci of events in different modalities fall within the receptive fields of the neurons and when those receptive fields in different modalities overlap with each other. Although there are few data on this topic for AV integration in monkey cortex, we can speculate how it operates based on anatomical and electrophysiological findings. Anatomical studies predict that peripheral representations of visual stimuli should be more susceptible to auditory influences. The representation of the visual periphery is retinotopically organized in the visual cortex and is interconnected with caudal auditory cortices (Falchier et al. 2002, 2010; Rockland and Ojima 2003). In accordance with this prediction, Wang et al. (2008) observed auditory influences on V1 responses to visual stimuli presented more peripherally than 10°, although central vision was not tested. Similarly, in humans, auditory activation of visual cortex subserving the peripheral visual fields has been shown (Cate et al. 2009). However, many human studies used central and parafoveal stimuli, for which anatomical substrates or other physiological mechanisms remain to be found. Other studies used different types of visual stimuli to study auditory cortical responses. Flashes (e.g., Kayser et al. 2008; Lakatos et al. 2009) excite a wide area of the retinotopic map. Images and movies were presented around a central fixation point (Ghazanfar et al. 2005, 2008). In the latter case, visual stimulation did not extend to peripheral visual space. In addition, when monkey faces are used, the subjects tend to look at the mouth and eyes, near the center of the face (Ghazanfar et al. 2006). These findings suggest that visual influences may have different sources depending on the type of stimulus preferred in each area. For example, cortices across the STS possess face preference, large receptive fields, and position invariance of object selectivity. Therefore, facial influences on AC may originate from STS, as proposed by recent studies (Ghazanfar et al. 2008; Kayser and Logothetis 2009; see below). Such speculation could be further clarified by comparing the effect of vocalization movies on AC across face positions relative to gaze, taking into account the differences in receptive field size among visually responsive cortices. In PPC, common spatial tuning to visual and auditory stimuli has been observed (Mazzoni et al. 1996; Schlack et al. 2005). Even though the PPC response to simultaneous AV stimuli has not been investigated, it is likely that integration there depends on spatial congruency between modalities. Further studies are needed to verify this.
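One way to picture the spatial principle is to treat each modality's receptive field as a spatial tuning curve and let the multisensory gain depend on how strongly both fields are driven at the stimulated locations. The sketch below does this with Gaussian receptive fields and invented widths; it is a conceptual toy under a simple multiplicative combination rule, not a description of receptive fields measured in any study cited here.

```python
import numpy as np

def gaussian_rf(location, center, width):
    """Response of a Gaussian spatial receptive field (arbitrary units)."""
    return np.exp(-0.5 * ((location - center) / width) ** 2)

# Invented receptive fields: a narrow visual RF and a broad auditory RF,
# both centered at 0 degrees azimuth.
visual_center, visual_width = 0.0, 5.0
auditory_center, auditory_width = 0.0, 20.0

# Keep the visual stimulus at the RF center and move the sound away from it.
visual_loc = 0.0
for disparity in (0.0, 10.0, 20.0, 40.0):
    v = gaussian_rf(visual_loc, visual_center, visual_width)
    a = gaussian_rf(visual_loc + disparity, auditory_center, auditory_width)
    # Toy combination rule: cross-modal drive scales with the product of the
    # two unimodal drives, so it falls off as the sources move apart.
    gain = v * a
    print(f"AV disparity {disparity:5.1f} deg -> relative multisensory drive {gain:.3f}")
```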
5.5 MECHANISMS AND DYNAMICS OF MULTISENSORY INTERACTION

Traditionally, multisensory integration is indexed at the neuronal level by a change in the averaged magnitude of evoked activity relative to the sum of the unimodal responses. This type of effect was most often studied in the classical higher-order multisensory regions of the temporal, parietal, and frontal cortices, and generally manifested as a simple enhancement of the excitatory response beginning at the initial input stage in layer 4, as reviewed by Schroeder and Foxe (2002). Recent studies have shown that cross-modal influences on traditionally unisensory cortices can occur via manipulation of ongoing oscillatory activity in the supragranular layers, which in turn modulates the probability that neurons will fire in response to the dominant (driving) auditory input (Lakatos et al. 2007; Schroeder and Lakatos 2009). Modulatory, rather than driving, multisensory influences have been found in single-unit studies as well (Allman and Meredith 2007; Allman et al. 2008; Dehner et al. 2004; Meredith et al. 2009). This more recently described mechanism will be the focus of discussion here.
5.5.1 Phase Reset: Mechanisms
Somatosensory stimuli evoked a modulatory response in the supragranular layers of A1, with an onset time even faster than that of the auditory response (Lakatos et al. 2007). When paired with synchronized auditory stimuli, this faster somatosensory activation influenced the forthcoming auditory response. However, the somatosensory stimulus did not evoke a single rapid bolus of afferent activity like a click, which elevates signal power across a broad frequency range at once. Instead, the somatosensory effect appeared as a modulation, by phase reset, of the dominant neuronal oscillations observed in the CSD. In other words, the somatosensory stimulus shifted the randomly fluctuating excitability of auditory neuronal ensembles to a particular excitability state (represented by the oscillatory phase), thereby determining the effect of the auditory input. The modulatory effect differs across somatosensory–auditory SOAs depending on how a given SOA relates to the periods of delta, theta, and gamma oscillations; that is, facilitation is maximal at SOAs corresponding to full gamma, theta, and delta cycles, and these peaks in the function are separated by "suppressive" troughs, particularly at SOAs corresponding to one-half of a theta cycle and one-half of a delta cycle. In contrast to the somatosensory activation of A1, visual responses are relatively slow even within the visual system (Chen et al. 2007; Musacchia and Schroeder 2009; Schmolesky et al. 1998). In both A1 and V1, visual activity takes more time to arrive than auditory activity (Lakatos et al. 2009). Therefore, for visual signals to coincide with, or reach AC earlier than, the auditory signal, visual stimuli have to occur earlier than auditory stimuli, which is the case for many natural forms of AV stimulation, particularly speech (Chandrasekaran et al. 2009). Cross-modal auditory modulation of V1 activity and visual modulation of A1 activity were observed in monkeys performing an intermodal selective attention task, in which auditory and visual stimuli were presented in alternation at a rate in the delta frequency range (Lakatos et al. 2009). As in the case of somatosensory modulation of A1 activity, cross-modal responses occurred as a modulatory phase reset of ongoing oscillatory activity in the supragranular layers, without a significant change in neuronal firing, when those stimuli were attended. Supragranular layers receive corticocortical and nonspecific thalamocortical inputs, whereas granular layers receive sensory-specific thalamocortical inputs. Modulatory phase reset in the supragranular layers, without any change in neuronal firing in granular or even supragranular layers, suggests that cross-modal activation occurs as a transient, subthreshold change in supragranular cellular excitability. This is consistent with the fact that cross-modal firing responses have not been reported for primary sensory cortices in many studies that relied on action potentials as the sole dependent measure. The manifestation of multiple poststimulus time windows of excitability is consistent with the nested hierarchical structure of the frequency bands of ongoing neuronal activity (Lakatos et al. 2005).
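As a rough intuition for how such SOA-dependent facilitation and suppression could arise, the toy sketch below assumes that a cross-modal input at time zero resets delta, theta, and gamma oscillations to their high-excitability phase and then reads out the summed oscillatory excitability at a given SOA. The band frequencies and weights are arbitrary illustrative choices rather than parameters from the studies cited above; the sketch is meant only to show qualitatively why SOAs near full oscillatory cycles fall at high-excitability moments while SOAs near half cycles fall in suppressive troughs.

```python
import numpy as np

# Assumed band frequencies (Hz) and relative weights; illustrative values only.
bands = {"delta": 1.5, "theta": 7.0, "gamma": 35.0}
weights = {"delta": 1.0, "theta": 1.0, "gamma": 0.5}

def excitability(soa_ms: float) -> float:
    """Summed cosine excitability at a given SOA (ms), with all bands reset to
    their peak (high-excitability) phase at t = 0 by the cross-modal input."""
    t = soa_ms / 1000.0
    return float(sum(w * np.cos(2 * np.pi * bands[b] * t) for b, w in weights.items()))

for soa in (1000 / 35.0,          # ~ one full gamma cycle
            1000 / 7.0,           # ~ one full theta cycle
            0.5 * 1000 / 7.0,     # ~ half a theta cycle
            1000 / 1.5,           # ~ one full delta cycle
            0.5 * 1000 / 1.5):    # ~ half a delta cycle
    print(f"SOA = {soa:6.1f} ms  predicted excitability = {excitability(soa):+.2f}")
```

Running the loop shows relatively high summed excitability at the full-cycle SOAs and clearly negative values at the half-cycle SOAs, mirroring the facilitatory peaks and suppressive troughs described above in a purely qualitative way.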
Cross-modal responses during an intermodal selective attention task were observed in response to unimodal stimuli (Lakatos et al. 2008, 2009). What, then, would be the effect of a phase reset when auditory and visual stimuli are presented simultaneously? Wang et al. (2008) analyzed neuronal firing responses to light, with or without paired auditory noise stimuli, using single-unit recordings in V1. When stimuli were presented passively, the firing rate in a population of V1 neurons increased and remained high for 500 ms. During visual detection tasks, V1 population responses to a visual target without sound showed a double-peaked temporal pattern, with the timing of each peak after response onset in the range of the cycle lengths of the gamma and theta frequency bands. In response to AV stimuli, an additional peak appeared in the temporal firing pattern near the time frame of a full delta cycle. Although the translation of firing activity into underlying membrane potential is not straightforward, these activity parameters are roughly monotonically related to each other (e.g., Anderson et al. 2000). Thus, the oscillatory pattern of neuronal firing suggests oscillatory modulation of neuronal excitability by sensory stimuli.
5.5.2 Phase Reset: Dependence on Types of Stimuli
How would phase reset work in response to stimuli with complex temporal envelopes? Sounds and movies of vocalizations are popular stimuli in studies of AV integration in auditory cortical areas and STP in nonhuman primates. Because a vocalization starts with visible facial movement before any sound is generated, phase reset by the visible movement pattern is in a position to affect processing of the following sound. Kayser et al. (2008) showed changes in LFP frequency bands (around and below 10 Hz) consistent with this prediction; that is, they observed phase reset and increases in excitability at the time the A1 response to the sound of complex AV stimuli started. When phase reset occurred, it was accompanied by enhanced firing responses. The frequency bands in which visual inputs produced phase reset differed between Kayser et al. (2008) and Lakatos et al. (2009): the latter showed that cross-modal phase reset in A1 and V1 occurred in the theta (below 10 Hz) and gamma (above 25 Hz) bands, leaving the 10 to 25 Hz band out of the phenomenon, whereas Kayser et al. observed phase reset by visual input alone across the range of 5 to 25 Hz. These differences are likely attributable to differences in the visual stimuli. Lakatos et al. (2009) did not examine whether phase reset of ongoing oscillatory activity in the theta and gamma bands contributed to AV integration, because their task did not present auditory and visual stimuli simultaneously. Kayser et al. (2008) showed that enhanced neuronal firing responses to AV stimuli compared with auditory stimuli correlated with the occurrence of phase reset around 10 Hz, underscoring the importance of reset in that band for AV response enhancement. Moreover, the difference in the frequency bands of visually induced phase reset between the Lakatos et al. and Kayser et al. studies suggests that the frequency of oscillation influenced by cross-modal inputs depends on the conditions of attention and stimulation. Is phase reset a phenomenon beyond primary sensory cortices? This question remains open. At least STP clearly receives feedforward excitatory input from several modalities (Schroeder and Foxe 2002). The contribution of oscillatory phase reset in STP and other higher-order multisensory areas has not been examined in detail, although the suspicion is that phase reset may have more to do with attentional modulation than with multisensory representation.
5.6 IMPORTANCE OF SALIENCE IN LOW-LEVEL MULTISENSORY INTERACTIONS
Variations in AV integration effects according to saliency and attentional conditions are so pervasive that some have begun to wonder whether attention is a prerequisite for integration (Navarra et al. 2010).
However, AV integration has been observed in many higher cortical areas even when subjects were only required to maintain visual fixation, without further task demands (PFC, Sugihara et al. 2006; STP, Barraclough et al. 2005; AC, Ghazanfar et al. 2005; Kayser et al. 2008). Does this mean that audiovisual interactions happen automatically? The answer may depend on the level of the system being studied, as well as on the behavioral state, as discussed below.
5.6.1 Role of (Top-Down) Attention
There is strong evidence that top-down attention is required for AV integration to take place in primary sensory cortices. Using an intermodal selective attention task, Lakatos et al. (2008, 2009) showed that the manifestation of visual influence in A1 and of auditory influence in V1 was dependent on attention: if a stimulus was ignored, its cross-modal influence could not be detected. The selective role of sensory attention illustrated above contrasts with findings showing that attention to either modality can elicit AV effects. Wang et al. (2008) showed that neurons in V1 responded to auditory stimuli only when monkeys performed a purely visual localization task. Similarly, in humans, task-irrelevant sound promoted the detection of phosphenes induced by TMS over visual cortex during a task requiring only visual attention (Romei et al. 2007, 2009). Thus, tasks requiring auditory attention (Lakatos et al. 2009) and tasks requiring visual attention (Romei et al. 2007, 2009; Wang et al. 2008) both rendered auditory influences observable in V1. This apparent disagreement most likely reflects differences in the role of the unattended sensory stimuli in those tasks. In the visual localization task (Wang et al. 2008), monkeys needed to react quickly to localize visual targets. Task-irrelevant auditory stimuli occurred on half of the trials and were always delivered temporally congruent with the visual targets, at a fixed central location. In this task, the status of the sound is key: auditory stimuli, when delivered, were always informative and thus could act as an instruction, much like the verbal instruction given to subjects performing visual localization in Posner's classic study (Posner et al. 1980). Therefore, it is possible that monkeys paid attention to such informative auditory stimuli, in addition to the visual stimuli, to perform the visual localization task. In a similar vein, the visual events in the auditory discrimination task of Brosch et al. (2005) may be regarded as informative cross-modal cues for performing the task, although again, the effects of overtraining must also be considered. In the intermodal attention task (Lakatos et al. 2008, 2009), subjects did not have to spread their spatial attention across different locations because the visual and auditory stimuli were spatially congruent. However, those stimuli were temporally incongruent, divided into two asynchronous streams. Furthermore, while monkeys monitored the sequence of one modality, deviants also appeared in the other sequence, and monkeys had to refrain from responding to them. The easiest way to perform such a task would be to plug one's ears while watching and to close one's eyes while listening. Prevented from using these strategies, all the monkeys could actually do to perform the task was to pay attention to the cued modality while ignoring the other stream at the same time. Although it may be impossible to determine what monkeys are actually attending to during any given task, it can be argued, based on the observation of auditory influences on visual responses in V1 (Wang et al. 2008), that monkeys do not ignore informative sounds. Further studies are needed to determine how attentional conditions influence AV integration. It would be interesting to see whether an auditory influence could be observed in a visual localization task, as in the study of Wang et al.
(2008), but with auditory stimuli that are spatially and temporally incongruent with the visual stimuli, thereby acting as distracters. Auditory attention has also been suggested to play a role in evoking auditory responses in LIP (Linden et al. 1999) and PFC (Vaadia et al. 1986). Further clarification of the role of attention in higher associative areas, such as the PFC, is very important because many models assume that those cortices impose attentional control over lower-order cortices.
5.6.2 Attention or Saliency of Stimuli
Degrees of attentional focus and ranges of stimulus saliency surely have differential effects on AV integration. It is difficult to argue that monkeys monitor AV stimuli during simple tasks such as fixation, because the monkeys will receive the reward regardless of what happens during stimulus presentation. However, monkeys are certainly alert in such a condition. Even though the mandated level of attention differs from that of active monitoring, such weak attention, or the lack of competing stimulation, may be enough to induce audiovisual integration. Besides attentional requirements, there are differences in stimulus saliency between simple stimuli, such as flashes and tones, and complex stimuli, such as faces. It is well known that meaningful visual stimuli attract attention in a behaviorally observable manner; the eyes and mouth of a vocalizing individual draw a subject's gaze (Ghazanfar et al. 2006). Thus, it is possible that highly salient stimuli passively induce AV effects in the absence of explicit requirements to attend. Certain forms of AV effects in adult animals occur only after training (Grunewald et al. 1999; Woods and Recanzone 2004). In that sense, the perception of vocalizations has already been acquired through lifelong training in monkeys. We may suppose that AV integration is essential for the acquisition of communication skills in nonhuman primates. Once trained, AV integration may become "prepotent," requiring less attention, and may proceed "effortlessly."
5.7 CONCLUSIONS, UNRESOLVED ISSUES, AND QUESTIONS FOR FUTURE STUDIES
Compared with human studies, behavioral studies of AV integration in nonhuman primates are still relatively rare. The ability to simultaneously record behavior and local neural activity has helped to reconcile the multisensory findings in humans and to expand our understanding of how AV integration occurs in the nervous system. Below, we list several issues to be addressed in the future.
5.7.1 Complex AV Interactions
Tasks requiring linguistic ability may be out of reach for experiments involving nonhuman primates; however, visual tasks of high complexity have been used in previous studies. Considering that many AV effects in humans were observed with purely visual tasks, it may be possible to train monkeys to perform complex visual tasks and then study the effect of auditory stimulation on visual performance.
5.7.2 Anatomical Substrates of AV Interaction
The anatomical substrates of cross-modal inputs to primary sensory cortices (de la Mothe et al. 2006b; Cappe and Barone 2005; Cappe et al. 2009; Falchier et al. 2002, 2010; Hackett et al. 2007; Rockland and Ojima 2003; Smiley et al. 2007) provide the basis for models of the routes of AV integration. These data show that two types of corticocortical inputs (feedback and lateral connections), together with thalamocortical and subcortical inputs from nonspecific as well as multisensory thalamic nuclei, are potential pathways mediating early multisensory convergence and integration. The challenge here is to discriminate the influence of each of these pathways during a behavioral task. It is probable that the relative weight of these different pathways is determined by the sensory context as well as by the nature of the task objective.
5.7.3 Implication of Motor Systems in Modulation of Reaction Time
Brain structures showing AV responses include parts of not only sensory but also motor systems. Facilitated reaction times for both saccadic and manual responses raise the question of whether
enhancement occurs only in sensory systems or elsewhere as well. Because Miller et al. (2001) showed that motor cortical activation triggered by sensory stimuli reflects sensory signals that have already been integrated at the stage of primary motor cortex, it is possible that activation of PPC, PFC, premotor areas in particular, or SC is facilitated by redundant sensory inputs. These possibilities have not yet been fully distinguished. The possibility of additional sources of facilitated reaction times was also suggested by the findings of Wang et al. (2008): when intense visual stimuli were presented, additional auditory stimuli did not affect the visual response in V1 but did influence saccadic reaction time. This suggests either that the visual response is facilitated somewhere in the visual system outside of V1 or that auditory stimuli directly affect motor responses.
5.7.4 Facilitation or Information?
In general, larger neuronal responses can be beneficial for faster reactions to, and better discrimination of, events because they have faster onset latencies and better signal-to-noise ratios. Which coding strategy, or strategies, neurons adopt as they respond to stimuli remains to be discerned. For example, visual localization tasks require not only fast reaction times but also good discrimination of visual target location. Visual influences on ongoing oscillations through phase reset mechanisms, and the consequences of such modulation for response magnitude, have been shown by several groups. Additionally, Kayser et al. (2010) have shown the possibility that visual influences can tune the auditory response by increasing its signal-to-noise ratio and thereby its information capacity. Because it is not known which aspect of the neuronal response the brain utilizes, it is desirable to compare mechanisms of modulation with behavioral responses.
5.7.5 Inverse Effectiveness and Temporal Interaction
The principle of inverse effectiveness states that multisensory integration is most effective when weak stimuli are presented. Most electrophysiological studies of AV integration in monkey auditory cortex utilize loud sounds, yet low stimulus intensity can degrade the temporal response pattern of sensory neurons. Such an effect would be more prominent for complex stimuli, such as vocal sounds, because smaller peaks in the temporal envelope (e.g., the first envelope peak of a macaque grunt call) may be missed in auditory encoding. The condition of weak sound is relevant to Sumby and Pollack's (1954) classic observation of inverse effectiveness in human speech perception. It is thus important to investigate how AV integration works under degraded conditions. Degraded stimuli may also reveal a more central role for attention, because weaker stimuli require more attention to be discerned. In addition, the altered timing of response peaks to weak vocal sounds may interact differently with the excitability phases of ongoing oscillations, leading to different patterns of enhancement.
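A purely numerical illustration of inverse effectiveness can be obtained by applying the usual enhancement calculation to a saturating stimulus–response function: combining two weak inputs yields a proportionally larger gain over the best unimodal response than combining two strong inputs. The hypothetical Naka–Rushton-like response function and all parameter values below are assumptions chosen for illustration, not fits to any data discussed here.

```python
# Toy illustration of inverse effectiveness with a saturating response function.
def response(drive: float, r_max: float = 50.0, k: float = 10.0) -> float:
    """Hypothetical saturating firing rate as a function of input drive."""
    return r_max * drive / (drive + k)

for label, drive_a, drive_v in (("weak stimuli  ", 2.0, 2.0),
                                ("strong stimuli", 40.0, 40.0)):
    r_a, r_v = response(drive_a), response(drive_v)
    r_av = response(drive_a + drive_v)          # combined drive, same saturation
    best = max(r_a, r_v)
    enhancement = 100.0 * (r_av - best) / best  # % enhancement vs. best unimodal
    print(f"{label}: A={r_a:5.1f}  V={r_v:5.1f}  AV={r_av:5.1f}  "
          f"enhancement={enhancement:6.1f}%")
```

With these assumed values, the weak pair produces roughly 70% enhancement whereas the strong pair produces about 10%, reproducing the inverse-effectiveness pattern from saturation alone.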
5.7.6 What Drives and What Is Driven by Oscillations?
Recent studies of AV integration in AC and STP stress the importance of oscillatory neuronal activity. Oscillations in field potentials and CSD reflect rhythmic net excitability fluctuations of the local neuronal ensemble in sensory cortical areas. Although numerous hypotheses are available, the role of oscillatory modulation in other structures is unknown. Endogenous attention may also be reflected in ongoing activity by top-down modulation. Its interaction with bottom-up sensory activation can contribute to and be influenced by oscillatory dynamics. This is an extremely fruitful area for future studies.
5.7.7 Role of Attention
Although some multisensory studies in monkeys did control for attention, most studies were done without specific control of attention. The former studies provide ample evidence for a
definitive role of sensory attention in AV integration. To obtain a clear picture of the role attention plays in multisensory interactions, more studies are needed in which attention, even unimodal attention, is controlled through behavioral tasks and stimuli. It will also be important to investigate attentional load, because differences in selective attention may emerge only under high-load conditions: under a high attentional load in the attended modality, subjects may try, consciously or unconsciously, to ignore stimuli of irrelevant modalities.
ACKNOWLEDGMENT
This work was supported by grant nos. K01MH082415, R21DC10415, and R01MH61989.
REFERENCES Aggleton, J.P., and M. Mishkin. 1990. Visual impairments in macaques following inferior temporal lesions are exacerbated selectively by additional damage to superior temporal sulcus. Behavioural Brain Research 39:262–274. Allman, B.L., L.P. Keniston, and M.A. Meredith. 2008. Subthreshold auditory inputs to extrastriate visual neurons are responsive to parametric changes in stimulus quality: Sensory-specific versus non-specific coding. Brain Research 1242:95–101. Allman, B.L., and M.A. Meredith. 2007. Multisensory processing in “unimodal” neurons: Cross-modal subthreshold auditory effects in cat extrastriate visual cortex. Journal of Neurophysiology 98:545–549. Allon, N., and Z. Wollberg. 1978. Responses of cells in the superior colliculus of the squirrel monkey to auditory stimuli. Brain Research 159:321–330. Andersen, R.A., L.H. Snyder, D.C. Bradley, and J. Xing. 1997. Multimodal representation of space in the posterior parietal cortex and its use in planning movements. Annual Review of Neuroscience 20:303–330. Anderson, J., I. Lampl, I. Reichova, M. Carandini, and D. Ferster. 2000. Stimulus dependence of two-state fluctuations of membrane potential in cat visual cortex. Nature Neuroscience 3:617–621. Anderson, K.C., and R.M. Siegel. 1999. Optic flow selectivity in the anterior superior temporal polysensory area, STPa, of the behaving monkey. Journal of Neuroscience 19:2681–2691. Anderson, K.C., and R.M. Siegel. 2005. Three-dimensional structure-from-motion selectivity in the anterior superior temporal polysensory area STPs of the behaving monkey. Cerebral Cortex 15:1299–1307. Aosaki, T., M. Kimura, and A.M. Graybiel. 1995. Temporal and spatial characteristics of tonically active neurons of the primate’s striatum. Journal of Neurophysiology 73:1234–1252. Aou, S., Y. Oomura, H. Nishino, et al. 1983. Functional heterogeneity of single neuronal activity in the monkey dorsolateral prefrontal cortex. Brain Research 260:121–124. Artchakov, D., D. Tikhonravov, V. Vuontela, I. Linnankoski, A. Korvenoja, and S. Carlson. 2007. Processing of auditory and visual location information in the monkey prefrontal cortex. Experimental Brain Research 180:469–479. Azuma, M., and H. Suzuki. 1984. Properties and distribution of auditory neurons in the dorsolateral prefrontal cortex of the alert monkey. Brain Research 298:343–346. Baizer, J.S., L.G. Ungerleider, and R. Desimone. 1991. Organization of visual inputs to the inferior temporal and posterior parietal cortex in macaques. Journal of Neuroscience 11:168–190. Baizer, J.S., R. Desimone, and L.G. Ungerleider. 1993. Comparison of subcortical connections of inferior temporal and posterior parietal cortex in monkeys. Visual Neuroscience 10:59–72. Barbas, H., H. Ghashghaei, S.M. Dombrowski, and N.L. Rempel-Clower. 1999. Medial prefrontal cortices are unified by common connections with superior temporal cortices and distinguished by input from memory-related areas in the rhesus monkey. Journal of Comparative Neurology 410:343–367. Barbas, H., and M.M. Mesulam. 1981. Organization of afferent input to subdivisions of area 8 in the rhesus monkey. Journal of Comparative Neurology 200:407–431. Barnes, C.L., and D.N. Pandya. 1992. Efferent cortical connections of multimodal cortex of the superior temporal sulcus in the rhesus monkey. Journal of Comparative Neurology 318:222–244. Barraclough, N.E., D. Xiao, C.I. Baker, M.W. Oram, and D.I. Perrett. 2005. 
Integration of visual and auditory information by superior temporal sulcus neurons responsive to the sight of actions. Journal of Cognitive Neuroscience 17:377–391.
Baylis, G.C., E.T. Rolls, and C.M. Leonard. 1987. Functional subdivisions of the temporal lobe neocortex. Journal of Neuroscience 7:330–342. Bell, A.H., B.D. Corneil, D.P. Munoz, and M.A. Meredith. 2003. Engagement of visual fixation suppresses sensory responsiveness and multisensory integration in the primate superior colliculus. European Journal of Neuroscience 18:2867–2873. Benevento, L.A., J. Fallon, B.J. Davis, and M. Rezak. 1977. Auditory–visual interaction in single cells in the cortex of the superior temporal sulcus and the orbital frontal cortex of the macaque monkey. Experimental Neurology 57:849–872. Besle, J., Bertrand, O., and Giard, M.H. 2009. Electrophysiological (EEG, sEEG, MEG) evidence for multiple audiovisual interactions in the human auditory cortex. Hearing Research 258:143–151. Blatt, G.J., D.N. Pandya, and D.L. Rosene. 2003. Parcellation of cortical afferents to three distinct sectors in the parahippocampal gyrus of the rhesus monkey: An anatomical and neurophysiological study. Journal of Comparative Neurology 466:161–179. Bologninia, N., I. Senna, A. Maravita, A. Pascual-Leone, and L.B. Merabet. 2010. Auditory enhancement of visual phosphene perception: The effect of temporal and spatial factors and of stimulus intensity. Neuroscience Letters 477:109–114. Bon, L., and C. Lucchetti. 2006. Auditory environmental cells and visual fixation effect in area 8B of macaque monkey. Experimental Brain Research 168:441–449. Born, R.T., and D.C. Bradley. 2005. Structure and function of visual area MT. Annual Review of Neuroscience 28:157–189. Brosch, M., E. Selezneva, and H. Scheich. 2005. Nonauditory events of a behavioral procedure activate auditory cortex of highly trained monkeys. Journal of Neuroscience. 25:6797–6806. Brothers, L., B. Ring, and A. Kling. 1990. Response of neurons in the macaque amygdala to complex social stimuli. Behavioural Brain Research 41:199–213. Bruce, C.J., R. Desimone, and C.G. Gross. 1981. Visual properties of neurons in polysensory area in superior temporal sulcus of the macaque. Journal of Neurophysiology 46:369–384. Bruce, C.J., R. Desimone, and C.G. Gross. 1986. Both striate cortex and superior colliculus contributes to visual properties of neurons in superior temporal polysensory area of macaque monkey. Journal of Neurophysiology 55:1057–1075. Burton, H., and E.G. Jones. 1976. The posterior thalamic region and its cortical projection in new world and old world monkeys. Journal of Comparative Neurology 168:249–302. Carmichael, S.T., and J.L. Price. 1995. Sensory and premotor connections of the orbital and medial prefrontal cortex of macaque monkeys. Journal of Comparative Neurology 363:642–664. Calvert, G.A. 2001. Crossmodal processing in the human brain: Insights from functional neuroimaging studies. Cerebral Cortex 11:1110–1123. Calvert, G.A., and R. Campbell. 2003. Reading speech from still and moving faces: The neural substrates of visible speech. Journal of Cognitive Neuroscience 15:57–70. Campanella, S., and P. Belin. 2007. Integrating face and voice in person perception. Trends in Cognitive Sciences 11:535–543. Cappe, C., and P. Barone. 2005. Heteromodal connections supporting multisensory integration at low levels of cortical processing in the monkey. European Journal of Neuroscience 22:2886–2902. Cappe, C., A. Morel, P. Barone, and E. Rouiller. 2009. The thalamocortical projection systems in primate: An anatomical support for multisensory and sensorimotor interplay. Cerebral Cortex 19:2025–2037. Cappe, C., M.M. Murray, P. 
Barone, and E.M. Rouiller. 2010. Multisensory facilitation of behavior in monkeys: Effects of stimulus intensity. Journal of Cognitive Neuroscience 22:2850–2863. Cate, A.D., T.J. Herron, E.W. Yund, et al. 2009. Auditory attention activates peripheral visual cortex. PLoS ONE 4:e4645. Cavada, C., and P.S. Goldman-Rakic. 1989a. Posterior parietal cortex in rhesus monkey: I. Parcellation of areas based on distinctive limbic and sensory corticocortical connections. Journal of Comparative Neurology 287:393–421. Cavada, C., and P.S. Goldman-Rakic. 1989b. Posterior parietal cortex in rhesus monkey: II. Evidence for segregated corticocortical networks linking sensory and limbic areas with the frontal lobe. Journal of Comparative Neurology 287:422–445. Cavada, C., T. Company, J. Tejedor, R.J. Cruz-Rizzolo, and F. Reinoso-Suarez. 2000. The anatomical connections of the macaque monkey orbitofrontal cortex. A review. Cerebral Cortex 10:220–242. Chakladar, S., N.K. Logothetis, and C.I. Petkov. 2008. Morphing rhesus monkey vocalizations. Journal of Neuroscience Methods 170:45–55.
Chandrasekaran, C., and A.A. Ghazanfar. 2009. Different neural frequency bands integrate faces and voices differently in the superior temporal sulcus. Journal of Neurophysiology 101:773–788. Chandrasekaran, C., A. Trubanova, S. Stillittano, A. Caplier, and A.A. Ghazanfar. 2009. The natural statistics of audiovisual speech. PLoS Computational Biology 5:e1000436. Chen, C.M., P. Lakatos, A.S. Shah, et al. 2007. Functional anatomy and interaction of fast and slow visual pathways in macaque monkeys. Cerebral Cortex 17:1561–1569. Cheney, D.L., and Seyfarth, R.M. 1990. How Monkeys See the World. Chicago: Univ. of Chicago Press. Ciaramitaro, V.M., G.T. Buracas, and G.M. Boynton. 2007. Spatial and crossmodal attention alter responses to unattended sensory information in early visual and auditory human cortex. Journal of Neurophysiology 98:2399–2413. Clower, D.M., R.A. West, J.C. Lynch, and P.L. Strick. 2001. The inferior parietal lobule is the target of output from the superior colliculus, hippocampus, and cerebellum. Journal of Neuroscience. 21:6283–6291. Cohen, Y.E. 2009. Multimodal activity in the parietal cortex. Hearing Research 258:100–105. Cohen, Y.E., and R.A. Andersen. 2000. Reaches to sounds encoded in an eye-centered reference frame. Neuron 27:647–652. Cohen, Y.E., and R.A. Andersen. 2002. A common reference frame for movement plans in the posterior parietal cortex. Nature Reviews. Neuroscience 3:553–562. Cohen, Y.E., A.P. Batista, and R.A. Andersen. 2002. Comparison of neural activity preceding reaches to auditory and visual stimuli in the parietal reach region. Neuroreport 13:891–894. Cohen, Y.E., I.S. Cohen, and G.W. Gifford III. 2004. Modulation of LIP activity by predictive auditory and visual cues. Cerebral Cortex 14:1287–1301. Cohen, Y.E., B.E. Russ, S.J. Davis, A.E. Baker, A.L. Ackelson, and R. Nitecki. 2009. A functional role for the ventrolateral prefrontal cortex in non-spatial auditory cognition. Proceedings of the National Academy of Sciences of the United States of America 106:20045–20050. Colombo, M., and C.G. Gross. 1994. Responses of inferior temporal cortex and hippocampal neurons during delayed matching to sample in monkeys (Macaca fascicularis). Behavioral Neuroscience 108:443–455. Colombo, M., H.R. Rodman, and C.G. Gross. 1996. The effects of superior temporal cortex lesions on the processing and retention of auditory information in monkeys (Cebus apella). Journal of Neuroscience. 16:4501–4517. Cooke, D.F., and M.S.A. Graziano. 2004a. Super-flinchers and nerves of steel: Defensive movements altered by chemical manipulation of a cortical motor area. Neuron 43:585–593. Cooke, D.F., and M.S.A. Graziano. 2004b. Sensorimotor integration in the precentral gyrus: Polysensory neurons and defensive movements. Journal of Neurophysiology 91:1648–1660. Cui, Q.N., L. Bachus, E. Knoth, W.E. O’Neill, and G.D. Paige. 2008. Eye position and cross-sensory learning both contribute to prism adaptation of auditory space. Progress in Brain Research 171:265–270. Cui, J., M. Wilke, N.K. Logothetis, D.A. Leopold, and H. Liang. 2009. Visibility states modulate microsaccade rate and direction. Vision Research 49:228–236. Cusick, C.G., B. Seltzer, M. Cola, and E. Griggs. 1995. Chemoarchitectonics and corticocortical terminations within the superior temporal sulcus of the rhesus monkey: Evidence for subdivisions of superior temporal polysensory cortex. Journal of Comparative Neurology 360:513–535. Cynader, M., and N. Berman. 1972. Receptive field organization of monkey superior colliculus. 
Journal of Neurophysiology 35:187–201. Dahl, C.D., N.K. Logothetis, and C. Kayser. 2009. Spatial organization of multisensory responses in temporal association cortex. Journal of Neuroscience. 29:11924–11932. de la Mothe, L.A., S. Blumell, Y. Kajikawa, and T.A. Hackett. 2006a. Cortical connections of the auditory cortex in marmoset monkeys: Core and medial belt regions. Journal of Comparative Neurology 496:27–71. de la Mothe, L.A., S. Blumell, Y. Kajikawa, and T.A. Hackett. 2006b. Thalamic connections of the auditory cortex in marmoset monkeys: Core and medial belt regions. Journal of Comparative Neurology 496:72–96. De Souza, W.C., S. Eifuku, R. Tamura, H. Nishijo, and T. Ono. 2005. Differential characteristics of face neuron responses within the anterior superior temporal sulcus of macaques. Journal of Neurophysiology 94:1251–1566. Dehner, L.R., L.P. Keniston, H.R. Clemo, and M.A. Meredith. 2004. Cross-modal circuitry between auditory and somatosensory areas of the cat anterior ectosylvian sulcal cortex: A ‘new’ inhibitory form of multisensory convergence. Cerebral Cortex 14:387–403. Desimone, R., and C.G. Gross. 1979. Visual areas in the temporal cortex of the macaque. Brain Research 178:363–380.
Diederich, A., and H. Colonius. 2004. Modeling the time course of multisensory interaction in manual and saccadic responses. In Handbook of Multisensory Processes, ed. G. Calvert, C. Spence, and B.E. Stein, 373–394. Cambridge, MA: MIT Press. Disbrow, E., E. Litinas, G.H. Recanzone, J. Padberg, and L. Krubitzer. 2003. Cortical connections of the second somatosensory area and the parietal ventral area in macaque monkeys. Journal of Comparative Neurology 462:382–399. Dobelle, W.H., M.G. Mladejovsky, and J.P. Girvin. 1974. Artificial vision for the blind: Electrical stimulation of visual cortex offers hope for a functional prosthesis. Science 183:440–444. Duffy, C.J., and R.H. Wurtz. 1991. Sensitivity of MST neurons to optic flow stimuli: I. A continuum of response selectivity to large-field stimuli. Journal of Neurophysiology 65:1329–1345. Eaccott, M.J., C.A. Heywood, C.G. Gross, and A. Cowey. 1993. Visual discrimination impairments following lesions of the superior temporal sulcus are not specific for facial stimuli. Neuropsychologia 31:609–619. Eifuku, S., W.C. De Souza, R. Tamura, H. Nishijo, and T. Ono. 2004. Neuronal correlates of face identification in the monkey anterior temporal cortical areas. Journal of Neurophysiology 91:358–371. Engbert, R., and R. Kliegl. 2003. Microsaccades uncover the orientation of covert attention. Vision Research 43:1035–1045. Evans, T.A., S. Howell, and G.C. Westergaard. 2005. Auditory–visual cross-modal perception of communicative stimuli in tufted capuchin monkeys (Cebus apella). Journal of Experimental Psychology. Animal Behavior Processes 31:399–406. Falchier, A., S. Clavagnier, P. Barone, and H. Kennedy. 2002. Anatomical evidence of multimodal integration in primate striate cortex. Journal of Neuroscience. 22:5749–5759. Falchier, A., C.E. Schroeder, T.A. Hackett, et al. 2010. Projection from visual areas V2 and prostriata to caudal auditory cortex in the monkey. Cerebral Cortex 20:1529–1538. Felleman, D.J., and J.H. Kaas. 1984. Receptive field properties of neurons in middle temporal visual area (MT) of owl monkeys. Journal of Neurophysiology 52:488–513. Fogassi, L., V. Gallese, L. Fadiga, F. Luppino, M. Matelli, and G. Rizzolatti. 1996. Coding of peripersonal space in inferior premotor cortex (area F4). Journal of Neurophysiology 76:141–157. Frens, M.A., and A.J. Van Opstal. 1998. Visual–auditory interactions modulate saccade-related activity in monkey superior colliculus. Brain Research Bulletin 46:211–224. Frens, M.A., A.J. Van Opstal, and R.F. Van der Willigen. 1995. Spatial and temporal factors determine auditory– `visual interactions in human saccadic eye movements. Perception & Psychophysics 57:802–816. Fu, K.G., T.A. Johnston, A.S. Shah, et al. 2003. Auditory cortical neurons respond to somatosensory stimulation. Journal of Neuroscience. 23:7510–7515. Fu, K.G., A.S. Shah, M.N. O’Connell, et al. 2004. Timing and laminar profile of eye-position effects on auditory responses in primate auditory cortex. Journal of Neurophysiology 92:3522–3531. Fuster, J.M., M. Bodner, and J.K. Kroger. 2000. Cross-modal and cross-temporal association in neurons of frontal cortex. Nature 405:347–351. Gaffan, D., and S. Harrison. 1991. Auditory–visual associations, hemispheric specialization and temporal– frontal interaction in the rhesus monkey. Brain 114:2133–2144. Ghazanfar, A.A., and N.K. Logothetis. 2003. Facial expressions linked to monkey calls. Nature 423:934–934. Ghazanfar, A.A., and L.R. Santos. 2004. 
Primate brains in the wild: The sensory bases for social interactions. Nature Reviews. Neuroscience 5:603–616. Ghazanfar, A.A., and C.E. Schroeder. 2006. Is neocortex essentially multisensory? Trends in Cognitive Sciences 10:278–285. Ghazanfar, A.A., J.G. Neuhoff, and N.K. Logothetis. 2002. Auditory looming perception in rhesus monkeys. Proceedings of the National Academy of Sciences of the United States of America 99:15755–15757. Ghazanfar, A.A., J.X. Maier, K.L. Hoffman, and N.K. Logothetis. 2005. Multisensory integration of dynamic faces and voices in rhesus monkey auditory cortex. Journal of Neuroscience. 25:5004–5012. Ghazanfar, A.A., K. Nielsen, and N.K. Logothetis. 2006. Eye movements of monkey observers viewing vocalizing conspecifics. Cognition 101:515–529. Ghazanfar, A.A., C. Chandrasekaran, and N.K. Logothetis. 2008. Interactions between the superior temporal sulcus and auditory cortex mediate dynamic face/voice integration in rhesus monkeys. Journal of Neuroscience. 28:4457–4469. Giard, M.H., and F. Peronnet. 1999. Auditory–visual integration during multimodal object recognition in humans: A behavioral and electrophysiological study. Journal of Cognitive Neuroscience 11:473–490. Gibson, J.R., and J.H.R. Maunsell. 1997. Sensory modality specificity of neural activity related to memory in visual cortex. Journal of Neurophysiology 78:1263–1275.
Gifford III, G.W., and Y.E. Cohen. 2005. Spatial and non-spatial auditory processing in the lateral intraparietal area. Experimental Brain Research 162:509–512. Gifford III, G.W., K.A. MacLean, M.D. Hauser, and Y.E. Cohen. 2005. The neurophysiology of functionally meaningful categories: Macaque ventrolateral prefrontal cortex plays a critical role in spontaneous categorization of species-specific vocalizations. Journal of Cognitive Neuroscience 17:1471–1482. Goldman-Rakic, P.S., A.R. Cools, and K. Srivastava. 1996. The prefrontal landscape: Implications of functional architecture for understanding human mentation and the central executive. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences 351:1445–1453. Goodale, M.A., and A.D. Milner. 1992. Separate visual pathways for perception and action. Trends in Neurosciences 15:20–25. Graziano, M.S.A., and S. Gandhi. 2000. Location of the polysensory zone in the precentral gyrus of anesthetized monkeys. Experimental Brain Research 135:259–266. Graziano, M.S.A., X.T. Hu, and C.G. Gross. 1997. Visuospatial properties of ventral premotor cortex. Journal of Neurophysiology 77:2268–2292. Graziano, M.S.A., L.A.J. Reiss, and C.G. Gross. 1999. A neuronal representation of the location of nearby sounds. Nature 397:428–430. Graziano, M.S.A., G.S. Yap, and C.G. Gross. 1994. Coding of visual space by premotor neurons. Science 266:1054–1057. Green, K.P., P.K. Kuhl, A.N. Meltzoff, and E.B. Stevens. 1991. Integrating speech information across talkers, gender, and sensory modality: Female faces and male voices in the McGurk effect. Perception & Psychophysics 50:524–536. Groh, J.M., A.S. Trause, A.M. Underhill, K.R. Clark, and S. Inati. 2001. Eye position influences auditory responses in primate inferior colliculus. Neuron 29:509–518. Grunewald, A., J.F. Linden, and R.A. Andersen. 1999. Responses to auditory stimuli in macaque lateral intraparietal area I. Effects of training. Journal of Neurophysiology 82:330–342. Gu, Y., D.E. Angelaki, and G.C. DeAngelis. 2008. Neural correlates of multisensory cue integration in macaque MSTd. Nature Neuroscience 11:1201–1210. Hackett, T.A. 2002. The comparative anatomy of the primate auditory cortex. In: Primate Audition: Ethology and Neurobiology, ed. Asif A. Ghazanfar, 199–226. Boca Raton, FL: CRC. Hackett, T.A., L.A. de la Mothe, I. Ulbert, G. Karmos, J.F. Smiley, and C.E. Schroeder. 2007. Multisensory convergence in auditory cortex: II. Thalamocortical connections of the caudal superior temporal plane. Journal of Comparative Neurology 502:894–923. Hackett, T.A., T.M. Preuss, and J.H. Kaas. 2001. Architectonic identification of the core region in auditory cortex of macaques, chimpanzees, and humans. Journal of Comparative Neurology 441:197–222. Hackett, T.A, I. Stepniewska, and J.H. Kaas. 1999. Prefrontal connections of the parabelt auditory cortex in macaque monkeys. Brain Research 817:45–58. Hairston, W.D., D.A. Hodges, J.H. Burdette, and M.T. Wallace. 2006. Auditory enhancement of visual temporal order judgment. Neuroreport 17:791–795. Hikosaka, K., E. Iwai, H. Saito, and K. Tanaka. 1988. Polysensory properties of neurons in the anterior bank of the caudal superior temporal sulcus of the macaque monkey. Journal of Neurophysiology 60:1615–1637. Hikosaka, O., M. Sakamoto, and S. Usui. 1989. Functional properties of monkey caudate neurons: II. Visual and auditory responses. Journal of Neurophysiology 61:799–813. Hoffman, K.L., A.A. Ghazanfar, I. Gauthier, and N.K. Logothetis. 2008. 
Category-specific responses to faces and objects in primate auditory cortex. Frontiers in Systems Neuroscience 1:2. Hoffman, K.L., K.M. Gothard, M.C. Schmid, and N.K. Logothetis. 2007. Facial-expression and gaze-selective responses in the monkey amygdala. Current Biology 17:766–772. Ito, S. 1982. Prefrontal activity of macaque monkeys during auditory and visual reaction time tasks. Brain Research 247:39–47. Iversen, S.D., and M. Mishkin. 1973. Comparison of superior temporal and inferior prefrontal lesions on auditory and non-auditory task in rhesus monkeys. Brain Research 55:355–367. Izumi, A., and S. Kojima. 2004. Matching vocalizations to vocalizing faces in chimpanzee (Pan troglodytes). Animal Cognition 7:179–184. Jääskeläinen, I.P., J. Ahveninen, J.W. Belliveau, T. Raij, and M. Sams. 2007. Short-term plasticity in auditory cognition. Trends in Neurosciences 30:653–661. Jay, M.F., and D.L. Sparks. 1984. Auditory receptive fields in primate superior colliculus shift with changes in eye position. Nature 309:345–347. Jones, E.G. 1998. Viewpoint: The core and matrix of thalamic organization. Neuoroscience 85:331–345.
Jordan, K.E., E.M. Brannon, N.K. Logothetis, and A.A. Ghazanfar. 2005. Monkeys match the number of voices they hear to the number of faces they see. Current Biology 15:1034–1038. Joseph, J.P., and P. Barone. 1987. Prefrontal unit activity during a delayed oculomotor task in the monkey. Experimental Brain Research 67:460–468. Kaas, J.H., and T.A. Hackett. 2000. Subdivisions of auditory cortex and processing streams in primates. Proceedings of the National Academy of Sciences of the United States of America 97:11793–11799. Kajikawa, Y., C.E. Schroeder. 2008. Face–voice integration and vocalization processing in the monkey. Abstracts Society for Neuroscience 852.22. Kayser, C., and N.K. Logothetis. 2009. Directed interactions between auditory and superior temporal cortices and their role in sensory integration. Frontiers in Integrative Neuroscience 3:7. Kayser, C.I., C.I. Petkov, M. Augath, and N.K. Logothetis. 2005. Integration of touch and sound in auditory cortex. Neuron 48:373–384. Kayser, C., C.I. Petkov, M. Augath, and N.K. Logothetis. 2007. Functional imaging reveals visual modulation of specific fields in auditory cortex. Journal of Neuroscience 27:1824–1835. Kayser, C., C.I. Petkov, and N.K. Logothetis. 2008. Visual modulation of neurons in auditory cortex. Cerebral Cortex 18:1560–1574. Kayser, C., and N.K. Logothetis. 2009. Directed interactions between auditory and superior temporal cortices and their role in sensory integration. Frontiers in Integrative Neuroscience 3:7. Kayser, C., N.K. Logothetis, and S. Panzeri. 2010. Visual enhancement of the information representation in auditory cortex. Current Biology 20:19–24. Keysers, C., E. Kohler, M.A. Umilta, L. Nanetti, L. Fogassi, and V. Gallese. 2003. Audiovisual mirror neurons and action recognition. Experimental Brain Research 153:628–636. Kikuchi-Yorioka, Y., and T. Sawaguchi. 2000. Parallel visuospatial and audiospatial working memory processes in the monkey dorsolateral prefrontal cortex. Nature Neuroscience 3:1075–1076. Kimura, M. 1992. Behavioral modulation of sensory responses of primate putamen neurons. Brain Research 578:204–214. Knudsen, E.I., and P.F. Knudsen. 1989. Vision calibrates sound localization in developing barn owls. Journal of Neuroscience 9:3306–3313. Kohler, E., C. Keysers, M.A. Umilta, L. Fogassi, V. Gallese, and G. Rizzolatti. 2002. Hearing sounds, understanding actions: Action representation in mirror neurons. Science 297:846–848. Kojima, S., A. Izumi, and M. Ceugniet. 2003. Identification of vocalizers by pant hoots, pant grants and screams in a chimpanzee. Primates 44:225–230. Kondo, H., K.S. Saleem, and J.L. Price. 2003. Differential connections of the temporal pole with the orbital and medial prefrontal networks in macaque monkeys. Journal of Comparative Neurology 465:499–523. Kosmal, A., M. Malinowska, and D.M. Kowalska. 1997. Thalamic and amygdaloid connections of the auditory association cortex of the superior temporal gyrus in rhesus monkey (Macaca mulatta). Acta Neurobiologiae Experimentalis 57:165–188. Kubota, K., M. Tonoike, and A. Mikami. 1980. Neuronal activity in the monkey dorsolateral prefrontal cortex during a discrimination task with delay. Brain Research 183:29–42. Kuraoka, K., and K. Nakamura. 2007. Responses of single neurons in monkey amygdala to facial and vocal emotions. Journal of Neurophysiology 97:1379–1387. Lakatos, P., C.-M. Chen, M. O’Connell, A. Mills, and C.E. Schroeder. 2007. Neuronal oscillations and multisensory interaction in primary auditory cortex. Neuron 53:279–292. 
Lakatos, P., G., Karmos, A.D. Mehta, I. Ulbert, and C.E. Schroeder. 2008. Entrainment of neural oscillations as a mechanism of attentional selection. Science 320:110–113. Lakatos, P., M.N. O’Connell, A. Barczak, A. Mills, D.C. Javitt, and C.E. Schroeder. 2009. The leading sense: Supramodal control of neurophysiological context by attention. Neuron 64:419–430. Lakatos, P., A.S. Shaw, K.H. Knuth, I. Ulbert, G. Karmos, and C.E. Schroeder. 2005. An oscillatory hierarchy controlling neuronal excitability and stimulu processing in the auditory cortex. Journal of Neurophysiology 94:1904–1911. Lehmann, C., M. Herdener, F. Esposito, et al. 2006. Differential patterns of multisensory interactions in core and belt areas of human auditory cortex. Neuroimage 31:294–300. Lehmann, S., and M.M. Murray. 2005. The role of multisensory memories in unisensory object discrimination. Brain Research. Cognitive Brain Research 24:326–334. Leonard, C.M., E.T. Rolls. F.A. Wilson and G.C. Baylis. 1985. Neurons in the amygdala of the monkey with responses selective for faces. Behavioural Brain Research 15:159–176. Levy, R., and P.S. Goldman-Rakic. 2000. Segregation of working memory functions within the dorsolateral prefrontal cortex. Experimental Brain Research 133:23–32.
Lewis, J.W., and D.C. Van Essen. 2000. Corticocortical connections of visual, sensorimotor, and multi modal pro cessing areas in the parietal lobe of the macaque monkey. Journal of Comparative Neurology 428:112–137. Linden, J.F., A. Grunewald, and R.A. Andersen. 1999. Responses to auditory stimuli in macaque lateral intraparietal area: II. Behavioral modulation. Journal of Neurophysiology 82:343–358. Maier, J.X., J.G. Neuhoff, N.K. Logothetis, and A.A. Ghazanfar. 2004. Multisensory integration of looming signals by rhesus monkeys. Neuron 43:177–181. Maier, J.X., C. Chandrasekaran, and A.A. Ghazanfar. 2008. Integration of bimodal looming signals through neuronal coherence in the temporal lobe. Current Biology 18:963–968. Martinez, L., and T. Matsuzawa. 2009. Auditory–visual intermodal matching based on individual recognition in a chimpanzee (Pan troglodytes). Animal Cognition 12:S71–S85. Matsumoto, N., T. Minamimoto, A.M. Graybiel, and M. Kimura. 2001. Neurons in the thalamic CM-Pf complex supply striatal neurons with information about behaviorally significant sensory events. Journal of Neurophysiology 85:960–976. Mazzoni, P., R.P. Bracewell, S. Barash, and R.A. Andersen. 1996. Spatially tuned auditory responses in area LIP of macaques performing delayed memory saccades to acoustic targets. Journal of Neurophysiology 75:1233–1241. McDonald, J.J., W.A. Teder-Sälejärvi, F. Di Russo, and S.A. Hillyard. 2003. Neural substrates of perceptual enhancement by cross-modal spatial attention. Journal of Cognitive Neuroscience 15:10–19. McGurk, H., and J. MacDonald. 1976. Hearing lips and seeing voices. Nature 264:746–748. McNaughton, B.L., F.P. Battagllia, O. Jensen, E.I. Moser, and M.B. Moser. 2006. Path integration and the neural basis of the ‘cognitive map.’ Nature Reviews. Neuroscience 7:663–678. Mehta, A.D., U. Ulbert, and C.E. Schroeder. 2000a. Intermodal selective attention in monkeys: I. Distribution and timing of effects across visual areas. Cerebral Cortex 10:343–358. Mehta, A.D., U. Ulbert, and C.E. Schroeder. 2000b. Intermodal selective attention in monkeys: II. Physiological mechanisms of modulation. Cerebral Cortex 10:359–370. Meredith, M.A., B.L. Allman, L.P. Keniston, and H.R. Clemo. 2009. Auditory influences on non-auditory cortices. Hearing Research 258:64–71. Meredith, M.A., J.W. Nemitz, and B.E. Stein. 1987. Determinants of multisensory integration in superior colliculus neurons: I. Temporal factors. Journal of Neuroscience 7:3215–3229. Meredith, M.A., and B.E. Stein. 1983. Interactions among converging sensory inputs in the superior colliculus. Science 221:389–391. Meyer K., J.T. Kaplan, R. Essec, C. Webber, H. Damasio, and A. Damasio. 2010. Predicting visual stimuli on the basis of activity in auditory cortices. Nature Neuroscience 13:667–668. Miller, J.O. 1982. Divided attention: Evidence for coactivation with redundant signals. Cognitive Psychology 14:247–279 Miller, J., R. Ulrich, and Y. Lanarre. 2001. Locus of the redundant-signals effect in bimodal divided attention: A neurophysiological analysis. Perception & Psychophysics 63:555–562. Mohedano-Moriano, A., P. Pro-Sistiaga, M.M. Arroyo-Jimenez, et al. 2007. Topographical and laminar distribution of cortical input to the monkey entorhinal cortex. Journal of Anatomy 211:250–260. Mohedano-Moriano, A., A. Martinez-Marcos, P. Pro-Sistiaga, et al. 2008. Convergence of unimodal and polymodal sensory input to the entorhinal cortex in the fascicularis monkey. Neuroscience 151:255–271. Molholm, S., W. Ritter, M.M. Murray, D.C. 
Javitt, C.E. Schroeder, and J.J. Foxe. 2002. Multisensory auditory– visual interactions during early sensory processing in humans: A high-density electrical mapping study. Brain Research. Cognitive Brain Research 14, 115–128. Molholm, S., A. Martinez, M. Shpaner, and J.J. Foxe. 2007. Object-based attention is multisensory: Co-activation of an object’s representations in ignored sensory modalities. European Journal of Neuroscience 26: 499–509. Mullette-Gilman, O.A., Y.E. Cohen, and J.M. Groh. 2005. Eye-centered, head-centered, and complex coding of visual and auditory targets in the intraparietal sulcus. Journal of Neurophysiology 94:2331–2352. Mullette-Gilman, O.A., Y.E. Cohen, and J.M. Groh. 2009. Motor-related signals in the intraparietal cortex encode locations in a hybrid, rather than eye-centered reference frame. Cerebral Cortex 19:1761–1775. Murata, A., L. Fadiga, L. Fogassi, V. Gallese, V. Raos, and G. Rizzolatti. 1997. Object representation in the ventral premotor cortex (area F5) of the monkey. Journal of Neurophysiology 78:2226–2230. Murray, E.A., and D. Gaffan. 1994. Removal of the amygdala plus subjacent cortex disrupts the retention of both intramodal and crossmodal associative memories in monkeys. Behavioral Neuroscience 108:494–500. Murray, E.A., and B.J. Richmond. 2001. Role of perirhinal cortex in object perception, memory, and associations Current Opinion in Neurobiology 11:188–193.
Murray, M.M., C.M. Michel, R.G. de Peralta, et al. 2004. Rapid discrimination of visual and multisensory memories revealed by electrical neuroimaging. Neuroimage 21:125–135. Murray, M.M., J.J. Foxe, and G.R. Wylie. 2005. The brain uses single-trial multisensory memories to discriminate without awareness. Neuroimage 27:473–478. Musacchia, G., M. Sams, T. Nicol, and N. Kraus. 2006. Seeing speech affects acoustic information processing in the human brainstem. Experimental Brain Research 168:1–10. Musacchia, G., and C.E. Schroeder. 2009. Neuronal mechanisms, response dynamics and perceptual functions of multisensory interactions in auditory cortex. Hearing Research 258:72–79. Nager, W., K. Estorf, and T.F. Münte. 2006. Crossmodal attention effects on brain responses to different stimulus classes. BMC Neuroscience 7:31. Navarra, J., A. Alsius, S. Soto-Faraco, and C. Spence. 2010. Assessing the role of attention in the audiovisual integration of speech. Information Fusion 11:4–11. Neal, J.W., R.C. Pearson, and T.P. Powell. 1990. The connections of area PG, 7a, with cortex in the parietal, occipital and temporal lobes of the monkey. Brain Research 532:249–264. Nelissen, K., W. Vanduffel, and G.A. Orban. 2006. Charting the lower superior temporal region, a new motionsensitive region in monkey superior temporal sulcus. Journal of Neuroscience 26:5929–5947. Newman, J.D., and D.F. Lindsley. 1976. Single unit analysis of auditory processing in squirrel monkey frontal cortex. Experimental Brain Research 25:169–181. Nishijo, H., T. Ono, and H. Nishino. 1988a. Topographic distribution of modality-specific amygdalar neurons in alert monkey. Journal of Neuroscience 8:3556–3569. Nishijo, H., T. Ono, and H. Nishino. 1988b. Single neuron responses in amygdala of alert monkey during complex sensory stimulation with affective significance. Journal of Neuroscience 8:3570–3583. Nyberg, L., R. Habib, A.R. McIntosh, and E. Tulving. 2000. Reactivation of encoding-related brain activity during memory retrieval. Proceedings of the National Academy of Sciences of the United States of America 97:11120–11124. Ono, T., K. Nakamura, H. Nishijo, and S. Eifuku. 1993. Monkey hippocampal neurons related to spatial and nonspatial functions. Journal of Neurophysiology 70:1516–1529. Oram, M.W., and D.I. Perrett. 1996. Integration of form and motion in the anterior superior temporal polysensory area (STPa) of the macaque monkey. Journal of Neurophysiology 76:109–129. Oram, M.W., D.I. Perrett, and J.K. Hietanen. 1993. Directional tuning of motion-sensitive cells in the anterior superior temporal polysensory area of the macaque. Experimental Brain Research 97:274–294. Padberg, J., B. Seltzer, and C.G. Cusick. 2003. Architectonics and cortical connections of the upper bank of the superior temporal sulcus in the rhesus monkey: An analysis in the tangential plane. Journal of Comparative Neurology 467:418–434. Padberg, J., E. Disbrow, and L. Krubitzer. 2005. The organization and connections of anterior and posterior parietal cortex in titi monkeys: Do new world monkeys have an area 2? Cerebral Cortex 15:1938–1963. Parr, L.A., E. Hecht, S.K. Barks, T.M. Preuss, and J.R. Votaw. 2009. Face processing in the chimpanzee brain. Current Biology 19:50–53. Partan, S.R. 2002. Single and multichannel signal composition: Facial expressions and vocalizations of rhesus macaques (Macaca mulatta). Behavior 139:993–1027. Perrett, D.I., E.T. Rolls, and W. Caan. 1982. Visual neurones responsive to faces in the monkey temporal cortex. 
Experimental Brain Research 47:329–342. Perrott, D.R., K. Saberi, K. Brown, and T.Z. Strybel. 1990. Auditory psychomotor coordination and visual search performance. Perception & Psychophysics 48:214–226. Petkov, C.I., C. Kayser, T. Steudel, K. Whittingstall, M. Augath, and N.K. Logothetis. 2008. A voice region in the monkey brain. Nature Neuroscience 11:367–374. Petrides, M., and D.N. Pandya. 2002. Comparative cytoarchitectonic analysis of the human and the macaque ventrolateral prefrontal cortex and corticocortical connection patterns in the monkey. European Journal of Neuroscience 16:291–310. Petrides, M., and D.N. Pandya. 2009. Distinct parietal and temporal pathways to the homologues of Broca’s area in the monkey. PLoS Biology 7:e1000170. Phelps, E.A., and J.E. LeDoux. 2005. Contributions of the amygdala to emotion processing: From animal models to human behavior. Neuron 48:175–187. Pinsk, M.A., K. DeSimone, T. Moore, C.G. Gross, and S. Kastner. 2005. Representations of faces and body parts in macaque temporal cortex: A functional MRI study. Proceedings of the National Academy of Sciences of the United States of America 102:6996–7001. Poremba, A., R.C. Saunders, A.M. Crane, M. Cook, L. Sokoloff, and M. Mishkin. 2003. Functional mapping of the primate auditory system. Science 299:568–572.
Porter, K.K., R.R. Metzger, and J.M. Groh. 2007. Visual- and saccade-related signals in the primate inferior colliculus. Proceedings of the National Academy of Sciences of the United States of America 104:17855–17860. Posner, M.I., C.R.R. Snyder, and D.J. Davidson. 1980. Attention and the detection of signals. Journal of Experimental Psychology. General 109:160–174. Raab, D.H. 1962. Statistical facilitation of simple reaction times. Transactions of the New York Academy of Sciences 24:574–590. Rahne, T., and M. Böckmann-Barthel. 2009. Visual cues release the temporal coherence of auditory objects in auditory scene analysis. Brain Research 1300:125–134. Ramos-Estebanez, C., L.B. Merabet, K. Machii, et al. 2007. Visual phosphene perception modulated by subthreshold crossmodal sensory stimulation. Journal of Neuroscience 27:4178–4181. Rao, S.C., G. Rainer, and E.K. Miller. 1997. Integration of what and where in the primate prefrontal cortex. Science 276:821–824. Rauschecker, J.P., and B. Tian. 2000. Mechanisms and streams for processing of “what” and “where” in auditory cortex. Proceedings of the National Academy of Sciences of the United States of America 97:11800–11806. Rauschecker, J.P., B. Tian, and M. Hauser. 1995. Processing of complex sounds in the macaque nonprimary auditory cortex. Science 268:111–114. Rauschecker, J.P., and L.R. Harris. 1989. Auditory and visual neurons in the cat’s superior colliculus selective for the direction of apparent motion stimuli. Brain Research 490:56–63. Recanzone, G.H., D.C. Guard, M.L. Phan, and T.K. Su. 2000. Correlation between the activity of single auditory cortical neurons and sound-localization behavior in the macaque monkey. Journal of Neurophysiology 83:2723–2739. Ringo, J.L., and S.G. O’Neill. 1993. Indirect inputs to ventral temporal cortex of monkey: The influence on unit activity of alerting auditory input, interhemispheric subcortical visual input, reward, and the behavioral response. Journal of Neurophysiology 70:2215–2225. Rizzolatti, G., and L. Craighero. 2004. The mirror-neuron system. Annual Review of Neuroscience 27:169–192. Rizzolatti, G., L. Fadiga, V. Gallese, and L. Fogassi. 1996. Premotor cortex and the recognition of motor actions. Brain Research. Cognitive Brain Research 3:131–141. Rockland, K.S., and H. Ojima. 2003. Multisensory convergence in calcarine visual areas in macaque monkey. International Journal of Psychophysiology 50:19–26. Rolls, E.T., H.D. Critchley, A.S. Browning, and K. Inoue. 2006. Face-selective and auditory neurons in the primate orbitofrontal cortex. Experimental Brain Research 170:74–87. Romanski, L.M., B.B. Averbeck, and M. Diltz. 2005. Neural representation of vocalizations in the primate ventrolateral prefrontal cortex. Journal of Neurophysiology 93:734–747. Romanski, L.M., J.F. Bates, and P.S. Goldman-Rakic. 1999a. Auditory belt and parabelt projections to the prefrontal cortex in the rhesus monkey. Journal of Comparative Neurology 403:141–157. Romanski, L.M., and P.S. Goldman-Rakic. 2002. An auditory domain in primate prefrontal cortex. Nature Neuroscience 5:15–16. Romanski, L.M., B. Tian, J. Fritz, M. Mishkin, P.S. Goldman-Rakic, and J.P. Rauschecker. 1999b. Dual streams of auditory afferents target multiple domains in the primate prefrontal cortex. Nature Neuroscience 2:1131–1136. Romei, V., M.M. Murray, L.B. Merabet, and G. Thut. 2007. Occipital transcranial magnetic stimulation has opposing effects on visual and auditory stimulus detection: Implications for multisensory interactions. 
Journal of Neuroscience 27:11465–11472. Romei, V., M.M. Murray, C. Cappe, and G. Thut. 2009. Preperceptual and stimulus-selective enhancement of low-level human visual cortex excitability by sounds. Current Biology 19:1799–1805. Russ, B.E., A.L. Ackelson, A.E. Baker, and Y.E. Cohen. 2008. Coding of auditory-stimulus identity in the auditory non-spatial processing stream. Journal of Neurophysiology 99:87–95. Saleem, K.S., W. Suzuki, K. Tanaka, and T. Hashikawa. 2000. Connections between anterior inferotemporal cortex and superior temporal sulcus regions in the macaque monkey. Journal of Neuroscience 20:5083–5101. Saleem, K.S., H. Kondo, and J.L. Price. 2008. Complementary circuits connecting the orbital and medial prefrontal networks with the temporal, insular, and opercular cortex in the macaque monkey. Journal of Comparative Neurology 506:659–693. Sams, M., R. Aulanko, M. Hämäläinen, et al. 1991. Seeing speech: Visual information from lip movements modifies activity in the human auditory cortex. Neuroscience Letters 127:141–145. Santangelo V., and C. Spence. 2009. Crossmodal exogenous orienting improves the accuracy of temporal order judgments. Experimental Brain Research 194:577–586.
Santos-Benitez, H., C.M. Magarinos-Ascone, and E. Garcia-Austt. 1995. Nucleus basalis of Meynert cell responses in awake monkeys. Brain Research Bulletin 37:507–511. Schiff, W., J.A. Caviness, and J.J. Gibson. 1962. Persistent fear responses in rhesus monkeys to the optical stimulus of “looming.” Science 136:982–983. Schlack, A., S.J. Sterbing-D’Angelo, K. Hartung, K.-P. Hoffmann, and F. Bremmer. 2005. Multisensory space representations in the macaque ventral intraparietal area. Journal of Neuroscience 25:4616–4625. Schmolesky, M.T., Y. Wang, D.P. Hanes, et al. 1998. Signal timing across the macaque visual system. Journal of Neurophysiology 79:3272–3278. Schroeder, C.E., and J.J. Foxe. 2002. The timing and laminar profile of converging inputs to multisensory areas of the macaque neocortex. Brain Research. Cognitive Brain Research 14:187–198. Schroeder, C.E., and J.J. Foxe. 2005. Multisensory contributions to low-level, ‘unisensory’ processing. Current Opinion in Neurobiology 15:454–458. Schroeder, C.E., and P. Lakatos. 2009. Low-frequency neuronal oscillations as instruments of sensory selection. Trends in Neurosciences 32:9–18. Schroeder, C.E., P. Lakatos, Y. Kajikawa, S. Partan, and A. Puce. 2008. Neuronal oscillations and visual amplification of speech. Trends in Cognitive Sciences 12:106–113. Schroeder, C.E., R.W. Lindsley, C. Specht, A. Marcovici, J.F. Smilery, and D.C. Javitt. 2001. Somatosensory input to auditory association cortex in the macaque monkey. Journal of Neurophysiology 85:1322–1327. Seltzer, B., M.G. Cola, C. Gutierrez, M. Massee, C. Weldon, and C.G. Cusick. 1996. Overlapping and nonoverlapping cortical projections to cortex of the superior temporal sulcus in the rhesus monkey: Double anterograde tracer studies. Journal of Comparative Neurology 370:173–190. Seltzer, B., and D.N. Pandya. 1978. Afferent cortical connections and architectonics of the superior temporal sulcus and surrounding cortex in the rhesus monkey. Brain Research 149:1–24. Seltzer, B., and D.N. Pandya. 1989. Frontal lobe connections of the superior temporal sulcus in the rhesus monkey. Journal of Comparative Neurology 281:97–113. Seltzer, B., and D.N. Pandya. 1994. Parietal, temporal, and occipital projections to cortex of the superior temporal sulcus in the rhesus monkey: A retrograde tracer study. Journal of Comparative Neurology 343:445–463. Sherman, S.M., and R.W. Guillery. 2002. The role of the thalamus in the flow of information to the cortex. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences 357:1695–1708. Sliwa, J., J.-R. Duhamel, O. Paxsalis, and S.C. Wirth. 2009. Cross-modal recognition of identity in rhesus monkeys for familiar conspecifics and humans. Abstracts Society for Neuroscience 684.14. Smiley, J.F., T.A. Hackett, I. Ulbert, et al. 2007. Multisensory convergence in auditory cortex, I. Cortical connections of the caudal superior temporal plane in macaque monkeys. Journal of Comparative Neurology 502:894–923. Soto-Faraco, S., and A. Alsius. 2009. Deconstructing the McGurk–MacDonald illusion. Journal of Experimental Psychology. Human Perception and Performance 35:580–587. Squire, L.R., C.E.L. Stark, and R.E. Clark. 2004. The medial temporal lobe. Annual Review of Neuroscience 27:279–306. Starr, A., and M. Don. 1972. Responses of squirrel monkey (Samiri sciureus) medial geniculate units to binaural click stimuli. Journal of Neurophysiology 35:501–517. Stein, B.E., and M.A. Meredith. 1993. The Merging of the Senses. Cambridge, MA: MIT Press. 
Stein, B.E., W. Jiang, M.T. Wallace, and T.R. Stanford. 2001. Nonvisual influences on visual-information processing in the superior colliculus. Progress in Brain Research 134:143–156. Stein, B.E., M.W. Wallace, T.R. Stanford, and W. Jiang. 2002. Cortex governs multisensory integration in the midbrain. Neuroscientist 8:306–314. Stein, B.E., and T.R. Stanford. 2008. Multisensory integration: Current issues from the perspective of the single neuron. Nature Reviews. Neuroscience 9:255–266. Stevenson, R.A., and T.W. James. 2009. Audiovisual integration in human superior temporal sulcus: Inverse effectiveness and the neural processing of speech and object recognition. Neuroimage 44:1210–1223. Stricane, B., R.A. Andersen, and P. Mazzoni. 1996. Eye-centered, head-centered, and intermediate coding of remembered sound locations in area LIP. Journal of Neurophysiology 76:2071–2076. Sugihara, T., M.D. Diltz, B.B. Averbeck, and L.M. Romanski. 2006. Integration of auditory and visual communication information in the primate ventrolateral prefrontal cortex. Journal of Neuroscience 26:11138–11147. Sumby, W.H., and I. Pollack. 1954. Visual contribution to speech intelligibility in noise. Journal of the Acoustical Society of America 26:212–215.
Suzuki, W.A., and D.G. Amaral. 1994. Perirhinal and parahippocampal cortices of the macaque monkey: Cortical afferents. Journal of Comparative Neurology 350:497–533. Talsma, D., D. Senkowski, and M.G. Woldorff. 2009. Intermodal attention affects the processing of the temporal alignment of audiovisual stimuli. Experimental Brain Research 198:313–328. Tamura, R., T. Ono, M. Fukuda, and K. Nakamura. 1992. Spatial responsiveness of monkey hippocampal neurons to various visual and auditory stimuli. Hippocampus 2:307–322. Tanaka, K., K. Hikosaka, H. Saito, M. Yukie, Y. Fukada, and E. Iwai. 1986. Analysis of local and wide-field movements in the superior temporal visual areas of the macaque monkey. Journal of Neuroscience 6:134–144. Tanibuchi I., and P.S. Goldman-Rakic. 2003. Dissociation of spatial-, object-, and sound-coding neurons in the mediodorsal nucleus of the primate thalamus. Journal of Neurophysiology 89:1067–1077. Teder-Sälejärvi, W.A., T.F. Münte, F. Sperlich, and S.A. Hillyard. 1999. Intra-modal and cross-modal spatial attention to auditory and visual stimuli. An event-related brain potential study. Brain Research. Cognitive Brain Research 8:327–343. Théoret, H., L. Merabet, and A. Pascual-Leone. 2004. Behavioral and neuroplastic changes in the blind: Evidence for functionally relevant cross-modal interactions. Journal of Physiology, Paris 98:221–233. Tian, B., D. Reser, A. Durham, A. Kustov, and J.P. Rauschecker. 2001. Functional specialization in rhesus monkey auditory cortex. Science 292:290–293. Tsao, D.Y., W.A. Freiwald, R.B.H. Tootell, and M.S. Livingstone. 2006. A cortical region consisting entirely of face-selective cells. Science 311:670–674. Tsao, D.Y., S. Moeller, and W.A. Freiwald. 2008a. Comparing face patch systems in macaques and humans. Proceedings of the National Academy of Sciences of the United States of America 105:19514–19519. Tsao, D.Y., N. Schweers, S. Moeller, and W.A. Freiwald. 2008b. Patches of face-selective cortex in the macaque frontal lobe. Nature Neuroscience 11:877–879. Turner, B.H., M. Mishkin, and M. Knapp. 1980. Organization of the amygdalopetal projections from modalityspecific cortical association areas in the monkey. Journal of Comparative Neurology 191:515–543. Ungerleider, L.G., and M. Mishkin. 1982. Two cortical visual systems. In Analysis of Visual Behavior, ed. D.J. Ingle, M.A. Goodale, and R.J.W. Mansfield, 549–586. Cambridge: MIT Press. Ungerleider, L.G., S.M. Courtney, and J.V. Haxby. 1998. A neural system for human vision working memory. Proceedings of the National Academy of Sciences of the United States of America 95:883–890. Updyke, B.V. 1974. Characteristics of unit responses in superior colliculus of the cebus monkey. Journal of Neurophysiology 37:896–909. Vaadia, E., D.A. Benson, R.D. Hienz, and M.H. Goldstein Jr. 1986. Unit study of monkey frontal cortex: Active localization of auditory and of visual stimuli. Journal of Neurophysiology 56:934–952. van Attenveldt, N., A. Roebroeck, and R. Goebel. 2009. Interaction of speech and script in human auditory cortex: Insights from neuro-imaging and effective connectivity. Hearing Research 258:152–164. Vatakis, A., A.A. Ghazanfar, and C. Spence. 2008. Facilitation of multisensory integration by the “unity effect” reveals that speech is special. Journal of Vision 8(9):14. von Kriegstein, K., and A.-L. Giraud. 2006. Implicit multisensory associations influence voice recognition. PLoS Biology 4:e326. Wallace, M.T., L.K. Wilkinson, and B.E. Stein. 1996. 
Representation and integration of multiple sensory inputs in primate superior colliculus. Journal of Neurophysiology 76:1246–1266. Wang, Y., S. Celebrini, Y. Trotter, and P. Barone. 2008. Visuo-auditory interactions in the primary visual cortex of the behaving monkey: Electrophysiological evidence. BMC Neuroscience 9:79. Watanabe, M. 1992. Frontal units of the monkey coding the associative significance of visual and auditory stimuli. Experimental Brain Research 89:233–247. Watanabe, J., and E. Iwai. 1991. Neuronal activity in visual, auditory and polysensory areas in the monkey temporal cortex during visual fixation task. Brain Research Bulletin 26:583–592. Welch, R., and D. Warren. 1986. Intersensory interactions. In Handbook of Perception and Human Performance, ed. K.R. Boff, L. Kaufman, and J.P. Thomas, 21–36. New York: Wiley. Werner-Reiss, U., K.A. Kelly, A.S. Trause, A.M. Underhill, and J.M. Groh. 2006. Eye position affects activity in primary auditory cortex of primates. Current Biology 13:554–562. Wheeler, M.E., S.E. Petersen, and R.L. Buckner. 2000. Memory’s echo: Vivid remembering reactivates sensory-specific cortex. Proceedings of the National Academy of Sciences of the United States of America 97:11125–11129. Wilson, F.A.W., and E.T. Rolls. 1990. Neuronal responses related to reinforcement in the primate basal forebrain. Brain Research 509:213–231.
Wilson, F.A.W., S.P.O. Scalaidhe, and P.S. Goldman-Rakic. 1993. Dissociation of object and spatial processing in primate prefrontal cortex. Science 260:1955–1958. Wollberg, Z., and J. Sela. 1980. Frontal cortex of the awake squirrel monkey: Responses of single cells to visual and auditory stimuli. Brain Research 198:216–220. Woods, T.M., and G.H. Recanzone. 2004. Visually induced plasticity of auditory spatial perception in macaques. Current Biology 14:1559–1564. Wurtz, R.H., and J.E. Albano. 1980. Visual–motor function of the primate superior colliculus. Annual Review of Neuroscience 3:189–226. Yeterian, E.H., and D.N. Pandya. 1989. Thalamic connections of the cortex of the superior temporal sulcus in the rhesus monkey. Journal of Comparative Neurology 282:80–97. Zangenehpour, S., A.A. Ghazanfar, D.J. Lewkowicz, and R.J. Zatorre. 2009. Heterochrony and cross-species intersensory matching by infant vervet monkeys. PLoS ONE 4:e4302.
6
Multisensory Influences on Auditory Processing Perspectives from fMRI and Electrophysiology Christoph Kayser, Christopher I. Petkov, Ryan Remedios, and Nikos K. Logothetis
CONTENTS
6.1 Introduction
6.2 The Where and How of Sensory Integration
6.3 Using Functional Imaging to Localize Multisensory Influences in Auditory Cortex
6.4 Multisensory Influences along the Auditory Processing Stream
6.5 Multisensory Influences and Individual Neurons
6.6 Multisensory Influences and Processing of Communication Signals
6.7 Conclusions
References
6.1 INTRODUCTION Traditionally, perception has been described as a modular function, with the different sensory modalities operating as independent and separated processes. Following this view, sensory integration supposedly occurs only after sufficient unisensory processing and only in higher association cortices (Jones and Powell 1970; Ghazanfar and Schroeder 2006). Studies in the past decade, however, promote a different view, and demonstrate that the different modalities interact at early stages of processing (Kayser and Logothetis 2007; Schroeder and Foxe 2005; Foxe and Schroeder 2005). A good model for this early integration hypothesis has been the auditory cortex, where multisensory influences from vision and touch have been reported using a number of methods and experimental paradigms (Kayser et al. 2009c; Schroeder et al. 2003; Foxe and Schroeder 2005). In fact, anatomical afferents are available to provide information about nonacoustic stimuli (Rockland and Ojima 2003; Cappe and Barone 2005; Falchier et al. 2002) and neuronal responses showing cross-modal influences have been described in detail (Lakatos et al. 2007; Kayser et al. 2008, 2009a; Bizley et al. 2006). These novel insights, together with the traditional notion that multisensory processes are more prominent in higher association regions, suggest that sensory integration is a rather distributed process that emerges over several stages. Of particular interest in the context of sensory integration are stimuli with special behavioral significance, such as sights and sounds related to communication (Campanella and Belin 2007; Petrini et al. 2009; Ghazanfar and Logothetis 2003; von Kriegstein and Giraud 2006; von Kriegstein et al. 2006). Indeed, a famous scenario used to exemplify sensory integration—the cocktail party—concerns exactly this: when in a loud and noisy environment, we can better understand a person talking to us when we observe the movements of his/her lips at the same time (Sumby and Pollack
1954; Ross et al. 2007). In this situation, the visual information about lip movements enhances the (perceived) speech signal, hence providing an example of how visual information can enhance auditory perception. However, as for many psychophysical phenomena, the exact neural substrate mediating the sensory integration underlying this behavioral benefit remains elusive. In this review, we discuss some of the results on early multisensory influences on auditory processing, and provide evidence that sensory integration occurs in a distributed fashion across several processing stages. In particular, we discuss some of the methodological aspects relevant for studies seeking to localize and characterize multisensory influences, and emphasize some of the recent results pertaining to speech and voice integration.
6.2 THE WHERE AND HOW OF SENSORY INTEGRATION To understand how the processing of acoustic information benefits from the stimulation of other modalities, we need to investigate "where" along auditory pathways influences from other modalities occur, and "how" they affect the neural representation of the sensory environment. Notably, the questions of "where" and "how" address different scales and levels of organization. Probing the "where" question requires the observation of sensory responses at many stages of processing, and hence a large spatial field of view. This is, for example, provided by functional imaging, which can assess signals related to neural activity in multiple brain regions at the same time. Probing the "how" question, in contrast, requires an investigation of the detailed neural representation of sensory information in localized regions of the brain. Given our current understanding of neural information processing, this level is best addressed by electrophysiological recordings that assess the responses of individual neurons, or small populations thereof, at the same time (Donoghue 2008; Kayser et al. 2009b; Quian Quiroga 2009). These two approaches, functional imaging (especially the functional magnetic resonance imaging (fMRI) blood oxygenation level-dependent (BOLD) signal) and electrophysiology, complement each other not only with regard to the sampled spatiotemporal dimensions, but also with regard to the kind of neural activity that is seen by the method. Although electrophysiological methods sample neural responses at the timescale of individual action potentials (millisecond precision) and the spatial scale of micrometers, functional imaging reports an aggregate signal derived from (subthreshold) responses of millions of neurons sampled over several hundreds of micrometers and hundreds of milliseconds (Logothetis 2002, 2008; Lauritzen 2005). In fact, because the fMRI-BOLD signal is only indirectly related to neuronal activity, it is difficult, at least at the moment, to make detailed inferences about neuronal responses from imaging data (Leopold 2009). As a result, both methods provide complementary evidence on sensory integration. In addition to defining methods needed to localize and describe sensory interactions, operational criteria are required to define what kind of response properties are considered multisensory influences. At the level of neurons, many criteria have been derived from seminal work on the superior colliculus by Stein and Meredith (1993). Considering an auditory neuron, as an example, visual influences would be assumed if the response to a bimodal (audiovisual) stimulus differs significantly from the unimodal (auditory) response. Although this criterion can be easily implemented as a statistical test to search for multisensory influences, it is, by itself, not enough to warrant the conclusion that an observed process merits the label "sensory integration." At the level of behavior, sensory integration is usually assumed if the bimodal sensory stimulus leads to a behavioral gain compared with the unimodal stimulus (Ernst and Bülthoff 2004). Typical behavioral gains are faster responses, higher detection rates, or improved stimulus discriminability. Often, these behavioral gains are highest when individual unimodal stimuli are least effective in eliciting responses, a phenomenon known as the principle of inverse effectiveness.
In addition, different unimodal stimuli are only integrated when they are perceived to originate from the same source, i.e., when they occur coincident in space and time. Together, these two principles provide additional criteria to decide whether a particular neuronal process might be related to sensory integration (Stein 1998, 2008).
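As a concrete illustration, the following sketch applies this statistical criterion to trial-wise spike counts from a hypothetical auditory neuron, comparing the bimodal (AV) response against the unimodal auditory response with a permutation test; the function, thresholds, and counts are illustrative assumptions rather than the procedure of any study cited above.

```python
import numpy as np

def audiovisual_influence(a_counts, av_counts, n_perm=10000, alpha=0.05, seed=0):
    """Test whether adding a visual stimulus changes an auditory neuron's response.

    Implements the criterion described in the text: the bimodal (AV) response is
    compared with the unimodal auditory (A) response; a significant increase is
    labeled enhancement, a significant decrease suppression.
    """
    rng = np.random.default_rng(seed)
    a_counts, av_counts = np.asarray(a_counts), np.asarray(av_counts)
    observed = av_counts.mean() - a_counts.mean()

    # Permutation test: shuffle trial labels between the A and AV conditions.
    pooled = np.concatenate([av_counts, a_counts])
    n_av = len(av_counts)
    null = np.empty(n_perm)
    for i in range(n_perm):
        perm = rng.permutation(pooled)
        null[i] = perm[:n_av].mean() - perm[n_av:].mean()
    p = (np.abs(null) >= abs(observed)).mean()

    if p >= alpha:
        return "no significant visual influence", p
    return ("multisensory enhancement" if observed > 0 else "multisensory suppression"), p

# Hypothetical spike counts per trial for an auditory cortex neuron
a  = [4, 5, 3, 6, 4, 5, 5, 4, 6, 3]
av = [8, 7, 9, 8, 10, 7, 9, 8, 8, 9]
print(audiovisual_influence(a, av))
```

By itself, such a test only establishes that the two channels interact; the behavioral-gain and spatiotemporal-coincidence principles described above would still need to be checked before labeling the effect sensory integration.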
This statistical criterion, in conjunction with the verification of these principles, has become the standard approach to detect neural processes related to sensory integration. In addition, recent work has introduced more elaborate concepts derived from information theory and stimulus decoding. Such methods can be used to investigate whether neurons indeed become more informative about the sensory stimuli, and whether they allow better stimulus discrimination in multisensory compared to unisensory conditions (Bizley et al. 2006; Bizley and King 2008; Kayser et al. 2009a).
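The decoding-style logic can be illustrated in a minimal form with a leave-one-out nearest-class-mean decoder applied to simulated population responses; the decoder, the Poisson response model, and all parameters below are illustrative assumptions, not the information-theoretic analyses used in the cited studies.

```python
import numpy as np

def loo_nearest_mean_accuracy(X, y):
    """Leave-one-out decoding accuracy with a nearest-class-mean classifier.

    X: (n_trials, n_features) response matrix (e.g., spike counts per neuron).
    y: (n_trials,) stimulus labels.
    """
    X, y = np.asarray(X, float), np.asarray(y)
    correct = 0
    for i in range(len(y)):
        train = np.arange(len(y)) != i
        classes = np.unique(y[train])
        means = np.stack([X[train & (y == c)].mean(axis=0) for c in classes])
        pred = classes[np.argmin(np.linalg.norm(means - X[i], axis=1))]
        correct += pred == y[i]
    return correct / len(y)

# Hypothetical responses of a small population to two sounds,
# recorded with (AV) and without (A) an accompanying visual stimulus.
rng = np.random.default_rng(1)
labels = np.repeat([0, 1], 20)
lam_A  = np.where(labels[:, None] == 0, 5.0, 6.0) * np.ones(8)
lam_AV = np.where(labels[:, None] == 0, 5.0, 8.0) * np.ones(8)
resp_A  = rng.poisson(lam_A)
resp_AV = rng.poisson(lam_AV)

print("A-only decoding accuracy :", loo_nearest_mean_accuracy(resp_A, labels))
print("AV decoding accuracy     :", loo_nearest_mean_accuracy(resp_AV, labels))
```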
6.3 USING FUNCTIONAL IMAGING TO LOCALIZE MULTISENSORY INFLUENCES IN AUDITORY CORTEX Functional imaging is by far the most popular method to study the cortical basis of sensory integration, and many studies report multisensory interactions between auditory, visual, and somatosensory stimulation in association cortices of the temporal and frontal lobes (Calvert 2001). In addition, a number of studies reported that visual or somatosensory stimuli activate regions in close proximity to the auditory cortex or enhance responses to acoustic stimuli in these regions (Calvert and Campbell 2003; Calvert et al. 1997, 1999; Pekkola et al. 2005; Lehmann et al. 2006; van Atteveldt et al. 2004; Schurmann et al. 2006; Bernstein et al. 2002; Foxe et al. 2002; Martuzzi et al. 2006; van Wassenhove et al. 2005). Together, these studies promoted the notion of early multisensory interactions in the auditory cortex. However, the localization of multisensory influences is only as good as the localization of those structures relative to which the multisensory influences are defined. To localize multisensory effects to the auditory core (primary) or belt (secondary) fields, one needs to be confident about the location of these auditory structures in the respective subjects. Yet, this can be a problem given the small scale and variable position of auditory fields in individual subjects (Kaas and Hackett 2000; Hackett et al. 1998; Fullerton and Pandya 2007; Clarke and Rivier 1998; Chiry et al. 2003). One way to overcome this would be to first localize individual areas in each subject and to analyze functional data within these regions of interest. Visual studies often follow this strategy by mapping visual areas using retinotopically organized stimuli, which exploit the well-known functional organization of the visual cortex (Engel et al. 1994; Warnking et al. 2002). Auditory studies, in principle, could exploit a similar organization of auditory cortex, known as tonotopy, to define individual auditory fields (Rauschecker 1998; Rauschecker et al. 1995; Merzenich and Brugge 1973). In fact, electrophysiological studies have demonstrated that several auditory fields contain an ordered representation of sound frequency, with neurons preferring similar sound frequencies appearing in clusters and forming continuous bands encompassing the entire range from low to high frequencies (Merzenich and Brugge 1973; Morel et al. 1993; Kosaki et al. 1997; Recanzone et al. 2000). In addition, neurons in the auditory core and belt show differences in their preferences to narrow and broadband sounds, providing a second feature to distinguish several auditory fields (Rauschecker 1998; Rauschecker et al. 1997) (Figure 6.1a). Yet, although these properties in principle provide characteristics to differentiate individual auditory fields, this has proven surprisingly challenging in human fMRI studies (Wessinger et al. 2001; Formisano et al. 2003; Talavage et al. 2004). To sidestep these difficulties, we exploited high-resolution imaging facilities in combination with a model system for which there exists considerably more prior knowledge about the organization of the auditory cortex: the macaque monkey. This model system allows imaging voxel sizes on the order of 0.5 × 0.5 mm, whereas conventional human fMRI studies operate on a resolution of 3 × 3 mm (Logothetis et al. 1999). 
Much of the evidence about the anatomical and functional structure of the auditory cortex originates from this model system, providing important a priori information about the expected organization (Kaas and Hackett 2000; Hackett et al. 1998; Rauschecker and Tian 2004; Recanzone et al. 2000). Combining this a priori knowledge with high-resolution imaging systems as well as optimized data acquisition for auditory paradigms, we were able to obtain a tonotopic functional parcellation in individual animals (Petkov et al. 2006, 2009). By comparing the activation to stimulation with sounds of different frequency compositions, we obtained a smoothed
FIGURE 6.1 (See color insert.) Mapping individual auditory fields using fMRI. (a) Schematic of organization of monkey auditory cortex. Three primary auditory fields (core region) are surrounded by secondary fields (belt region) as well as higher association areas (parabelt). Electrophysiological studies have shown that several of these fields contain an ordered representation of sound frequency (tonotopic map, indicated on left), and that core and belt fields prefer narrow- and broadband sounds, respectively. These two functional properties can be exploited to map layout of these auditory fields in individual subjects using functional imaging. (b) Single-slice fMRI data showing frequency-selective BOLD responses to low and high tones (left panel) and a complete (smoothed) frequency map obtained from stimulation using six frequency bands (right panel). Combining frequency map with an estimate of core region and anatomical landmarks to delineate the parabelt results in a full parcellation of auditory cortex in individual subjects. This parcellation is indicated in the left panel as white dashed lines and is shown in full in panel a.
frequency preference map which allowed determining the anterior–posterior borders of potential fields. In addition, the preference to sounds of different bandwidths often allowed a segregation of core and belt fields, hence providing borders in medial–lateral directions. When combined with the known organization of auditory cortex, the evidence from these activation patterns allowed a more complete parcellation into distinct core and belt fields, and provided constraints for the localization of the parabelt regions (Figure 6.1b). This functional localization procedure for auditory fields now serves as a routine tool to delineate auditory structures in experiments involving auditory cortex.
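In schematic form, the voxel-wise logic of such a mapping procedure can be sketched as follows, assuming one response estimate (e.g., a GLM beta) per voxel and per frequency band; the array names, the smoothing step, and the tone-versus-noise contrast are illustrative assumptions and not the actual analysis pipeline of the cited studies.

```python
import numpy as np
from scipy.ndimage import uniform_filter

# betas: hypothetical response estimates, shape (n_bands, nx, ny) for one slice,
# one value per voxel for each of six pure-tone frequency bands (low to high).
freq_bands_khz = np.array([0.5, 1, 2, 4, 8, 16])
rng = np.random.default_rng(0)
betas = rng.normal(size=(6, 64, 64))

# Voxel-wise best frequency: the band giving the largest response.
best_band = betas.argmax(axis=0)          # index of preferred band per voxel
best_freq = freq_bands_khz[best_band]     # preferred frequency in kHz

# A smoothed frequency preference map makes the low-to-high gradients
# and their reversals (field borders) easier to see.
smoothed_best_freq = uniform_filter(best_freq.astype(float), size=3)

# Core vs. belt can additionally be assessed from bandwidth preference:
# voxels responding more to narrowband tones than to band-passed noise.
betas_tone, betas_noise = rng.normal(size=(2, 64, 64))
prefers_narrowband = betas_tone > betas_noise
print(smoothed_best_freq.shape, prefers_narrowband.mean())
```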
6.4 MULTISENSORY INFLUENCES ALONG THE AUDITORY PROCESSING STREAM In search for a better localization of multisensory influences in the auditory cortex reported by human imaging studies, we combined the above localization technique with audiovisual and audio-tactile stimulation paradigms (Kayser et al. 2005, 2007). To localize multisensory influences, we searched for regions (voxels) in which responses to acoustic stimuli were significantly enhanced when a visual stimulus was presented at the same time. Because functional imaging poses particular constraints on statistical contrasts (Laurienti et al. 2005), we used a conservative formulation of this criterion in which multisensory influences are defined as significant superadditive effects, i.e., the response in the bimodal condition is required to be significantly stronger than the sum of the two unisensory responses: AV > (A + V). In our experiments, we employed naturalistic stimuli in order to activate those regions especially involved in the processing of everyday scenarios. These stimuli included scenes of conspecific animals vocalizing as well as scenes showing other animals in their natural settings. In concordance with previous reports, we found that visual stimuli indeed influence fMRI responses to acoustic stimuli within the classical auditory cortex. These visual influences were strongest in the caudal portions of the auditory cortex, especially in the caudo–medial and caudo–lateral belt, portions of the medial belt, and the caudal parabelt (Figure 6.2a and b). These multisensory
(AV > Max(A,V)). A minority of neurons produce activity that is lower than the maximum criterion, which is considered multisensory suppression. Whether the effect is enhancement or suppression, a change in activity of a neuron when the subject is stimulated through a second sensory channel only occurs if those sensory channels interact. Thus, multisensory enhancement and suppression are indicators that information is being integrated. The third class of neurons is subthreshold. They have patterns of activity that look unisensory when they are tested with only unisensory stimuli, but when tested with multisensory stimuli, show multisensory enhancement (Allman and Meredith 2007; Allman et al. 2008; Meredith and Allman 2009). For example, a subthreshold neuron may produce significant activity with visual stimuli, but not with auditory stimuli. Because it does not respond significantly with both, it cannot be classified as bimodal. However, when tested with combined audiovisual stimuli, the neuron shows multisensory enhancement and thus integration. For graphical representations of each of these three classes of neurons, see Figure 8.1.
FIGURE 8.1 Activity profiles of neurons found in multisensory brain regions.
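As an illustration of how these three classes might be assigned from trial-wise recordings, the sketch below uses simple t-tests against a prestimulus baseline together with the maximum criterion; the tests, threshold, and simulated counts are illustrative assumptions and not the classification procedures of the studies cited above.

```python
import numpy as np
from scipy import stats

def classify_unit(base, a, v, av, alpha=0.05):
    """Assign a unit to one of the classes described in the text.

    base, a, v, av: 1-D arrays of trial-wise responses (e.g., spike counts)
    for prestimulus baseline, auditory, visual, and audiovisual conditions.
    """
    def driven(x):                      # significant response above baseline?
        return stats.ttest_ind(x, base).pvalue < alpha and np.mean(x) > np.mean(base)

    a_sig, v_sig = driven(a), driven(v)
    best_uni = a if np.mean(a) >= np.mean(v) else v
    enhanced = (stats.ttest_ind(av, best_uni).pvalue < alpha
                and np.mean(av) > np.mean(best_uni))

    if a_sig and v_sig:
        return "bimodal (enhanced)" if enhanced else "bimodal"
    if (a_sig or v_sig) and enhanced:
        return "subthreshold multisensory"
    if a_sig or v_sig:
        return "unisensory"
    return "unresponsive"

rng = np.random.default_rng(2)
base = rng.poisson(2, 20)
a, v, av = rng.poisson(6, 20), rng.poisson(2, 20), rng.poisson(9, 20)
print(classify_unit(base, a, v, av))   # likely: subthreshold multisensory
```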
A majority of bimodal and subthreshold neurons show multisensory enhancement (i.e., exceed the maximum criterion when stimulated with a multisensory stimulus); however, neurons that show multisensory enhancement can be further subdivided into those that are superadditive and those that are subadditive. Superadditive neurons show multisensory activity that exceeds a criterion that is greater than the sum of the unisensory activities (AV > Sum(A,V); Stein and Meredith 1993). In the case of subthreshold neurons, neural activity is only elicited by a single unisensory modality; therefore, the criterion for superadditivity is the same as (or very similar to) the maximum criterion. However, in the case of bimodal neurons, the criterion for superadditivity is usually much greater than the maximum criterion. Thus, superadditive bimodal neurons can show extreme levels of multisensory enhancement. Although bimodal neurons that are superadditive are, by definition, multisensory (because they must also exceed the maximum criterion), the majority of multisensory enhancing neurons are not superadditive (Alvarado et al. 2007; Perrault et al. 2003; Stanford et al. 2007). To be clear, in single-unit studies, superadditivity is not a criterion for identifying multisensory enhancement, but instead is used to classify the degree of enhancement.
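The labeling scheme implied by these definitions can be written out as a short sketch; the tolerance used to call a response "approximately additive" is an arbitrary illustrative choice.

```python
def enhancement_class(mean_a, mean_v, mean_av, tol=0.05):
    """Label the degree of multisensory enhancement for a bimodal unit.

    Uses the maximum criterion to detect enhancement/suppression; the
    Sum(A,V) criterion is used only to grade enhanced responses (as in the
    text, superadditivity is a label, not the detection criterion).
    """
    max_crit = max(mean_a, mean_v)
    sum_crit = mean_a + mean_v
    if mean_av < max_crit:
        return "multisensory suppression"
    if mean_av > sum_crit * (1 + tol):
        return "enhancement, superadditive"
    if mean_av < sum_crit * (1 - tol):
        return "enhancement, subadditive"
    return "enhancement, approximately additive"

print(enhancement_class(10.0, 8.0, 14.0))   # exceeds Max(A,V) but not Sum(A,V) -> subadditive
print(enhancement_class(10.0, 8.0, 25.0))   # exceeds Sum(A,V) -> superadditive
```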
8.2 SUPERADDITIVITY AND BOLD fMRI BOLD activation is measured from the vasculature that supplies blood to a heterogeneous population of neurons. When modeling (either formally or informally) the underlying activity that produces BOLD activation, it is tempting to consider that all of the neurons in that population have similar response properties. However, there is little evidence to support such an idea, especially within multisensory brain regions. Neuronal populations within multisensory brain regions contain a mixture of unisensory neurons from different sensory modalities in addition to bimodal and subthreshold multisensory neurons (Allman and Meredith 2007; Allman et al. 2008; Barraclough et al. 2005; Benevento et al. 1977; Bruce et al. 1981; Hikosaka et al. 1988; Meredith 2002; Meredith and Stein 1983, 1986; Stein and Meredith 1993; Stein and Stanford 2008). It is this mixture of neurons of different classes in multisensory brain regions that necessitates the development of new criteria for assessing multisensory interactions using BOLD fMRI. The first guideline established for studying multisensory phenomena specific to population-based BOLD fMRI measures was superadditivity (Calvert et al. 2000), which we will refer to here as the additive criterion to differentiate it from superadditivity in single units. In her original fMRI study, Calvert used audio and visual presentations of speech (talking heads) and isolated an area of the superior temporal sulcus that produced BOLD activation with a multisensory speech stimulus that was greater than the sum of the BOLD activations with the two unisensory stimuli (AV > Sum(A,V)). The use of this additive criterion was a departure from the established maximum criterion that was used in single-unit studies, but was based on two supportable premises. First, BOLD activation can be modeled as a time-invariant linear system, that is, activation produced by two stimuli presented together can be modeled by summing the activity produced by those same two stimuli presented alone (Boynton et al. 1996; Dale and Buckner 1997; Glover 1999; Heeger and Ress 2002). Second, the null hypothesis to be rejected is that the neuronal population does not contain multisensory neurons (Calvert et al. 2000, 2001; Meredith and Stein 1983). Using the additive criterion, the presence of multisensory neurons can be inferred (and the null hypothesis rejected) if activation with the multisensory stimulus exceeds the additive criterion (i.e., superadditivity). The justification for an additive criterion as the null hypothesis is illustrated in Figure 8.2. Data in Figure 8.2 are simulated based on single-unit recording statistics taken from Laurienti et al. (2005). Importantly, the data are modeled based on a brain region that does not contain multisensory neurons. A brain region that only contains unisensory neurons is not a site of integration, and therefore represents an appropriate null hypothesis. The heights of the two left bars indicate stimulated BOLD activation with unisensory auditory (A) and visual (V) stimulation. The next bar is the simulated BOLD activation with simultaneously presented auditory and visual stimuli (AV). The rightmost bar, Sum(A,V), represents the additive criterion. Assuming that the pools of
FIGURE 8.2 Criteria for assessing multisensory interactions in neuronal populations.
unisensory neurons respond similarly under unisensory and multisensory stimulation (otherwise they would be classified as subthreshold neurons), the modeled AV activation is the same as the additive criterion. For comparison, we include the maximum criterion (the Max(A,V) bar), which is the criterion used in single-unit recording, and sometimes used with BOLD fMRI (Beauchamp 2005; van Atteveldt et al. 2007). The maximum criterion is clearly much more liberal than the additive criterion, and the model in Figure 8.2 shows that the use of the maximum criterion with BOLD data could produce false-positives in brain regions containing only two pools of unisensory neurons and no multisensory neurons. That is, if a single voxel contained only unisensory neurons and no neurons with multisensory properties, the BOLD response will still exceed the maximum criterion. Thus, the simple model shown in Figure 8.2 demonstrates both the utility of the additive criterion for assessing multisensory interactions in populations containing a mixture of unisensory and multisensory neurons, and that the maximum criterion, which is sometimes used in place of the additive criterion, may inappropriately identify unisensory areas as multisensory. It should be noted that the utility of the additive criterion applied to BOLD fMRI data is different conceptually from the superadditivity label used with single units. The additive criterion is used to identify multisensory interactions with BOLD activation. This is analogous to maximum criterion being used to identify multisensory interactions in single-unit activity. Thus, superadditivity with single units is not analogous to the additive criterion with BOLD fMRI. The term superadditivity is used with single-unit recordings as a label to describe a subclass of neurons that not only exceeded the maximum criterion, but also the superadditivity criterion.
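A numerical sketch of this two-population null hypothesis, with invented contribution values, makes the contrast between the two criteria explicit.

```python
# Hypothetical per-pool contributions to the BOLD signal in a voxel that
# contains ONLY unisensory neurons (the null hypothesis of Figure 8.2).
a_cells = {"A": 0.6, "V": 0.0, "AV": 0.6}   # each unisensory pool responds the same
v_cells = {"A": 0.0, "V": 0.8, "AV": 0.8}   # whether or not the other modality is present

bold = {cond: a_cells[cond] + v_cells[cond] for cond in ("A", "V", "AV")}

max_criterion = max(bold["A"], bold["V"])
additive_criterion = bold["A"] + bold["V"]

print("AV BOLD:", bold["AV"])
print("exceeds Max(A,V)?", bold["AV"] > max_criterion)       # True  -> false positive
print("exceeds Sum(A,V)?", bold["AV"] > additive_criterion)  # False -> correctly rejected
```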
8.3 PROBLEMS WITH ADDITIVE CRITERION Although the additive criterion tests a more appropriate null hypothesis than the maximum criterion, in practice, the additive criterion has had only limited success. Some early studies successfully identified brain regions that met the additive criterion (Calvert et al. 2000, 2001), but subsequent studies did not find evidence for additivity even in known multisensory brain regions (Beauchamp 2005; Beauchamp et al. 2004a, 2004b; Laurienti et al. 2005; Stevenson et al. 2007). These findings prompted researchers to suggest that the additive criterion may be too strict and thus susceptible to false negatives. As such, some suggested using the more liberal maximum criterion (Beauchamp 2005), which, as shown in Figure 8.2, is susceptible to false-positives. One possible reason for the discrepancy between theory and practice was described by Laurienti et al. (2005) and is demonstrated in Figure 8.3. The values in the bottom row of the table in Figure 8.3 are simulated BOLD activation. Each column in the table is a different stimulus condition,
FIGURE 8.3 Models of BOLD activation with multisensory stimulation. (The figure plots modeled BOLD responses for A, V, and AV stimulation, with the AV response simulated under maximum, supermaximum, additive, superadditive, and Laurienti models, above a table listing the contributions of A cells, V cells, and AV cells, and the resulting BOLD value, for each condition and criterion.)
including unisensory auditory, unisensory visual, and multisensory audiovisual. The Sum(A,V) column is simply the sum of the audio and visual BOLD signals and represents the additive criterion (null hypothesis). The audiovisual stimulus conditions were simulated using five different models, the maximum model, the supermaximum model, the additive model, the superadditive model, and the Laurienti model. The first three rows of the table represent the contributions of different classes of neurons to BOLD activation, including auditory unisensory neurons (A cells), visual unisensory neurons (V cells), and audiovisual multisensory neurons (AV cells). To be clear, the BOLD value in the bottom-most row is the sum of the A, V, and AV cell’s contributions. Summing these contributions is based on the assumption that voxels (or clusters of voxels) contain mixtures of unisensory and multisensory neurons, not a single class of neurons. Although the “contributions” have no units, they are simulated based on the statistics of recorded impulse counts (spike counts) from neurons in the superior colliculus, as reported by Laurienti et al. (2005). Unisensory neurons were explicitly modeled to respond similarly under multisensory stimulation as they did under unisensory stimulation, otherwise they would be classified as subthreshold neurons, which were not considered in the models. The five models of BOLD activation under audiovisual stimulation differed in the calculation of only one value: the contribution of the AV multisensory neurons. For the maximum model, the contribution of AV cells was calculated as the maximum of the AV cell contributions with visual and auditory unisensory stimuli. For the super-max model, the contribution of AV neurons was calculated as 150% of the AV cell contribution used for the maximum model. For the additive model, the contribution of AV cells was calculated as the sum of AV cell contributions with visual and auditory unisensory stimuli. For the superadditive model, the contribution of AV cells was calculated as 150% of the AV cell contribution used for the additive model. Finally, for the Laurienti model, the
contribution of the AV cells was based on the statistics of recorded impulse counts. What the table makes clear is that, based on Laurienti's statistics, the additive criterion is too conservative, which is consistent with what has been found in practice (Beauchamp 2005; Beauchamp et al. 2004a, 2004b; Laurienti et al. 2005; Stevenson et al. 2007). Laurienti and colleagues (2005) suggest three reasons why the simulated BOLD activation may not exceed the additive criterion based on the known neurophysiology: first, the proportion of AV neurons is small compared to unisensory neurons; second, of those multisensory neurons, only a small proportion are superadditive; and third, superadditive neurons have low impulse counts relative to other neurons. For population-based measurements to exceed the additive criterion, the average impulse count of the pool of bimodal neurons must be significantly superadditive. The presence of superadditive neurons in the pool is not enough by itself because those superadditive responses are averaged with other subadditive, and even suppressive, responses. According to Laurienti's statistics, the result of this averaging is a value somewhere between maximum and additive. Thus, even though the additive criterion is appropriate because it represents the correct null hypothesis, the statistical distribution of cell and impulse counts in multisensory brain regions may make it practically intractable as a criterion.
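The population-mixture logic of this section can be condensed into a short sketch; the pool fractions, mean responses, and the specific combination rules below are invented for illustration, with only the bookkeeping (class-wise contributions summed and compared against the two criteria) following the text.

```python
def voxel_bold(frac_a, frac_v, frac_av, resp_a, resp_v, resp_av, av_rule):
    """Population BOLD proxy: fraction-weighted sum of class-wise mean responses.

    resp_* are (response_to_A, response_to_V) pairs for each class;
    av_rule computes the AV-cell response to the bimodal stimulus.
    """
    def total(cond):
        if cond == "A":
            return frac_a * resp_a[0] + frac_v * resp_v[0] + frac_av * resp_av[0]
        if cond == "V":
            return frac_a * resp_a[1] + frac_v * resp_v[1] + frac_av * resp_av[1]
        return (frac_a * resp_a[0] + frac_v * resp_v[1]   # unisensory pools keep their own response
                + frac_av * av_rule(*resp_av))            # multisensory pool follows the model
    return {c: total(c) for c in ("A", "V", "AV")}

rules = {
    "maximum":       lambda a, v: max(a, v),
    "additive":      lambda a, v: a + v,
    "superadditive": lambda a, v: 1.5 * (a + v),
    "subadditive":   lambda a, v: 0.75 * (a + v),   # closer to what Laurienti's statistics imply
}

for name, rule in rules.items():
    b = voxel_bold(0.45, 0.45, 0.10, (1.0, 0.0), (0.0, 1.0), (0.8, 0.7), rule)
    print(f"{name:13s} AV={b['AV']:.2f}  Max(A,V)={max(b['A'], b['V']):.2f}  Sum(A,V)={b['A'] + b['V']:.2f}")
```

With a small fraction of AV cells, only the strongly superadditive rule pushes the simulated AV response above the additive criterion, which is the practical problem described above.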
8.4 INVERSE EFFECTIVENESS The Laurienti model is consistent with recent findings suggesting that the additive criterion is too conservative (Beauchamp 2005; Beauchamp et al. 2004a, 2004b; Laurienti et al. 2005; Stevenson et al. 2007); however, those recent studies used stimuli that were highly salient. Another established principle of multisensory single-unit recording is the law of inverse effectiveness. Effectiveness in this case refers to how well a stimulus drives the neurons in question. Multisensory neurons usually increase their proportional level of multisensory enhancement as the stimulus quality is degraded (Meredith and Stein 1986; Stein et al. 2008). That is, the multisensory gain increases as the "effectiveness" of the stimulus decreases. If the average level of multisensory enhancement of a pool of neurons increases when stimuli are degraded, then BOLD activation could exceed the additive criterion when degraded stimuli are used. Figure 8.4 shows this effect using the simulated data from the Laurienti model (Figure 8.3). In the high stimulus quality condition, the simulated AV activation clearly does not exceed the additive criterion, indicated as Sum(A,V), and it can be seen that this is because of the subadditive
FIGURE 8.4 Influence of inverse effectiveness on simulated multisensory BOLD activation.
contribution of the multisensory neurons. On the right in Figure 8.4, a similar situation is shown, but with less effective, degraded stimuli. In general, neurons in multisensory regions decrease their impulse counts when stimuli are less salient. However, the size of the decrease is different across different classes of neurons and different stimulus conditions (Alvarado et al. 2007). In our simulation, impulse counts of unisensory neurons were reduced by 30% from the values simulated by the Laurienti model. Impulse counts of bimodal neurons were reduced by 75% under unisensory stimulus conditions, and by 50% under multisensory stimulus conditions. This difference in reduction for bimodal neurons between unisensory and multisensory stimulus conditions reflects inverse effectiveness, that is, the multisensory gain increases with decreasing stimulus effectiveness. Using these reductions in activity with stimulus degradation, BOLD activation with the AV stimulus now exceeds the additive criterion. Admittedly, the reductions that were assigned to the different classes of neurons were chosen somewhat arbitrarily. There are definitely different combinations of reductions that would lead to AV activation that would not exceed the criterion. However, the reductions shown are based on statistics of impulse counts taken from single-unit recording data, and are consistent with the principle of inverse effectiveness reported routinely in the single-unit recording literature (Meredith and Stein 1986). Furthermore, there is empirical evidence from neuroimaging showing an increased likelihood of exceeding the additive criterion as stimulus quality is degraded (Stevenson and James 2009; Stevenson et al. 2007, 2009). Figure 8.5 compares AV activation with the additive criterion at multiple levels of stimulus quality. These are a subset of data from a study reported elsewhere (Stevenson and James 2009). Stimulus quality was degraded by parametrically varying the signal-to-noise ratio (SNR) of the stimuli until participants were able to correctly identify the stimuli at a given accuracy. This was done by embedding the audio and visual signals in constant external noise and lowering the root mean square contrast of the signals. AV activation exceeded the additive criterion at low SNR, but failed to exceed the criterion at high SNR. Although there is significant empirical and theoretical evidence suggesting that the additive criterion is too conservative at high stimulus SNR, the data presented in Figure 8.5 suggest that the additive criterion may be a better criterion at low SNR. However, there are two possible problems with using low-SNR stimuli to assess multisensory integration with BOLD fMRI. First, based on the data in Figure 8.5, the change from failing to meet the additive criterion to exceeding the additive criterion is gradual, not a sudden jump at a particular level of SNR. Thus, the choice of SNR level(s) is extremely important for the interpretation of the results. Second, there may be problems with using the additive criterion with measurements that lack a natural zero, such as BOLD.
FIGURE 8.5 Assessing inverse effectiveness empirically with BOLD activation. These are a subset of data reported elsewhere. (From Stevenson, R.A. and James, T.W., NeuroImage, 44, 1210–23, 2009. With permission.)
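A compact sketch of the simulation described above applies the quoted reductions (30% for unisensory cells, and 75% and 50% for bimodal cells under unisensory and multisensory stimulation, respectively) to starting values chosen to mirror the Laurienti-model illustration; the absolute numbers are otherwise arbitrary.

```python
def simulate(quality):
    """Return per-condition BOLD proxies for a voxel with A, V, and AV pools."""
    # High-quality class contributions (chosen to mirror the Laurienti-model
    # illustration; otherwise arbitrary), ordered (to A, to V, to AV stimulus).
    a_cells  = [0.60, 0.00, 0.60]
    v_cells  = [0.00, 0.80, 0.80]
    av_cells = [0.54, 0.49, 0.80]

    if quality == "low":
        # Reductions used in the text's illustration of inverse effectiveness.
        a_cells  = [x * 0.70 for x in a_cells]               # unisensory cells: -30%
        v_cells  = [x * 0.70 for x in v_cells]
        av_cells = [av_cells[0] * 0.25, av_cells[1] * 0.25,  # bimodal, unisensory stim: -75%
                    av_cells[2] * 0.50]                      # bimodal, multisensory stim: -50%

    bold = [a + v + av for a, v, av in zip(a_cells, v_cells, av_cells)]
    return dict(zip(("A", "V", "AV"), bold))

for quality in ("high", "low"):
    b = simulate(quality)
    print(quality, "AV =", round(b["AV"], 3),
          "Sum(A,V) =", round(b["A"] + b["V"], 3),
          "superadditive?", b["AV"] > b["A"] + b["V"])
```

With these numbers the simulated AV response fails the additive criterion at high quality but exceeds it at low quality, illustrating why degraded stimuli make superadditive BOLD effects more likely.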
8.5 BOLD BASELINE: WHEN ZERO IS NOT ZERO It is established procedure with fMRI data to transform raw BOLD values to percentage signal change values by subtracting the mean activation for the baseline condition and dividing by the baseline. Thus, for BOLD measurements, “zero” is not absolute, but is defined as the activation produced by the baseline condition chosen by the experimenter (Binder et al. 1999; Stark et al. 2001). Statistically, this means that BOLD measurements would be considered an interval scale at best (Stevens 1946). The use of an interval scale affects the interpretation of the additive criterion because of the fact that calculating the additive criterion is reliant on summing two unisensory activations and comparing with a single multisensory activation. Because the activation values are measured relative to an arbitrary baseline, the value of the baseline condition has a different effect on the summed unisensory activations than on the single multisensory activation. In short, the value of the baseline is subtracted from the additive criterion twice, but is subtracted from the multisensory activation only once (see Equation 8.3). The additive criterion for audiovisual stimuli is described according to the following equation:
AV > A + V    (8.1)

But, Equation 8.1 is more accurately described by

(AV − baseline)/baseline > (A − baseline)/baseline + (V − baseline)/baseline    (8.2)

FIGURE 8.6 Influence of baseline activation on additive criterion.
Equation 8.2 can be rewritten as

AV − baseline > A + V − 2 × baseline,    (8.3)

and then

AV > A + V − baseline.    (8.4)
Equation 8.4 clearly shows that the level of activation produced by the baseline condition influences the additive criterion. An increase in activation of the baseline condition causes the additive criterion to become more liberal (Figure 8.6). The fact that the additive criterion can be influenced by the activation of the experimenter-chosen baseline condition may explain why similar experiments from different laboratories produce different findings when that criterion is used (Beauchamp 2005).
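A short numerical illustration of Equations 8.2 through 8.4 shows how the same raw signals, expressed as percentage change against two different experimenter-chosen baselines, can flip the outcome of the additive-criterion test; all values are invented.

```python
def pct_change(raw, baseline):
    return (raw - baseline) / baseline

# The same hypothetical raw signals from one voxel...
raw = {"A": 560.0, "V": 570.0, "AV": 600.0}

# ...evaluated against two different baseline activation levels.
for baseline in (520.0, 550.0):
    a, v, av = (pct_change(raw[c], baseline) for c in ("A", "V", "AV"))
    print(f"baseline={baseline:.0f}: AV={av:.3f}  Sum(A,V)={a + v:.3f}  "
          f"superadditive? {av > a + v}")
```

Consistent with Equation 8.4, the higher baseline makes the additive criterion more liberal, so the identical raw data pass the test with one baseline and fail it with the other.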
8.6 A DIFFERENCE-OF-BOLD MEASURE We have provided a theoretical rationale for the inconsistency of the additive criterion for assessing multisensory integration using BOLD fMRI as well as a theoretical rationale for the inappropriateness of the maximum criterion as a null hypothesis for this same assessment. The maximum criterion is appropriate when used with single-unit recording data, but when used with BOLD fMRI data, which represent populations of neurons, cannot account for the contribution of unisensory neurons that are found in multisensory brain regions. Without being able to account for the heterogeneity of neuronal populations, the maximum criterion is likely to produce false-positives when used with a population-based measure such as fMRI. Although the null hypothesis tested by the additive criterion is more appropriate than the maximum criterion, the additive criterion is not without issues. First, an implicit assumption with the additive criterion is that the average multisensory neuronal response shows a pattern that is superadditive, an assumption that is clearly not substantiated empirically. Second, absolute BOLD percentage signal change measurements are measured on an interval scale. An interval scale is one with no natural zero, and on which the absolute values are not meaningful (in a statistical sense). The relative differences between absolute values, however, are meaningful, even when the absolute values are measured on an interval scale. To specifically relate relative differences to the use of an additive criterion, imagine an experiment where A, V, and AV were not levels of a sensory modality factor, but instead A, V, and AV were three separate factors, each with at least two different levels (e.g., levels of stimulus quality). Rather than analyzing the absolute BOLD values associated with each condition, a relative difference measurement could be calculated between the levels of each factor, resulting in ΔA, ΔV, and ΔAV measurements. The use of relative differences alleviates the baseline problem because the baseline activations embedded in the measurements cancel out when a difference operation is performed across levels of a factor. If we replace the absolute BOLD values in Equation 8.1 with BOLD differences, the equation becomes
ΔAV ≠ ΔA + ΔV.    (8.5)
Note that the inequality sign is different in Equation 8.5 than in Equation 8.1. Equation 8.1 is used to test the directional hypothesis that AV activation exceeds the additive criterion. Subadditivity, the hypothesis that AV activation is less than the additive criterion, is rarely, if ever, used as a criterion by itself. It has been used in combination with superadditivity, for instance, showing that a brain region exceeds the additive criterion with semantically congruent stimuli but does not exceed the additive criterion with semantically incongruent stimuli (Calvert et al. 2000). This example (using both superadditivity and subadditivity), however, is testing two directional hypotheses, rather than testing one nondirectional hypothesis. Equation 8.5 is used to test a nondirectional hypothesis,
and we suggest that it should be nondirectional for two reasons. First, the order in which the two terms are subtracted to produce each delta is arbitrary. For each delta term, if the least effective stimulus condition is subtracted from the most effective condition, then Equation 8.5 can be rewritten as ΔAV < ΔA + ΔV to test for inverse effectiveness, that is, the multisensory difference should be less than the sum of the unisensory differences. If, however, the differences were taken in the opposite direction (i.e., most effective subtracted from least effective), Equation 8.5 would need to be rewritten with the inequality in the opposite direction (i.e., ΔAV > ΔA + ΔV). Second, inverse effectiveness may not be the only meaningful effect that can be seen with difference measures, perhaps especially if the measures are used to assess function across the whole brain. This point is discussed further at the end of the chapter (Figure 8.9). Each component of Equation 8.5 can be rewritten with the baseline activation made explicit. The equation for the audio component would be
ΔA = (A1 − baseline)/baseline − (A2 − baseline)/baseline,
(8.6)
where A1 and A2 represent auditory stimulus conditions with different levels of stimulus quality. When Equation 8.5 is rewritten by substituting Equation 8.6 for each of the three stimulus conditions, all baseline variables in both the denominator and the numerator cancel out, producing the following equation:
(AV1 − AV2) ≠ (A1 − A2) + (V1 − V2).
(8.7)
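To make the cancellation step explicit, the algebra implied by Equations 8.5 through 8.7 can be written out as follows (this worked step is ours, assuming a single baseline value common to all conditions, which is what the "baseline" term above denotes):

$$\Delta A = \frac{A_1 - \mathrm{baseline}}{\mathrm{baseline}} - \frac{A_2 - \mathrm{baseline}}{\mathrm{baseline}} = \frac{A_1 - A_2}{\mathrm{baseline}},$$

and likewise $\Delta V = (V_1 - V_2)/\mathrm{baseline}$ and $\Delta AV = (AV_1 - AV_2)/\mathrm{baseline}$. Substituting these into Equation 8.5 gives $(AV_1 - AV_2)/\mathrm{baseline} \neq [(A_1 - A_2) + (V_1 - V_2)]/\mathrm{baseline}$; the common baseline divisor drops out of both sides, leaving Equation 8.7.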
The key importance of Equation 8.7 is that the baseline variable cancels out when relative differences are used instead of absolute values. Thus, the level of baseline activation has no influence on a criterion calculated from BOLD differences. The null hypothesis represented by Equation 8.5 is similar to the additive criterion in that the sum of two unisensory values is compared to a multisensory value. Those values, however, are relative differences instead of absolute BOLD percentage signal changes. If the multisensory difference is less (or greater) than the additive difference criterion, one can infer an interaction between sensory channels, most likely in the form of a third pool of multisensory neurons in addition to unisensory neurons. The rationale for using additive differences is illustrated in Figure 8.7. The simulated data for the null hypothesis reflect the contributions of neurons in a brain region that contains only unisensory auditory and visual neurons (Figure 8.7a). In the top panel, the horizontal axis represents the stimulus condition, either unisensory auditory (A) or visual (V), or multisensory audiovisual (AV). The subscripts 1 and 2 represent different levels of stimulus quality. For example, A1 is high-quality audio and A2 is low-quality audio. To relate these simulated data to the data in Figure 8.2 and the absolute additive criterion, the height of the stacked bar for AV1 is the absolute additive criterion (or null hypothesis) for the high-quality stimuli, and the height of the AV2 stacked bar is the absolute additive criterion for the low-quality stimuli. Those absolute additive criteria, however, suffer from the issues discussed above. Evaluating the absolute criterion at multiple levels of stimulus quality provides the experimenter with more information than evaluating it at only one level, but a potentially better way of assessing multisensory integration is to use a criterion based on differences between the high- and low-quality stimulus conditions. The null hypothesis for this additive differences criterion is illustrated in the bottom panel of Figure 8.7a. The horizontal axis shows the difference in auditory (ΔA), visual (ΔV), and audiovisual (ΔAV) stimuli, all calculated as differences in the heights of the stacked bars in the top panel. The additive differences criterion, labeled Sum(ΔA,ΔV), is also shown, and is the same as the difference in multisensory activation (ΔAV). Thus, for a brain region containing only two pools of unisensory neurons, the appropriate null hypothesis to be tested is provided by Equation 8.5.
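To illustrate how the additive differences criterion might be computed in practice, the following minimal Python sketch applies Equations 8.5 through 8.7 to a set of condition estimates. The sketch is ours, not code from the chapter; the percent-signal-change values are illustrative, and in a real analysis they would come from a GLM fit per voxel or region, with the comparison tested statistically across subjects.

```python
import numpy as np

# Illustrative percent-signal-change estimates for the six conditions.
# Subscript 1 = high stimulus quality, 2 = low stimulus quality.
bold = {
    "A1": 0.80, "A2": 0.56,    # unisensory auditory
    "V1": 0.60, "V2": 0.42,    # unisensory visual
    "AV1": 1.15, "AV2": 0.95,  # multisensory audiovisual (hypothetical)
}

# Relative differences across levels of stimulus quality (the common
# baseline cancels out of these differences, as in Equation 8.7).
dA = bold["A1"] - bold["A2"]
dV = bold["V1"] - bold["V2"]
dAV = bold["AV1"] - bold["AV2"]

criterion = dA + dV  # Sum(dA, dV): the two-population (null) prediction

print(f"dAV = {dAV:.2f}, Sum(dA, dV) = {criterion:.2f}")
if not np.isclose(dAV, criterion):
    # A reliable departure in either direction rejects the two-population
    # null and implies at least one additional (multisensory) pool.
    direction = "less" if dAV < criterion else "greater"
    print(f"dAV is {direction} than the additive differences criterion.")
```

With these made-up numbers, dAV (0.20) falls below Sum(dA, dV) (0.42), the pattern that the chapter relates to inverse effectiveness.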
FIGURE 8.7 Additive differences criterion. (a) Two-population null hypothesis: simulated data for a brain region containing only unisensory A cells and V cells. The top panel plots % BOLD change for each input condition (A1, A2, V1, V2, AV1, AV2); the bottom panel plots the BOLD differences ΔA, ΔV, and ΔAV together with the Sum(ΔA,ΔV) criterion, and here ΔAV = Sum(ΔA,ΔV). (b) Three-population hypothesis: the same region with an added pool of AV cells; here ΔAV < Sum(ΔA,ΔV).
The data in Figure 8.7b apply the additive differences criterion to the simulated BOLD activation data shown in Figure 8.4. Recall from Figure 8.4 that the average contribution of the multisensory neurons is subadditive for high-quality stimuli (A1, V1, AV1), but is superadditive with low-quality stimuli (A2, V2, AV2). In other words, the multisensory pool shows inverse effectiveness. The data in the bottom panel of Figure 8.7b are similar to the bottom panel of Figure 8.7a, but with the addition of this third pool of multisensory neurons to the population. Adding the third pool makes ΔAV (the difference in multisensory activation) significantly less than the additive differences criterion (Sum(ΔA,ΔV)), and rejects the null hypothesis of only two pools of unisensory neurons.

Figure 8.8 shows the same additive differences analysis performed on the empirical data from Figure 8.5 (Stevenson and James 2009; Stevenson et al. 2009). The empirical data show the same pattern as the simulated data. With both the simulated and empirical data, ΔAV was less than Sum(ΔA,ΔV), a pattern of activation similar to inverse effectiveness seen in single units. In single-unit recording, there is a positive relation between stimulus quality and impulse count (or effectiveness). This same relation was seen between stimulus quality and BOLD activation. Although most neurons show this relation, the multisensory neurons tend to show smaller decreases (proportionately) than the unisensory neurons. Thus, as the effectiveness of the stimuli decreases, the multisensory gain increases. Decreases in stimulus quality also had a smaller effect on multisensory BOLD activation than on unisensory BOLD activation, suggesting that the results in Figure 8.8 could (but do not necessarily) reflect the influence of inversely-effective neurons.

FIGURE 8.8 Assessing multisensory interactions empirically with additive differences. The plot shows additive differences in BOLD (ΔAV vs. Sum(ΔA,ΔV)) as a function of stimulus quality binned by percent accuracy (95-85%, 85-75%, 75-65%).

In summary, we have demonstrated some important theoretical limitations of the criteria commonly used in BOLD fMRI studies to assess multisensory integration. First, the additive criterion is susceptible to variations in baseline. Second, the additive criterion is sensitive only if the average activity profile of the multisensory neurons in the neuronal population is superadditive, which, empirically, only occurs with very low-quality stimuli. A combination of these two issues may explain the inconsistency in empirical findings using the additive criterion (Beauchamp 2005; Calvert et al. 2000; Stevenson et al. 2007). Third, the maximum criterion tests a null hypothesis that is based on a homogeneous population of only multisensory neurons. Existing single-unit recording data suggest that multisensory brain regions have heterogeneous populations containing unisensory, bimodal, and sometimes subthreshold neurons. Thus, the null hypothesis tested with the maximum criterion is likely to produce false-positive results in unisensory brain regions.
FIGURE 8.9 A whole-brain statistical parametric map of regions demonstrating audiovisual neuronal convergence as assessed by the additive differences criterion, with schematics of the possible BOLD additive-difference interactions: direct gain enhancement (ΔAV > Sum(ΔA,ΔV)), direct gain suppression (ΔAV < Sum(ΔA,ΔV)), indirect gain enhancement (ΔAV < Sum(ΔA,ΔV)), and indirect gain suppression (ΔAV > Sum(ΔA,ΔV)). Each schematic shows BOLD activity for the A, V, and AV conditions at high and low stimulus quality.
As a potential solution to these concerns, we have developed a new criterion for assessing multisensory integration using relative BOLD differences instead of absolute BOLD measurements. Relative differences are not influenced by changes in baseline, protecting the criterion from inconsistencies across studies. The null hypothesis to be tested is the sum of unisensory differences (additive differences), which is based on the assumption of a heterogeneous population of neurons. In addition to the appropriateness of the null hypothesis tested, the additive differences criterion produced positive results in known multisensory brain regions when tested empirically (Stevenson et al. 2009). Evidence for inverse effectiveness with audiovisual stimuli was found in known multisensory brain regions such as the superior temporal gyrus and inferior parietal lobule, but also in regions that have garnered less attention from the multisensory community, such as the medial frontal gyrus and parahippocampal gyrus (Figure 8.9). These results were found across different pairings of sensory modalities and with different experimental designs, suggesting that the additive differences criterion may be of general use for assessing integration across sensory channels. A number of different brain regions, such as the insula and caudate nucleus, also showed an effect that appeared to be the opposite of inverse effectiveness (Figure 8.9). BOLD activation in these brain regions showed the opposite relation with stimulus quality as sensory brain regions, that is, high-quality stimuli produced less activation than low-quality stimuli. Because of this opposite relation, we termed the effect observed in these regions indirect inverse effectiveness. More research will be needed to assess the contribution of indirect inverse effectiveness to multisensory neural processing and behavior.
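One way to picture how the four interaction types in Figure 8.9 relate to the additive differences criterion is as a decision rule over two signs: whether ΔAV falls above or below Sum(ΔA,ΔV), and whether the region shows the usual (direct) or reversed (indirect) relation between stimulus quality and BOLD. The sketch below is our own reading of that figure, not code from the study; the function name and the use of the summed unisensory differences as the direct/indirect indicator are assumptions.

```python
def classify_interaction(dA: float, dV: float, dAV: float) -> str:
    """Label an additive-differences interaction, following the four panel
    titles of Figure 8.9. A region is treated as 'direct' when its unisensory
    differences indicate the usual positive relation between stimulus quality
    and BOLD (high > low quality), and 'indirect' when that relation reverses.
    """
    criterion = dA + dV                 # Sum(dA, dV)
    direct = criterion > 0              # simplification: sign of summed differences
    if dAV == criterion:
        return "no interaction (two-population null not rejected)"
    if direct:
        return ("direct gain suppression: dAV < Sum(dA, dV)" if dAV < criterion
                else "direct gain enhancement: dAV > Sum(dA, dV)")
    return ("indirect gain enhancement: dAV < Sum(dA, dV)" if dAV < criterion
            else "indirect gain suppression: dAV > Sum(dA, dV)")
```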
8.7 LIMITATIONS AND FUTURE DIRECTIONS

All of the simulations above made the assumption that BOLD activation could be described by a time-invariant linear system. Although there is clearly evidence supporting this assumption (Boynton et al. 1996; Dale and Buckner 1997; Glover 1999; Heeger and Ress 2002), studies using serial presentation of visual stimuli suggest that nonlinearities in BOLD activation may exist when stimuli are presented closely together in time, that is, closer than a few seconds (Boynton and Finney 2003; Friston et al. 1999). Simultaneous presentation could be considered just a serial presentation with the shortest asynchrony possible. In that case, the deviations from linearity with simultaneous presentation may be substantial. A careful examination of unisensory integration and a comparison of unisensory with multisensory integration could provide valuable insights about the linearity assumption of BOLD responses.

The simulations above were also based on only one class of multisensory neuron, the bimodal neurons, which respond with two or more sensory modalities. Another class of multisensory neurons has recently been discovered, which was not used in the simulations presented here. Subthreshold neurons respond to only one sensory modality when stimulated with unisensory stimuli. However, when stimulated with multisensory stimuli, these neurons show multisensory enhancement (Allman and Meredith 2007; Allman et al. 2008; Meredith and Allman 2009). Adding this class of neurons to the simulations may increase the precision of the predictions for population models with more than two populations of neurons. The goal of the simulations presented here, however, was to develop null hypotheses based on neuronal populations composed of only two unisensory pools of neurons. Rejecting the null hypothesis then implies the presence of at least one other pool of neurons besides the unisensory pools. In our simulations, we modeled that pool as bimodal; however, we could have also modeled subthreshold neurons or a combination of bimodal and subthreshold neurons. Our impression is that the addition of subthreshold neurons to the simulations would not qualitatively change the results, because subthreshold neurons are found in relatively small numbers (less than the number of subadditive bimodal neurons), and their impulse counts are low compared to other classes of neurons (Allman and Meredith 2007).

The simulations above made predictions about levels of BOLD activation, but were based on principles of multisensory processing that were largely derived from spike (action potential) count data
collected using single-unit recording. BOLD activation reflects a hemodynamic response, which itself is the result of local neural activity. The exact relationship, however, between neural activity and BOLD activation is unclear. There is evidence that increased spiking produces small brief local reductions in tissue oxygenation, followed by large sustained increases in tissue oxygenation (Thompson et al. 2003). Neural spike count, however, is not the only predictor of BOLD activation levels, nor is it the best predictor. The correlation of BOLD activation with local field potentials is stronger than the correlation of BOLD with spike count (Heeger et al. 2000; Heeger and Ress 2002; Logothetis and Wandell 2004). Whereas spikes reflect the output of neurons, local field potentials are thought to reflect the postsynaptic potentials or input to neurons. This distinction between input and output and the relationship with BOLD activation raises some concerns about relating studies using BOLD fMRI to studies using single-unit recording. Of course, spike count is also highly correlated with local field potentials, suggesting that spike count, local field potentials, and BOLD activation are all interrelated and, in fact, that the correlations among them may be related to another variable that is responsible for producing all of the phenomena (Attwell and Iadecola 2002).

Multisensory single-unit recordings are mostly performed in monkey and cat superior colliculus and monkey superior temporal sulcus or cat posterolateral lateral suprasylvian area (Allman and Meredith 2007; Allman et al. 2008; Barraclough et al. 2005; Benevento et al. 1977; Bruce et al. 1981; Hikosaka et al. 1988; Meredith 2002; Meredith and Stein 1983, 1986; Stein and Meredith 1993; Stein and Stanford 2008). With BOLD fMRI, whole-brain imaging is routine, which allows for exploration of the entire cortex. The principles that are derived from investigation of specific brain areas may not always apply to other areas of the brain. Thus, whole-brain investigation has the distinct promise of producing unexpected results. The unexpected results could be because of the different proportions of known classes of neurons, or the presence of other classes of multisensory neurons that have not yet been found with single-unit recording. It is possible that the indirect inverse effectiveness effect described above (Figure 8.9) may reflect the combined activity of types of multisensory neurons with response profiles that have not yet been discovered with single-unit recording.
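This section opened with the assumption that BOLD activation behaves as a time-invariant linear system. A minimal sketch of what that assumption licenses is given below; it is our own illustration (not the chapter's code), using a conventional double-gamma approximation to the hemodynamic response function. Under linearity, the predicted response to a compound audiovisual event is simply the sum of the predictions for its unisensory components, which is exactly the logic behind additive null hypotheses; the nonlinearities at short asynchronies discussed above are what would undermine that logic.

```python
import numpy as np
from scipy.stats import gamma

DT = 0.1                                  # time step in seconds (fine grid)
t = np.arange(0, 30, DT)

# Double-gamma HRF (conventional default shape parameters, illustration only).
hrf = gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0
hrf /= hrf.sum()

n = int(60 / DT)                          # one minute of simulated time
audio = np.zeros(n)
visual = np.zeros(n)
audio[int(10 / DT)] = 1.0                 # brief auditory event at t = 10 s
visual[int(10 / DT)] = 1.0                # simultaneous visual event

def bold_prediction(neural: np.ndarray) -> np.ndarray:
    """Linear time-invariant prediction: neural events convolved with the HRF."""
    return np.convolve(neural, hrf)[:n]

# Linearity implies the compound prediction equals the sum of the parts.
assert np.allclose(bold_prediction(audio + visual),
                   bold_prediction(audio) + bold_prediction(visual))
```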
8.8 CONCLUSIONS

We must stress that each method used to investigate multisensory interactions has a unique set of limitations and assumptions, whether the method is fMRI, high-density recording, single-unit recording, behavioral reaction time, or others. Differences between methods can have a great impact on how multisensory interactions are assessed. Thus, it should not be assumed that a criterion that is empirically tested and theoretically sound when used with one method will be similarly sound when applied to another method. We have developed a method for assessing multisensory integration using BOLD fMRI that makes fewer assumptions than established methods. Because BOLD measurements have an arbitrary baseline, a criterion that is based on relative BOLD differences instead of absolute BOLD values is more interpretable and reliable. Also, the use of BOLD differences is not limited to comparing across multisensory channels, but should be equally effective when comparing across unisensory channels. Finally, it is also possible that the use of relative differences may be useful with other types of measures, such as EEG, which also use an arbitrary baseline. However, before using the additive differences criterion with other measurement methods, it should be tested both theoretically and empirically, as we have done here with BOLD fMRI.
ACKNOWLEDGMENTS

This research was supported in part by the Indiana METACyt Initiative of Indiana University, funded in part through a major grant from the Lilly Endowment, Inc., the IUB Faculty Research Support Program, and the Indiana University GPSO Research Grant. We appreciate the insights provided by Karin Harman James, Sunah Kim, and James Townsend, by other members of the Perception and Neuroimaging Laboratory, and by other members of the Indiana University Neuroimaging Group.
REFERENCES Allman, B.L., and M.A. Meredith. 2007. Multisensory processing in “unimodal” neurons: Cross-modal subthreshold auditory effects in cat extrastriate visual cortex. Journal of Neurophysiology 98:545–9. Allman, B.L., L.P. Keniston, and M.A. Meredith. 2008. Subthreshold auditory inputs to extrastriate visual neurons are responsive to parametric changes in stimulus quality: Sensory-specific versus non-specific coding. Brain Research 1242:95–101. Alvarado, J.C., J.W. Vaughan, T.R. Stanford, and B.E. Stein. 2007. Multisensory versus unisensory integration: Contrasting modes in the superior colliculus. Journal of Neurophysiology 97:3193–205. Attwell, D., and C. Iadecola. 2002. The neural basis of functional brain imaging signals. Trends in Neurosciences 25:621–5. Barraclough, N.E., D. Xiao, C.I. Baker, M.W. Oram, and D.I. Perrett. 2005. Integration of visual and auditory information by superior temporal sulcus neurons responsive to the sight of actions. Journal of Cognitive Neuroscience 17:377–91. Beauchamp, M.S. 2005. Statistical criteria in FMRI studies of multisensory integration. Neuroinformatics 3:93–113. Beauchamp, M.S., B.D. Argall, J. Bodurka, J.H. Duyn, and A. Martin. 2004a. Unraveling multisensory integration: Patchy organization within human STS multisensory cortex. Nature Neuroscience 7:1190–2. Beauchamp, M.S., K.E. Lee, B.D. Argall, and A. Martin. 2004b. Integration of auditory and visual information about objects in superior temporal sulcus. Neuron 41:809–23. Benevento, L.A., J. Fallon, B.J. Davis, and M. Rezak. 1977. Auditory–visual interaction in single cells in the cortex of the superior temporal sulcus and the orbital frontal cortex of the macaque monkey. Experimental Neurology 57:849–72. Binder, J.R., J.A. Frost, T.A. Hammeke et al. 1999. Conceptual processing during the conscious resting state. A functional MRI study. Journal of Cognitive Neuroscience 11:80–95. Boynton, G.M., S.A. Engel, G.H. Glover, and D.J. Heeger. 1996. Linear systems analysis of functional magnetic resonance imaging in human V1. Journal of Neuroscience 16:4207–21. Boynton, G.M., and E.M. Finney. 2003. Orientation-specific adaptation in human visual cortex. The Journal of Neuroscience 23:8781–7. Bruce, C., R. Desimone, and C.G. Gross. 1981. Visual properties of neurons in a polysensory area in superior temporal sulcus of the macaque. Journal of Neurophysiology 46:369–84. Calvert, G.A., M.J. Brammer, E.T. Bullmore et al. 1999. Response amplification in sensory-specific cortices during crossmodal binding. NeuroReport 10:2619–23. Calvert, G.A., R. Campbell, and M.J. Brammer. 2000. Evidence from functional magnetic resonance imaging of crossmodal binding in the human heteromodal cortex. Current Biology 10:649–57. Calvert, G.A., P.C. Hansen, S.D. Iversen, and M.J. Brammer. 2001. Detection of audio-visual integration sites in humans by application of electrophysiological criteria to the BOLD effect. NeuroImage 14:427–38. Dale, A.M., and R.L. Buckner. 1997. Selective averaging of rapidly presented individual trials using fMRI. Human Brain Mapping 5:329–40. Friston, K.J., E. Zarahn, O. Josephs, R.N. Henson, and A.M. Dale. 1999. Stochastic designs in event-related fMRI. NeuroImage 10:607–19. Glover, G.H. 1999. Deconvolution of impulse response in event-related BOLD fMRI. NeuroImage 9:416–29. Heeger, D.J., A.C. Huk, W.S. Geisler, and D.G. Albrecht. 2000. Spikes versus BOLD: What does neuroimaging tell us about neuronal activity? Nature Neuroscience 3:631–3. Heeger, D.J., and D. Ress. 2002. 
What does fMRI tell us about neuronal activity? Nature Reviews Neuroscience 3:142–51. Hikosaka, K., E. Iwai, H. Saito, and K. Tanaka. 1988. Polysensory properties of neurons in the anterior bank of the caudal superior temporal sulcus of the macaque monkey. Journal of Neurophysiology 60:1615–37. James, W. 1890. The Principles of Psychology. New York: Henry Holt & Co. Laurienti, P.J., T.J. Perrault, T.R. Stanford, M.T. Wallace, and B.E. Stein. 2005. On the use of superadditivity as a metric for characterizing multisensory integration in functional neuroimaging studies. Experimental Brain Research 166:289–97. Logothetis, N.K., and B.A. Wandell. 2004. Interpreting the BOLD signal. Annual Review of Physiology 66:735–69. Meredith, M.A. 2002. On the neuronal basis for multisensory convergence: A brief overview. Brain Research. Cognitive Brain Research 14:31–40. Meredith, M.A., and B.L. Allman. 2009. Subthreshold multisensory processing in cat auditory cortex. NeuroReport 20:126–31.
Meredith, M.A., and B.E. Stein. 1983. Interactions among converging sensory inputs in the superior colliculus. Science 221:389–91. Meredith, M.A., and B.E. Stein. 1986. Visual, auditory, and somatosensory convergence on cells in superior colliculus results in multisensory integration. Journal of Neurophysiology 56:640–62. Molyneux, W. 1688. Letter to John Locke. In E.S. de Beer (ed.), The correspondence of John Locke. Oxford: Clarendon Press. Perrault Jr., T.J., J.W. Vaughan, B.E. Stein, and M.T. Wallace. 2003. Neuron-specific response characteristics predict the magnitude of multisensory integration. Journal of Neurophysiology 90:4022–6. Scannell, J.W., and M.P. Young. 1999. Neuronal population activity and functional imaging. Proceedings of the Royal Society of London. Series B. Biological Sciences 266:875–81. Stanford, T.R., and B.E. Stein. 2007. Superadditivity in multisensory integration: Putting the computation in context. NeuroReport 18:787–92. Stark, C.E., and L.R. Squire. 2001. When zero is not zero: The problem of ambiguous baseline conditions in fMRI. Proceedings of the National Academy of Sciences of the United States of America 98:12760–6. Stein, B.E., and M.A. Meredith. 1993. The Merging of the Senses. Cambridge, MA: The MIT Press. Stein, B.E., and T.R. Stanford. 2008. Multisensory integration: Current issues from the perspective of the single neuron. Nature Reviews Neuroscience 9:255–66. Stein, B.E., T.R. Stanford, R. Ramachandran, T.J. Perrault Jr., and B.A. Rowland. 2009. Challenges in quantifying multisensory integration: Alternative criteria, models, and inverse effectiveness. Experimental Brain Research 198:113–26. Stevens, S.S. 1946. On the theory of scales of measurement. Science 103:677–80. Stevenson, R.A., and T.W. James. 2009. Audiovisual integration in human superior temporal sulcus: Inverse effectiveness and the neural processing of speech and object recognition. NeuroImage 44:1210–23. Stevenson, R.A., M.L. Geoghegan, and T.W. James. 2007. Superadditive BOLD activation in superior temporal sulcus with threshold non-speech objects. Experimental Brain Research 179:85–95. Stevenson, R.A., S. Kim, and T.W. James. 2009. An additive-factors design to disambiguate neuronal and areal convergence: Measuring multisensory interactions between audio, visual, and haptic sensory streams using fMRI. Experimental Brain Research 198:183–94. Thompson, J.K., M.R. Peterson, and R.D. Freeman. 2003. Single-neuron activity and tissue oxygenation in the cerebral cortex. Science 299:1070–2. van Atteveldt, N.M., E. Formisano, L. Blomert, and R. Goebel. 2007. The effect of temporal asynchrony on the multisensory integration of letters and speech sounds. Cerebral Cortex 17:962–74.
9 Perception of Synchrony between the Senses

Mirjam Keetels and Jean Vroomen
CONTENTS
9.1 Introduction 147
9.2 Measuring Intersensory Synchrony: Temporal Order Judgment Task and Simultaneity Judgment Task 148
9.3 Point of Subjective Simultaneity 150
9.3.1 Attention Affecting PSS: Prior Entry 151
9.4 Sensitivity for Intersensory Asynchrony 152
9.4.1 Spatial Disparity Affects JND 153
9.4.2 Stimulus Complexity Affects JND 154
9.4.3 Stimulus Rate Affects JND 155
9.4.4 Predictability Affects JND 155
9.5 How the Brain Deals with Lags between the Senses 156
9.5.1 Window of Temporal Integration 156
9.5.2 Compensation for External Factors 158
9.5.3 Temporal Recalibration 161
9.5.4 Temporal Ventriloquism 164
9.6 Temporal Synchrony: Automatic or Not? 167
9.7 Neural Substrates of Temporal Synchrony 169
9.8 Conclusions 170
References 171
9.1 INTRODUCTION

Most of our real-world perceptual experiences are specified by synchronous redundant and/or complementary multisensory perceptual attributes. As an example, a talker can be heard and seen at the same time, and as a result, we typically have access to multiple features across the different senses (i.e., lip movements, facial expression, pitch, speed, and temporal structure of the speech sound). This is highly advantageous because it increases perceptual reliability and saliency and, as a result, it might enhance learning, discrimination, or the speed of a reaction to the stimulus (Sumby and Pollack 1954; Summerfield 1987). However, the multisensory nature of perception also raises the question of how the different sense organs cooperate so as to form a coherent representation of the world. In recent years, this has been the focus of much behavioral and neuroscientific research (Calvert et al. 2004). The most commonly held view among researchers in multisensory perception is what has been referred to as the "assumption of unity." It states that the more (amodal) properties information from different modalities shares, the more likely the brain will treat that information as originating from a common object or source (see, e.g., Bedford 1989; Bertelson 1999; Radeau 1994; Stein and Meredith 1993; Welch 1999; Welch and Warren 1980). Without a doubt, the most important amodal
property is temporal coincidence (e.g., Radeau 1994). From this perspective, one expects intersensory interactions to occur if, and only if, information from the different sense organs arrives at around the same time in the brain; otherwise, two separate events are perceived rather than a single multimodal one. The perception of time and, in particular, synchrony between the senses is not straightforward because there is no dedicated sense organ that registers time in an absolute scale. Moreover, to perceive synchrony, the brain has to deal with differences in physical (outside the body) and neural (inside the body) transmission times. Sounds, for example, travel through air much slower than visual information does (i.e., 300,000,000 m/s for vision vs. 330 m/s for audition), whereas no physical transmission time through air is involved for tactile stimulation as it is presented directly at the body surface. The neural processing time also differs between the senses, and it is typically slower for visual than it is for auditory stimuli (approximately 50 vs. 10 ms, respectively), whereas for touch, the brain may have to take into account where the stimulation originated from as the traveling time from the toes to the brain is longer than from the nose (the typical conduction velocity is 55 m/s, which results in a ~30 ms difference between toe and nose when this distance is 1.60 m; Macefield et al. 1989). Because of these differences, one might expect that for audiovisual events, only those occurring at the so-called “horizon of simultaneity” (Pöppel 1985; Poppel et al. 1990)—a distance of approximately 10 to 15 m from the observer—will result in the approximate synchronous arrival of auditory and visual information at the primary sensory cortices. Sounds will arrive before visual stimuli if the audiovisual event is within 15 m from the observer, whereas vision will arrive before sounds for events farther away. Although surprisingly, despite these naturally occurring lags, observers perceive intersensory synchrony for most multisensory events in the external world, and not only for those at 15 m. In recent years, a substantial amount of research has been devoted to understanding how the brain handles these timing differences (Calvert et al. 2004; King 2005; Levitin et al. 2000; Spence and Driver 2004; Spence and Squire 2003). Here, we review several key issues about intersensory timing. We start with a short overview of how intersensory timing is generally measured, and then discuss several factors that affect the point of subjective simultaneity and sensitivity. In the sections that follow, we address several ways in which the brain might deal with naturally occurring lags between the senses.
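The "horizon of simultaneity" mentioned above follows directly from the numbers given in this paragraph, and a quick back-of-the-envelope computation (ours, using the approximate values quoted in the text) shows where it lands:

```python
# Approximate values from the text: visual neural processing takes ~50 ms and
# auditory ~10 ms, so vision lags audition by ~40 ms inside the head, while
# sound travels through air at ~330 m/s (light's travel time is negligible
# over these distances).
extra_neural_delay_vision_s = 0.050 - 0.010
speed_of_sound_m_per_s = 330.0

# Distance at which the sound's travel time through air equals vision's extra
# neural delay, so both signals reach their cortices at about the same time.
horizon_m = extra_neural_delay_vision_s * speed_of_sound_m_per_s
print(f"horizon of simultaneity = {horizon_m:.1f} m")  # about 13 m, i.e., ~10-15 m
```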
9.2 MEASURING INTERSENSORY SYNCHRONY: TEMPORAL ORDER JUDGMENT TASK AND SIMULTANEITY JUDGMENT TASK Before examining some of the basic findings, we first devote a few words on how intersensory synchrony is usually measured. There are two classic tasks that have been used most of the time in the literature. In both tasks, observers are asked to judge—in a direct way—the relative timing of two stimuli from different modalities: the temporal order judgment (TOJ) task and the simultaneity judgment (SJ) task. In the TOJ task, stimuli are presented in different modalities at various stimulus onset asynchronies (SOA; Dixon and Spitz 1980; Hirsh and Sherrick 1961; Sternberg and Knoll 1973), and observers may judge which stimulus came first or which came second. In an audiovisual TOJ task, participants may thus respond with “sound-first” or “light-first.” If the percentage of “sound-first” responses is plotted as a function of the SOA, one usually obtains an S-shaped logistic psychometric curve. From this curve, one can derive two measures: the 50% crossover point, and the steepness of the curve at the 50% point. The 50% crossover point is the SOA at which observers were—presumably—maximally unsure about temporal order. In general, this is called the “point of subjective simultaneity” (PSS) and it is assumed that at this SOA, the information from the different modalities is perceived as being maximally simultaneous. The second measure—the steepness at the crossover point—reflects the observers’ sensitivity to temporal asynchronies. The steepness can also be expressed in terms of the just noticeable difference (JND; half the difference in SOA between the 25% and 75% point), and it represents the smallest interval observers can reliably
notice. A steep psychometric curve thus implies a small JND, and sensitivity is thus good as observers are able to detect small asynchronies (see Figure 9.1). The second task that has been used often is the SJ task. Here, stimuli are also presented at various SOAs, but rather than judging which stimulus came first, observers now judge whether the stimuli were presented simultaneously or not. In the SJ task, one usually obtains a bell-shaped Gaussian curve if the percentage of “simultaneous” responses is plotted as a function of the SOA. For the audiovisual case, the raw data are usually not mirror-symmetric, but skewed toward more “simultaneous” responses on the “light-first” side of the axis. Once a curve is fitted on the raw data, one can, as in the TOJ task, derive the PSS and the JND: the peak of the bell shape corresponds to the PSS, and the width of the bell shape corresponds to the JND. The TOJ and SJ tasks have, in general, been used more or less interchangeably, despite the fact that comparative studies have found differences in performance measures derived from both tasks. Possibly, it reflects that judgments about simultaneity and temporal order are based on different sources of information (Hirsh and Fraisse 1964; Mitrani et al. 1986; Schneider and Bavelier 2003; Zampini et al. 2003a). As an example, van Eijk et al. (2008) examined task effects on the PSS. They presented observers a sound and light, or a bouncing ball and an impact sound at various SOAs, and had them perform three tasks: an audiovisual TOJ task (“sound-first” or “light-first” responses required), an SJ task with two response categories (SJ2; “synchronous” or “asynchronous” responses required), and an SJ task with three response categories (SJ3; “sound-first,” “synchronous,” or “light-first” responses required). Results from both stimulus types showed that the individual PSS values for the two SJ tasks correlated well, but there was no correlation between the
FIGURE 9.1 S-shaped curve that is typically obtained for a TOJ task (sound or light first?) and a bell-shaped curve typically obtained in a simultaneity task (SJ; synchronous or asynchronous?), both plotted as the percentage of "synchronous" or "V-first" responses against stimulus onset asynchrony (in ms). Stimuli from different modalities are presented at varying SOAs, ranging from clear auditory-first (A-first) to clear vision-first (V-first). In a TOJ task, the participant's task is to judge which stimulus comes first, sound or light, whereas in a SJ task, subjects judge whether stimuli are synchronous or not. The PSS represents the interval at which information from different modalities is perceived as being maximally simultaneous (~0 ms). In a SJ task, this is the point at which the most synchronous responses are given; in a TOJ task, it is the point at which 50% of responses is vision-first and 50% is auditory-first. The JND represents the smallest interval observers can reliably notice (in this example ~27 ms). In a SJ task, this is the average interval (of A-first and V-first) at which a participant responds with 75% synchronous responses. In a TOJ task, it is the difference in SOA at the 25% and 75% points divided by two.
TOJ and SJ tasks. This made the authors conclude, arguably, that the SJ task should be preferred over the TOJ task if one wants to measure perception of audiovisual synchrony. In our view, there is no straightforward solution about how to measure the PSS or JND for intersensory timing because the tasks are subject to different kinds of response biases (see Schneider and Bavelier 2003; Van Eijk et al. 2008; Vatakis et al. 2007, 2008b for discussion). In the TOJ task, in which only temporal order responses can be given (“sound-first” or “light-first”), observers may be inclined to adopt the assumption that stimuli are never simultaneous, which thus may result in rather low JNDs. On the other hand, in the SJ task, observers may be inclined to assume that stimuli actually belong together because the “synchronous” response category is available. Depending on criterion settings, this may result in many “synchronous” responses, and thus, a wide bell-shaped curve which will lead to the invalid conclusion that sensitivity is poor. In practice, both the SJ and TOJ task will have their limits. The SJ2 task suffers heavily from the fact that observers have to adopt a criterion about what counts as “simultaneous/nonsimultaneous.” And in the SJ3 task, the participant has to dissociate sound-first stimuli from synchronous ones, and light-first stimuli from synchronous ones. Hence, in the SJ3 task there are two criteria: a “sound-first/ simultaneous” criterion, and a “light-first/simultaneous” criterion. If observers change, for whatever reason, their criterion (or criteria) along the experiment or between experimental manipulations, it changes the width of the curve and the corresponding JND. If sensitivity is the critical measure, one should thus be careful using the SJ task because JNDs depend heavily on these criterion settings. A different critique can be applied to the TOJ task. Here, the assumption is made that observers respond at about 50% for each of the two response alternatives when maximally unsure about temporal order. Although in practice, participants may adopt a different strategy and respond, for example, “sound-first” (and others may, for arbitrary reasons, respond “light-first”) whenever unsure about temporal order. Such a response bias will shift the derived 50% point toward one side of the continuum or the other, and the 50% point will then not be a good measure of the PSS, the point at which simultaneity is supposed to be maximal. If performance of an individual observer on an SJ task is compared with a TOJ task, it should thus not come as too big of a surprise that the PSS and JND derived from both tasks do not converge.
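To make the PSS and JND definitions used throughout this chapter concrete, here is a minimal sketch of fitting a logistic psychometric function to TOJ data and reading off both measures. This is our illustration, not a procedure prescribed by the authors; the data are made up, and a least-squares fit to response proportions is used for brevity, whereas maximum-likelihood fits to trial-level responses are more common in practice.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical TOJ data: SOA in ms (negative = auditory first) and the
# proportion of "vision-first" responses observed at each SOA.
soa = np.array([-80, -60, -40, -20, 0, 20, 40, 60, 80], dtype=float)
p_vfirst = np.array([0.03, 0.06, 0.15, 0.33, 0.55, 0.74, 0.88, 0.95, 0.98])

def logistic(x, pss, slope):
    """Logistic psychometric function; pss is the 50% crossover point."""
    return 1.0 / (1.0 + np.exp(-(x - pss) / slope))

(pss, slope), _ = curve_fit(logistic, soa, p_vfirst, p0=[0.0, 20.0])

def soa_at(p):
    """SOA at which the fitted curve predicts proportion p of vision-first."""
    return pss + slope * np.log(p / (1.0 - p))

# JND: half the difference between the 75% and 25% points, as defined above.
jnd = (soa_at(0.75) - soa_at(0.25)) / 2.0
print(f"PSS = {pss:.1f} ms, JND = {jnd:.1f} ms")
```

An analogous fit of a Gaussian to the proportion of "synchronous" responses gives the PSS (peak location) and a width-based JND for the SJ task.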
9.3 POINT OF SUBJECTIVE SIMULTANEITY The naïve reader might think that stimuli from different modalities are perceived as being maximally simultaneous if they are presented the way nature does, that is, synchronous, so at 0 ms SOA. Although surprisingly, most of the time, this is not the case. For audiovisual stimuli, the PSS is usually shifted toward a visual–lead stimulus, so perceived simultaneity is maximal if vision comes slightly before sounds (e.g., Kayser et al. 2008; Lewald and Guski 2003; Lewkowicz 1996; Slutsky and Recanzone 2001; Zampini et al. 2003a, 2005b, 2005c). This bias was found in a classic study by Dixon and Spitz (1980). Here, participants monitored continuous videos consisting of an audiovisual speech stream or an object event consisting of a hammer hitting a peg. The videos started off in synchrony and were then gradually desynchronized at a constant rate of 51 ms/s up to a maximum asynchrony of 500 ms. Observers were instructed to respond as soon as they noticed the asynchrony. They were better at detecting the audiovisual asynchrony if the sound preceded the video rather than if the video preceded the sound (131 vs. 258 ms thresholds for speech, and 75 vs. 188 ms thresholds for the hammer, respectively). PSS values also pointed in the same direction as simultaneity was maximal when the video preceded the audio by 120 ms for speech, and by 103 ms for the hammer. Many other studies have reported this vision-first PSS (Dinnerstein and Zlotogura 1968; Hirsh and Fraisse 1964; Jaskowski et al. 1990; Keetels and Vroomen 2005; Spence et al. 2003; Vatakis and Spence 2006a; Zampini et al. 2003a), although some also reported opposite results (Bald et al. 1942; Rutschmann and Link 1964; Teatini et al. 1976; Vroomen et al. 2004). There have been many speculations about the underlying reason for this overall visual–lead asymmetry, the main one being that observers are tuned toward the natural situation in which lights arrive before
sounds on the sense organs (King and Palmer 1985). There will then be a preference for vision to have a head start over sound so as to be perceived as simultaneous. Besides this possibility, though, there are many other reasons why the PSS can differ quite substantially from 0 ms SOA. To point out just a few: the PSS depends, among other factors, on stimulus intensity (more intense stimuli are processed faster or come to consciousness more quickly; Jaskowski 1999; Neumann and Niepel 2004; Roefs 1963; Sanford 1971; Smith 1933), stimulus duration (Boenke et al. 2009), the nature of the response that participants have to make (e.g., "Which stimulus came first?" vs. "Which stimulus came second?"; see Frey 1990; Shore et al. 2001), individual differences (Boenke et al. 2009; Mollon and Perkins 1996; Stone et al. 2001), and the modality to which attention is directed (Mattes and Ulrich 1998; Schneider and Bavelier 2003; Shore et al. 2001, 2005; Stelmach and Herdman 1991; Zampini et al. 2005c). We do not intend to list all the factors known thus far, but we only pick out the one that has been particularly important in theorizing about perception in general, that is, the role of attention.
9.3.1 Attention Affecting PSS: Prior Entry A vexing issue in experimental psychology is the idea that attention speeds up sensory processing. Titchener (1908) termed it the “law of prior entry,” implying that attended objects come to consciousness more quickly than unattended ones. Many of the old studies on prior entry suffered from the fact that they might simply reflect response biases (see Schneider and Bavelier 2003; Shore et al. 2001; Spence et al. 2001; Zampini et al. 2005c for discussions on the role of response bias in prior entry). As an example, observers may, whenever unsure, just respond that the attended stimulus was presented first without really having that impression. This strategy would reflect a change in decision criterion rather than a low-level sensory interaction between attention and the attended target stimulus. To disentangle response biases from truly perceptual effects, Spence et al. (2001) performed a series of important TOJ experiments in which visual–tactile, visual–visual, or tactile– tactile stimulus pairs were presented from the left or right of fixation. The focus of attention was directed toward either the visual or tactile modality by varying the probability of each stimulus modality (e.g., in the attend–touch condition, there were 50% tactile–tactile pairs, 0% visual–visual, and 50% critical tactile–visual pairs). Participants had to indicate whether the left or right stimulus was presented first. The idea tested was that attention to one sensory modality would speed up perception of stimuli in that modality, thus resulting in a change of the PSS (see also Mattes and Ulrich 1998; Schneider and Bavelier 2003; Shore et al. 2001, 2005; Stelmach and Herdman 1991; Zampini et al. 2005c). Their results indeed supported this notion: when attention was directed to touch, visual stimuli had to lead by much greater intervals (155 ms) than when attention was directed to vision (22 ms) for them to be perceived as simultaneous. Additional experiments demonstrated that attending to one side (left or right) also speeded perception of stimuli presented at that side. Therefore, both spatial attention and attention to modality were effective in shifting the PSS, presumably because they speeded up perceptual processes. To minimize the contribution of any simple response bias on the PSS, Spence et al. (2001) performed these experiments in which attention was manipulated in a dimension (modality or side) that was orthogonal to that of responding (side or modality, respectively). Thus, while attending to vision or touch, participants had to judge which side came first; and while attending to the left or right, participants judged which modality came first. The authors reported similar shifts of the PSS in these different tasks, thus favoring a perceptual basis for prior entry. Besides such behavioral data, there is also extensive electrophysiological support for the idea that attention affects perceptual processing. Very briefly, in the electroencephalogram (EEG) one can measure the event-related response (ERP) of stimuli that were either attended or unattended. Naïvely speaking, if attention speeds up stimulus processing, one would expect ERPs of attended stimuli to be faster than unattended ones. In a seminal study by Hillyard and Munte (1984), participants were presented a stream of brief flashes and tones on the left or right of fixation. The
participant’s task was to attend either the auditory or visual modality, and to respond to infrequent targets in that modality at an attended location (e.g., respond to a slightly longer tone on the left). The attended modality was constant during the experiment (but varied between subjects), and the relevant location was specified at the beginning of each block of trials. The authors found enhanced negativity in the ERP for stimuli at attended locations if compared to nonattended locations. The negativity started at about 150 ms poststimulus for visual stimuli and at about 100 ms for auditory stimuli. Evidence for a cross-modal link in spatial attention was also found, as the enhancement (although smaller) was also found for stimuli at the attended location in the unattended modality (see also Spence and Driver 1996; Spence et al. 2000 for behavioral results). Since then, analogous results have been found by many others. For example, Eimer and Schröger (1998) found similar results using a different design in which the side of the attended location varied from trial to trial. Again, their results demonstrated enhanced negativities (between 160 and 280 ms after stimulus onset) for attended locations as compared to unattended locations, and the effect was again bigger for the relevant rather than irrelevant modality. The critical issue for the idea prior entry is whether these ERP effects also reflect that attended stimuli are processed faster. In most EEG studies, attention affects the amplitude of the ERP rather than speed (for a review, see Eimer and Driver 2001). The problem is that there are many other interpretations for an amplitude modulation rather than increased processing speed (e.g., less smearing of the EEG signal over trials if attended). A shift in the latencies of the ERP would have been easier to interpret in terms of increased processing speed, but the problem is that even if a latency shift in the ERP is obtained, it is usually small if compared to the behavioral data. As an example, in an ERP study by Vibell et al. (2007), attention was directed toward the visual or tactile modality in a visual–tactile TOJ task. Results showed that the peak latency of the visual evoked potentials (P1 and N1) was earlier when attention was directed to vision (P1 = 147 ms, and N1 = 198 ms) rather than when directed to touch (P1 = 151 ms, and N1 = 201 ms). This shift in the P1 may be taken as evidence that attention indeed speeds up perception in the attended modality, but it should also be noted that the 4-ms shift in the ERP is in a quite different order of magnitude than the 38 ms shift of the PSS in the behavioral data, or the 133 ms shift reported by Spence et al. (2001) in a similar study. In conclusion, there is both behavioral and electrophysiological support for the idea that attention speeds up perceptual processing, but the underlying neural mechanisms remain, for the time being, elusive.
9.4 SENSITIVITY FOR INTERSENSORY ASYNCHRONY

Besides the point at which simultaneity is perceived to be maximal (the PSS), the second measure that one can derive from the TOJ and SJ task—but which is unfortunately not always reported—is the observers' sensitivity to timing differences, the JND. The sensitivity to intersensory timing differences is not only of interest for theoretical reasons, but it is also of practical importance, for example, in video broadcasting or multimedia Internet where standards are required for allowable audio or video delays (Finger and Davis 2001; Mortlock et al. 1997; Rihs 1995). One of the classic studies on sensitivity for intersensory synchrony was done by Hirsh and Sherrick (1961). They presented audio–visual, visual–tactile, and audio–tactile stimuli in a TOJ task and reported JNDs to be approximately 20 ms regardless of the modalities used. More recent studies, however, have found substantially bigger JNDs and larger differences between the sensory modalities. For simple cross-modal stimuli such as auditory beeps and visual flashes, JNDs have been reported in the order of approximately 25 to 50 ms (Keetels and Vroomen 2005; Zampini et al. 2003a, 2005b), but for audio–tactile pairs, Zampini et al. (2005a) obtained JNDs of about 80 ms, and for visual–tactile pairs, JNDs have been found in the order of 35 to 65 ms (Keetels and Vroomen 2008b; Spence et al. 2001). More importantly, JNDs are not constant, but have been shown to depend on various other factors like the spatial separation between the components of the stimuli, stimulus complexity, whether it is speech or not, and—more controversially—semantic congruency. Some of these factors will be described below.
9.4.1 Spatial Disparity Affects JND A factor that has been shown to affect sensitivity for intersensory timing is the spatial separation between the components of a stimulus pair. Typically, sensitivity for temporal order improves if the components of the cross-modal stimuli are spatially separated (i.e., lower JNDs; Bertelson and Aschersleben 2003; Spence et al. 2003; Zampini et al. 2003a, 2003b, 2005b). Bertelson and Aschersleben, for example, reported audiovisual JNDs to be lower when a beep and a flash were presented from different locations rather than from a common and central location. Zampini et al. (2003b) qualified these findings and observed that sensitivity in an audiovisual TOJ task improved if the sounds and lights were presented from different locations, but only so if presented at the left and right from the median (at 24°). No effect of separation was found for vertically separated stimuli. This made Zampini et al. conclude that the critical factor for the TOJ improvement was that the individual components of an audiovisual stimulus were presented in different hemifields. Keetels and Vroomen (2005), though, examined this notion and varied the (horizontal) size of the spatial disparity. Their results showed that JNDs also improved when spatial disparity was large rather than small, even if stimuli did not cross hemifields. Audiovisual JNDs thus depend on both the relative position from which stimuli are presented and on whether hemifields are crossed or not. Spence et al. (2001) further demonstrated that sensitivity improves for spatially separated visual–tactile stimulus pairs, although no such effect was found for audio–tactile pairs (Zampini et al. 2005a). In blind people, on the other hand, audio–tactile temporal sensitivity was found to be affected by spatial separation (Occelli et al. 2008) and similar spatial modulation effects were demonstrated in rear space (Kitagawa 2005). What is the underlying reason that sensitivity to temporal differences improves if the sources are spatially separated? Or, why does the brain fail to notice temporal intervals when stimuli comes from a single location? Two accounts have been proposed (Spence et al. 2003). First, it has been suggested that intersensory pairing impairs sensitivity for temporal order. The idea underlying “intersensory pairing” is that the brain has a list of criteria on which it decides whether information from different modalities belong together or not. Commonality in time is, without a doubt, a very important criterion, but there may be others like commonality in space, association based on cooccurrence, or semantic congruency. Stimuli from the same location may, for this reason, be more likely paired into a single multimodal event if compared to stimuli presented far apart (see Radeau 1994). Any such tendency to pair stimuli could then make the relative temporal order of the components lost, thereby worsening the temporal sensitivity in TOJ or SJ tasks. In contrast with this notion, many cross-modal effects occur despite spatial discordance, and there are reasons to argue that spatial congruency may not be an important criterion for intersensory pairing (Bertelson 1994; Colin et al. 2001; Jones and Munhall 1997; Keetels et al. 2007; Keetels and Vroomen 2007, 2008a; Stein et al. 1996; Teder-Salejarvi et al. 2005; Vroomen and Keetels 2006). But why, then, does sensitivity for temporal order improve with spatially separated stimuli if not because intersensory pairing is impeded? 
A second reason why JNDs may improve is that of spatial redundancy. Whenever multisensory information is presented from different locations, observers actually have extra spatial information on which to base their response. That is, observers may initially not know which modality had been presented first, but still know on which side the first stimulus appeared, and they may then infer which modality had been presented first. As an example, in an audiovisual TOJ task, an observer may have noticed that the first stimulus came from the left (possibly because attention was captured by the first stimulus toward that side). They may also remember that the light was presented on the right. By inference, then, the sound must have been presented first. Sensitivity for temporal order for spatially separated stimuli then improves because there are extra spatial cues that are not present for colocated stimuli.
9.4.2 Stimulus Complexity Affects JND

Many studies exploring temporal sensitivity have used relatively simple stimuli such as flashes and beeps that have a single and rather sharp transient onset. However, in real-world situations, the brain has to deal with much more complex stimuli that often have complicated variations in temporal structure over time (e.g., seeing and hearing someone speaking; or seeing, hearing, and touching the keys on a computer keyboard). How does the brain notice timing differences between these more complicated and dynamic stimuli? Theoretically, one might expect that more complex stimuli also provide a richer base on which to judge temporal order. Audiovisual speech would be the example "par excellence" because it is rich in content and fluctuating over time. In fact, however, several studies have found the opposite, and in particular for audiovisual speech, the "temporal window" for which the auditory and visual streams are perceived as synchronous is rather wide (Conrey and Pisoni 2006; Dixon and Spitz 1980; Jones and Jarick 2006; Stekelenburg and Vroomen 2007; a series of studies by Vatakis and Spence 2006a; Vatakis, Ghanzanfar and Spence 2008a; van Wassenhove et al. 2007). For example, in a study by van Wassenhove et al. (2007), observers judged in an SJ task whether congruent audiovisual speech stimuli and incongruent McGurk-like speech stimuli* (McGurk and MacDonald 1976) were synchronous or not. The authors found a temporal window of 203 ms for the congruent pairs (ranging from −76 ms sound-first to +127 ms vision-first, with PSS at 26 ms vision-first) and a 159 ms window for the incongruent pairs (ranging from −40 to +119 ms, with PSS at 40 ms vision-first). These windows are rather wide if compared to the much smaller windows found for simple flashes and beeps (mostly below 50 ms; Hirsh and Sherrick 1961; Keetels and Vroomen 2005; Zampini et al. 2003a, 2005b). The relatively wide temporal window for complex stimuli has also been demonstrated by indirect tests. For example, the McGurk effect was found to diminish if the auditory and visual information streams are out of sync, but this only occurred at rather long intervals (comparable with the ones found in SJ tasks; Grant et al. 2004; Massaro et al. 1996; McGrath and Summerfield 1985; Munhall et al. 1996; Pandey et al. 1986; Tanaka et al. 2009b; van Wassenhove et al. 2007). There have been several recent attempts to compare sensitivity for intersensory timing in audiovisual speech with other audiovisual events such as music (guitar and piano) and object actions (e.g., smashing a television set with a hammer, or hitting a soda can with a block of wood; Vatakis and Spence 2006a, 2006b). Observers made TOJs about which stream (auditory or visual) appeared first. Overall, results showed better temporal sensitivity for audiovisual stimuli of "lower complexity" in comparison with stimuli having continuously varying properties (i.e., syllables vs. words and/or sentences). Similar findings were reported by Stekelenburg and Vroomen (2007), who compared JNDs of audiovisual speech (pronunciation of the syllable /bi/) with that of natural nonspeech events (a video of a handclap) in a TOJ task. Again, JNDs were much better for the nonspeech events (64 ms) than for speech (105 ms). On the basis of these findings, some have concluded that "speech is special" (van Wassenhove et al. 2007; Vatakis et al.
2008a) or that when “stimulus complexity” increases, sensitivity for temporal order deteriorates (Vatakis and Spence 2006a). In our view, however, these proposals do not really clarify the issue because “speech is special” and “stimulus complexity” are both ill-defined notions, and most likely these concepts are confounded with other stimulus factors that can be described more clearly. As an example, it is known that the rate at which stimuli are presented affects audiovisual JNDs for intersensory timing (Benjamins et al. 2008; Fujisaki and Nishida 2005). Sensitivity may also be affected by whether there is anticipatory information that predicts the onset of an audiovisual event (Stekelenburg and Vroomen 2007; Van Eijk 2008; Vroomen and Stekelenburg 2009), and by whether there is a sharp transition that can serve as a temporal anchor (Fujisaki and Nishida 2005). Each of these stimulus characteristics—and likely many others—needs to be controlled if one wants to compare across stimuli in a nonarbitrary way. Below, we address some of these factors.
* In the McGurk illusion (McGurk and MacDonald 1976), the perception of otherwise unambiguous speech tokens is modified by the simultaneous presentation of visually incongruent articulatory gestures. Typically, when presented with an auditory syllable /ba/ dubbed onto a face articulating /ga/, participants report hearing /da/. The occurrence of this so-called McGurk effect has been taken as a particularly powerful demonstration of the use of visual information in speech perception.
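The JND and PSS measures referred to throughout this section are typically obtained by fitting a psychometric function to TOJ or SJ responses collected at a range of SOAs. The sketch below is purely illustrative (the data points are invented and no particular study's fitting procedure is implied); it fits a cumulative Gaussian to hypothetical "vision-first" response proportions from an audiovisual TOJ task, taking the 50% point as the PSS and the distance to the 75% point as a JND-like measure.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# Hypothetical TOJ data: SOA in ms (negative = sound first, positive = light first)
# and the proportion of "light first" responses at each SOA (made-up numbers).
soa = np.array([-240, -120, -60, -30, 0, 30, 60, 120, 240], dtype=float)
p_light_first = np.array([0.02, 0.10, 0.25, 0.40, 0.55, 0.70, 0.85, 0.95, 0.99])

def cum_gauss(x, pss, sigma):
    """Cumulative Gaussian psychometric function."""
    return norm.cdf(x, loc=pss, scale=sigma)

(pss, sigma), _ = curve_fit(cum_gauss, soa, p_light_first, p0=[0.0, 60.0])

# PSS: SOA at which "light first" is reported 50% of the time.
# JND: here taken as the SOA change needed to move from 50% to 75% responses,
# i.e., sigma * z(0.75); other conventions are also in use.
jnd = sigma * norm.ppf(0.75)
print(f"PSS = {pss:.1f} ms, JND = {jnd:.1f} ms")
```

Note that conventions for deriving the JND differ across studies (e.g., half the 25–75% interval, or the 84% point of the fitted curve), which is one reason reported JNDs are not always directly comparable.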
9.4.3 Stimulus Rate Affects JND
It has been demonstrated that perception of intersensory synchrony breaks down if stimuli are presented at temporal frequencies above ~4 Hz. This is very slow if compared to unimodal visual or auditory sensitivity for temporal coherence. Fujisaki and Nishida (2005) examined this using audiovisual stimuli consisting of a luminance-modulated Gaussian blob and amplitude-modulated white noise presented at various rates. They demonstrated that synchrony–asynchrony discrimination for temporally dense random pulse trains became nearly impossible at temporal frequencies larger than 4 Hz, even when the audiovisual interval was large enough for discrimination of single pulses (the discrimination thresholds were 75, 81, and 119 ms for single pulses and for 2- and 4-Hz repetitive stimuli, respectively). This 4-Hz boundary was also reported by Benjamins et al. (2008). They explored the temporal limit of audiovisual integration using a visual stimulus that alternated in color (red or green) and a sound that alternated in frequency (high or low). Observers had to indicate which sound (high or low) accompanied the red disk. Their results demonstrated that at rates of 4.2 Hz and higher, observers were no longer able to match the visual and auditory stimuli across modalities (the proportion of correct matches dropped from 0.9 at 1.9 Hz to 0.5 at 4.2 Hz). Further experiments also demonstrated that manipulating other temporal stimulus characteristics such as the stimulus offsets and/or audiovisual SOAs did not change the 4-Hz threshold. Here, it should be mentioned that the 4-Hz rate is also the approximate rate at which syllables are spoken in continuous speech, and temporal order in audiovisual speech might thus be difficult simply because stimulus presentation is too fast, and not because speech is special.*
* It has also been reported that the presentation rate may shift the PSS. In a study by Arrighi et al. (2006), participants were presented with a video of hands drumming on a conga at various rates (1, 2, and 4 Hz). Observers were asked to judge whether the auditory and visual streams appeared to be synchronous or not (an SJ task). Results showed that the auditory delay for maximum simultaneity (the PSS) varied inversely with drumming tempo, from about 80 ms at 1 Hz, and 60 ms at 2 Hz, to 40 ms at 4 Hz. Video sequences of random drumming motion and of a disk moving along the motion profile of the drummer’s hands produced similar results, with higher tempos requiring less auditory delay.
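A back-of-the-envelope calculation helps to show how tight the timing problem becomes at these rates. The fragment below simply relates the pulse-train period to the thresholds quoted above; the interpretation in the comments is offered only as an illustration, not as an analysis from the cited studies.

```python
# Compare the period of a repetitive audiovisual pulse train with the
# discrimination thresholds quoted above (Fujisaki and Nishida 2005).
thresholds_ms = {"single pulse": 75, "2 Hz train": 81, "4 Hz train": 119}

for rate_hz in (2, 4):
    period_ms = 1000.0 / rate_hz      # interval between successive pulses
    half_period_ms = period_ms / 2.0  # beyond this lag, the neighbouring pulse is closer
    print(f"{rate_hz} Hz: period = {period_ms:.0f} ms, half period = {half_period_ms:.0f} ms")

# At 4 Hz the half period (125 ms) is already close to the 119-ms threshold, so an
# asynchrony large enough to be discriminated is nearly large enough to be matched
# to the wrong pulse, which may be one way to think about why performance collapses.
print(thresholds_ms)
```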
9.4.4 Predictability Affects JND
Another factor that may play a role in intersensory synchrony judgments, but one that has not yet been studied extensively, is the extent to which (one of the components of) a multisensory event can be predicted. As an example, for many natural events—such as the clapping of hands—vision provides predictive information about when a sound is to occur, as there is visual anticipatory information about sound onset. Such anticipatory information allows observers to make a clear prediction about when a sound will occur, and this might improve sensitivity for temporal order. A study by van Eijk et al. (2008, Chapter 4) is of relevance here. They explored the effect of visual predictive information (or, as the authors called it, “apparent causality”) on perceived audiovisual synchrony. Visual predictive information was either present or absent, depending on whether all or part of a Newton’s cradle toy was shown (i.e., a ball that appears to fall from a suspended position on the left of the display, strikes the leftmost of four contiguous balls, and then launches the rightmost ball into an arcing motion away from the other balls). The collision of the balls was accompanied by a sound that varied around the time of the impact. The predictability of the sound was varied by showing either the left side of the display (motion followed by a collision and sound, so that visual motion predicted sound occurrence) or the right side of the display (a sound followed by visual motion, so no predictive information about sound onset). In line with the argument made here, the authors reported better temporal sensitivity when visual predictive information about sound onset was available (the left display) than when it was absent (the right display).
9.4.5 Does Intersensory Pairing Affect JND?
A more controversial issue in the literature on intersensory timing is the extent to which information from different modalities is treated by the brain as belonging to the same event. Some have discussed it under the already mentioned notion of “intersensory pairing,” others under the “unity assumption” (Welch and Warren 1980). The idea is that observers find it difficult to judge temporal order if the information streams naturally belong together, for reasons other than temporal coincidence, because there is then more intersensory integration, in which case temporal order information is lost. Several studies have examined this issue but with varying outcomes. In a study by Vatakis and Spence (2007), participants judged the temporal order of audiovisual speech stimuli that varied in gender and phonemic congruency. Face and voice could be congruent or incongruent in gender (a female face articulating /pi/ with the sound of either a female or male /pi/) or in phonemic content (a face saying /ba/ with a voice saying /ba/ or /da/). In support of the unity assumption, results showed that for both the gender and phonemic congruency manipulations, sensitivity for temporal order improved if the auditory and visual streams were incongruent rather than congruent. In a recent study, Vatakis et al. (2008a) qualified these findings and reported that this effect may be specific to human speech. In this study, the effect of congruency was examined using matching or mismatching call types of monkeys (“cooing” vs. “grunt” or threat calls). For audiovisual speech, sensitivity for temporal order was again better for the incongruent rather than congruent trials, but there was no congruency effect for the monkey calls. In another study, Vatakis and Spence (2008) also found no congruency effect for audiovisual music and object events that either matched (e.g., the sight of a note being played on a piano together with the corresponding sound, or the video of a bouncing ball with a corresponding sound) or mismatched. At this stage, it therefore appears that the “unity assumption” may only apply to audiovisual speech. It leaves one to wonder, though, whether this effect is best explained in terms of the “special” nature of audiovisual speech, or whether other factors are at play (e.g., the high level of exposure to speech stimuli in daily life, the possibly more attention-grabbing nature of speech stimuli, or the specific low-level acoustic stimulus features of speech; Vatakis et al. 2008a).
9.5 HOW THE BRAIN DEALS WITH LAGS BETWEEN THE SENSES
In any multisensory environment, the brain has to deal with lags in arrival and processing time between the different senses. Surprisingly, though, temporal coherence is usually maintained despite these lags, and only in exceptional circumstances, such as thunder that is heard after the lightning is seen, is a single multisensory event perceived as separated in time. This raises the question of how temporal coherence is maintained. In our view, at least four options are available: (1) the brain might be insensitive to small lags, or it could just ignore them (a window of temporal integration); (2) the brain might be “intelligent” and bring deeply rooted knowledge about the external world into play that allows it to compensate for various external factors; (3) the brain might be flexible and shift its criterion about synchrony in an adaptive fashion (recalibration); or (4) the brain might actively shift the time at which one information stream is perceived to occur toward the other (temporal ventriloquism). Below, we discuss each of these notions. It should be noted beforehand that these options are not mutually exclusive.
9.5.1 Window of Temporal Integration
The first notion, that the brain is rather insensitive to lags, comes close to the idea that there is a “window of temporal integration.” Any information that falls within this hypothetical window is potentially assigned to the same external event, and streams within the window are then treated as
having occurred simultaneously (see Figure 9.2, panel 1). Many have alluded to this concept, but what is less satisfying about it is that it is basically a description rather than an explanation. To make this point clear, some have reported that the temporal window for audiovisual speech can be quite large because it can range from approximately 40 ms audio-first to 240 ms vision-first. However, sensitivity for intersensory asynchronies (the JND) is usually much smaller than the size of this window. For example, Munhall et al. (1996) demonstrated that exact temporal coincidence between the auditory and visual parts of audiovisual speech stimuli is not a very strict constraint on the McGurk effect (McGurk and MacDonald 1976). Their results demonstrated that the McGurk effect was biggest when vowels were synchronized (see also McGrath and Summerfield 1985), but the effect survived even if audition lagged vision by 180 ms (see also Soto-Faraco and Alsius 2007, 2009; these studies
FIGURE 9.2 Synchrony can be perceived despite lags. How is this accomplished? Four possible mechanisms are depicted for audiovisual stimuli like a flash and a beep. Similar mechanisms might apply for other stimuli and other modality pairings. Time is represented on the x-axis, and accumulation of sensory evidence on the y-axis. A stimulus is time-stamped once it surpasses a sensory threshold. Stimuli in audition and vision are perceived as being synchronous if they occur within a certain time window. (1) The brain might be insensitive to naturally occurring lags because the window of temporal integration is rather wide. (2) The brain might compensate for predictable variability—here, sound distance—by adjusting the perceived occurrence of a sound in accordance with sound travel time. (3) Temporal recalibration. Three different mechanisms might underlie adaptation to asynchrony: (a) a shift in the criterion about synchrony for the adapted stimuli or modalities, (b) a widening of the temporal window for the adapted stimuli or modalities, and (c) a change in the threshold of sensory detection (when did the stimulus occur?) within one of the adapted modalities. (4) Temporal ventriloquism: a visual event is actively shifted toward an auditory event.
show that participants can still perceive a McGurk effect when they can quite reliably perform TOJs). Outside the speech domain, similar findings have been reported. In a study by Shimojo et al. (2001), the role of temporal synchrony was examined using the streaming–bouncing illusion (i.e., two identical visual targets that move across each other and are normally perceived as streaming motion are typically perceived to bounce when a brief sound is presented at the moment that the visual targets coincide; Sekuler et al. 1997). The phenomenon is dependent on the timing of the sound relative to the coincidence of the moving objects. Although it has been demonstrated that a brief sound induces the visual bouncing percept most effectively when it is presented about 50 ms before the moving objects coincide, their data furthermore showed a rather large temporal window of integration because intervals ranging from 250 ms before visual coincidence to 150 ms after coincidence still induced the bouncing percept (see also Bertelson and Aschersleben 1998, for the effect of temporal asynchrony on spatial ventriloquism; or Shams et al. 2002, for the illusory-flash effect). All these intersensory effects thus occur at asynchronies that are much larger than the JNDs normally reported when directly exploring the effect of asynchrony using TOJ or SJ tasks (van Wassenhove et al. 2007). One might argue that even though observers do notice small delays between the senses, the brain can still ignore them if that is of help for other purposes, such as understanding speech (Soto-Faraco and Alsius 2007, 2009). But the question then becomes why there is more than one window; that is, one for understanding, another for noticing timing differences. Besides varying with the purpose of the task, the width of the temporal window has also been found to vary for different kinds of stimuli. As already mentioned, the temporal window is much smaller for clicks and flashes than it is for audiovisual speech. However, why would the size be different for different stimuli? Does the brain have a separate window for each stimulus and each purpose? If so, we are left with explaining how and why it varies. Some have taken the concept of a window quite literally, and have argued that “speech is special” because the window for audiovisual speech is wide (van Wassenhove et al. 2007; Vatakis et al. 2008a). We would rather refrain from such speculations, and consider it more useful to examine what the critical features are that determine when perception of simultaneity becomes easy (a small window) or difficult (a large window). The size of the window is thus, in our view, the factor that needs to be explained rather than the explanation itself.
9.5.2 Compensation for External Factors
The second possibility—the intelligent brain that compensates for various delays—is a controversial issue that has received support mainly from studies that examined whether observers take distance into account when judging audiovisual synchrony (see Figure 9.2, panel 2). The relatively slow travel time of sounds through air causes natural differences in arrival time between sounds and lights. It implies that the farther away an audiovisual event, the more the sound will lag the visual stimulus, although such a lag might be compensated for by the brain if the distance were known. The brain might then treat a lagging sound as being synchronous with a light, provided that the audiovisual event occurred at the right distance. Some have indeed reported that the brain does just that, as judgments about audiovisual synchrony were found to depend on perceived distance (Alais and Carlile 2005; Engel and Dougherty 1971; Heron et al. 2007; Kopinska and Harris 2004). Others, however, have failed to demonstrate compensation for distance (Arnold et al. 2005; Lewald and Guski 2004). Sugita and Suzuki (2003) explored compensation for distance with an audiovisual TOJ task. The visual stimuli were delivered by light-emitting diodes (LEDs) at distances ranging from 1 to 50 m in free-field circumstances (intensity, although not size, was compensated for distance). Of importance, the sounds were delivered through headphones, and no attempt was made to equate the distance of the sound with that of the light. Note that this, in essence, undermines the whole idea that the brain compensates for lags of audiovisual events out in space. Nevertheless, PSS values were found to shift with visual stimulus distance. When the visual stimulus was 1 m away, the PSS was at about a
~5 ms sound delay, and the delay increased when the LEDs were farther away. The increment was consistent with the velocity of sound up to a viewing distance of about 10 m, after which it leveled off. This led the authors to conclude that lags between auditory and visual inputs are perceived as synchronous not because the brain has a wide temporal window for audiovisual integration, but because the brain actively changes the temporal location of the window depending on the distance of the source. Alais and Carlile (2005) came to similar conclusions, but with different stimuli. In their study, auditory stimuli were presented over a loudspeaker and auditory distance was simulated by varying the direct-to-reverberant energy ratio as a depth cue for sounds (Bronkhorst 1995; Bronkhorst and Houtgast 1999). The near sounds simulated a depth of 5 m and had substantial amounts of direct energy with a sharp transient onset; the far sounds simulated a depth of 40 m and did not have a transient. The visual stimulus was a Gaussian blob on a computer screen in front of the observer without variations in distance. Note that, again, no attempt was made to equate auditory and visual distance, thus again undermining the underlying notion. The effect of apparent auditory distance on temporal alignment with the blob on the screen was measured in a TOJ task. The authors found compensation for depth: the PSS in the audiovisual TOJ task shifted with the apparent distance of the sound in accordance with the speed of sound through air up to 40 m. On closer inspection of their data, however, it is clear that the shift in the PSS was mainly caused by the fact that sensitivity for intersensory synchrony became increasingly worse for more distant sounds. Judging from their figures, sensitivity for nearby sounds at 5 m was in the normal range, but for the most distant sound, sensitivity was extremely poor as it never reached plateau, and even at a sound delay of 200 ms, 25% of the responses were still “auditory-first” (see also Arnold et al. 2005; Lewald and Guski 2004). This suggests that observers, while performing the audiovisual TOJ task, could not use the onset of the far sound as a cue for temporal order, possibly because it lacked a sharp transient, and that they had to rely on other cues instead. Besides these controversial stimuli and data, there are others who simply failed to observe compensation for distance (Arnold et al. 2005; Heron et al. 2007; Lewald and Guski 2004; Stone et al. 2001). For example, Stone et al. (2001) used an audiovisual SJ task and varied stimulus–observer distances from 0.5 m in the near condition to 3.5 m in the far condition. This resulted in a 3-m difference that would theoretically correspond to an ~11 ms difference in the PSS if sound travel time were not compensated (a sound velocity of 330 m/s corresponds to ~3.5 m per 11 ms). For three out of five subjects, the PSS values were indeed shifted in that direction, which led the authors to conclude that distance was not compensated. Against this conclusion, it should be said that SJ tasks depend heavily on criterion settings, that “three out of five” is not persuasively above chance, and that the range of distances was rather restricted. Less open to these kinds of criticisms is a study by Lewald and Guski (2004). They used a rather wide range of distances (1, 5, 10, 20, and 50 m), and their audiovisual stimuli (a sequence of five beeps/flashes) were delivered by colocated speakers/LEDs placed in the open field.
Note that in this case, there were no violations of the “naturalness” of the audiovisual stimuli and that they were physically colocated. Using this setup, the authors did not observe compensation for distance. Rather, their results showed that when the physical observer–stimulus distance increased, the PSS shifted precisely with the variation in sound transmission time through air. For audiovisual stimuli that are far away, sounds thus had to be presented earlier than for nearby stimuli to be perceived as simultaneous, and there was no sign that the brain would compensate for sound travel time. The authors also suggested that the discrepancy between their findings and those of researchers who did find compensation for distance lies in the fact that the latter simulated distance rather than using the natural situation. Similar conclusions were also reached by Arnold et al. (2005), who examined whether the stream/bounce illusion (Sekuler et al. 1997) varies with distance. The authors examined whether the optimal time to produce a “bounce” percept varied with the distance of the display, which ranged from ~1 to ~15 m. The visual stimuli were presented on a computer monitor—keeping retinal properties constant—and the sounds were presented either over loudspeakers at these distances or over
headphones. The optimal time to induce a bounce percept shifted with the distance of the sound when the sounds were presented over loudspeakers, but there was no shift if the sound was presented over headphones. Similar effects of timing shifts with viewing distance after loudspeaker, but not headphone, presentation were obtained in an audiovisual TOJ task in which observers judged whether a sound came before or after two disks collided. This led the authors to conclude that there is no compensation for distance if distance is real and sounds are presented over speakers rather than simulated and presented over headphones. This conclusion might well be correct, but it raises the question of how to account for the findings of Kopinska and Harris (2004). These authors reported complete compensation for distance despite using colocated sounds and lights produced at natural distances. In their study, the audiovisual stimulus was a bright disk that flashed once on a computer monitor and was accompanied by a tone burst presented from the computer’s inbuilt speaker. Participants were seated at various distances from the screen (1, 4, 8, 16, 24, and 32 m) and made TOJs about the flash and the sound. The authors also selectively slowed down visual processing by presenting the visual stimulus at 20° of eccentricity rather than in the fovea, or by having observers wear darkened glasses. As an additional control, they used simple reaction time tasks and found that all these variations—distance, eccentricity, and dark glasses—had predictable effects on speeded auditory or visual reaction times. However, audiovisual simultaneity was not affected by distance, eccentricity, or darkened glasses. Thus, there was no shift in the PSS despite the fact that the changes in distance, illumination, and retinal location affected simple reaction times. This made the authors conclude that observers recover the external world by taking into account all kinds of predictable variations, most importantly distance, alluding to similar phenomena such as size or color constancy. There are thus some studies that varied audiovisual distance in a natural way but came to diametrically opposing conclusions: Lewald and Guski (2004) and Arnold et al. (2005) found no compensation for distance, whereas Kopinska and Harris (2004) reported complete compensation. What is the critical difference between them? Our conjecture is that they differ in two critical aspects, that is, (1) whether distance was randomized on a trial-by-trial basis or blocked, and (2) whether sensitivity for temporal order was good or poor. In the study by Lewald and Guski, the distance of the stimuli was varied on a trial-by-trial basis as they used a setup of five different speakers/LEDs. In Kopinska and Harris’s study, though, the distance between the observer and the screen was blocked over trials because otherwise subjects would have had to be shifted back and forth after each trial. If distance is blocked, then either adaptation to the additional sound lag may occur (i.e., recalibration), or subjects may adjust their response probabilities to the particular distance at which they are seated. Either way, the effect of distance on the PSS will diminish if trials are blocked, and no shift in the PSS will then be observed, leading to the “wrong” conclusion that distance is compensated. This line of reasoning corresponds with a recent study by Heron et al. (2007).
In their study, participants performed a TOJ task in which audiovisual stimuli (a white disk and a click) were presented at varying distances (0, 5, 10, 20, 30, and 40 m). Evidence for compensation was only found after a period of adaptation (1 min plus 5 top-up adaptation stimuli between trials) to the naturally occurring audiovisual asynchrony associated with a particular viewing distance. No perceptual compensation for distance-induced auditory delays could be demonstrated whenever there was no adaptation period (although we should note that in that study, observer distance was always blocked). The second potentially relevant difference between studies that do or do not demonstrate compensation is the difficulty of the stimuli. Lewald and Guski (2004) used a sequence of five pulses/sounds, whereas Kopinska and Harris (2004) presented a single sound/flash. In our experience, a sequence of pulses/flashes drastically improves accuracy for temporal order compared to a single pulse/flash because there are many more cues in the signal. In the study by Arnold et al. (2005), judgments about temporal order could also be relatively accurate because the two colliding disks provided anticipatory information about when to expect the sound. Most likely, observers in the study of Kopinska and Harris were inaccurate because their single sound/flash stimuli without anticipatory information were difficult (unfortunately, none of the studies reported JNDs). In effect,
this amounts to adding noise to the psychometric function, which then effectively masks the effect of distance on temporal order. It might easily lead one to falsely conclude that there is compensation for distance.
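The travel-time arithmetic that runs through this debate is straightforward, and making it explicit helps when reading the PSS shifts reported above. The fragment below is only an illustration: it uses the 330 m/s sound velocity mentioned in the text and the distances of Lewald and Guski (2004), and computes the auditory lag that would be expected at each observer–stimulus distance if the brain did not compensate for sound travel time.

```python
# Expected auditory lag (ms) at each observer-stimulus distance if sound travel
# time is not compensated; speed of sound taken as 330 m/s as in the text.
SPEED_OF_SOUND_M_PER_S = 330.0

def expected_auditory_lag_ms(distance_m: float) -> float:
    """Time for sound to travel distance_m, in milliseconds."""
    return 1000.0 * distance_m / SPEED_OF_SOUND_M_PER_S

for d in (1, 5, 10, 20, 50):  # distances used by Lewald and Guski (2004)
    print(f"{d:2d} m -> sound lags light by about {expected_auditory_lag_ms(d):5.1f} ms")

# With no compensation, the PSS should shift by roughly these amounts (sounds
# needing to be emitted earlier for far events), the pattern Lewald and Guski
# reported; full compensation would instead predict no shift at all.
```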
9.5.3 Temporal Recalibration
The third possibility of how the brain might deal with lags between the senses entails that the brain is flexible in what it counts as synchronous (see Figure 9.2, panel 3). This phenomenon is also known as “temporal recalibration.” Recalibration is a well-known phenomenon in the spatial domain, but it has only recently been demonstrated in the temporal domain (Fujisaki et al. 2004; Vroomen et al. 2004). As for the spatial case, more than a century ago, von Helmholtz (1867) had already shown that the visual–motor system is remarkably flexible, as it adapts to shifts of the visual field induced by wedge prisms. When prism-wearing subjects had to pick up a visually displaced object, they would quickly adapt to the new sensorimotor arrangement, and even after only a few trials, small visual displacements might go unnoticed. Recalibration was the term used to explain this phenomenon. In essence, recalibration is thought to be driven by a tendency of the brain to minimize discrepancies between the senses about objects or events that normally belong together. For the prism case, it is the position at which the hand is seen and felt. Nowadays, it is also known that the least reliable source is adjusted toward the more reliable one (Ernst and Banks 2002; Ernst et al. 2000; Ernst and Bulthoff 2004). The first evidence of recalibration in the temporal domain came from two studies with very similar designs: an exposure–test paradigm. Both Fujisaki et al. (2004) and Vroomen et al. (2004) first exposed observers to a train of sounds and light flashes with a constant but small intersensory interval, and then tested them using an audiovisual TOJ or SJ task. The idea was that observers would adapt to small audiovisual lags in such a way that the adapted lag is eventually perceived as synchronous. Therefore, after light-first exposure, light-first trials would be perceived as synchronous, and after sound-first exposure, a sound-first stimulus would be perceived as synchronous (see Figure 9.3). Both studies indeed observed that the PSS was shifted in the direction of the exposure lag. For example, Vroomen et al. exposed subjects for ~3 min to a sequence of sound bursts/light flashes with audiovisual lags of either ±100 or ±200 ms (sound-first or light-first). During the test, the PSS was shifted, on average, by 27 and 18 ms (the PSS difference between sound-first and light-first exposure) for the SJ and TOJ tasks, respectively. Fujisaki et al. used slightly bigger lags (±235 ms sound-first or light-first) and found somewhat bigger shifts in the PSS (59 ms shifts of the PSS in SJ and 51 ms in TOJ), but the data were, in essence, comparable. Many others have reported similar effects (Asakawa et al. 2009; Di Luca et al. 2007; Hanson et al. 2008; Keetels and Vroomen 2007, 2008b; Navarra et al. 2005, 2007, 2009; Stetson et al. 2006; Sugano et al. 2010; Sugita and Suzuki 2003; Takahashi et al. 2008; Tanaka et al. 2009a; Yamamoto et al. 2008). The mechanism underlying temporal recalibration, though, remains elusive at this point. One option is that there is a shift in the criterion for simultaneity in the adapted modalities (Figure 9.2, panel 3a). After exposure to light-first pairings, participants may thus change their criterion for audiovisual simultaneity in such a way that light-first stimuli are taken to be simultaneous.
On this view, other modality-pairings (e.g., vision–touch) would be unaffected and the change in criterion should then not affect unimodal processing of visual and auditory stimuli presented in isolation. Another strong prediction is that stimuli that were once synchronous, before adaptation, can become asynchronous after adaptation. The most dramatic case of this phenomenon can be found in motor–visual adaptation. In a study by Eagleman and Holcombe (2002), participants were asked to repeatedly tap their finger on a key, and after each key tap, a delayed flash was presented. If the visual flash occurred at an unexpectedly short delay after the tap (or synchronous), it was actually perceived as occurring before the tap, an experience that runs against the law of causality. It may also be the case that one modality (vision, audition, or touch) is “shifted” toward the other, possibly because the sensory threshold for stimulus detection in one of the adapted modalities is
FIGURE 9.3 Schematic illustration of exposure conditions typically used in a temporal recalibration paradigm. During exposure, participants are exposed to a train of auditory–visual (AV) or tactile–visual (TV) stimulus pairs (panels a and b, respectively) with a lag of –100, 0, or +100 ms. To explore possible shifts in perceived simultaneity or sensitivity to asynchrony, typically a TOJ or SJ task is performed in a subsequent test phase. (From Fujisaki, W. et al., Nat. Neurosci., 7, 773–8, 2004; Vroomen, J. et al., Cogn. Brain Res., 22, 32–5, 2004; Keetels, M., Vroomen, J., Percept. Psychophys., 70, 765–71, 2008; Keetels, M., Vroomen, J., Neurosci. Lett., 430, 130–4, 2008. With permission.)
changed (see Figure 9.2, panel 3c). For example, in an attempt to perceive simultaneity during light-first exposure, participants might delay processing time in the visual modality by adopting a more stringent criterion for sensory detection of visual stimuli. After exposure to light-first audiovisual pairings, one might then expect slower processing times of visual stimuli in general, and other modality pairings that involve the visual modality, say vision–touch, would then also be affected. Two strategies have been undertaken to explore the mechanism underlying temporal recalibration. The first is to examine whether temporal recalibration generalizes to other stimuli within the adapted modalities; the second is to examine whether temporal recalibration affects modality pairings other than the ones adapted. Fujisaki et al. (2004) have already demonstrated that adaptation to temporal misalignment was effective even when the visual test stimulus was very different from the exposure situation. The authors exposed observers to asynchronous tone–flash stimulus pairs and later tested them on the “stream/bounce” illusion (Sekuler et al. 1997). Fujisaki et al. reported that the optimal delay for obtaining a bounce percept in the stream/bounce illusion was shifted in the same direction as the adapted lag. Furthermore, after exposure to a “wall display,” in which tones were timed with a ball bouncing off the inner walls of a square, similar shifts in the PSS of the bounce percept were found (a ~45 ms difference when comparing the PSS of the –235 ms sound-first exposure with the +235 ms vision-first exposure). Audiovisual temporal recalibration thus generalized well to other visual stimuli. Navarra et al. (2005) and Vatakis et al. (2008b) also tested generalization of audiovisual temporal recalibration using stimuli from different domains (speech/nonspeech). Their observers had to monitor a continuous speech stream for target words that were presented either in synchrony with the video of a speaker, or with the audio stream lagging 300 ms behind. During the monitoring
task, participants performed a TOJ (Navarra et al. 2005; Vatakis et al. 2007) or SJ task (Vatakis et al. 2008b) on simple flashes and white noise bursts that were overlaid on the video. Their results showed a change in sensitivity rather than a shift in the PSS: JNDs became worse if subjects were exposed to desynchronized rather than synchronized audiovisual speech. Similar effects (larger JNDs) were found with music stimuli. This led the authors to conclude that the “window of temporal integration” was widened (see Figure 9.2, panel 3b) because of asynchronous exposure (see also Navarra et al. 2007 for effects on the JND after adaptation to asynchronous audio–tactile stimuli). The authors argued that this effect on the JND may reflect an initial stage of recalibration in which a more lenient criterion is adopted for simultaneity. With prolonged exposure, subjects may then shift the PSS. An alternative explanation—also considered by the authors, but rejected—might be that subjects became confused by the nonmatching exposure stimuli, which as a result may affect the JND rather than the PSS because it adds noise to the distribution. The second way to study the underlying mechanisms of temporal recalibration is to examine whether temporal recalibration generalizes to different modality pairings. Hanson et al. (2008) explored whether a “supramodal” mechanism might be responsible for the recalibration of multisensory timing. They examined whether adaptation to audiovisual, audio–tactile, and tactile–visual asynchronies (10 ms flashes, noise bursts, and taps on the left index finger) generalized across modalities. The data showed that a brief period of repeated exposure to ±90 ms asynchrony in any of these pairings resulted in PSS shifts of about 70 ms on subsequent TOJ tasks, and that the size and nature of the shifts were very similar across all three pairings. This made them conclude that there is a “general mechanism.” Opposite conclusions, though, were reached by Harrar and Harris (2005). They exposed participants for 5 min to audiovisual pairs with a fixed time lag (250 ms light-first), but did not obtain shifts in the PSSs for touch–light pairs. In an extension of this topic (Harrar and Harris 2008), observers were exposed for 5 min to ~100 ms lags of light-first stimuli for the audiovisual case, and touch-first stimuli for the auditory–tactile and visual–tactile cases. Participants were tested on each of these pairs before and after exposure. Shifts of the PSS in the predicted direction were found only for the audiovisual exposure–test stimuli, but not for the other cases. Di Luca et al. (2007) also exposed participants to asynchronous audiovisual pairs (~200 ms lags of sound-first and light-first) and measured the PSS for audiovisual, audio–tactile, and visual–tactile test stimuli. Besides obtaining a shift in the PSS for audiovisual pairs, the effect was found to generalize to audio–tactile, but not to visual–tactile test pairs. This pattern made the authors conclude that adaptation resulted in a phenomenal shift of the auditory event (Di Luca et al. 2007). Navarra et al. (2009) also recently reported that the auditory rather than the visual modality is the more flexible one. Participants were exposed to synchronous or asynchronous audiovisual stimuli (224 ms vision-first, or 84 ms auditory-first, for 5 min of exposure), after which they performed a speeded reaction time task on unimodal visual or auditory stimuli.
In contrast with the idea that visual stimuli get adjusted in time to the relatively more accurate auditory stimuli (Hirsh and Sherrick 1961; Shipley 1964; Welch 1999; Welch and Warren 1980), their results seemed to show the opposite, namely, that auditory rather than visual stimuli were shifted in time. The authors reported that simple reaction times to sounds became approximately 20 ms faster after vision-first exposure and about 20 ms slower after auditory-first exposure, whereas simple reaction times for visual stimuli remained unchanged. They explained this finding by alluding to the idea that visual information can serve as the temporal anchor because it provides a more exact estimate of the time of occurrence of a distal event than auditory information does, as light travel time is negligible at any natural viewing distance. Further research is needed, however, to examine whether a change in simple reaction times truly reflects a change in the perceived timing of an event, as there is quite some evidence showing that the two do not always go hand in hand (e.g., reaction times are more affected by variations in intensity than TOJs are; Jaskowski and Verleger 2000; Neumann and Niepel 2004). To summarize, there is as yet no clear explanation for the mechanism underlying temporal recalibration, as there is some discrepancy in the data regarding generalization across modalities. It seems safe to conclude that the audiovisual exposure–test situation is the most reliable one to obtain
a shift in the PSS. Arguably, audiovisual pairs are more flexible because the brain has to correct for naturally occurring timing differences between auditory and visual stimuli caused by variations in distance. Tactile stimuli might be more rigid in time because visual–tactile and audio–tactile events always occur at the body surface, so less compensation for latency differences might be required here. As already mentioned above, a widening of the JND, rather than a shift in the PSS, has also been observed, and it might possibly reflect an initial stage of recalibration in which a more lenient criterion about simultaneity is adopted. The reliability of each modality on its own is also likely to play a role. Visual stimuli are known to be less reliable in time than auditory or tactile stimuli (Fain 2003), and as a consequence they may be more malleable (Ernst and Banks 2002; Ernst et al. 2000; Ernst and Bulthoff 2004), but there is also evidence that the auditory modality is, in fact, the one that is shifted.
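To make the criterion-shift account (Figure 9.2, panel 3a) concrete, the toy simulation below generates synthetic SJ responses before and after exposure and recovers the PSS by curve fitting; shifting the criterion moves the fitted PSS while leaving the width of the synchrony distribution unchanged. All values (criterion shift, window width, trial counts) are invented for illustration and are not estimates from any of the studies discussed.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
soa = np.arange(-300, 301, 50, dtype=float)  # ms; positive = light first

def p_synchronous(x, pss, sigma):
    """Toy SJ model: probability of a 'synchronous' response peaks at the criterion (the PSS)."""
    return np.exp(-0.5 * ((x - pss) / sigma) ** 2)

def simulate_and_fit(true_pss, sigma=120.0, n_trials=60):
    """Simulate binomial SJ responses at each SOA and recover PSS and width by curve fitting."""
    observed = rng.binomial(n_trials, p_synchronous(soa, true_pss, sigma)) / n_trials
    (pss_hat, sigma_hat), _ = curve_fit(p_synchronous, soa, observed, p0=[0.0, 100.0])
    return pss_hat, sigma_hat

pss_pre, width_pre = simulate_and_fit(true_pss=0.0)     # before exposure
pss_post, width_post = simulate_and_fit(true_pss=20.0)  # criterion shifted toward light-first
print(f"before exposure: PSS = {pss_pre:5.1f} ms, width = {width_pre:5.1f} ms")
print(f"after exposure:  PSS = {pss_post:5.1f} ms, width = {width_post:5.1f} ms")
```

A widening of the window (Figure 9.2, panel 3b) would instead show up as an increase of the fitted width with little change in the PSS, which is the pattern reported by Navarra et al. (2005).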
9.5.4 Temporal Ventriloquism
The fourth possibility of how the brain might deal with lags between the senses, and of how such lags may go unnoticed, is that the perceived timing of a stimulus in one modality is actively shifted toward the other (see Figure 9.2, panel 4). This phenomenon is also known as “temporal ventriloquism,” and it is named in analogy with the spatial ventriloquist effect. For spatial ventriloquism, it has long been known that listeners who hear a sound while seeing a spatially displaced flash have the (false) impression that the sound originated from the flash. This phenomenon was named the “ventriloquist illusion” because it was considered a stripped-down version of what a ventriloquist does when performing on stage. The temporal ventriloquist effect is analogous to the spatial variant, except that here, sound attracts vision in the time dimension rather than vision attracting sound in the spatial dimension. There are, by now, many demonstrations of this phenomenon, and we describe several in subsequent paragraphs. They all show that small lags between sound and vision go unnoticed because the perceived timing of visual events is flexible and is attracted toward events in other modalities. Scheier et al. (1999) were among the first to demonstrate temporal ventriloquism using a visual TOJ task (see Figure 9.4). Observers were presented with two lights at various SOAs, one above and one below a fixation point, and their task was to judge which light came first (the upper or the lower). To induce temporal ventriloquism, Scheier et al. added two sounds that were presented either before the first and after the second light (condition AVVA) or in between the two lights (condition VAAV). Note that they used a visual TOJ task and that the sounds were task-irrelevant. The results showed that observers were more sensitive (i.e., smaller intervals were still perceived correctly) in the AVVA condition compared to the VAAV condition (visual JNDs were approximately 24 and 39 ms, respectively). Presumably, the two sounds attracted the temporal occurrence of the two lights, and thus effectively pulled the lights farther apart in the AVVA condition and closer together in the VAAV condition. In single-sound conditions, AVV and VVA, sensitivity was not different from a visual-only baseline, indicating that the effects were not due to the initial sound acting as a warning signal, or to some cognitive factor related to the observer’s awareness of the sounds. Morein-Zamir et al. (2003) replicated these effects and further explored the sound–light intervals at which the effect occurred. Sound–light intervals of ~100 to ~600 ms were tested, and it was shown that the second sound was mainly responsible for the temporal ventriloquist effect up to a sound–light interval of 200 ms, whereas the interval of the first sound had little effect. The results were also consistent with earlier findings of Fendrich and Corballis (2001), who used a paradigm in which participants judged when a flash occurred by reporting the clock position of a rotating marker. The repeating flash was seen earlier when it was preceded by a click and later when the click lagged the visual stimulus. Another demonstration of temporal ventriloquism using a different paradigm came from a study by Vroomen and de Gelder (2004b). Here, temporal ventriloquism was demonstrated using the flash-lag effect (FLE). In the typical FLE (Mackay 1958;
FIGURE 9.4 A schematic illustration of conditions typically used to demonstrate auditory–visual temporal ventriloquism (panel a) and tactile–visual temporal ventriloquism (panel b). The first capturing stimulus (i.e., either a sound or a vibro-tactile stimulus) precedes the first light by 100 ms, whereas the second capturing stimulus trails the second light by 100 ms. The baseline condition consists of the presentation of two capturing stimuli simultaneous with the light onsets. Temporal ventriloquism is typically shown by improved visual TOJ sensitivity when the capturing stimuli are presented with a 100-ms interval. (From Scheier, C.R. et al., Invest. Ophthalmol. Vis. Sci., 40, 4169, 1999; Morein-Zamir, S. et al., Cogn. Brain Res., 17, 154–63, 2003; Vroomen, J., Keetels, M., J. Exp. Psychol. Hum. Percept. Perform., 32, 1063–71, 2006; Keetels, M. et al., Exp. Brain Res., 180, 449–56, 2007; Keetels, M., Vroomen, J., Percept. Psychophys., 70, 765–71, 2008; Keetels, M., Vroomen, J., Neurosci. Lett., 430, 130–4, 2008. With permission.)
Nijhawan 1994, 1997, 2002), a flash appears to lag behind a moving visual stimulus even though the stimuli are presented at the same physical location. To induce temporal ventriloquism, Vroomen and de Gelder added a single click presented slightly before, at, or after the flash (intervals of 0, 33, 66, and 100 ms). The results showed that the sound attracted the temporal onset of the flash and shifted it in the order of ~5%. A sound ~100 ms before the flash thus made the flash appear ~5 ms earlier, and a sound 100 ms after the flash made the flash appear ~5 ms later. A sound, including the synchronous one, also improved sensitivity on the visual task because JNDs on the visual task were better if a sound was present rather than absent. Yet another recent manifestation of temporal ventriloquism used an apparent visual motion paradigm. Visual apparent motion occurs when a stimulus is flashed in one location and is followed by another identical stimulus flashed in another location (Korte 1915). Typically, an illusory movement is observed that starts at the lead stimulus and is directed toward the second lagging stimulus (the strength of the illusion depends on the exposure time of the stimuli, and the temporal and spatial separation between them). Getzmann (2007) explored the effects of irrelevant sounds on this motion illusion. In their study, two temporally separated visual stimuli (SOAs ranged from 0 to 350 ms) were presented and participants classified their impression of motion using a categorization system. The results demonstrated that sounds intervening between the visual stimuli facilitated the impression of apparent motion relative to no sounds, whereas sounds presented before the first and after the second visual stimulus reduced motion perception (see Bruns and Getzmann 2008 for similar results). The idea was that because exposure time and spatial separation were both held constant in this study, the impression of apparent motion was systematically affected by the perceived length of the interstimulus interval. The effect was explained in terms of temporal ventriloquism, as sounds attracted the illusory onset of visual stimuli. Freeman and Driver (2008) investigated whether the timing of a static sound could influence spatiotemporal processing of visual apparent motion. Apparent motion was induced by visual stimuli
alternating between opposite hemifields. The perceived direction typically depends on the relative timing interval between the left–right and right–left flashes (e.g., rightward motion dominating when left–right interflash intervals are shortest; von Grunau 1986). In their study, the interflash intervals were always 500 ms (ambiguous motion), but sounds could slightly lead the left flash and lag the right flash by 83 ms, or vice versa. Because of temporal ventriloquism, this variation made visual apparent motion depend on the timing of the sound stimuli (e.g., more rightward responses if a sound preceded the left flash and lagged the right flash, and more leftward responses if a sound preceded the right flash and lagged the left flash). The temporal ventriloquist effect has also been used as a diagnostic tool to examine whether commonality in space is a constraint on intersensory pairing. Vroomen and Keetels (2006) adopted the visual TOJ task of Scheier et al. (1999) and replicated the finding that sounds improved sensitivity in the AVVA version of the visual TOJ task. Importantly, the temporal ventriloquist effect was unaffected by whether sounds and lights were colocated or not. For example, the authors varied whether the sounds came from a central location or a lateral one, whether the sounds were static or moving, and whether the sounds and lights came from the same or different sides of fixation at either small or large spatial disparities. All these variations had no effect on the temporal ventriloquist effect, despite the fact that discordant sounds were shown to attract reflexive spatial attention and to interfere with speeded visual discrimination. These results made the authors conclude that intersensory interactions in general do not require spatial correspondence between the components of the cross-modal stimuli (see also Keetels et al. 2007). In another study (Keetels and Vroomen 2008a), it was explored whether touch affects vision on the time dimension as audition does (visual–tactile ventriloquism), and whether spatial disparity between the vibrator and the lights modifies this effect. Given that tactile stimuli are spatially better defined than tones because of their somatotopic rather than tonotopic initial coding, this study provided a strong test case for the notion that spatial co-occurrence between the senses is required for intersensory temporal integration. The results demonstrated that tactile–visual stimuli behaved like audiovisual stimuli, in that temporally misaligned tactile stimuli captured the onsets of the lights, and spatial discordance between the stimuli did not harm this phenomenon. Besides exploring whether spatial disparity affects temporal ventriloquism, the effect of synesthetic congruency between modalities has also recently been explored (Keetels and Vroomen 2010; Parise and Spence 2008). Parise and Spence (2008) suggested that pitch–size synesthetic congruency (i.e., a natural association between the relative pitch of a sound and the relative size of a visual stimulus) might affect temporal ventriloquism. In their study, participants made visual TOJs about small-sized and large-sized visual stimuli while high-pitched or low-pitched tones were presented before the first and after the second light. The results showed that, at large sound–light intervals, sensitivity for visual temporal order was better for synesthetically congruent than incongruent pairs.
In a more recent study, Keetels and Vroomen (2010) reexamined this effect and showed that this congruency effect could not be attributed to temporal ventriloquism, as it disappeared at short sound–light intervals when compared to a synchronous AV baseline condition that excludes response biases. In addition, synesthetic congruency did not affect temporal ventriloquism even if participants were made explicitly aware of the congruency before testing, which further challenges the view that synesthetic congruency affects temporal ventriloquism. Stekelenburg and Vroomen (2005) also investigated the time course and the electrophysiological correlates of the audiovisual temporal ventriloquist effect using ERPs in the FLE paradigm (Mackay 1958; Nijhawan 1994, 1997, 2002). Their results demonstrated that the amplitude of the visual N1 was systematically affected by the temporal interval between the visual target flash and the task-irrelevant sound. If a sound was presented in synchrony with the flash, the N1 amplitude was larger than when the sound lagged the visual stimulus, and it was smaller when the sound led the flash. No latency shifts, however, were found. Yet, based on the latency of the crossmodal effect (N1 at 190 ms) and its localization in the occipitoparietal cortex, this study confirmed the sensory nature of temporal ventriloquism. An explanation for the absence of a temporal shift of
the ERP components may lie in the small size of the temporal ventriloquist effect that was found (3 ms). Such a small temporal difference may not be reliably reflected in the ERPs because it reaches the lower limit of the temporal resolution of the sampled EEG. In most of the studies examining temporal ventriloquism (visual TOJ, FLE, reporting clock position or motion direction), the timing of the visual stimulus is the task-relevant dimension. Recently, however, Vroomen and Keetels (2009) explored whether a temporally offset sound could improve the identification of a visual stimulus in a task in which temporal order is not involved. In this study, it was examined whether four-dot masking was affected by temporal ventriloquism. In the four-dot masking paradigm, visual target identification is impaired when a briefly presented target is followed by a mask that consists of four dots that surround but do not touch the visual target (Enns 2004; Enns and DiLollo 1997, 2000). The idea tested was that sounds presented slightly before the target and slightly after the mask might lengthen the perceived interval between target and mask. By lengthening the perceived target–mask interval, there is more time for the target to consolidate, and in turn target identification should be easier. Results were in line with this hypothesis, as a small release from four-dot masking was reported (a 1% improvement, which corresponds to an increase of the target–mask ISI of 4.4 ms) if two sounds were presented at approximately 100-ms intervals before the target and after the mask, rather than if only a single sound was presented before the target or no sound at all. To summarize, there are by now many demonstrations that vision is flexible on the time dimension. In general, the perceived timing of a visual event is attracted toward other events in audition and touch, provided that the lag between them is less than ~200 ms. The deeper reason why there is this mutual attraction is still untested, although in our view it serves to reduce natural lags between the senses so that they go unnoticed, thus maintaining coherence between the senses. If so, one can ask what the relationship is between temporal ventriloquism and temporal recalibration. Despite the fact that temporal ventriloquism occurs immediately when a temporal asynchrony is presented, whereas temporal recalibration manifests itself as an aftereffect, both effects can be explained as perceptual solutions to maintain intersensory synchrony. The question can then be asked whether the same mechanism underlies the two phenomena. At first sight, one might argue that the magnitude of the temporal ventriloquist effect seems smaller than that of temporal recalibration effects (temporal ventriloquism: Morein-Zamir et al. 2003, ~15 ms JND improvement; Scheier et al. 1999, 15 ms JND improvement; Vroomen and Keetels 2006, ~6 ms JND improvement; temporal recalibration: Fujisaki et al. 2004, ~30 ms PSS shifts for 235 ms adaptation lags; Hanson et al. 2008, ~35 ms PSS shifts for 90 ms adaptation lags; Navarra et al. 2009, ~20 ms shifts in reaction times; although relatively small effects were found by Vroomen et al. 2004, ~8 ms PSS shifts for 100 ms adaptation lags). However, these magnitudes cannot be compared directly because the temporal ventriloquist effect refers to an improvement in JNDs, whereas the temporal recalibration effect is typically a shift of the PSS.
Moreover, in studies measuring temporal recalibration, there is usually much more exposure to temporal asynchronies than in studies measuring temporal ventriloquism. Therefore, it remains up to future studies to examine whether the mechanisms that are involved in temporal ventriloquism and temporal recalibration are the same.
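The attraction logic behind the AVVA condition of Scheier et al. (1999), described at the beginning of this section, can be made concrete with a minimal model in which the perceived onset of each light is pulled a fixed fraction of the way toward its accompanying sound. The pull fraction and the intervals used below are arbitrary choices for illustration, not parameters estimated from any of the cited data.

```python
def perceived_visual_onset(light_t, sound_t, pull=0.2):
    """Perceived onset of a light pulled a fixed fraction toward an accompanying sound (toy model)."""
    return light_t + pull * (sound_t - light_t)

visual_soa = 30.0  # ms between the two lights (light 1 at 0 ms, light 2 at 30 ms)
gap = 100.0        # ms between each capturing sound and its light in the AVVA condition

# Baseline: sounds simultaneous with the lights -> no shift of the perceived visual onsets.
base_soa = perceived_visual_onset(visual_soa, visual_soa) - perceived_visual_onset(0.0, 0.0)

# AVVA: first sound 100 ms before light 1, second sound 100 ms after light 2
# -> the perceived onsets of the lights are pulled apart, enlarging the effective SOA.
avva_soa = (perceived_visual_onset(visual_soa, visual_soa + gap)
            - perceived_visual_onset(0.0, -gap))

print(f"Baseline effective visual SOA: {base_soa:.0f} ms")
print(f"AVVA effective visual SOA:     {avva_soa:.0f} ms (larger, hence better TOJ sensitivity)")
```

In this toy model the sounds do nothing in the baseline condition but stretch the effective visual interval in the AVVA condition, which is the qualitative pattern of JND improvement described above; it makes no commitment to how, or whether, such a shift is implemented neurally.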
9.6 TEMPORAL SYNCHRONY: AUTOMATIC OR NOT?
An important question about the perception of intersensory synchrony is whether it occurs in an automatic fashion or not. As is often the case, there are two opposing views on this issue. Some have reported that the detection of temporal alignment is a slow, serial, and attention-demanding process, whereas others have argued that it is fast and requires only the minimal amount of attention needed to perceive the visual stimulus; once this criterion is met, audiovisual or visual–tactile integration comes for free. An important signature of automatic processing is that the stimulus in question is salient and “pops out.” If so, the stimulus is easy to find among distracters. What about intersensory synchrony:
does it “pop out”? In a study by van de Par and Kohlrausch (2004), this question was addressed by presenting observers with a visual display of a number of circles that moved independently up and down along a Gaussian profile. Along with the motion display, a concurrent sound was presented whose amplitude was modulated coherently with the motion of one of the circles. The participant’s task was to identify the coherently moving visual circle as quickly as possible. The authors found that response times increased approximately linearly with the number of distracters (~500 ms/distracter), indicating a slow serial search process rather than pop-out. Fujisaki et al. (2006) came to similar conclusions. They examined search functions for a visual target that changed in synchrony with an auditory stimulus. The visual display consisted of two, four, or eight luminance-modulated Gaussian blobs presented at 5, 10, 20, and 40 Hz that were accompanied by a white noise sound whose amplitude was modulated in synch with one of the visual stimuli. Other displays contained clockwise/counterclockwise rotations of windmills synchronized with a sound whose frequency was modulated up or down at a rate of 10 Hz. The observers’ task was to indicate which visual stimulus was luminance-modulated in synch with the sound. Search functions for both displays were slow (~1 s/distracter in target-present displays) and increased linearly with the number of visual distracters. In a control experiment, it was also shown that synchrony discrimination was unaffected by the presence of distracters if attention was directed at the visual target. Fujisaki et al. therefore concluded that perception of audiovisual synchrony is a slow and serial process based on a comparison of salient temporal features that need to be individuated from within-modal signal streams.
Others, though, came to quite opposing conclusions and found that intersensory synchrony can be detected in an automatic fashion. Most notably, van der Burg et al. (2008b) reported an interesting study in which they showed that a simple auditory pip can drastically reduce search times for a color-changing object that is synchronized with the pip. The authors presented a horizontal or vertical target line among a large array of oblique lines. Each of the lines (target and distracters) changed color from green-to-red or red-to-green in a random fashion. If a pip sound was synchronized with a color change, visual attention was automatically drawn to the location of the line that changed color. When the sound was synchronized with the color change of the target, search times improved drastically and the number of irrelevant distracters had virtually no effect on search times (a nearly flat slope indicating pop-out). The authors concluded that the temporal information of the auditory signal was integrated with the visual signal, generating a relatively salient emergent feature that automatically draws spatial attention (see also van der Burg et al. 2008a). Similar effects were also demonstrated for tactile stimuli instead of auditory pips (Olivers and van der Burg 2008; van der Burg et al. 2009).
Kanai et al. (2007) also explored temporal correspondences in visually ambiguous displays. They presented multiple disks flashing sequentially at one of eight locations in a circle, thus inducing the percept of a disk revolving around fixation. A sound was presented at one particular position in every cycle, and participants had to indicate the disk that was temporally aligned with the sound.
The disk seen as being synchronized with the sound was perceived as brighter, with a sharper onset and offset (Vroomen and de Gelder 2000). Moreover, this percept fluctuated over time: the position of the disk that appeared synchronized with the sound changed every 5 to 10 s. Kanai et al. explored whether this flexibility was dependent on attention by having observers perform a concurrent task in which they had to count the number of X’s in a letter stream. The results demonstrated that the transitions disappeared whenever attention was distracted from the stimulus. On the other hand, if attention was directed to one particular visual event—either by making it “pop out” by using a different color, by presenting a cue next to the target dot, or by overtly cueing it—the perceived timing of the sound was attracted toward that event. These results thus suggest that perception of intersensory synchrony is flexible, and is not completely immune to attention.
These opposing views on the role of attention can be reconciled on the assumption that perception of synchrony depends on a matching process of salient temporal features (Fujisaki et al. 2006; Fujisaki and Nishida 2007). Saliency may be lost when stimuli are presented at fast rates (typically above 4 Hz), when they are perceptually grouped into other streams, or when they lack a sharp transition
(Keetels et al. 2007; Sanabria et al. 2004; Vroomen and de Gelder 2004a; Watanabe and Shimojo 2001). In line with this notion, studies reporting that audiovisual synchrony detection is slow either presented stimuli at fast rates (>4 Hz, up to 80 Hz) or used stimuli that lacked a sharp onset/offset (e.g., van de Par and Kohlrausch 2004, using a Gaussian amplitude modulation). Others, reporting automatic detection of auditory–visual synchrony, used much slower rates (1.11 Hz; van der Burg et al. 2008b) and sharp transitions (a pip).
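The serial-versus-pop-out contrast discussed in this section is usually quantified by the slope of the function relating search time to the number of items in the display. The sketch below is a minimal illustration with fabricated response times, not data from the cited studies; a slope of several hundred milliseconds per item indicates serial search, whereas a near-flat slope indicates pop-out.

import numpy as np

set_sizes = np.array([2.0, 4.0, 8.0])

# Fabricated mean response times (ms): a "serial" pattern with a steep slope
# (cf. the ~500 ms/item of van de Par and Kohlrausch 2004) and a "pop-out"
# pattern with a nearly flat slope (cf. van der Burg et al. 2008b).
rt_serial = np.array([1800.0, 2800.0, 4800.0])
rt_popout = np.array([900.0, 920.0, 910.0])

for label, rts in [("serial", rt_serial), ("pop-out", rt_popout)]:
    slope, intercept = np.polyfit(set_sizes, rts, deg=1)  # least-squares line
    print(f"{label}: slope = {slope:.0f} ms/item, intercept = {intercept:.0f} ms")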
9.7 NEURAL SUBSTRATES OF TEMPORAL SYNCHRONY
Although temporal correspondence is frequently considered one of the most important constraints on cross-modal integration (e.g., Bedford 1989; Bertelson 1999; Radeau 1994; Stein and Meredith 1993; Welch 1999; Welch and Warren 1980), the neural correlates of the ability to detect and use temporal synchrony remain largely unknown. Most likely, however, a whole network is involved. Seminal studies examining the neural substrates of intersensory temporal correspondence were done in animals. It is well known that the firing rate of a subsample of cells in the superior colliculus (SC) increases dramatically, beyond what would be expected from summing the unimodal responses, when auditory stimuli (tones) and visual stimuli (flashes) occur in close temporal and spatial proximity (Meredith et al. 1987; Stein et al. 1993). More recently, Calvert et al. (2001) used functional magnetic resonance imaging (fMRI) in human subjects to study brain areas that demonstrate facilitation and suppression effects in the blood oxygenation level–dependent (BOLD) signal for temporally aligned and temporally misaligned audiovisual stimuli. Their stimulus consisted of a reversing checkerboard pattern of alternating black and white squares, with sounds presented either simultaneously with the onset of a reversal (synchronous condition) or randomly phase-shifted relative to it (asynchronous condition). The results showed an involvement of the SC, as its response was superadditive for temporally matched stimuli and depressed for temporally mismatched ones. Other cross-modal interactions were also identified in a network of cortical brain areas that included several frontal sites: the right inferior frontal gyrus, multiple sites within the right lateral sulcus, and the ventromedial frontal gyrus. Furthermore, response enhancement and depression were observed in the insula bilaterally, the right superior parietal lobule, the right inferior parietal sulcus, the left superior occipital gyrus, and the left superior temporal sulcus (STS).
Bushara et al. (2001) examined the effect of temporal asynchrony in a positron emission tomography study. Here, observers had to decide whether a colored circle was presented simultaneously with a tone or not. The stimulus pairs could either be auditory-first (AV) or vision-first (VA), at three levels of SOA that varied in difficulty. A control condition (C) was included in which the auditory and visual stimuli were presented simultaneously, and in which participants performed a visual color discrimination task whenever a sound was present. The brain areas involved in auditory–visual synchrony detection were identified by subtracting the activity in the control condition from that in the asynchronous conditions (AV-C and VA-C). Results revealed a network of heteromodal brain areas that included the right anterior insula, the right ventrolateral prefrontal cortex, the right inferior parietal lobe, and the left cerebellar hemisphere. Activity that correlated positively with decreasing asynchrony revealed a cluster within the right insula, suggesting that this region is most important for the detection of auditory–visual synchrony. Given that interactions were also found between the insula, the posterior thalamus, and the SC, it was suggested that intersensory temporal processing is mediated via subcortical tecto-thalamo-insular pathways. In a positron emission tomography study by Macaluso et al.
(2004), subjects viewed a video monitor that showed a face mouthing words. In different blocks of trials, the audiovisual signals were presented either synchronously or asynchronously (the auditory stimulus leading by a clearly noticeable 240 ms). In addition, the visual and auditory sources were presented either at the same location or in opposite hemifields. Results showed that activity in ventral occipital areas and the left STS increased during synchronous audiovisual speech, regardless of the relative location of the auditory and visual input.
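Several of the studies in this section identify candidate integration sites by asking whether the response to a temporally matched audiovisual stimulus exceeds the sum of the unisensory responses (superadditivity) and whether a mismatched stimulus depresses the response below that sum. The sketch below makes the criterion explicit; the BOLD values are fabricated for illustration and do not reproduce the analysis pipeline of any cited study.

# Fabricated percent-signal-change values for one region of interest.
response_auditory = 0.30   # auditory-only condition
response_visual = 0.25     # visual-only condition
response_av_sync = 0.80    # temporally matched audiovisual condition
response_av_async = 0.40   # temporally mismatched audiovisual condition

unisensory_sum = response_auditory + response_visual

# Superadditivity: synchronous AV response exceeds the unisensory sum;
# response depression: asynchronous AV response falls below that sum.
print("superadditive for synchrony:", response_av_sync > unisensory_sum)
print("depressed for asynchrony:", response_av_async < unisensory_sum)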
More recently, in an fMRI study, Dhamala et al. (2007) examined the networks involved in the perception of physically synchronous versus asynchronous audiovisual events. Two timing parameters were varied: the SOA between sound and light (–200 to +200 ms) and the stimulation rate (0.5–3.5 Hz). In the behavioral task, observers had to report whether stimuli were perceived as simultaneous, sound-first, light-first, or “can’t tell,” resulting in the classification of three distinct perceptual states: the perception of synchrony, the perception of asynchrony, and no clear perception. The fMRI data showed that each of these states involved activation in a different brain network. Perception of asynchrony activated the primary sensory, prefrontal, and inferior parietal cortices, whereas perception of synchrony disengaged the inferior parietal cortex and further recruited the SC.
An fMRI study by Noesselt et al. (2007) also explored the effect of temporal correspondence between auditory and visual streams. The stimuli were arranged such that the auditory and visual streams were temporally corresponding or not, using irregular and arrhythmic temporal patterns that either matched between audition and vision or mismatched substantially while maintaining the same overall temporal statistics. For the coincident audiovisual streams, there was an increase in the BOLD response in the multisensory STS (mSTS) contralateral to the visual stream. The contralateral primary visual and auditory cortices were also affected by the synchrony–asynchrony manipulation, and a connectivity analysis indicated an enhanced influence of mSTS on primary sensory areas during temporal correspondence.
In an EEG paradigm, Senkowski et al. (2007) examined the neural mechanisms underlying intersensory synchrony by measuring oscillatory gamma-band responses (GBRs; 30–80 Hz). Oscillatory GBRs have been linked to feature integration mechanisms and to multisensory processing. The authors reasoned that GBRs might therefore also be sensitive to the temporal alignment of intersensory stimulus components. The temporal synchrony of the auditory and visual components of a multisensory signal was varied (tones and horizontal gratings, with SOAs ranging from –125 to +125 ms). The GBRs to the auditory and visual components of multisensory stimuli were extracted for five subranges of asynchrony and compared with GBRs to unisensory control stimuli. The results revealed that multisensory interactions were strongest in the early GBRs when the sound and light stimuli were presented with the closest synchrony. These effects were most evident over medial–frontal brain areas after 30 to 80 ms and over occipital areas after 60 to 120 ms, indicating that temporal synchrony may affect early intersensory interactions in the human cortex.
Overall, it should be noted that there is considerable variation in the outcomes of studies that have examined the neural basis of intersensory temporal synchrony. At present, the issue is far from resolved and more research is needed to unravel the exact neural substrates involved. The most consistent finding is that the SC and mSTS are repeatedly reported in intersensory synchrony detection studies, which at least suggests a prominent role for these structures in the processing of intersensory stimuli based on their temporal correspondence.
For the time being, however, it is unknown how these areas would affect the perception of intersensory synchrony if they were damaged or temporarily blocked by, for example, transcranial magnetic stimulation.
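As an illustration of the kind of signal on which analyses like that of Senkowski et al. (2007) are built, the sketch below band-pass filters a single synthetic EEG epoch to the gamma range (30–80 Hz) and computes its power in an early time window. The sampling rate, filter settings, and synthetic signal are assumptions made for illustration only; they are not the parameters of the original study.

import numpy as np
from scipy.signal import butter, filtfilt

fs = 500.0                      # assumed sampling rate (Hz)
t = np.arange(0, 0.5, 1 / fs)   # one 500-ms epoch

# Synthetic epoch: a brief 40-Hz burst embedded in slow activity and noise.
rng = np.random.default_rng(0)
epoch = (np.sin(2 * np.pi * 8 * t)
         + 0.5 * np.sin(2 * np.pi * 40 * t) * ((t > 0.05) & (t < 0.15))
         + 0.2 * rng.standard_normal(t.size))

# Zero-phase band-pass filter, 30-80 Hz (4th-order Butterworth).
b, a = butter(4, [30 / (fs / 2), 80 / (fs / 2)], btype="bandpass")
gamma = filtfilt(b, a, epoch)

# Gamma-band power in an early post-stimulus window (here 30-130 ms).
window = (t >= 0.03) & (t <= 0.13)
print("early gamma-band power:", np.mean(gamma[window] ** 2))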
9.8 CONCLUSIONS
In recent years, a substantial amount of research has been devoted to understanding how the brain handles lags between the senses. The most important conclusion we draw is that intersensory timing is flexible and adaptive. The flexibility is clearly demonstrated by studies showing one or another variant of temporal ventriloquism. In that case, small lags go unnoticed because the brain actively shifts one information stream (usually vision) toward the other, possibly to maintain temporal coherence. The adaptive part rests on studies of temporal recalibration, which demonstrate that observers are flexible in what they come to treat as synchronous. The extent to which temporal recalibration generalizes to other stimuli and domains, however, remains to be further explored. The idea that the brain compensates for predictable variability between the senses—most notably distance—is, in
our view, not well-founded. We are more enthusiastic about the notion that intersensory synchrony is perceived mostly in an automatic fashion, provided that the individual components of the stimuli are sufficiently salient. The neural mechanisms that underlie this ability are of clear importance for future research.
REFERENCES Alais, D., and S. Carlile. 2005. Synchronizing to real events: Subjective audiovisual alignment scales with perceived auditory depth and speed of sound. Proceedings of the National Academy of Sciences of the United States of America 102(6);2244–7. Arnold, D.H., A. Johnston, and S. Nishida. 2005. Timing sight and sound. Vision Research 45(10);1275–84. Arrighi, R., D. Alais, and D. Burr. 2006. Perceptual synchrony of audiovisual streams for natural and artificial motion sequences. Journal of Vision 6(3);260–8. Asakawa, K., A. Tanaka, and H. Imai. 2009. Temporal Recalibration in Audio-Visual Speech Integration Using a Simultaneity Judgment Task and the McGurk Identification Task. Paper presented at the 31st Annual Meeting of the Cognitive Science Society (July 29–August 1, 2009). Amsterdam, The Netherlands. Bald, L., F.K. Berrien, J.B. Price, and R.O. Sprague. 1942. Errors in perceiving the temporal order of auditory and visual stimuli. Journal of Applied Psychology 26;283–388. Bedford, F.L. 1989. Constraints on learning new mappings between perceptual dimensions. Journal of Experimental Psychology. Human Perception and Performance 15(2);232–48. Benjamins, J.S., M.J. van der Smagt, and F.A. Verstraten. 2008. Matching auditory and visual signals: Is sensory modality just another feature? Perception 37(6);848–58. Bertelson, P. 1994. The cognitive architecture behind auditory-visual interaction in scene analysis and speech identification. Cahiers de Psychologie Cognitive 13(1);69–75. Bertelson, P. 1999. Ventriloquism: A case of crossmodal perceptual grouping. In G. Aschersleben, T. Bachmann, and J. Musseler (eds.), Cognitive Contributions to the Perception of Spatial and Temporal Events, 347–63. North-Holland: Elsevier. Bertelson, P., and G. Aschersleben. 1998. Automatic visual bias of perceived auditory location. Psychonomic Bulletin & Review 5(3);482–89. Bertelson, P., and G. Aschersleben. 2003. Temporal ventriloquism: Crossmodal interaction on the time dimension: 1. Evidence from auditory–visual temporal order judgment. International Journal of Psychophysiology 50(1–2);147–55. Boenke, L.T., M. Deliano, and F.W. Ohl. 2009. Stimulus duration influences perceived simultaneity in audiovisual temporal-order judgment. Experimental Brain Research 198(2–3);233–44. Bronkhorst, A.W. 1995. Localization of real and virtual sound sources. Journal of the Acoustical Society of America 98(5);2542–53. Bronkhorst, A.W., and T. Houtgast. 1999. Auditory distance perception in rooms. Nature 397;517–20. Bruns, P., and S. Getzmann. 2008. Audiovisual influences on the perception of visual apparent motion: Exploring the effect of a single sound. Acta Psychologica 129(2);273–83. Bushara, K.O., J. Grafman, and M. Hallett. 2001. Neural correlates of auditory-visual stimulus onset asynchrony detection. Journal of Neuroscience 21(1);300–4. Calvert, G., P.C. Hansen, S.D. Iversen, and M.J. Brammer. 2001. Detection of audio-visual integration sites in humans by application of electrophysiological criteria to the BOLD effect. NeuroImage 14(2);427–38. Calvert, G., C. Spence, and B. Stein. 2004. The Handbook of Multisensory Processes. Cambridge, MA: The MIT Press. Colin, C., M. Radeau, P. Deltenre, and J. Morais. 2001. Rules of intersensory integration in spatial scene analysis and speechreading. Psychologica Belgica 41(3);131–44. Conrey, B., and D.B. Pisoni. 2006. Auditory–visual speech perception and synchrony detection for speech and nonspeech signals. Journal of the Acoustical Society of America 119(6);4065–73. 
Dhamala, M., C.G. Assisi, V.K. Jirsa, F.L. Steinberg, and J.A. Kelso. 2007. Multisensory integration for timing engages different brain networks. NeuroImage 34(2);764–73. Di Luca, M., T. Machulla, and M.O. Ernst. 2007. Perceived Timing Across Modalities. Paper presented at the International Intersensory Research Symposium 2007: Perception and Action (July 3, 2007). Sydney, Australia. Dinnerstein, A.J., and P. Zlotogura. 1968. Intermodal perception of temporal order and motor skills: Effects of age. Perceptual and Motor Skills 26(3);987–1000. Dixon, N.F., and L. Spitz. 1980. The detection of auditory visual desynchrony. Perception 9(6);719–21.
Eagleman, D.M., and A.O. Holcombe. 2002. Causality and the perception of time. Trends in Cognitive Sciences 6(8);323–5. Eimer, M., and J. Driver. 2001. Crossmodal links in endogenous and exogenous spatial attention: Evidence from event-related brain potential studies. Neuroscience and Biobehavioral Reviews 25(6);497–511. Eimer, M., and E. Schroger. 1998. ERP effects of intermodal attention and cross-modal links in spatial attention. Psychophysiology 35(3);313–27. Engel, G.R., and W.G. Dougherty. 1971. Visual–auditory distance constancy. Nature 234(5327);308. Enns, J.T. 2004. Object substitution and its relation to other forms of visual masking. Vision Research 44(12);1321–31. Enns, J.T., and V. DiLollo. 1997. Object substitution: A new form of masking in unattended visual locations. Psychological Science 8;135–9. Enns, J.T., and V. DiLollo. 2000. What’s new in visual masking? Trends in Cognitive Sciences 4(9);345–52. Ernst, M.O., and M.S. Banks. 2002. Humans integrate visual and haptic information in a statistically optimal fashion. Nature 415(6870);429–33. Ernst, M.O., and H.H. Bulthoff. 2004. Merging the senses into a robust percept. Trends in Cognitive Sciences 8(4);162–9. Ernst, M.O., M.S. Banks, and H.H. Bulthoff. 2000. Touch can change visual slant perception. Nature Neuro science 3(1);69–73. Fain, G.L. 2003. Sensory Transduction. Sunderland, MA: Sinauer Associates. Fendrich, R., and P.M. Corballis. 2001. The temporal cross-capture of audition and vision. Perception & Psychophysics 63(4);719–25. Finger, R., and A.W. Davis. 2001. Measuring Video Quality in Videoconferencing Systems. Technical Report SN187-D. Los Gatos, CA: Pixel Instrument Corporation. Freeman, E., and J. Driver. 2008. Direction of visual apparent motion driven solely by timing of a static sound. Current Biology 18(16);1262–6. Frey, R.D. 1990. Selective attention, event perception and the criterion of acceptability principle: Evidence supporting and rejecting the doctrine of prior entry. Human Movement Science 9;481–530. Fujisaki, W., and S. Nishida. 2005. Temporal frequency characteristics of synchrony–asynchrony discrimination of audio-visual signals. Experimental Brain Research 166(3–4);455–64. Fujisaki, W., and S. Nishida. 2007. Feature-based processing of audio-visual synchrony perception revealed by random pulse trains. Vision Research 47(8);1075–93. Fujisaki, W., S. Shimojo, M. Kashino, and S. Nishida. 2004. Recalibration of audiovisual simultaneity. Nature Neuroscience 7(7);773–8. Fujisaki, W., A. Koene, D. Arnold, A. Johnston, and S. Nishida. 2006. Visual search for a target changing in synchrony with an auditory signal. Proceedings of Biological Science 273(1588);865–74. Getzmann, S. 2007. The effect of brief auditory stimuli on visual apparent motion. Perception 36(7);1089–103. Grant, K.W., V. van Wassenhove, and D. Poeppel. 2004. Detection of auditory (cross-spectral) and auditory– visual (cross-modal) synchrony. Speech Communication 44;43–53. Hanson, J.V., J. Heron, and D. Whitaker. 2008. Recalibration of perceived time across sensory modalities. Experimental Brain Research 185(2);347–52. Harrar, V., and L.R. Harris. 2005. Simultaneity constancy: Detecting events with touch and vision. Experimental Brain Research 166(3–4);465–73. Harrar, V., and L.R. Harris. 2008. The effect of exposure to asynchronous audio, visual, and tactile stimulus combinations on the perception of simultaneity. Experimental Brain Research 186(4);517–24. Heron, J., D. Whitaker, P.V. McGraw, and K.V. Horoshenkov. 2007. 
Adaptation minimizes distance-related audiovisual delays. Journal of Vision 7(13);51–8. Hillyard, S.A., and T.F. Munte. 1984. Selective attention to color and location: An analysis with event-related brain potentials. Perception & Psychophysics 36(2);185–98. Hirsh, I.J., and P. Fraisse. 1964. Simultaneous character and succession of heterogenous stimuli. L’Année Psychologique 64;1–19. Hirsh, I.J., and C.E. Sherrick. 1961. Perceived order in different sense modalities. Journal of Experimental Psychology 62(5);423–32. Jaskowski, P. 1999. Reaction time and temporal-order judgment as measures of perceptual latency: The problem of dissociations. In G. Aschersleben, T. Bachmann, and J. Müsseler (eds.), Cognitive Contributions to the Perception of Spatial and Temporal Events (pp. 265–82). North-Holland: Elsevier Science B.V. Jaskowski, P., and R. Verleger. 2000. Attentional bias toward low-intensity stimuli: An explanation for the intensity dissociation between reaction time and temporal order judgment? Consciousness and Cognition 9(3);435–56.
Jaskowski, P., F. Jaroszyk, and D. Hojan-Jezierska. 1990. Temporal-order judgments and reaction time for stimuli of different modalities. Psychological Research, 52(1);35–8. Jones, J.A., and M. Jarick. 2006. Multisensory integration of speech signals: The relationship between space and time. Experimental Brain Research 174(3);588–94. Jones, J.A., and K.G. Munhall. 1997. The effects of separating auditory and visual sources on the audiovisual integration of speech. Canadian Acoustics 25(4);13–9. Kanai, R., B.R. Sheth, F.A. Verstraten, and S. Shimojo. 2007. Dynamic perceptual changes in audiovisual simultaneity. PLoS ONE 2(12);e1253. Kayser, C., C.I. Petkov, and N.K. Logothetis. 2008. Visual modulation of neurons in auditory cortex. Cerebral Cortex 18(7);1560–74. Keetels, M., and J. Vroomen. 2005. The role of spatial disparity and hemifields in audio-visual temporal order judgements. Experimental Brain Research 167;635–40. Keetels, M., and J. Vroomen. 2007. No effect of auditory-visual spatial disparity on temporal recalibration. Experimental Brain Research 182(4);559–65. Keetels, M., and J. Vroomen. 2008a. Tactile–visual temporal ventriloquism: No effect of spatial disparity. Perception & Psychophysics 70(5);765–71. Keetels, M., and J. Vroomen. 2008b. Temporal recalibration to tactile–visual asynchronous stimuli. Neuroscience Letters 430(2);130–4. Keetels, M., and J. Vroomen. 2010. No effect of synesthetic congruency on temporal ventriloquism. Attention, Perception, & Psychophysics 72(4);871–4. Keetels, M., J. Stekelenburg, and J. Vroomen. 2007. Auditory grouping occurs prior to intersensory pairing: Evidence from temporal ventriloquism. Experimental Brain Research 180(3);449–56. King, A.J. 2005. Multisensory integration: Strategies for synchronization. Current Biology 15(9);R339–41. King, A.J., and A.R. Palmer. 1985. Integration of visual and auditory information in bimodal neurones in the guinea-pig superior colliculus. Experimental Brain Research 60(3);492–500. Kitagawa, N., M. Zampini, and C. Spence. 2005. Audiotactile interactions in near and far space. Experimental Brain Research 166(3–4);528–37. Kopinska, A., and L.R. Harris. 2004. Simultaneity constancy. Perception 33(9);1049–60. Korte, A. 1915. Kinematoskopische untersuchungen. Zeitschrift für Psychologie mit Zeitschrift für Angewandte Psychologie 72;194–296. Levitin, D., K. MacLean, M. Mathews, and L. Chu. 2000. The perception of cross-modal simultaneity. International Journal of Computing and Anticipatory Systems, 323–9. Lewald, J., and R. Guski. 2003. Cross-modal perceptual integration of spatially and temporally disparate auditory and visual stimuli. Cognitive Brain Research 16(3);468–78. Lewald, J., and R. Guski. 2004. Auditory–visual temporal integration as a function of distance: No compensation for sound-transmission time in human perception. Neuroscience Letters 357(2);119–22. Lewkowicz, D.J. 1996. Perception of auditory-visual temporal synchrony in human infants. Journal of Experimental Psychology. Human Perception and Performance 22(5);1094–106. Macaluso, E., N. George, R. Dolan, C. Spence, and J. Driver. 2004. Spatial and temporal factors during processing of audiovisual speech: A PET study. NeuroImage 21(2);725–32. Macefield, G., S.C. Gandevia, and D. Burke. 1989. Conduction velocities of muscle and cutaneous afferents in the upper and lower limbs of human subjects. Brain 112(6);1519–32. Mackay, D.M. 1958. Perceptual stability of a stroboscopically lit visual field containing self-luminous objects. 
Nature 181(4607);507–8. Massaro, D.W., M.M. Cohen, and P.M. Smeele. 1996. Perception of asynchronous and conflicting visual and auditory speech. Journal of the Acoustical Society of America 100(3);1777–86. Mattes, S., and R. Ulrich. 1998. Directed attention prolongs the perceived duration of a brief stimulus. Perception & Psychophysics 60(8);1305–17. McGrath, M., and Q. Summerfield. 1985. Intermodal timing relations and audio-visual speech recognition by normal-hearing adults. Journal of the Acoustical Society of America 77(2);678–85. McGurk, H., and J. MacDonald. 1976. Hearing lips and seeing voices. Nature 264(5588);746–8. Meredith, M.A., J.W. Nemitz, and B.E. Stein. 1987. Determinants of multisensory integration in superior colliculus neurons. I. Temporal factors. Journal of Neuroscience 7(10);3215–29. Mitrani, L., S. Shekerdjiiski, and N. Yakimoff. 1986. Mechanisms and asymmetries in visual perception of simultaneity and temporal order. Biological Cybernetics 54(3);159–65. Mollon, J.D., and A.J. Perkins. 1996. Errors of judgement at Greenwich in 1796. Nature 380(6570);101–2. Morein-Zamir, S., S. Soto-Faraco, and A. Kingstone. 2003. Auditory capture of vision: Examining temporal ventriloquism. Cognitive Brain Research 17(1);154–63.
Mortlock, A.N., D. Machin, S. McConnell, and P. Sheppard. 1997. Virtual conferencing. BT Technology Journal 15;120–9. Munhall, K.G., P. Gribble, L. Sacco, and M. Ward. 1996. Temporal constraints on the McGurk effect. Perception & Psychophysics 58(3);351–62. Navarra, J., A. Vatakis, M. Zampini et al. 2005. Exposure to asynchronous audiovisual speech extends the temporal window for audiovisual integration. Cognitive Brain Research 25(2);499–507. Navarra, J., S. Soto-Faraco, and C. Spence. 2007. Adaptation to audiotactile asynchrony. Neuroscience Letters 413(1);72–6. Navarra, J., J. Hartcher-O’Brien, E. Piazza, and C. Spence. 2009. Adaptation to audiovisual asynchrony modulates the speeded detection of sound. Proceedings of the National Academy of Sciences of the United States of America 106(23);9169–73. Neumann, O., and M. Niepel. 2004. Timing of “perception” and perception of “time.” In C. Kaernbach, E. Schröger, and H. Müller (eds.), Psychophysics Beyond Sensation: Laws and Invariants of Human Cognition (pp. 245–70): Lawrence Erlbaum Associates, Inc. Nijhawan, R. 1994. Motion extrapolation in catching. Nature 370(6487);256–7. Nijhawan, R. 1997. Visual decomposition of colour through motion extrapolation. Nature 386(6620);66–9. Nijhawan, R. 2002. Neural delays, visual motion and the flash-lag effect. Trends in Cognitive Sciences 6(9);387. Noesselt, T., J.W. Rieger, M.A. Schoenfeld et al. 2007. Audiovisual temporal correspondence modulates human multisensory superior temporal sulcus plus primary sensory cortices. Journal of Neuroscience 27(42);11431–41. Occelli, V., C. Spence, and M. Zampini. 2008. Audiotactile temporal order judgments in sighted and blind individuals. Neuropsychologia 46(11);2845–50. Olivers, C.N., and E. van der Burg. 2008. Bleeping you out of the blink: Sound saves vision from oblivion. Brain Research 1242;191–9. Pandey, P.C., H. Kunov, and S.M. Abel. 1986. Disruptive effects of auditory signal delay on speech perception with lipreading. Journal of Auditory Research 26(1);27–41. Parise, C., and C. Spence. 2008. Synesthetic congruency modulates the temporal ventriloquism effect. Neuroscience Letters 442(3);257–61. Pöppel, E. 1985. Grenzen des Bewusstseins. Stuttgart: Deutsche Verlags-Anstalt, translated as Mindworks: Time and Conscious Experience. New York: Harcourt Brace Jovanovich. 1988. Poppel, E., K. Schill, and N. von Steinbuchel. 1990. Sensory integration within temporally neutral systems states: A hypothesis. Naturwissenschaften 77(2);89–91. Radeau, M. 1994. Auditory-visual spatial interaction and modularity. Cahiers de Psychologie Cognitive 13(1);3–51. Rihs, S. 1995. The Influence of Audio on Perceived Picture Quality and Subjective Audio-Visual Delay Tolerance. Paper presented at the MOSAIC Workshop: Advanced methods for the evaluation of television picture quality, Eindhoven, 18–19 September. Roefs, J.A.J. 1963. Perception lag as a function of stimulus luminance. Vision Research 3;81–91. Rutschmann, J., and R. Link. 1964. Perception of temporal order of stimuli differing in sense mode and simple reaction time. Perceptual and Motor Skills 18;345–52. Sanabria, D., S. Soto-Faraco, and C. Spence. 2004. Exploring the role of visual perceptual grouping on the audiovisual integration of motion. Neuroreport 15(18);2745–9. Sanford, A.J. 1971. Effects of changes in the intensity of white noise on simultaneity judgements and simple reaction time. Quarterly Journal of Experimental Psychology 23;296–303. Scheier, C.R., R. Nijhawan, and S. Shimojo. 1999.
Sound alters visual temporal resolution. Investigative Ophthalmology & Visual Science 40;4169. Schneider, K.A., and D. Bavelier. 2003. Components of visual prior entry. Cognitive Psychology 47(4); 333–66. Sekuler, R., A.B. Sekuler, and R. Lau. 1997. Sound alters visual motion perception. Nature 385;308–08. Senkowski, D., D. Talsma, M. Grigutsch, C.S. Herrmann, and M.G. Woldorff. 2007. Good times for multisensory integration: Effects of the precision of temporal synchrony as revealed by gamma-band oscillations. Neuropsychologia 45(3);561–71. Shams, L., Y. Kamitani, and S. Shimojo. 2002. Visual illusion induced by sound. Cognitive Brain Research 14(1);147–52. Shimojo, S., C. Scheier, R. Nijhawan et al. 2001. Beyond perceptual modality: Auditory effects on visual perception. Acoustical Science & Technology 22(2);61–67. Shipley, T. 1964. Auditory flutter-driving of visual flicker. Science 145;1328–30.
Shore, D.I., C. Spence, and R.M. Klein. 2001. Visual prior entry. Psychological Science 12(3);205–12. Shore, D.I., C. Spence, and R.M. Klein. 2005. Prior entry. In L. Itti, G. Rees, and J. Tsotsos (eds.), Neurobiology of Attention (pp. 89–95). North Holland: Elsevier. Slutsky, D.A., and G.H. Recanzone. 2001. Temporal and spatial dependency of the ventriloquism effect. Neuroreport 12(1);7–10. Smith, W.F. 1933. The relative quickness of visual and auditory perception. Journal of Experimental Psychology 16;239–257. Soto-Faraco, S., and A. Alsius. 2007. Conscious access to the unisensory components of a cross-modal illusion. Neuroreport 18(4);347–50. Soto-Faraco, S., and A. Alsius. 2009. Deconstructing the McGurk–MacDonald illusion. Journal of Experimen tal Psychology. Human Perception and Performance 35(2);580–7. Spence, C., and J. Driver. 1996. Audiovisual links in endogenous covert spatial attention. Journal of Experimen tal Psychology. Human Perception and Performance 22(4);1005–30. Spence, C., and J. Driver. 2004. Crossmodal Space and Crossmodal Attention. Oxford: Oxford University Press. Spence, C., F. Pavani, and J. Driver. 2000. Crossmodal links between vision and touch in covert endogenous spatial attention. Journal of Experimental Psychology. Human Perception and Performance 26(4);1298–319. Spence, C., and S. Squire. 2003. Multisensory integration: Maintaining the perception of synchrony. Current Biology 13(13);R519–21. Spence, C., D.I. Shore, and R.M. Klein. 2001. Multisensory prior entry. Journal of Experimental Psychology. General 130(4);799–832. Spence, C., R. Baddeley, M. Zampini, R. James, and D.I. Shore. 2003. Multisensory temporal order judgments: When two locations are better than one. Perception & Psychophysics 65(2);318–28. Stein, B.E., and M.A. Meredith. 1993. The Merging of the Senses. Cambridge, MA: The MIT Press. Stein, B.E., M.A. Meredith, and M.T. Wallace. 1993. The visually responsive neuron and beyond: Multisensory integration in cat and monkey. Progress in Brain Research 95;79–90. Stein, B.E., N. London, L.K. Wilkinson, and D.D. Price. 1996. Enhancement of perceived visual intensity by auditory stimuli: A psychophysical analysis. Journal of Cognitive Neuroscience 8(6);497–506. Stekelenburg, J.J., and J. Vroomen. 2005. An event-related potential investigation of the time-course of temporal ventriloquism. Neuroreport 16;641–44. Stekelenburg, J.J., and J. Vroomen. 2007. Neural correlates of multisensory integration of ecologically valid audiovisual events. Journal of Cognitive Neuroscience 19(12);1964–73. Stelmach, L.B., and C.M. Herdman. 1991. Directed attention and perception of temporal order. Journal of Experimental Psychology. Human Perception and Performance 17(2);539–50. Sternberg, S., and R.L. Knoll. 1973. The perception of temporal order: Fundamental issues and a general model. In S. Kornblum (ed.), Attention and Performance (vol. IV, pp. 629–85). New York: Academic Press. Stetson, C., X. Cui, P.R. Montague, and D.M. Eagleman. 2006. Motor–sensory recalibration leads to an illusory reversal of action and sensation. Neuron 51(5);651–9. Stone, J.V., N.M. Hunkin, J. Porrill et al. 2001. When is now? Perception of simultaneity. Proceedings of the Royal Society of London. Series B. Biological Sciences 268(1462);31–8. Sugano, Y., M. Keetels, and J. Vroomen. 2010. Adaptation to motor–visual and motor–auditory temporal lags transfer across modalities. Experimental Brain Research 201(3);393–9. Sugita, Y., and Y. Suzuki. 2003. 
Audiovisual perception: Implicit estimation of sound-arrival time. Nature 421(6926);911. Sumby, W.H., and I. Pollack. 1954. Visual contribution to speech intelligibility in noise. Journal of the Acoustical Society of America 26;212–15. Summerfield, Q. 1987. A comprehensive account of audio-visual speech perception. In B. Dodd and R. Campbell (eds.), Hearing by Eye: The Psychology of Lip-Reading (pp. 3–51). London: Lawrence Erlbaum Associates. Takahashi, K., J. Saiki, and K. Watanabe. 2008. Realignment of temporal simultaneity between vision and touch. Neuroreport 19(3);319–22. Tanaka, A., S. Sakamoto, K. Tsumura, and S. Suzuki. 2009a. Visual speech improves the intelligibility of time-expanded auditory speech. Neuroreport 20;473–7. Tanaka, A., S. Sakamoto, K. Tsumura, and Y. Suzuki. 2009b. Visual speech improves the intelligibility of time-expanded auditory speech. Neuroreport 20(5);473–7. Teatini, G., M. Ferne, F. Verzella, and J.P. Berruecos. 1976. Perception of temporal order: Visual and auditory stimuli. Giornale Italiano di Psicologia 3;157–64.
Teder-Salejarvi, W.A., F. Di Russo, J.J. McDonald, and S.A. Hillyard. 2005. Effects of spatial congruity on audio-visual multimodal integration. Journal of Cognitive Neuroscience 17(9);1396–409. Titchener, E.B. 1908. Lectures on the Elementary Psychology of Feeling and Attention. New York: Macmillan. van de Par, S., and A. Kohlrausch. 2004. Visual and auditory object selection based on temporal correlations between auditory and visual cues. Paper presented at the 18th International Congress on Acoustics, Kyoto, Japan. van der Burg, E., C.N. Olivers, A.W. Bronkhorst, and J. Theeuwes. 2008a. Audiovisual events capture attention: Evidence from temporal order judgments. Journal of Vision 8(5);2, 1–10. van der Burg, E., C.N. Olivers, A.W. Bronkhorst, and J. Theeuwes. 2008b. Pip and pop: Nonspatial auditory signals improve spatial visual search. Journal of Experimental Psychology. Human Perception and Performance 34(5);1053–65. van der Burg, E., C.N. Olivers, A.W. Bronkhorst, and J. Theeuwes. 2009. Poke and pop: Tactile–visual synchrony increases visual saliency. Neuroscience Letters 450(1);60–4. Van Eijk, R.L. 2008. Audio-Visual Synchrony Perception. Thesis, Technische Universiteit Eindhoven, The Netherlands. Van Eijk, R.L., A. Kohlrausch, J.F. Juola, and S. van de Par. 2008. Audiovisual synchrony and temporal order judgments: Effects of experimental method and stimulus type. Perception & Psychophysics 70(6);955–68. van Wassenhove, V., K.W. Grant, and D. Poeppel. 2007. Temporal window of integration in auditory–visual speech perception. Neuropsychologia 45;598–601. Vatakis, A., and C. Spence. 2006a. Audiovisual synchrony perception for music, speech, and object actions. Brain Research 1111(1);134–42. Vatakis, A., and C. Spence. 2006b. Audiovisual synchrony perception for speech and music assessed using a temporal order judgment task. Neuroscience Letters 393(1);40–4. Vatakis, A., and C. Spence. 2007. Crossmodal binding: Evaluating the “unity assumption” using audiovisual speech stimuli. Perception & Psychophysics 69(5);744–56. Vatakis, A., and C. Spence. 2008. Evaluating the influence of the ‘unity assumption’ on the temporal perception of realistic audiovisual stimuli. Acta Psychologica 127(1);12–23. Vatakis, A., J. Navarra, S. Soto-Faraco, and C. Spence. 2007. Temporal recalibration during asynchronous audiovisual speech perception. Experimental Brain Research 181(1);173–81. Vatakis, A., A.A. Ghazanfar, and C. Spence. 2008a. Facilitation of multisensory integration by the “unity effect” reveals that speech is special. Journal of Vision 8(9);14 1–11. Vatakis, A., J. Navarra, S. Soto-Faraco, and C. Spence. 2008b. Audiovisual temporal adaptation of speech: Temporal order versus simultaneity judgments. Experimental Brain Research 185(3);521–9. Vibell, J., C. Klinge, M. Zampini, C. Spence, and A.C. Nobre. 2007. Temporal order is coded temporally in the brain: Early event-related potential latency shifts underlying prior entry in a cross-modal temporal order judgment task. Journal of Cognitive Neuroscience 19(1);109–20. von Grunau, M.W. 1986. A motion aftereffect for long-range stroboscopic apparent motion. Perception & Psychophysics 40(1);31–8. Von Helmholtz, H. 1867. Handbuch der Physiologischen Optik. Leipzig: Leopold Voss. Vroomen, J., and B. de Gelder. 2000. Sound enhances visual perception: Cross-modal effects of auditory organization on vision. Journal of Experimental Psychology. Human Perception and Performance 26(5);1583–90. Vroomen, J., and B. de Gelder. 2004a. 
Perceptual effects of cross-modal stimulation: Ventriloquism and the freezing phenomenon. In G.A. Calvert, C. Spence, and B.E. Stein (eds.). The Handbook of Multisensory Processes. Cambridge, MA: MIT Press. Vroomen, J., and B. de Gelder. 2004b. Temporal ventriloquism: Sound modulates the flash-lag effect. Journal of Experimental Psychology. Human Perception and Performance 30(3);513–8. Vroomen, J., and M. Keetels. 2006. The spatial constraint in intersensory pairing: No role in temporal ventriloquism. Journal of Experimental Psychology. Human Perception and Performance 32(4);1063–71. Vroomen, J., and M. Keetels. 2009. Sounds change four-dot masking. Acta Psychologica 130(1);58–63. Vroomen, J., and J.J. Stekelenburg. 2009. Visual anticipatory information modulates multisensory interactions of artificial audiovisual stimuli. Journal of Cognitive Neuroscience 22(7);1583–96. Vroomen, J., M. Keetels, B. de Gelder, and P. Bertelson. 2004. Recalibration of temporal order perception by exposure to audio-visual asynchrony. Cognitive Brain Research 22(1);32–5. Watanabe, K., and S. Shimojo. 2001. When sound affects vision: Effects of auditory grouping on visual motion perception. Psychological Science 12(2);109–16.
Welch, R.B. 1999. Meaning, attention, and the “unity assumption” in the intersensory bias of spatial and temporal perceptions. In G. Aschersleben, T. Bachmann, and J. Müsseler (eds.), Cognitive Contributions to the Perception of Spatial and Temporal Events (pp. 371–87). Amsterdam: Elsevier. Welch, R.B., and D.H. Warren. 1980. Immediate perceptual response to intersensory discrepancy. Psychological Bulletin 88(3);638–67. Yamamoto, S., M. Miyazaki, T. Iwano, and S. Kitazawa. 2008. Bayesian calibration of simultaneity in audiovisual temporal order judgment. Paper presented at the 9th International Multisensory Research Forum (July 16–19, 2008). Hamburg, Germany. Zampini, M., D.I. Shore, and C. Spence. 2003a. Audiovisual temporal order judgments. Experimental Brain Research 152(2);198–210. Zampini, M., D.I. Shore, and C. Spence. 2003b. Multisensory temporal order judgments: The role of hemispheric redundancy. International Journal of Psychophysiology 50(1–2);165–80. Zampini, M., T. Brown, D.I. Shore et al. 2005a. Audiotactile temporal order judgments. Acta Psychologica 118(3);277–91. Zampini, M., S. Guest, D.I. Shore, and C. Spence. 2005b. Audio-visual simultaneity judgments. Perception & Psychophysics 67(3);531–44. Zampini, M., D.I. Shore, and C. Spence. 2005c. Audiovisual prior entry. Neuroscience Letters 381(3);217–22.
10
Representation of Object Form in Vision and Touch
Simon Lacey and Krish Sathian
CONTENTS 10.1 Introduction........................................................................................................................... 179 10.2 Cortical Regions Involved in Visuo-Haptic Shape Processing............................................. 179 10.2.1 Lateral Occipital Complex......................................................................................... 179 10.2.2 Parietal Cortical Regions........................................................................................... 180 10.3 Do Vision and Touch Share a Common Shape Representation?........................................... 180 10.3.1 Potential Role of Visual Imagery.............................................................................. 180 10.3.2 A Modality-Independent Shape Representation?...................................................... 181 10.4 Properties of Shared Representation..................................................................................... 182 10.4.1 View-Dependence in Vision and Touch.................................................................... 182 10.4.2 Cross-Modal View-Independence............................................................................. 183 10.5 An Integrative Framework for Visuo- Haptic Shape Representation..................................... 183 Acknowledgments........................................................................................................................... 184 References....................................................................................................................................... 184
10.1 INTRODUCTION
The idea that the brain processes sensory inputs in parallel modality-specific streams has given way to the concept of a “metamodal” brain with a multisensory task-based organization (Pascual-Leone and Hamilton 2001). For example, recent research shows that many cerebral cortical regions previously considered to be specialized for processing various aspects of visual input are also activated during analogous tactile or haptic tasks (reviewed by Sathian and Lacey 2007). In this article, which concentrates on shape processing in humans, we review the current state of knowledge about the mental representation of object form in vision and touch. We begin by describing the cortical regions showing multisensory responses to object form. Next, we consider the extent to which the underlying representation of object form is explained by cross-modal visual imagery or multisensory convergence. We then review recent work on the view-dependence of visuo-haptic shape representations and the resulting model of a multisensory, view-independent representation. Finally, we discuss a recently presented conceptual framework of visuo-haptic shape processing as a basis for future investigations.
10.2 CORTICAL REGIONS INVOLVED IN VISUO-HAPTIC SHAPE PROCESSING
10.2.1 Lateral Occipital Complex
Most notable among the several cortical regions implicated in visuo-haptic shape processing is the lateral occipital complex (LOC), an object-selective region in the ventral visual pathway (Malach et al. 1995). Part of the LOC responds selectively to objects in both vision and touch and has been termed LOtv (Amedi et al. 2001, 2002). The LOC is shape-selective during both haptic
three-dimensional shape perception (Amedi et al. 2001; Stilla and Sathian 2008; Zhang et al. 2004) and tactile two-dimensional shape perception (Stoesz et al. 2003; Prather et al. 2004). Neurological case studies indicate that the LOC is necessary for both haptic and visual shape perception: a patient with a left occipitotemporal cortical lesion, likely including the LOC, was found to exhibit tactile in addition to visual agnosia (inability to recognize objects), although somatosensory cortex and basic somatosensory function were intact (Feinberg et al. 1986). Another patient with bilateral LOC lesions could not learn new objects either visually or haptically (James et al. 2006). LOtv is thought to be a processor of geometric shape because it is not activated during object recognition triggered by object-specific sounds (Amedi et al. 2002). Interestingly, though, LOtv does respond when auditory object recognition is mediated by a visual–auditory sensory substitution device that converts visual shape information into an auditory stream, but only when individuals (whether sighted or blind) are specifically trained in a manner permitting generalization to untrained objects and not when merely arbitrary associations are taught (Amedi et al. 2007). This dissociation further bolsters the idea that LOtv is concerned with geometric shape information, regardless of the input sensory modality.
10.2.2 Parietal Cortical Regions
Multisensory shape selectivity also occurs in parietal cortical regions, including the postcentral sulcus (Stilla and Sathian 2008), which is the location of Brodmann’s area 2 in human primary somatosensory cortex (S1; Grefkes et al. 2001). Although this region is generally assumed to be purely somatosensory, earlier neurophysiological observations in monkeys suggested visual responsiveness in parts of S1 (Iwamura 1998; Zhou and Fuster 1997). Visuo-haptic shape selectivity has also repeatedly been reported in various parts of the human intraparietal sulcus (IPS), which is squarely in classical multisensory cortex. The particular bisensory foci lie either anteriorly in the IPS (Grefkes et al. 2002; Stilla and Sathian 2008), in the region referred to as the anterior intraparietal area (AIP; Grefkes and Fink 2005; Shikata et al. 2008) or in that termed the medial intraparietal area (Grefkes et al. 2004), or posteroventrally (Saito et al. 2003; Stilla and Sathian 2008), in a region comprising the caudal intraparietal area (CIP; Shikata et al. 2008) and the adjacent, retinotopically mapped areas IPS1 and V7 (Swisher et al. 2007). It should be noted that areas AIP, medial intraparietal, CIP, and V7 were first described in macaque monkeys, and their homologies in humans remain somewhat uncertain. A recent study reported that repetitive transcranial magnetic stimulation over the left anterior IPS impaired visual–haptic, but not haptic–visual, shape matching using the right hand (Buelte et al. 2008). However, repetitive transcranial magnetic stimulation over the right AIP during shape matching with the left hand had no effect on either cross-modal condition. The reason for this discrepancy is unclear, which emphasizes that the exact roles of the postcentral sulcus, the IPS regions, and LOtv in multisensory shape processing remain to be fully worked out.
10.3 DO VISION AND TOUCH SHARE A COMMON SHAPE REPRESENTATION?
10.3.1 Potential Role of Visual Imagery
An intuitively appealing explanation for haptically evoked activation of visual cortex is that this is mediated by visual imagery rather than multisensory convergence of inputs (Sathian et al. 1997). The visual imagery hypothesis is supported by evidence that the LOC is active during visual imagery. For example, the left LOC is active during mental imagery of familiar objects previously explored haptically by blind individuals or visually by sighted individuals (De Volder et al. 2001), and also during recall of both geometric and material object properties from memory (Newman et al. 2005). Furthermore, individual differences in ratings of the vividness of visual imagery were found to strongly predict individual differences in haptic shape-selective activation magnitudes in the right LOC (Zhang et al. 2004). On the other hand, the magnitude of LOC activation during
visual imagery can be considerably less than during haptic shape perception, suggesting that visual imagery may be relatively unimportant in haptic shape perception (Amedi et al. 2001; see also Reed et al. 2004). However, performance on the visual imagery task has not generally been monitored, so that lower levels of LOC activity during visual imagery could simply reflect participants not maintaining their visual images throughout the imagery scan. Because both the early and late blind show shape-related activity in the LOC evoked by tactile input (Amedi et al. 2003; Burton et al. 2002; Pietrini et al. 2004; Stilla et al. 2008; reviewed by Pascual-Leone et al. 2005; Sathian 2005; Sathian and Lacey 2007), or by auditory input when sensory substitution devices were used (Amedi et al. 2007; Arno et al. 2001; Renier et al. 2004, 2005), some have concluded that visual imagery does not account for cross-modal activation of visual cortex. Although this is true for the early blind, it certainly does not exclude the use of visual imagery in the sighted, especially in view of the abundant evidence for cross-modal plasticity resulting from visual deprivation (Pascual-Leone et al. 2005; Sathian 2005; Sathian and Lacey 2007).
It is also important to be clear about what is meant by “visual imagery,” which is often treated as a unitary ability. Recent research has shown that there are two different kinds of visual imagery: “object imagery” (images that are pictorial and deal with the actual appearance of objects in terms of shape, color, brightness, and other surface properties) and “spatial imagery” (more schematic images dealing with the spatial relations of objects and their component parts and with spatial transformations; Kozhevnikov et al. 2002, 2005; Blajenkova et al. 2006). This distinction is relevant because both vision and touch encode spatial information about objects, for example, size, shape, and the relative positions of different object features; such information may well be encoded in a modality-independent spatial representation (Lacey and Campbell 2006). Support for this possibility is provided by recent work showing that spatial, but not object, imagery scores were correlated with accuracy on cross-modal, but not within-modal, object identification for a set of closely similar and previously unfamiliar objects (Lacey et al. 2007a). Thus, it is probably beneficial to explore the roles of object and spatial imagery rather than taking an undifferentiated visual imagery approach. We return to this idea later but, as an aside, we note that the object–spatial dimension of imagery can be viewed as orthogonal to the modality involved, as there is evidence that early-blind individuals perform both object-based and spatially based tasks equally well (Aleman et al. 2001; see also Noordzij et al. 2007). However, the object–spatial dimension of haptically derived representations remains unexplored.
10.3.2 A Modality-Independent Shape Representation?
An alternative to the visual imagery hypothesis is that incoming inputs in both vision and touch converge on a modality-independent representation, which is suggested by the overlap of visual and haptic shape-selective activity in the LOC. Some researchers refer to such modality-independent representations as “amodal,” but we believe that this term is best reserved for linguistic or other abstract representations. Instead, we suggest the use of the term “multisensory” to refer to a representation that can be encoded and retrieved by multiple sensory systems and which retains the modality “tags” of the associated inputs (Sathian 2004). The multisensory hypothesis is suggested by studies of effective connectivity derived from functional magnetic resonance imaging (fMRI) data indicating bottom-up projections from S1 to the LOC (Peltier et al. 2007; Deshpande et al. 2008) and also by electrophysiological data showing early propagation of activity from S1 into the LOC during tactile shape discrimination (Lucan et al. 2011). If vision and touch engage a common spatial representational system, then we would expect to see similarities in processing of visually and haptically derived representations and this, in fact, turns out to be the case. Thus, LOC activity is greater when viewing objects previously primed haptically, compared to viewing nonprimed objects (James et al. 2002b). In addition, behavioral studies have shown that cross-modal priming is as effective as within-modal priming (Easton et al. 1997a, 1997b; Reales and Ballesteros 1999). Candidate regions for housing a common visuo-haptic shape
representation include the right LOC and the left CIP because activation magnitudes during visual and haptic processing of (unfamiliar) shape are significantly correlated across subjects in these regions (Stilla and Sathian 2008). Furthermore, the time taken to scan both visual images (Kosslyn 1973; Kosslyn et al. 1978) and haptically derived images (Röder and Rösler 1998) increases with the spatial distance to be inspected. Also, the time taken to judge whether two objects are the same or mirror images increases nearly linearly with increasing angular disparity between the objects for mental rotation of both visual (Shepard and Metzler 1971) and haptic stimuli (Marmor and Zaback 1976; Carpenter and Eisenberg 1978; Hollins 1986; Dellantonio and Spagnolo 1990). The same relationship was found when the angle between a tactile stimulus and a canonical angle was varied, with associated activity in the left anterior IPS (Prather et al. 2004), an area also active during mental rotation of visual stimuli (Alivisatos and Petrides 1997), and probably corresponding to AIP (Grefkes and Fink 2005; Shikata et al. 2008). Similar processing has been found with sighted, early- and late-blind individuals (Carpenter and Eisenberg 1978; Röder and Rösler 1998). These findings suggest that spatial metric information is preserved in both vision and touch, and that both modalities rely on similar, if not identical, imagery processes (Röder and Rösler 1998).
10.4 PROPERTIES OF SHARED REPRESENTATION In this section, we discuss the properties of the multisensory representation of object form with particular reference to recent work on view-independence in visuo-haptic object recognition. The representation of an object is said to be view-dependent if rotating the object away from the learned view impairs object recognition, that is, optimal recognition depends on perceiving the same view of the object. By contrast, a representation is view-independent if objects are correctly identified despite being rotated to provide a different view. The shared multisensory representation that enables crossmodal object recognition is likely distinct from the separate unisensory representations that support visual and haptic within-modal object recognition: we examine the relationship between these.
10.4.1 View-Dependence in Vision and Touch It has long been known that visual object representations are view-dependent (reviewed by Peissig and Tarr 2007) but it might be expected that haptic object representations are view-independent because the hands can simultaneously contact an object from different sides (Newell et al. 2001). This expectation is reinforced because following the contours of a three-dimensional object is necessary for haptic object recognition (Lederman and Klatzky 1987). Nonetheless, several studies have shown that haptic object representations are in fact view-dependent for both unfamiliar (Newell et al. 2001; Lacey et al. 2007a) and familiar objects (Lawson 2009). This may be because the biomechanics of the hands can be restrictive in some circumstances: some hand positions naturally facilitate exploration more than others (Woods et al. 2008). Furthermore, for objects with a vertical main axis, haptic exploration is biased to the far (back) “view” of an object, explored by the fingers whereas the thumbs stabilize the object rather than explore it (Newell et al. 2001). However, haptic recognition remains view-dependent even when similar objects are presented so that their main axis is horizontal, an orientation that allows more freely comprehensive haptic exploration of multiple object surfaces (Lacey et al. 2007a). The extent to which visual object recognition is impaired by changes in orientation depends on the particular axis of rotation: picture-plane rotations are less disruptive than depth-plane rotations in both object recognition and mental rotation tasks, even though these tasks depend on different visual pathways—ventral and dorsal, respectively (Gauthier et al. 2002). By contrast, haptic object recognition is equally disrupted by rotation about each of the three main axes (Lacey et al. 2007a). Thus, although visual and haptic unisensory representations may be functionally equivalent in that they are both view-dependent, the underlying basis for this may be very different in each case.
A further functional equivalence between visual and haptic object representation is that each has preferred or canonical views of objects. In vision, the preferred view for both familiar and unfamiliar objects is one in which the main axis is angled at 45° to the observer (Palmer et al. 1981; Perrett et al. 1992). Recently, Woods et al. (2008) have shown that haptic object recognition also has canonical views—again independently of familiarity—but that these are defined by reference to the midline of the observer’s body, the object’s main axis being aligned either parallel or perpendicular to the midline. This may be due to grasping and object function: Craddock and Lawson (2008) found that haptic recognition was better for objects in typical rather than atypical orientations; for example, a cup oriented with the handle to the right for a right-handed person.
10.4.2 Cross-Modal View-Independence Remarkably, although visual and haptic within-modal object recognition are both view-dependent, visuo-haptic cross-modal recognition is view-independent (Lacey et al. 2007a; Ueda and Saiki 2007). Rotating an object away from the learned view did not degrade recognition, whether visual study was followed by haptic test or vice versa (Lacey et al. 2007a; Ueda and Saiki 2007), although Lawson (2009) found view-independence only in the haptic study–visual test condition. Cross-modal object recognition was also independent of the particular axis of rotation (Lacey et al. 2007a). Thus, visuo-haptic cross-modal object recognition clearly relies on a different representation from that involved in the corresponding within-modal task (see also Newell et al. 2005). In a recent series of experiments, we used a perceptual learning paradigm to investigate the relationship between the unisensory view-dependent and multisensory view-independent representations (Lacey et al. 2009a). We showed that a relatively brief period of within-modal learning to establish within-modal view-independence resulted in complete, symmetric cross-modal transfer of view-independence: visual view-independence acquired following exclusively visual learning also resulted in haptic view-independence, and vice versa. In addition, both visual–haptic and haptic–visual cross-modal learning also transformed visual and haptic within-modal recognition from view-dependent to view-independent. We concluded from this study that visual and haptic within-modal and visuo-haptic cross-modal view-independence all rely on the same shared representation. Thus, this study and its predecessor (Lacey et al. 2007a) suggest a model of view-independence in which separate, view-dependent, unisensory representations feed directly into a view-independent, bisensory representation rather than being routed through intermediate, unisensory, view-independent representations. A possible mechanism for this is the integration of multiple low-level, view-dependent, unisensory representations into a higher-order, view-independent, multisensory representation (see Riesenhuber and Poggio 1999 for a similar proposal regarding visual object recognition). Cortical localization of this modality-independent, view-independent representation is an important goal for future work. Although the IPS is a potential candidate, being a well-known convergence site for visual and haptic shape processing (Amedi et al. 2001; James et al. 2002b; Zhang et al. 2004; Stilla and Sathian 2008), IPS responses appear to be view-dependent (James et al. 2002a). The LOC also shows convergent multisensory shape processing; however, responses in this area have shown view-dependence in some studies (Grill-Spector et al. 1999; Gauthier et al. 2002) but view-independence in other studies (James et al. 2002a).
10.5 AN INTEGRATIVE FRAMEWORK FOR VISUO-HAPTIC SHAPE REPRESENTATION An important goal of multisensory research is to model the processes underlying visuo-haptic object representation. As a preliminary step to this goal, we have recently investigated connectivity and intertask correlations of activation magnitudes during visual object imagery and haptic perception of both familiar and unfamiliar objects (Deshpande et al. 2010; Lacey et al. 2010). In the visual
object imagery task, participants listened to word pairs and decided whether the objects designated by those words had the same or different shapes. Thus, in contrast with earlier studies, participants had to process their images throughout the scan and this could be verified by monitoring their performance. In a separate session, participants performed a haptic shape discrimination task. For one group of subjects, the haptic objects were familiar; for the other group, they were unfamiliar. We found that both intertask correlations and connectivity were modulated by object familiarity (Deshpande et al. 2010; Lacey et al. 2010). Although the LOC was active bilaterally during both visual object imagery and haptic shape perception, there was an intertask correlation only for familiar shape. Analysis of connectivity showed that visual object imagery and haptic familiar shape perception engaged quite similar networks characterized by top-down paths from prefrontal and parietal regions into the LOC, whereas a very different network emerged during haptic perception of unfamiliar shape, featuring bottom-up inputs from S1 to the LOC (Deshpande et al. 2010). Based on these findings and on the literature reviewed earlier in this chapter, we proposed a conceptual framework for visuo-haptic object representation that integrates the visual imagery and multisensory approaches (Lacey et al. 2009b). In this proposed framework, the LOC houses a representation that is independent of the input sensory modality and is flexibly accessible via either bottom-up or top-down pathways, depending on object familiarity (or other task attributes). Haptic perception of familiar shape uses visual object imagery via top-down paths from prefrontal and parietal areas into the LOC whereas haptic perception of unfamiliar shape may use spatial imagery processes and involves bottom-up pathways from the somatosensory cortex to the LOC. Because there is no stored representation of an unfamiliar object, its global shape has to be computed by exploring it in its entirety and the framework would therefore predict the somatosensory drive of LOC. The IPS has been implicated in visuo-haptic perception of both shape and location (Stilla and Sathian 2008; Gibson et al. 2008). We might therefore expect that, to compute global shape in unfamiliar objects, the IPS would be involved in processing the relative spatial locations of object parts. For familiar objects, global shape can be inferred easily, perhaps from distinctive features that are sufficient to retrieve a visual image, and so the framework predicts increased contribution from parietal and prefrontal regions. Clearly, objects are not exclusively familiar or unfamiliar and individuals are not purely object or spatial imagers: these are continua along which objects and individuals may vary. In this respect, an individual differences approach is likely to be productive (see Lacey et al. 2007b; Motes et al. 2008) because these factors may interact, with different weights in different circumstances, for example task demands or individual history (visual experience, training, etc.). More work is required to define and test this framework.
ACKNOWLEDGMENTS This work was supported by the National Eye Institute, the National Science Foundation, and the Veterans Administration.
REFERENCES Aleman, A., L. van Lee, M.H.M. Mantione, I.G. Verkoijen, and E.H.F. de Haan. 2001. Visual imagery without visual experience: Evidence from congenitally totally blind people. Neuroreport 12:2601–2604. Alivisatos, B., and M. Petrides. 1997. Functional activation of the human brain during mental rotation. Neuropsychologia 36:11–118. Amedi, A., R. Malach, T. Hendler, S. Peled, and E. Zohary. 2001. Visuo-haptic object-related activation in the ventral visual pathway. Nature Neuroscience 4:324–330. Amedi, A., G. Jacobson, T. Hendler, R. Malach, and E. Zohary. 2002. Convergence of visual and tactile shape processing in the human lateral occipital complex. Cerebral Cortex 12:1202–1212. Amedi, A., N. Raz, P. Pianka, R. Malach, and E. Zohary. 2003 Early ‘visual’ cortex activation correlates with superior verbal memory performance in the blind. Nature Neuroscience 6:758–766.
Amedi, A., W.M. Stern, J.A. Camprodon et al. 2007. Shape conveyed by visual-to-auditory sensory substitution activates the lateral occipital complex. Nature Neuroscience 10:687–689. Arno, P., A.G. De Volder, A. Vanlierde et al. 2001. Occipital activation by pattern recognition in the early blind using auditory substitution for vision. NeuroImage 13:632–645. Blajenkova, O., M. Kozhevnikov, and M.A. Motes. 2006. Object-spatial imagery: A new self-report imagery questionnaire. Applied Cognitive Psychology 20:239–263. Buelte, D., I.G. Meister, M. Staedtgen et al. 2008. The role of the anterior intraparietal sulcus in crossmodal processing of object features in humans: An rTMS study. Brain Research 1217:110–118. Burton, H., A.Z. Snyder, T.E. Conturo, E. Akbudak, J.M. Ollinger, and M.E. Raichle. 2002. Adaptive changes in early and late blind: A fMRI study of Braille reading. Journal of Neurophysiology 87:589–607. Carpenter, P.A., and P. Eisenberg. 1978. Mental rotation and the frame of reference in blind and sighted individuals. Perception & Psychophysics 23:117–124. Craddock, M., and R. Lawson. 2008. Repetition priming and the haptic recognition of familiar and unfamiliar objects. Perception & Psychophysics 70:1350–1365. Dellantonio, A., and F. Spagnolo. 1990. Mental rotation of tactual stimuli. Acta Psychologica 73:245–257. Deshpande, G., X. Hu, R. Stilla, and K. Sathian. 2008. Effective connectivity during haptic perception: A study using Granger causality analysis of functional magnetic resonance imaging data. NeuroImage 40:1807–1814. Deshpande, G., X. Hu, S. Lacey, R. Stilla, and K. Sathian. 2010. Object familiarity modulates effective connectivity during haptic shape perception. NeuroImage 49:1991–2000. De Volder, A.G., H. Toyama, Y. Kimura et al. 2001. Auditory triggered mental imagery of shape involves visual association areas in early blind humans. NeuroImage 14:129–139. Easton, R.D., A.J. Greene, and K. Srinivas. 1997a. Transfer between vision and haptics: Memory for 2-D patterns and 3-D objects. Psychonomic Bulletin & Review 4:403–410. Easton, R.D., K. Srinivas, and A.J. Greene. 1997b. Do vision and haptics share common representations? Implicit and explicit memory within and between modalities. Journal of Experimental Psychology. Learning, Memory, and Cognition 23:153–163. Feinberg, T.E., L.J. Rothi, and K.M. Heilman. 1986. Multimodal agnosia after unilateral left hemisphere lesion. Neurology 36:864–867. Gauthier, I., W.G. Hayward, M.J. Tarr et al. 2002. BOLD activity during mental rotation and view-dependent object recognition. Neuron 34:161–171. Gibson, G., R. Stilla, and K. Sathian. 2008. Segregated visuo-haptic processing of texture and location. Abstract, Human Brain Mapping. Grefkes, C., S. Geyer, T. Schormann, P. Roland, and K. Zilles. 2001. Human somatosensory area 2: Observerindependent cytoarchitectonic mapping, interindividual variability, and population map. NeuroImage 14:617–631. Grefkes, C., P.H. Weiss, K. Zilles, and G.R. Fink. 2002. Crossmodal processing of object features in human anterior intraparietal cortex: An fMRI study implies equivalencies between humans and monkeys. Neuron 35:173–184. Grefkes, C., A. Ritzl, K. Zilles, and G.R. Fink. 2004. Human medial intraparietal cortex subserves visuomotor coordinate transformation. NeuroImage 23:1494–1506. Grefkes, C., and G. Fink. 2005. The functional organization of the intraparietal sulcus in humans and monkeys. Journal of Anatomy 207:3–17. Grill-Spector, K., T. Kushnir, S. Edelman, G. Avidan, Y. Itzchak, and R. 
Malach. 1999. Differential processing of objects under various viewing conditions in the human lateral occipital complex. Neuron 24:187–203. Hollins, M. 1986. Haptic mental rotation: More consistent in blind subjects? Journal of Visual Impairment & Blindness 80:950–952. Iwamura, Y. 1998. Hierarchical somatosensory processing. Current Opinion in Neurobiology 8:522–528. James, T.W., G.K. Humphrey, J.S. Gati, R.S. Menon, and M.A. Goodale. 2002a. Differential effects of view on object-driven activation in dorsal and ventral streams. Neuron 35:793–801. James, T.W., G.K. Humphrey, J.S. Gati, P. Servos, R.S. Menon, and M.A. Goodale. 2002b. Haptic study of three-dimensional objects activates extrastriate visual areas. Neuropsychologia 40:1706–1714. James, T.W., K.H. James, G.K. Humphrey, and M.A. Goodale. 2006. Do visual and tactile object representations share the same neural substrate? In Touch and Blindness: Psychology and Neuroscience, ed. M.A. Heller and S. Ballesteros, 139–155. Mahwah, NJ: Lawrence Erlbaum Associates. Kosslyn, S.M. 1973. Scanning visual images: Some structural implications. Perception & Psychophysics 14:90–94.
Kosslyn, S.M., T.M. Ball, and B.J. Reiser. 1978. Visual images preserve metric spatial information: Evidence from studies of image scanning. Journal of Experimental Psychology. Human Perception and Performance 4:47–60. Kozhevnikov, M., M. Hegarty, and R.E. Mayer. 2002. Revising the visualiser–verbaliser dimension: Evidence for two types of visualisers. Cognition and Instruction 20:47–77. Kozhevnikov, M., S.M. Kosslyn, and J. Shephard. 2005. Spatial versus object visualisers: A new characterisation of cognitive style. Memory & Cognition 33:710–726. Lacey, S., and C. Campbell. 2006. Mental representation in visual/haptic crossmodal memory: Evidence from interference effects. Quarterly Journal of Experimental Psychology 59:361–376. Lacey, S., A. Peters, and K. Sathian. 2007a. Cross-modal object representation is viewpoint-independent. PLoS ONE 2:e890. doi: 10.1371/journal.pone0000890. Lacey, S., C. Campbell, and K. Sathian. 2007b. Vision and touch: Multiple or multisensory representations of objects? Perception 36:1513–1521. Lacey, S., M. Pappas, A. Kreps, K. Lee, and K. Sathian. 2009a. Perceptual learning of view-independence in visuo-haptic object representations. Experimental Brain Research 198:329–337. Lacey, S., N. Tal, A. Amedi, and K. Sathian. 2009b. A putative model of multisensory object representation. Brain Topography 21:269–274. Lacey, S., P. Flueckiger, R. Stilla, M. Lava, and K. Sathian. 2010. Object familiarity modulates the relationship between visual object imagery and haptic shape perception. NeuroImage 49:1977–1990. Lawson, R. 2009. A comparison of the effects of depth rotation on visual and haptic three-dimensional object recognition. Journal of Experimental Psychology. Human Perception and Performance 35:911–930. Lederman, S.J., and R.L. Klatzky. 1987. Hand movements: A window into haptic object recognition. Cognitive Psychology 19:342–368. Lucan, J.N., J.J. Foxe, M. Gomez-Ramirez, K. Sathian, and S. Molholm. 2011. Tactile shape discrimination recruits human lateral occipital complex during early perceptual processing. Human Brain Mapping 31:1813–1821. Malach, R., J.B. Reppas, R.R. Benson et al. 1995. Object-related activity revealed by functional magnetic resonance imaging in human occipital cortex. Proceedings of the National Academy of Sciences of the United States of America 92:8135–8139. Marmor, G.S., and L.A. Zaback. 1976. Mental rotation by the blind: Does mental rotation depend on visual imagery? Journal of Experimental Psychology. Human Perception and Performance 2:515–521. Motes, M.A., R. Malach, and M. Kozhevnikov. 2008. Object-processing neural efficiency differentiates object from spatial visualizers. Neuroreport 19:1727–1731. Newell, F.N., M.O. Ernst, B.S. Tjan, and H.H. Bülthoff. 2001. View dependence in visual and haptic object recognition. Psychological Science 12:37–42. Newell, F.N., A.T. Woods, M. Mernagh, and H.H. Bülthoff. 2005. Visual, haptic and crossmodal recognition of scenes. Experimental Brain Research 161:233–242. Newman, S.D., R.L. Klatzky, S.J. Lederman, and M.A. Just. 2005. Imagining material versus geometric properties of objects: An fMRI study. Cognitive Brain Research 23:235–246. Noordzij, M.L., S. Zuidhoek, and A. Postma. 2007. The influence of visual experience on visual and spatial imagery. Perception 36:101–112. Palmer, S., E. Rosch, and P. Chase. 1981. Canonical perspective and the perception of objects. In Attention and Performance IX, ed. J.B. Long and A.D. Baddeley, 135–151. Hillsdale, NJ: Lawrence Earlbaum Associates. 
Pascual-Leone, A., and R.H. Hamilton. 2001. The metamodal organization of the brain. Progress in Brain Research 134:427–445. Pascual-Leone, A., A. Amedi, F. Fregni, and L.B. Merabet. 2005. The plastic human brain. Annual Review of Neuroscience 28:377–401. Peissig, J.J., and M.J. Tarr. 2007. Visual object recognition: Do we know more now than we did 20 years ago? Annual Review of Psychology 58:75–96. Peltier, S., R. Stilla, E. Mariola, S. LaConte, X. Hu, and K. Sathian. 2007. Activity and effective connectivity of parietal and occipital cortical regions during haptic shape perception. Neuropsychologia 45:476–483. Perrett, D.I., M.H. Harries, and S. Looker. 1992. Use of preferential inspection to define the viewing sphere and characteristic views of an arbitrary machined tool part. Perception 21:497–515. Pietrini, P., M.L. Furey, E. Ricciardi et al. 2004. Beyond sensory images: Object-based representation in the human ventral pathway. Proceedings of the National Academy of Sciences of the United States of America 101:5658–5663.
Prather, S.C., J.R. Votaw, and K. Sathian. 2004. Task-specific recruitment of dorsal and ventral visual areas during tactile perception. Neuropsychologia 42:1079–1087. Reales, J.M., and S. Ballesteros. 1999. Implicit and explicit memory for visual and haptic objects: Cross-modal priming depends on structural descriptions. Journal of Experimental Psychology. Learning, Memory, and Cognition 25:644–663. Reed, C.L., S. Shoham, and E. Halgren. 2004. Neural substrates of tactile object recognition: An fMRI study. Human Brain Mapping 21:236–246. Renier, L., O. Collignon, D. Tranduy et al. 2004. Visual cortex activation in early blind and sighted subjects using an auditory visual substitution device to perceive depth. NeuroImage 22:S1. Renier, L., O. Collignon, C. Poirier et al. 2005. Cross modal activation of visual cortex during depth perception using auditory substitution of vision. NeuroImage 26:573–580. Riesenhuber, M., and T. Poggio. 1999. Hierarchical models of object recognition in cortex. Nature Neuroscience 2:1019–1025. Röder, B., and F. Rösler. 1998. Visual input does not facilitate the scanning of spatial images. Journal of Mental Imagery 22:165–181. Saito, D.N., T. Okada, Y. Morita, Y. Yonekura, and N. Sadato. 2003. Tactile–visual cross-modal shape matching: A functional MRI study. Cognitive Brain Research 17:14–25. Sathian, K. 2004. Modality, quo vadis? Comment. Behavioral and Brain Sciences 27:413–414. Sathian, K. 2005. Visual cortical activity during tactile perception in the sighted and the visually deprived. Developmental Psychobiology 46:279–286. Sathian, K., and S. Lacey. 2007. Journeying beyond classical somatosensory cortex. Canadian Journal of Experimental Psychology 61:254–264. Sathian, K., A. Zangaladze, J.M. Hoffman, and S.T. Grafton. 1997. Feeling with the mind’s eye. Neuroreport 8:3877–3881. Shepard, R.N., and J. Metzler. 1971. Mental rotation of three-dimensional objects. Science 171:701–703. Shikata, E., A. McNamara, A. Sprenger et al. 2008. Localization of human intraparietal areas AIP, CIP, and LIP using surface orientation and saccadic eye movement tasks. Human Brain Mapping 29:411–421. Stilla, R., R. Hanna, X. Hu, E. Mariola, G. Deshpande, and K. Sathian. 2008. Neural processing underlying tactile microspatial discrimination in the blind: A functional magnetic resonance imaging study. Journal of Vision 8:1–19 doi:10.1167/8.10.13. Stilla, R., and K. Sathian. 2008. Selective visuo-haptic processing of shape and texture. Human Brain Mapping 29:1123–1138. Stoesz, M., M. Zhang, V.D. Weisser, S.C. Prather, H. Mao, and K. Sathian. 2003. Neural networks active during tactile form perception: Common and differential activity during macrospatial and microspatial tasks. International Journal of Psychophysiology 50:41–49. Swisher, J.D., M.A. Halko, L.B. Merabet, S.A. McMains, and D.C. Somers. 2007. Visual topography of human intraparietal sulcus. Journal of Neuroscience 27:5326–5337. Ueda, Y., and J. Saiki. 2007. View independence in visual and haptic object recognition. Japanese Journal of Psychonomic Science 26:11–19. Woods, A.T., A. Moore, and F.N. Newell. 2008. Canonical views in haptic object representation. Perception 37:1867–1878. Zhang, M., V.D. Weisser, R. Stilla, S.C. Prather, and K. Sathian. 2004. Multisensory cortical processing of object shape and its relation to mental imagery. Cognitive, Affective & Behavioral Neuroscience 4:251–259. Zhou, Y.-D., and J.M. Fuster. 1997. Neuronal activity of somatosensory cortex in a cross-modal (visuo-haptic) memory task. 
Experimental Brain Research 116:551–555.
Section III Combinatorial Principles and Modeling
11 Spatial and Temporal Features of Multisensory Processes
Bridging Animal and Human Studies
Diana K. Sarko, Aaron R. Nidiffer, Albert R. Powers III, Dipanwita Ghose, Andrea Hillock-Dunn, Matthew C. Fister, Juliane Krueger, and Mark T. Wallace
CONTENTS
11.1 Introduction ........ 192
11.2 Neurophysiological Studies in Animal Models: Integrative Principles as a Foundation for Understanding Multisensory Interactions ........ 192
11.3 Neurophysiological Studies in Animal Models: New Insights into Interdependence of Integrative Principles ........ 193
    11.3.1 Spatial Receptive Field Heterogeneity and Its Implications for Multisensory Interactions ........ 193
    11.3.2 Spatiotemporal Dynamics of Multisensory Processing ........ 197
11.4 Studying Multisensory Integration in an Awake and Behaving Setting: New Insights into Utility of Multisensory Processes ........ 199
11.5 Human Behavioral and Perceptual Studies of Multisensory Processing: Building Bridges between Neurophysiological and Behavioral and Perceptual Levels of Analysis ........ 201
    11.5.1 Defining the “Temporal Window” of Multisensory Integration ........ 201
    11.5.2 Stimulus-Dependent Effects on the Size of the Multisensory Temporal Window ........ 202
    11.5.3 Can “Higher-Order” Processes Affect Multisensory Temporal Window? ........ 203
11.6 Adult Plasticity in Multisensory Temporal Processes: Psychophysical and Neuroimaging Evidence ........ 203
11.7 Developmental Plasticity in Multisensory Representations: Insights from Animal and Human Studies ........ 205
    11.7.1 Neurophysiological Studies into Development of Multisensory Circuits ........ 205
    11.7.2 Development of Integrative Principles ........ 206
    11.7.3 Experientially Based Plasticity in Multisensory Circuits ........ 207
    11.7.4 Development of Human Multisensory Temporal Perception ........ 207
11.8 Conclusions and Future Directions ........ 209
References ........ 210
11.1 INTRODUCTION Multisensory processing is a pervasive and critical aspect of our behavioral and perceptual repertoires, facilitating and enriching a wealth of processes including target identification, signal detection, speech comprehension, spatial navigation, and flavor perception to name but a few. The adaptive advantages that multisensory integration confers are critical to survival, with effective acquisition and use of multisensory information enabling the generation of appropriate behavioral responses under circumstances in which one sense is inadequate. In the behavioral domain, a number of studies have illustrated the strong benefits conferred under multisensory circumstances, with the most salient examples including enhanced orientation and discrimination (Stein et al. 1988, 1989), improved target detection (Frassinetti et al. 2002; Lovelace et al. 2003), and speeded responses (Hershenson 1962; Hughes et al. 1994; Frens et al. 1995; Harrington and Peck 1998; Corneil et al. 2002; Forster et al. 2002; Molholm et al. 2002; Amlot et al. 2003; Diederich et al. 2003; Calvert and Thesen 2004). Along with these behavioral examples, there are myriad perceptual illustrations of the power of multisensory interactions. For example, the intensity of a light is perceived as greater when presented with a sound (Stein et al. 1996) and judgments of stimulus features such as speed and orientation are often more accurate when combined with information available from another sense (Soto-Faraco et al. 2003; Manabe and Riquimaroux 2000; Clark and Graybiel 1966; Wade and Day 1968). One of the most compelling examples of multisensory-mediated perceptual gains can be seen in the speech realm, where the intelligibility of a spoken signal can be greatly enhanced when the listener can see the speaker’s face (Sumby and Pollack 1954). In fact, this bimodal gain may be a principal factor in the improvements in speech comprehension seen in those with significant hearing loss after visual training (Schorr et al. 2005; Rouger et al. 2007). Regardless of whether the benefits are seen in the behavioral or perceptual domains, they typically exceed those that are predicted on the basis of responses to each of the component unisensory stimuli (Hughes et al. 1994, 1998; Corneil and Munoz 1996; Harrington and Peck 1998). Such deviations from simple additive models provide important insights into the neural bases for these multisensory interactions in that they strongly argue for a convergence and active integration of the different sensory inputs within the brain.
11.2 NEUROPHYSIOLOGICAL STUDIES IN ANIMAL MODELS: INTEGRATIVE PRINCIPLES AS A FOUNDATION FOR UNDERSTANDING MULTISENSORY INTERACTIONS Information from multiple sensory modalities converges at many sites within the central nervous system, providing the necessary anatomical framework for multisensory interactions (Calvert and Thesen 2004; Stein and Meredith 1993). Multisensory convergence at the level of the single neuron commonly results in an integrated output such that the multisensory response is typically distinct from the component responses, and often from their predicted addition as well. Seminal studies of multisensory processing initially focused on a midbrain structure, the superior colliculus (SC), because of its high incidence of multisensory neurons, its known spatiotopic organization, and its well-defined role in controlling orientation movements of the eyes, pinnae, and head (Sparks 1986; Stein and Meredith 1993; Sparks and Groh 1995; Hall and Moschovakis 2004; King 2004). These foundational studies of the SC of cats (later reaffirmed by work in nonhuman primate models, see Wallace and Stein 1996, 2001; Wallace et al. 1996) provided an essential understanding of the organization of multisensory neurons and the manner in which they integrate their different sensory inputs. In addition to characterizing the striking nonlinearities that frequently define the responses of these neurons under conditions of multisensory stimulation, these studies established a series of fundamental principles that identified key stimulus features that govern multisensory interactions (Meredith and Stein 1983, 1985, 1986; Meredith et al. 1987). The spatial principle deals
with the physical location of the paired stimuli, and illustrates the importance of spatial proximity in driving the largest proportionate gains in response. Similarly, the temporal principle captures the fact that the largest gains are typically seen when stimuli are presented close together in time, and that the magnitude of the interaction declines as the stimuli become increasingly separated in time. Finally, the principle of inverse effectiveness reflects the fact that the largest gains are generally seen to the pairing of two weakly effective stimuli. As individual stimuli become increasingly effective in driving neuronal responses, the size of the interactions seen to the pairing declines. Together, these principles have provided an essential predictive outline for understanding multisensory integration at the neuronal level, as well as for understanding the behavioral and perceptual consequences of multisensory pairings. However, it is important to point out that these principles, although widely instructive, fail to capture the complete integrative profile of any individual neuron. The reason for this is that space, time, and effectiveness are intimately intertwined in naturalistic stimuli, and manipulating one has a consequent effect on the others. Recent studies, described in the next section, have sought to better understand the strong interdependence between these factors, with the hope of better elucidating the complex spatiotemporal architecture of multisensory interactions.
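In this literature, the size of a multisensory interaction is usually reduced to two quantities: the percentage gain of the multisensory response over the best unisensory response, and the relation of the multisensory response to the sum of the unisensory responses (superadditive, additive, or subadditive). The Python sketch below illustrates both computations; the firing rates, the 5% tolerance band, and the three example pairings are invented for illustration and are not drawn from any of the studies cited here.

def enhancement(multi, best_uni):
    """Percent change of the multisensory response relative to the strongest
    unisensory response (positive values indicate response enhancement)."""
    return 100.0 * (multi - best_uni) / best_uni

def additivity(multi, uni_a, uni_b, tol=0.05):
    """Classify the multisensory response against the additive prediction,
    i.e., the sum of the two unisensory responses, with a small tolerance."""
    predicted = uni_a + uni_b
    if multi > predicted * (1 + tol):
        return "superadditive"
    if multi < predicted * (1 - tol):
        return "subadditive"
    return "additive"

# Invented mean evoked responses (spikes per trial) for three pairings that
# differ in how effective the individual visual and auditory stimuli are.
pairings = {
    "weakly effective":     {"visual": 1.0, "auditory": 1.5, "multisensory": 4.5},
    "moderately effective": {"visual": 4.0, "auditory": 5.0, "multisensory": 9.2},
    "strongly effective":   {"visual": 9.0, "auditory": 11.0, "multisensory": 14.0},
}

for name, r in pairings.items():
    best = max(r["visual"], r["auditory"])
    gain = enhancement(r["multisensory"], best)
    label = additivity(r["multisensory"], r["visual"], r["auditory"])
    print(f"{name:>20}: enhancement = {gain:6.1f}%  ({label})")

On these made-up numbers, the weak pairing produces the largest proportional gain and a superadditive classification, whereas the strong pairing produces a modest, subadditive gain, which is the pattern the principle of inverse effectiveness describes.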
11.3 NEUROPHYSIOLOGICAL STUDIES IN ANIMAL MODELS: NEW INSIGHTS INTO INTERDEPENDENCE OF INTEGRATIVE PRINCIPLES 11.3.1 Spatial Receptive Field Heterogeneity and Its Implications for Multisensory Interactions Early observations during the establishment of the neural principles of multisensory integration hinted at a complexity not captured by integrative “rules” or constructs. For example, in structuring experiments to test the spatial principle, it was clear that stimulus location not only played a key role in the magnitude of the multisensory interaction, but also that the individual sensory responses were strongly modulated by stimulus location. Such an observation suggested an interaction between the spatial and inverse effectiveness principles, and one that might possibly be mediated by differences in unisensory responses as a function of location within the neuron’s receptive field. Recently, this concept has been tested by experiments specifically designed to characterize the microarchitecture of multisensory receptive fields. In these experiments, stimuli from each of the effective modalities were presented at a series of locations within and outside the classically defined excitatory receptive field of individual multisensory neurons (Figure 11.1). Studies were conducted in both subcortical (i.e., SC) and cortical [i.e., the anterior ectosylvian sulcus (AES)] multisensory domains in the cat, in which prior work had illustrated that the receptive fields of multisensory neurons are quite large (Stein and Meredith 1993; Benedek et al. 2004; Furukawa and Middlebrooks 2002; Middlebrooks and Knudsen 1984; Middlebrooks et al. 1998; Xu et al. 1999; Wallace and Stein 1996, 1997; Nagy et al. 2003). In this manner, spatial receptive fields (SRFs) can be created for each of the effective modalities, as well as for the multisensory combination. It is important to point out that in these studies, the stimuli are identical (e.g., same luminance, loudness, and spectral composition) except for their location. The results of these analyses have revealed a marked degree of heterogeneity to the SRFs of both SC and AES multisensory neurons (Carriere et al. 2008; Royal et al. 2009). This response heterogeneity is typically characterized by regions of high response (i.e., hot spots) surrounded by regions of substantially weaker response. Studies are ongoing to determine whether features such as the number or size of these hot spots differ between subcortical and cortical areas. Although these SRF analyses have revealed a previously uncharacterized feature of multisensory neurons, perhaps the more important consequence of this SRF heterogeneity is the implication that this has for multisensory interactions. At least three competing hypotheses can be envisioned for the role of receptive field heterogeneity in multisensory integration—each with strikingly different
FIGURE 11.1 Construction of an SRF for an individual multisensory neuron. Each stimulus location tested within receptive field generates a response that is then compiled into a single unit activity (SUA) plot. SUA plot at one location is shown in detail to illustrate how spike density function (SDF) is derived. Finally, SDF/ SUA data are transformed into a pseudocolor SRF plot in which normalized evoked response is shown relative to azimuth and elevation. Evoked responses are scaled to maximal response, with warmer colors representing higher firing rates. (Adapted from Carriere, B.N. et al., J. Neurophysiol., 99, 2357–2368, 2008.)
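For readers who want the pipeline in the caption above in computational terms, the following Python sketch builds a toy SRF from simulated spike times: spike trains are smoothed into a spike density function, the mean evoked rate is taken at each tested location, and the resulting map is normalized to its maximum. The grid spacing, kernel width, firing rates, and hot-spot location are arbitrary assumptions made only to keep the example self-contained; this is not the analysis code used in the studies discussed here.

import numpy as np

rng = np.random.default_rng(0)

# Grid of tested stimulus locations (azimuth x elevation, in degrees).
azimuths = np.arange(-40, 50, 10)
elevations = np.arange(-20, 30, 10)

t_axis = np.arange(-200, 600, 1.0)            # time in ms relative to stimulus onset
window = (t_axis >= 0) & (t_axis <= 500)      # response window used for the SRF

def spike_density(spike_times_ms, sigma_ms=10.0):
    """Spike density function: a Gaussian kernel placed on each spike time,
    scaled so the result is in spikes per second."""
    sdf = np.zeros_like(t_axis)
    for t in spike_times_ms:
        sdf += np.exp(-0.5 * ((t_axis - t) / sigma_ms) ** 2)
    return 1000.0 * sdf / (sigma_ms * np.sqrt(2.0 * np.pi))

srf = np.zeros((len(elevations), len(azimuths)))
for i, el in enumerate(elevations):
    for j, az in enumerate(azimuths):
        # Toy spatial tuning with a single response hot spot near (10, 0) deg,
        # on top of a low spontaneous rate.
        rate = 40.0 * np.exp(-((az - 10.0) ** 2 + el ** 2) / (2 * 15.0 ** 2)) + 2.0
        n_spikes = rng.poisson(rate * 0.5)              # expected count in 500 ms
        spikes = rng.uniform(0.0, 500.0, size=n_spikes) # spike times in ms
        srf[i, j] = spike_density(spikes)[window].mean()

# Normalize to the maximal evoked response, as in the pseudocolor SRF plots.
srf_normalized = srf / srf.max()
print(np.round(srf_normalized, 2))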
predictions. The first is that spatial location takes precedence and that the resultant interactions would be completely a function of the spatial disparity between the paired stimuli. In this scenario, the largest interactions would be seen when the stimuli were presented at the same location, and the magnitude of the interaction would decline as spatial disparity increased. Although this would seem to be a strict interpretation of the spatial principle, in fact, even the early characterization of this principle focused not on location or disparity, but rather on the presence or absence of stimuli within the receptive field (Meredith and Stein 1986), hinting at the relative lack of importance of absolute location. The second hypothesis is that stimulus effectiveness would be the dominant factor, and that the interaction would be dictated not by spatial location but rather by the magnitude of the individual sensory responses (which would be modulated by changes in spatial location). The final hypothesis is that there is an interaction between stimulus location and effectiveness, such that both would play a role in shaping the resultant interaction. If this were the case, studies would seek to identify the relative weighting of these two stimulus dimensions to gain a better mechanistic view into these interactions. The first foray into this question focused on cortical area AES (Carriere et al. 2008). Here, it was found that SRF architecture played an essential deterministic role in the observed multisensory interactions, and most intriguingly, in a manner consistent with the second hypothesis outlined above. Thus, and as illustrated in Figure 11.2, SRF architecture resulted in changes in stimulus effectiveness that formed the basis for the multisensory interaction. In the neuron shown, if the stimuli were presented in a region of strong response within the SRF, a response depression would result (Figure 11.2b, left column). In contrast, if the stimuli were moved to a location of weak response, their pairing resulted in a large enhancement (Figure 11.2b, center column). Intermediate regions
FIGURE 11.2 (See color insert.) Multisensory interactions in AES neurons differ based on location of paired stimuli. (a) Visual, auditory, and multisensory SRFs are shown with highlighted locations (b, d) illustrating response suppression (left column), response enhancement (middle column), and no significant interaction (right column). (c) Shaded areas depict classically defined receptive fields for visual (blue) and auditory (green) stimuli.
of response resulted in either weak or no interactions (Figure 11.2b, right column). In addition to this traditional measure of multisensory gain (relative to the best unisensory response), these same interactions can also be examined and quantified relative to the predicted summation of the unisensory responses (Wallace et al. 1992; Wallace and Stein 1996; Stein and Wallace 1996; Stanford et al. 2005; Royal et al. 2009; Carriere et al. 2008). In these comparisons, strongly effective pairings typically result in subadditive interactions, weakly effective pairings result in superadditive interactions, and intermediate pairings result in additive interactions. Visualization of these different categories of interactions relative to additive models can be captured in pseudocolor representations such as that shown in Figure 11.3, in which the actual multisensory SRF is contrasted against that predicted on the basis of additive modeling. Together, these results clearly illustrate the primacy of stimulus efficacy in dictating multisensory interactions, and that the role of space per se appears to be a relatively minor factor in governing these integrative processes. Parallel studies are now beginning to focus on the SC, and provide an excellent comparative framework from which to view multisensory interactive mechanisms across brain structures. In this work, Krueger et al. (2009) reported that the SRF architecture of multisensory neurons in the SC is not only similar to that of cortical neurons, but also that stimulus effectiveness appears to once again be the key factor in dictating the multisensory response. Thus, stimulus pairings within regions of weak unisensory response often resulted in superadditive interactions (Figure 11.4b–c, ◼), whereas pairings at locations of strong unisensory responses typically exhibited subadditive interactions (Figure 11.4b–c, ○). Overall, such an organization presumably boosts signals within weakly effective regions of the unisensory SRFs during multisensory stimulus presentations and yields more reliable activation for each stimulus presentation. Although SRF architecture appears similar in both cortical and subcortical multisensory brain regions, there are also subtle differences that may provide important insights into both the underlying mechanistic operations and the different behavioral and perceptual roles of AES and SC. For example, when the SRFs of a multisensory neuron in the SC are compared under different sensory
FIGURE 11.3 Multisensory interactions relative to additive prediction models. Visual, auditory, and multisensory (VA) SRFs are shown for an individual multisensory neuron of AES. True multisensory responses can be contrasted with those predicted by an additive model (V + A) and reveal a richer integrative microarchitecture than predicted by simple linear summation of unisensory response profiles. (Adapted from Carriere, B.N. et al., J. Neurophysiol., 99, 2357–2368, 2008.)
FIGURE 11.4 Multisensory interactions in SC neurons differ based on location of paired stimuli. (a) Visual, auditory, and multisensory SRFs are shown as a function of azimuth (x axis) and elevation (y axis). Specific locations within receptive field (b) are illustrated in detail (c) to show evoked responses for visual, auditory, and multisensory conditions. Weakly effective locations (square) result in response enhancement, whereas conditions evoking a strong unisensory response (circle) result in response suppression.
conditions, there appears to be a global similarity in the structure of each SRF with respect to both the number and location of hot spots. This might indicate that the overall structure of the SRF is dependent on fixed anatomical and/or biophysical constraints such as the extent of dendritic arbors. However, these characteristics are far less pronounced in cortical SRFs (Carriere et al. 2008), possibly due to the respective differences in the inputs to these two structures (the cortex receiving more heterogeneous inputs) and/or due to less spatiotopic order in the cortex. Future work will seek to better clarify these intriguing differences across structures.
11.3.2 Spatiotemporal Dynamics of Multisensory Processing In addition to the clear interactions between space and effectiveness captured by the aforementioned SRF analyses, an additional stimulus dimension that needs to be included is time. For example, and returning to the initial outlining of the interactive principles, changing stimulus location impacts not only stimulus effectiveness, but also the temporal dynamics of each of the unisensory (and multisensory) responses. Thus, dependent on the location of the individual stimuli, responses will have very different temporal patterns of activation. More recently, the importance of changes in temporal response profiles has been highlighted by findings that the multisensory responses of SC neurons show shortened latencies when compared with the component unisensory responses (Rowland et al. 2007), a result likely underlying the behavioral finding of the speeding of saccadic eye movements under multisensory conditions (Frens and Van Opstal 1998; Frens et al. 1995; Hughes et al. 1998; Amlot et al. 2003; Bell et al. 2005). Additional work focused on the temporal dimension of multisensory responses has extended the original characterization of the temporal principle to nonhuman primate cortex, where Kayser and colleagues (2008) have found that audiovisual interactions in the superior temporal plane of rhesus monkey neocortex are maximal when a visual stimulus precedes an auditory stimulus by 20 to 80
ms. Along with these unitary changes, recent work had also shown that the timing of sensory inputs with respect to ongoing neural oscillations in the neocortex has a significant impact on whether neuronal responses are enhanced or suppressed. For instance, in macaque primary auditory cortex, properly timed somatosensory input has been found to reset ongoing oscillations to an optimal excitability phase that enhances the response to temporally correlated auditory input. In contrast, somatosensory input delivered during suboptimal, low-excitability oscillatory periods depresses the auditory response (Lakatos et al. 2007). Although clearly illustrating the importance of stimulus timing in shaping multisensory interactions, these prior studies have yet to characterize the interactions between time, space, and effectiveness in the generation of a multisensory response. To do this, recent studies from our laboratory have extended the SRF analyses described above to include time, resulting in the creation of spatiotemporal receptive field (STRF) plots. It is important to point out that such analyses are not a unique construct to multisensory systems, but rather stem from both spatiotemporal and spectrotemporal receptive field studies within individual sensory systems (David et al. 2004; Machens et al. 2004; Haider et al. 2010; Ye et al. 2010). Rather, the power of the STRF here is its application to multisensory systems as a modeling framework from which important mechanistic insights can be gained about the integrative process. The creation of STRFs for cortical multisensory neurons has revealed interesting features about the temporal dynamics of multisensory interactions and the evolution of the multisensory response (Royal et al. 2009). Most importantly, these analyses, when contrasted with simple additive models based on the temporal architecture of the unisensory responses, identified two critical epochs in the multisensory response not readily captured by additive processes (Figure 11.5). The first of these, presaged by the Rowland et al. study described above, revealed an early phase of superadditive multisensory responses that manifest as a speeding of response (i.e., reduced latency) under
FIGURE 11.5 Spatiotemporal response dynamics in multisensory AES neurons. A reduced response latency and increased response duration characterized spatiotemporal dynamics of paired multisensory stimuli.
multisensory conditions. The second of these happens late in the response epoch, where the multisensory response continues beyond the truncation of the unisensory responses, effectively increasing response duration under multisensory circumstances. It has been postulated that these two distinct epochs of multisensory integration may ultimately be linked to very different behavioral and/or perceptual roles (Royal et al. 2009). Whereas reduced latencies may speed target detection and identification, extended response duration may facilitate perceptual analysis of the object or area of interest. One interesting hypothesis is that the early speeding of responses will be more prominent in SC multisensory neurons given their important role in saccadic (and head) movements, and that the extended duration will be seen more in cortical networks engaged in perceptual analyses. Future work, now in progress in our laboratory (see below), will seek to clarify the behavioral/perceptual roles of these integrative processes by directly examining the links at the neurophysiological and behavioral levels.
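One minimal way to make the two response epochs described above concrete is to build an additive prediction from the unisensory time courses and ask where, and for how long, the measured multisensory time course exceeds it. The Python sketch below does this on invented firing-rate profiles; the response shapes, the onset criterion, and the 1-ms resolution are assumptions chosen for illustration and do not reproduce the analysis of Royal et al. (2009).

import numpy as np

t = np.arange(-100, 600, 1.0)   # time in ms relative to stimulus onset (1-ms resolution)

def response(onset_ms, peak_rate, duration_ms):
    """Toy firing-rate profile: a smooth rise after onset and a decay after offset."""
    rise = 1.0 / (1.0 + np.exp(-(t - onset_ms) / 5.0))
    fall = 1.0 / (1.0 + np.exp((t - onset_ms - duration_ms) / 20.0))
    return peak_rate * rise * fall

baseline = 2.0                                    # spontaneous rate, spikes/s
visual = baseline + response(80, 30, 150)
auditory = baseline + response(60, 20, 120)
# Invented multisensory profile with an earlier onset and a longer duration
# than either unisensory response.
multisensory = baseline + response(50, 60, 220)

# Additive prediction: baseline plus the sum of the unisensory responses above baseline.
predicted = baseline + (visual - baseline) + (auditory - baseline)

def onset_latency(profile, criterion):
    """First post-stimulus time at which the profile exceeds the criterion rate."""
    above = (t >= 0) & (profile > criterion)
    return float(t[above][0]) if above.any() else None

criterion = baseline + 5.0
print("onset latency, measured multisensory response:", onset_latency(multisensory, criterion), "ms")
print("onset latency, additive prediction           :", onset_latency(predicted, criterion), "ms")

# Late epoch: total time the measured response stays above the additive prediction.
superadditive_ms = int(np.sum((multisensory > predicted) & (t >= 0)))
print("time spent above the additive prediction:", superadditive_ms, "ms")

With these placeholder profiles, the measured multisensory response crosses the onset criterion earlier than the additive prediction and continues to exceed it well after the predicted response has decayed, mirroring the early latency advantage and the late extension of response duration described in the text.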
11.4 STUDYING MULTISENSORY INTEGRATION IN AN AWAKE AND BEHAVING SETTING: NEW INSIGHTS INTO UTILITY OF MULTISENSORY PROCESSES As research on the neural substrates of multisensory integration progresses, and as the behavioral and perceptual consequences of multisensory combinations become increasingly apparent, contemporary neuroscience is faced with the challenge of bridging between the level of the single neuron and whole animal behavior and perception. To date, much of the characterization of multisensory integration at the cellular level has been conducted in anesthetized animals, which offer a variety of practical advantages. However, given that anesthesia could have substantial effects on neural encoding, limiting the interpretation of results within the broader construct of perceptual abilities (Populin 2005; Wang et al. 2005; Ter-Mikaelian et al. 2007), the field must now turn toward awake preparations in which direct correlations can be drawn between neurons and behavior/perception. Currently, in our laboratory, we are using operant conditioning methods to train animals to fixate on a single location while audiovisual stimuli are presented in order to study SRF architecture in this setting (and compare these SRFs with those generated in anesthetized animals). In addition to providing a more naturalistic view into receptive field organization, these studies can then be extended in order to begin to address the relationships between the neural and behavioral levels. One example of this is the use of a delayed saccade task, which has been used in prior work to parse sensory from motor responses in the SC (where many neurons have both sensory and motor activity; Munoz et al. 1991a, 1991b; Munoz and Guitton 1991; Guitton and Munoz 1991). In this task, an animal is operantly conditioned to fixate on a simple visual stimulus (a light-emitting diode or LED), and to hold fixation for the duration of the LED. While maintaining fixation, a peripheral LED illuminates, resulting in a sensory (i.e., visual) response in the SC. A short time later (usually on the order of 100–200 ms), the fixation LED is shut off, cueing the animal to generate a motor response to the location at which the target was previously presented. The “delay” allows the sensory response to be dissociated from the motor response, thus providing insight into the nature of the sensory–motor transform. Although such delayed saccade tasks have been heavily employed in both the cat and monkey, they are typically used to eliminate “confounding” sensory influences on the motor responses. Another advantage afforded by the awake preparation is the ability to study how space, time, and effectiveness interact in a state more reflective of normal brain function, and which is likely to reveal important links between multisensory neuronal interactions and behavioral/perceptual enhancements such as speeded responses, increased detection, and accuracy gains. Ideally, these analyses could be structured to allow direct neurometric–psychometric comparisons, providing fundamental insights into how individual neurons and neuronal assemblies impact whole organismic processes.
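As a schematic of the delayed saccade trial structure just described, the snippet below lays out the event times for a single trial. The delay range follows the 100 to 200 ms figure given in the text, whereas the fixation-hold duration and target location are placeholder values; none of this is taken from the actual training protocols used in these experiments.

from dataclasses import dataclass
import random

@dataclass
class DelayedSaccadeTrial:
    fixation_on_ms: int      # fixation LED onset; the animal must acquire and hold fixation
    target_on_ms: int        # peripheral LED onset, which evokes the sensory response
    go_cue_ms: int           # fixation LED offset, cueing the saccade to the target location
    target_azimuth_deg: float

def make_trial(target_azimuth_deg: float) -> DelayedSaccadeTrial:
    """Build one trial; the delay between target onset and the go cue is what
    dissociates the sensory response from the saccade-related motor burst."""
    fixation_on = 0
    target_on = fixation_on + random.randint(500, 1000)  # placeholder fixation-hold period
    delay = random.randint(100, 200)                      # delay period, per the text
    return DelayedSaccadeTrial(fixation_on, target_on, target_on + delay, target_azimuth_deg)

print(make_trial(target_azimuth_deg=20.0))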
FIGURE 11.6 (See color insert.) Representative STRF from awake (a) versus anesthetized (b) recordings from cat SC using simple audiovisual stimulus presentations (an LED paired with broadband noise). In awake animals, superadditive interactions occurred over multiple time points in multisensory condition (VA) when compared to what would be predicted based on a linear summation of unisensory responses (V + A; see contrast, VA – [V + A]). This differs from anesthetized recordings from SC in which multisensory interactions are limited to earliest temporal phase of multisensory response.
Preliminary studies have already identified that multisensory neurons in the SC of the awake cat demonstrate extended response durations, as well as superadditive interactions over multiple time scales, when compared to anesthetized animals in which multisensory interactions are typically limited to the early phases of the response (Figure 11.6; Krueger et al. 2008). These findings remain to be tested in multisensory regions of the cortex, or extended beyond simple stimuli (LEDs paired with white noise) to more complex, ethologically relevant cues that might better address multisensory perceptual capabilities. Responses to naturalistic stimuli in cats have primarily been examined in unisensory cortices, demonstrating that simplification of natural sounds (bird chirps) results in significant alteration of neuronal responses (Bar-Yosef et al. 2002) and that firing rates differ for natural versus time-reversed conspecific vocalizations (Qin et al. 2008) in the primary auditory cortex. Furthermore, multisensory studies in primates have shown that multisensory enhancement in the primary auditory cortex of awake monkeys was reduced when a mismatched pair of naturalistic audiovisual stimuli was presented (Kayser et al. 2010).
11.5 HUMAN BEHAVIORAL AND PERCEPTUAL STUDIES OF MULTISENSORY PROCESSING: BUILDING BRIDGES BETWEEN NEUROPHYSIOLOGICAL AND BEHAVIORAL AND PERCEPTUAL LEVELS OF ANALYSIS

As should be clear from the above description, the ultimate goal of neurophysiological studies is to provide a more informed view into the encoding processes that give rise to our behaviors and perceptions. Indeed, these seminal findings in the animal model can be used as important instruction sets for the design of experiments in human subjects to bridge between these domains. Recently, our laboratory has embarked on such experiments with a focus on better characterizing how stimulus timing influences multisensory perceptual processes, with a design shaped by our knowledge of the temporal principle.
11.5.1 Defining the "Temporal Window" of Multisensory Integration

In addition to emphasizing the importance of stimulus onset asynchrony (SOA) in determining the outcome of a given multisensory pairing, experiments in both the SC and AES cortex of the cat showed that the span of time over which response enhancements are generally seen in these neurons is on the order of several hundred milliseconds (Meredith et al. 1987; Wallace and Stein 1996; Wallace et al. 1992, 1996). Behavioral studies have followed up on these analyses to illustrate the temporal constraints of multisensory combinations on human performance, and have found that the presentation of cross-modal stimulus pairs in close temporal proximity results in shortened saccadic reaction times (Colonius and Diederich 2004; Colonius and Arndt 2001; Frens et al. 1995) and heightened accuracy in understanding speech in noise (McGrath and Summerfield 1985; Pandey et al. 1986; van Wassenhove et al. 2007), and that temporal proximity plays an important role in multisensory illusions such as the McGurk effect (Munhall et al. 1996), the sound-induced flash illusion (Shams et al. 2000, 2002), the parchment skin illusion (Guest et al. 2002), and the stream-bounce illusion (Sekuler et al. 1997). Moreover, multisensory interactions demonstrated using population-based functional imaging methods (Dhamala et al. 2007; Kavounoudias et al. 2008; Macaluso et al. 2004; Noesselt et al. 2007) have been shown to be greatest during synchronous presentation of stimulus pairs. Perhaps even more important than synchrony in these studies was the general finding that multisensory interactions were typically preserved over an extended window of time (i.e., several hundred milliseconds) surrounding simultaneity, giving rise to the term "temporal window" for describing the critical period for these interactions (Colonius and Diederich 2004; van Wassenhove et al. 2007; Dixon and Spitz 1980). The concept of such a window makes good ethological sense, in that it provides a buffer for the latency differences that characterize the propagation times of energies in the different senses. Most illustrative here are the differences between the propagation times of light and sound in our environment, which differ by many orders of magnitude. As a simple example of
this difference, take an audiovisual event happening at a distance of 1 m, where the incident energies will arrive at the retina almost instantaneously and at the cochlea about 3 ms later (the speed of sound is approximately 330 m/s). Now, if we move that same audiovisual source to a distance of 20 m, the difference in arrival times expands to approximately 60 ms. Hence, having a window of tolerance for these audiovisual delays represents an effective means to continue to bind stimuli across modalities even without absolute correspondence in their incident arrival times.
Because of the importance of temporal factors for multisensory integration, a number of experimental paradigms have been developed for use in human subjects as a way to systematically study the temporal binding window and its associated dynamics. One of the most commonly used of these is the simultaneity judgment task, in which paired visual and auditory stimuli are presented at various SOAs and participants are asked to judge whether the stimuli occurred simultaneously or successively (Zampini et al. 2005a; Engel and Dougherty 1971; Stone et al. 2001; Stevenson et al. 2010). A distribution of responses can then be created that plots the probability of simultaneity reports as a function of SOA. This distribution not only yields the point of subjective simultaneity, defined as the peak of the function (Stone et al. 2001; Zampini et al. 2005a), but, more importantly, can be used to define a "window" of time within which simultaneity judgments are highly likely. A similar approach is taken in paradigms designed to assess multisensory temporal order judgments, wherein participants judge which of the two stimuli was presented first. Similar to the simultaneity judgment task, the point of subjective simultaneity is the time point at which participants judge either stimulus to have occurred first at a rate of 50% (Zampini et al. 2003; Spence et al. 2001). Once again, this method can also be adapted to create response distributions that serve as proxies for the temporal binding window. Although the point measures (i.e., the point of subjective simultaneity) derived from these studies tend to differ based on the paradigm chosen (Fujisaki et al. 2004; Vroomen et al. 2004; Zampini et al. 2003, 2005a), the span of time over which there is a high likelihood of reporting simultaneity is remarkably constant, ranging from about –100 ms to 250 ms, where negative values denote auditory-leading-visual conditions (Dixon and Spitz 1980; Fujisaki et al. 2004; Vroomen et al. 2004; Zampini et al. 2003, 2005a). The larger window size on the right side of these distributions (in which vision leads audition) appears in nearly all studies of audiovisual simultaneity perception, and has been proposed to arise from the inherent flexibility needed to process real-world audiovisual events, given that the propagation speeds of light and sound will result in SOAs only on the right side of these distributions (Dixon and Spitz 1980). Indeed, very recent efforts to model the temporal binding window within a probabilistic framework (Colonius and Diederich 2010a, 2010b) have described this asymmetry as arising from an asymmetry in Bayesian priors across SOAs, corresponding to the higher probability that visual-first pairs were generated by the same external event.
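As a concrete illustration of how such a proxy "window" can be extracted, the sketch below fits a simple psychometric function to the proportion of "simultaneous" reports at each SOA and reads off the width at 75% of the fitted maximum, the criterion used for Figures 11.7 and 11.10. The symmetric Gaussian form, the function names, and the toy data are assumptions; analyses of real data often treat the auditory-leading and visual-leading sides separately to capture the asymmetry noted above.

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative sketch: estimating a temporal binding window from simultaneity
# judgments. The Gaussian form and the toy data are assumptions.

def gaussian(soa, amp, mu, sigma):
    """Proportion of 'simultaneous' reports as a function of SOA (ms)."""
    return amp * np.exp(-(soa - mu) ** 2 / (2 * sigma ** 2))

def binding_window(soas, p_simultaneous, criterion=0.75):
    """Fit the response distribution and return (PSS, window width in ms)
    at `criterion` * maximum of the fitted function."""
    (amp, mu, sigma), _ = curve_fit(gaussian, soas, p_simultaneous,
                                    p0=[1.0, 0.0, 150.0])
    # Width at criterion*amp for a Gaussian: 2*sigma*sqrt(2*ln(1/criterion))
    width = 2.0 * sigma * np.sqrt(2.0 * np.log(1.0 / criterion))
    return mu, width

# Toy data: negative SOAs = auditory leading, positive = visual leading
soas = np.array([-300, -200, -100, 0, 100, 200, 300], dtype=float)
p_sim = np.array([0.05, 0.20, 0.60, 0.95, 0.85, 0.45, 0.15])

pss, window = binding_window(soas, p_sim)
print(f"Point of subjective simultaneity: {pss:.0f} ms; window width: {window:.0f} ms")
```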
11.5.2 Stimulus-Dependent Effects on the Size of the Multisensory Temporal Window

Although some have argued for an invariant size to the temporal window (see Munhall et al. 1996), there is a growing body of evidence to suggest that the size of the temporal window is very much dependent on the type of stimulus that is used (Dixon and Spitz 1980; van Wassenhove et al. 2008; Soto-Faraco and Alsius 2009). The largest distinctions in this domain have been seen when contrasting speech versus nonspeech stimuli, in which the window for speech appears to be far larger (approximately 450 ms) when compared with the pairing of simpler stimuli such as flash-tone pairs or videos of inanimate objects, such as a hammer pounding a nail, at about 250 ms (Dixon and Spitz 1980; van Atteveldt et al. 2007; van Wassenhove et al. 2007; Massaro et al. 1996; Conrey and Pisoni 2006; McGrath and Summerfield 1985). Interpretation of this seeming expansion in the case of speech has ranged from the idea that learned tolerance of asynchrony is greatest with stimuli to which we are most exposed (Dixon and Spitz 1980), to the theory that the richness of auditory spectral and visual dynamic content in speech allows for binding over a larger range of asynchrony (Massaro et al. 1996), to the view that speech window size is dictated
by the duration of the elemental building blocks of spoken language, the phonemes (Crystal and House 1981). Other studies have focused on altering the statistics of multisensory temporal relations in an effort to better characterize the malleability of these processes. For example, repeated exposure to a 250-ms auditory-leading-visual asynchronous pair is capable of biasing participants' simultaneity judgments in the direction of that lag by about 25 ms, with effects lasting on the order of minutes (Fujisaki et al. 2004; Vroomen et al. 2004). Similar recalibration effects have been noted after exposure to asynchronous audiovisual speech, as well as to visual–tactile, audio–tactile, and sensory–motor pairs (Hanson et al. 2008; Fajen 2007; Stetson et al. 2006; Navarra et al. 2005). Although the exact mechanisms underlying these changes are unknown, they have been proposed to represent a recalibration of sensory input consistent with Bayesian models of perception (Hanson et al. 2008; Miyazaki et al. 2005, 2006).
11.5.3 Can “Higher-Order” Processes Affect Multisensory Temporal Window? In addition to these studies examining stimulus-dependent effects, other works have sought to determine the malleability of multisensory temporal processing resulting from the manipulation of cognitive processes derived from top-down networks. Much of this work has focused on attentional control, and has been strongly influenced by historical studies showing that attention within a modality could greatly facilitate information processing of a cued stimulus within that modality. This work has now been extended to the cross-modal realm, and has shown that attention to one modality can bias temporally based judgments concerning a stimulus in another modality (Zampini et al. 2005b; Spence et al. 2001; Shore et al. 2001), illustrating the presence of strong attentional links between different sensory systems.
11.6 ADULT PLASTICITY IN MULTISENSORY TEMPORAL PROCESSES: PSYCHOPHYSICAL AND NEUROIMAGING EVIDENCE

Further work in support of top-down influences on multisensory perception has focused on characterizing the plasticity that can be engendered with the use of classic perceptual learning paradigms. The first of these studies was directed outside the temporal domain, and focused on the simple question of whether perceptual learning within a single sensory modality can be improved with the use of cross-modal stimuli. In these studies, participants were trained on a motion discrimination task using either a visual cue alone or combined visual–auditory cues. Results revealed enhanced visual motion discrimination abilities and an abbreviated time course of learning in the group trained on the audiovisual version of the task when compared with those trained only on the visual version (Kim et al. 2008; Seitz et al. 2006). Similar results have been seen in the visual facilitation of voice discrimination learning (von Kriegstein and Giraud 2006), in the cross-modal enhancement of both auditory and visual natural object recognition (Schneider et al. 2008), and in the facilitation of unisensory processing based on prior multisensory memories (Murray et al. 2004, 2005).
More recently, our laboratory has extended these perceptual plasticity studies into the temporal realm, by attempting to assess the plasticity of the multisensory temporal binding window itself. Initial efforts used a two-alternative forced choice audiovisual simultaneity judgment task in which subjects were asked to choose on a trial-by-trial basis whether a stimulus pair was synchronously or asynchronously presented (Powers et al. 2009). In the initial characterization (i.e., before training), a distribution of responses was obtained that allowed us to define a proxy measure for the multisensory temporal binding window for each individual subject (Figure 11.7). After this baseline measurement, subjects were then engaged in the same task, except that now they were given feedback as to the correctness of their judgments. Training was carried out for an hour a day over 5 days. This training regimen resulted in a marked narrowing in the width of the multisensory temporal binding
window, with a group average reduction of 40%. Further characterization revealed that the changes in window size were very rapid (being seen after the first day of training), were durable (lasting at least a week after the cessation of training), and were a direct result of the feedback provided (control subjects passively exposed to the same stimulus set did not exhibit window narrowing). Additionally, to rule out the possibility that this narrowing was the result of changes in cognitive biases, a second experiment using a two-interval forced choice paradigm was undertaken, in which participants were instructed to identify the simultaneously presented audiovisual pair presented within one of two intervals. The two-interval forced choice paradigm resulted in a narrowing that was similar in both degree and dynamics to that seen with the two-alternative forced choice approach. Overall, this result is the first to illustrate a marked experience-dependent malleability of the multisensory temporal binding window, a result that has potentially important implications for clinical conditions such as autism and dyslexia, in which there is emerging evidence for changes in multisensory temporal function (Ciesielski et al. 1995; Laasonen et al. 2001, 2002; Kern 2002; Hairston et al. 2005; Facoetti et al. 2010; Foss-Feig et al. 2010).

FIGURE 11.7 Training on a two-alternative forced choice simultaneity judgment task. (a) An estimate of temporal binding window is derived using a criterion set at 75% of maximum. In this representative individual case, window narrows from 321 to 115 ms after 5 days (1 h/day) of feedback training. (b) After training, a significant decrease in probability of judging nonsimultaneous audiovisual pairs to be simultaneous was found (*P < .05). (c) Average window size dropped significantly after first day (1 h) of training, then remained stable (*P < .05).

In an effort to better define the brain networks responsible for multisensory temporal perception (and the demonstrable plasticity), our laboratory has conducted a follow-up neuroimaging study using functional magnetic resonance imaging (fMRI) (Powers et al. 2010). The findings revealed marked
changes in one of the best-established multisensory cortical domains in humans, the posterior superior temporal sulcus (pSTS). The pSTS exhibited striking decreases in blood oxygen level-dependent (BOLD) activation after training, suggestive of an increased efficiency of processing. In addition to these changes in pSTS, there were changes in regions of the auditory and visual cortex, along with marked changes in functional coupling between these unisensory domains and the pSTS. Together, these studies are beginning to reveal the cortical networks involved in multisensory temporal processing and perception, as well as the dynamics of these networks, which must be continually adjusted to capture the ever-changing sensory statistics of our natural world as well as their cognitive valence.
11.7 DEVELOPMENTAL PLASTICITY IN MULTISENSORY REPRESENTATIONS: INSIGHTS FROM ANIMAL AND HUMAN STUDIES

In addition to this compelling emerging evidence that the multisensory processing architecture of the adult brain can be shaped in an experience-dependent manner, there is a rich literature on the development of multisensory representations and the role that postnatal experience plays in shaping these events. Although these questions were first posed in the literature associated with the development of human perceptual abilities, more recent work in animal models has laid the foundation for better understanding the seminal events in the maturation of multisensory behaviors and perceptions.
11.7.1 Neurophysiological Studies into the Development of Multisensory Circuits

The studies described above in adult animal models provide an ideal foundation on which to evaluate the developmental events in the nervous system that lead up to the construction of mature multisensory representations. Hence, subsequent studies focused on establishing the developmental chronology for multisensory neurons and their integrative features in these same model structures: the subcortical SC and the cortical AES. In the SC, recordings immediately after birth reveal an absence of multisensory neurons (Wallace and Stein 1997). Indeed, the first neurons present in the SC at birth and soon after are those that are exclusively responsive to somatosensory cues. By 10 to 12 days postnatal, auditory-responsive neurons appear, setting the stage for the first multisensory neurons, which are responsive to both somatosensory and auditory cues. More than a week later, the first visually responsive neurons appear, providing the basis for the first visually responsive multisensory neurons. These early multisensory neurons were found to be far different from their adult counterparts: they responded weakly to sensory stimuli and had poorly developed response selectivity, long latencies, and large receptive fields (Wallace and Stein 1997; Stein et al. 1973a, 1973b). Perhaps most importantly, these early multisensory neurons failed to integrate their different sensory inputs, responding to stimulus combinations in a manner that was indistinguishable from their component unisensory responses (Wallace and Stein 1997). Toward the end of the first postnatal month, this situation begins to change, with individual neurons starting to show the capacity to integrate their different sensory inputs. Over the ensuing several months, both the number of multisensory neurons and the number with integrative capacity grow steadily, such that by 4 to 5 months after birth, adultlike incidences are achieved (Figure 11.8).
The developmental progression in the cortex is very similar to that in the SC, except that it appears to be delayed by several weeks (Wallace et al. 2006). Thus, the first multisensory neurons do not appear in AES until about 6 weeks after birth (Figure 11.8). As with the SC, these early multisensory neurons are reflective of the adjoining unisensory representations, being auditory–somatosensory. Four weeks or so later, we see the appearance of visual neurons and the coincident appearance of visually responsive multisensory neurons. Once again, early cortical multisensory neurons are strikingly immature in many respects, including a lack of integrative capacity. As development progresses, we see a substantial growth in the multisensory population, and most multisensory AES neurons develop their integrative abilities.
FIGURE 11.8 Development of multisensory neurons in SC (open circles) versus AES (closed circles) of cat. Development of multisensory neurons is similar between SC and AES with exceptions of onset and overall percentage of multisensory neurons. At 4 months postnatal life, percentages of multisensory neurons in both AES and SC are at their mature levels, with SC having a higher percentage than AES.
The parallels between the SC and AES in their multisensory developmental chronology likely reflect the order of overall sensory development (Gottlieb 1971), rather than a dependence on connectivity between the two regions, because the establishment of sensory profiles in the SC precedes the functional maturation of connections between the AES and the SC (Wallace and Stein 2000). Thus, a gradual recruitment of sensory functions during development appears to produce neurons capable of multisensory integration (Lewkowicz and Kraebel 2004; Lickliter and Bahrick 2004), and points strongly to a powerful role for early experience in sculpting the final multisensory state of these systems (see Section 11.7.3).
11.7.2 Development of Integrative Principles

In addition to characterizing the appearance of multisensory neurons and the maturation of their integrative abilities, these studies also examined how the integrative principles changed during the course of development. Intriguingly, the principle of inverse effectiveness appeared to hold in the earliest integrating neurons, in that as soon as a neuron demonstrated integrative abilities, the largest enhancements were seen in pairings of weakly effective stimuli. Indeed, one of the most surprising findings in these developmental studies is the all-or-none nature of multisensory integration. Thus, neurons appear to transition very rapidly from a state in which they lack integrative capacity to one in which that capacity is adult-like in both magnitude and adherence to the principle of inverse effectiveness. In the spatial domain, the situation appears to be much the same. Whereas early multisensory neurons have large receptive fields and lack integration, as soon as receptive fields become adult-like in size, neurons show integrative ability. Indeed, these processes appear to be so tightly linked that it has been suggested that they reflect the same underlying mechanistic process (Wallace and Stein 1997; Wallace et al. 2006).
The one principle that appears to differ in a developmental context is the temporal principle. Observations from the earliest integrating neurons reveal that they typically show response enhancements only to pairings at a single SOA (see Wallace and Stein 1997). This is in stark contrast to adults, in which enhancements are typically seen over a span of SOAs lasting several hundred milliseconds, a span that has led to the concept of a temporal "window" for multisensory integration. In these animal studies, as development progresses, the range of SOAs over which enhancements can be generated grows, ultimately resulting in adult-sized distributions reflective of the large temporal window. Why such a progression is seen in the temporal domain and not in the other domains is not
yet clear, but it may have something to do with the fact that young animals are generally concerned only with events in immediate proximity to the body (which would make SOAs close to 0 of greatest utility). As the animal becomes increasingly interested in exploring space at greater distances, an expansion in the temporal window would allow for better encoding of these more distant events. We will return to the issue of plasticity in the multisensory temporal window when we discuss the human studies (see Section 11.7.4).
11.7.3 Experientially Based Plasticity in Multisensory Circuits

Although the protracted timeline for the development of mature multisensory circuits is strongly suggestive of a major deterministic role for early experience in shaping these circuits, only with controlled manipulation of this experience can we begin to establish causative links. To address this issue, our laboratory has performed a variety of experiments in which sensory experience is eliminated or altered in early life, after which the consequent impact on multisensory representations is examined. In the first of these studies, the necessity of cross-modal experiences during early life was examined by eliminating all visual experience from birth until adulthood, and then assessing animals as adults (Wallace et al. 2004; Carriere et al. 2007). Although there were subtle differences between the SC and AES in these studies, the impact on multisensory integration in both structures was profound. Whereas dark-rearing allowed for the appearance of a robust (albeit smaller than normal) visual population, it abolished virtually all response enhancements to visual–nonvisual stimulus pairings.
A second series of experiments then sought to address the importance of the statistical relationship of the different sensory cues to one another in the construction of these multisensory representations. Here, animals were reared in environments in which the spatial relationship between visual and auditory stimuli was systematically altered, such that visual and auditory events that were temporally coincident were always separated by 30°. When examined as adults, these animals were found to have multisensory neurons with visual and auditory receptive fields that were displaced by approximately 30°, but, more importantly, to now show maximal multisensory enhancements when stimuli were separated by this disparity (Figure 11.9a). More recent work has extended these studies into the temporal domain, and has shown that raising animals in environments in which the temporal relationship of visual and auditory stimuli is altered by 100 ms results in a shift in the peak tuning profiles of multisensory neurons by approximately 100 ms (Figure 11.9b). Of particular interest was the finding that when the temporal offset was extended to 250 ms, the neurons lost the capacity to integrate their different sensory inputs, suggesting that there is a critical temporal window for this developmental process. Collectively, these results provide strong support for the power of the statistical relations of multisensory stimuli in driving the formation of multisensory circuits, circuits that appear to be optimally designed to code the relations most frequently encountered in the world during the developmental period.
11.7.4 Development of Human Multisensory Temporal Perception

The ultimate goal of these animal model–based studies is to provide a better framework from which to view human development, with a specific eye toward the maturation of the brain mechanisms that underlie multisensory-mediated behaviors and perceptions. Human developmental studies on multisensory processing have provided us with important insights into the state of the newborn and infant brain, and have illustrated that multisensory abilities change rapidly in the first year of postnatal life (see Lewkowicz and Ghazanfar 2009). Intriguingly, there is then a dearth of knowledge about multisensory maturation until adulthood. In an effort to begin to fill this void, our laboratory has embarked on a series of developmental studies focused on childhood and adolescence, with a specific emphasis on multisensory temporal processes, one of the principal themes of this chapter.
FIGURE 11.9 Developmental manipulations of spatial and temporal relationships of audiovisual stimuli. (a) Multisensory interaction is shown as a function of spatial disparity between auditory and visual stimuli for normally reared animals and animals reared with a 30° disparity between auditory and visual stimuli. Peak multisensory interaction for the disparately reared group is shifted by 30° from that of normally reared animals. (b) Multisensory interaction as a function of SOA in animals reared normally versus animals reared in environments with 100 and 250 ms temporal disparities. As might be expected, peak multisensory interactions are offset by 100 ms for the 100 ms disparate group relative to normally reared animals. Interestingly, the 250 ms group loses the ability to integrate audiovisual stimuli.
These studies strongly suggest that the maturation of multisensory temporal functioning extends beyond the first decade of life. In the initial study, it was established that multisensory temporal functioning was still not mature by 10 to 11 years of age (Hillock et al. 2010). Here, children were assessed on a simultaneity judgment task in which flashes and tone pips were presented at SOAs ranging from –450 to +450 ms (with positive values representing visual-leading stimulus trials and
negative values representing auditory-leading stimulus trials), allowing for the creation of a response distribution identical to those obtained in adults, which serves as a proxy for the multisensory temporal binding window (see Section 11.6). When compared with adults, the group mean window size for these children was found to be approximately 38% larger (i.e., 413 vs. 299 ms). A larger follow-up study then sought to detail the chronology of this maturational process from 6 years of age until adulthood, and identified the closure of the binding window in mid to late adolescence for these simple visual–auditory pairings (Figure 11.10; Hillock and Wallace 2011b). A final study then sought to extend these analyses into the stimulus domain with which children likely have the greatest experience: speech. Using the McGurk effect, which pairs discordant visual and auditory speech stimuli (e.g., a visual /ga/ with an auditory /ba/), it is possible to index the integrative process by looking at how often participants report fusions that represent a synthesis of the visual and auditory cues (e.g., /da/ or /tha/). Furthermore, because this effect has been shown to be temporally dependent, it can be used as a tool to study the multisensory temporal binding window for speech-related stimuli. Surprisingly, when this approach was used with children (6–11 years), adolescents (12–17 years), and adults (18–23 years), the windows were found to be indistinguishable (Hillock and Wallace 2011a). Together, these studies reveal a surprising dichotomy between the development of multisensory temporal perception for nonspeech versus speech stimuli, a result that may reflect the powerful imperative placed on speech in young children, and that reinforces the importance of sensory experience in the development of multisensory abilities.

FIGURE 11.10 Temporal window size decreases from childhood to adulthood. Each data point represents a participant's window size as determined by width at 75% of maximum probability of perceived simultaneity using nonspeech stimuli. See Section 11.5.1. (Adapted from Hillock, A.R. et al., Binding of sights and sounds: Age-related changes in audiovisual temporal processing, 2010, submitted for publication.)
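The McGurk-based index described above reduces, computationally, to tabulating the proportion of trials at each SOA on which a fused percept is reported. The sketch below shows this bookkeeping; the trial format, response labels, and example data are illustrative assumptions rather than the procedure used in the cited studies.

```python
from collections import defaultdict

# Illustrative sketch of a McGurk fusion-rate tabulation. Trial format,
# response coding, and example data are assumptions for demonstration only.

FUSION_RESPONSES = {"da", "tha"}  # fused percepts for visual /ga/ + auditory /ba/

def fusion_rate_by_soa(trials):
    """trials: iterable of (soa_ms, response) pairs.
    Returns {soa_ms: proportion of fused responses}."""
    counts = defaultdict(lambda: [0, 0])  # soa -> [fused, total]
    for soa, response in trials:
        counts[soa][1] += 1
        if response.lower() in FUSION_RESPONSES:
            counts[soa][0] += 1
    return {soa: fused / total for soa, (fused, total) in counts.items()}

# Toy data: (SOA in ms, reported syllable)
example_trials = [(-200, "ba"), (-200, "da"), (0, "da"), (0, "tha"),
                  (0, "da"), (200, "da"), (200, "ba"), (400, "ba")]
print(fusion_rate_by_soa(example_trials))
```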
11.8 CONCLUSIONS AND FUTURE DIRECTIONS

As should be clear from the above, substantial efforts are ongoing to bridge between the rapidly growing knowledge sets concerning multisensory processing derived from animal and human studies. Work in each domain should not only complement the other, but should also inform the design of better experiments in each. As an example, the final series of human experiments described above calls for a nonhuman correlate to better explore the mechanistic underpinnings that result in very different timelines for the maturation of nonspeech versus speech integrative networks. Experiments in nonhuman primates, in which the critical nodes for communicative signal processing are beginning to emerge (Ghazanfar et al. 2008, 2010), can begin to tease out the relative maturation of the relevant neurophysiological processes likely to result in these distinctions.
Although we have made great strides in recent years in building a better understanding of multisensory behavioral and perceptual processes and their neural correlates, we still have much to discover. Fundamental questions remain unanswered, a source of both frustration and great opportunity. One domain of great interest to our laboratory is creating a bridge between the neural and the behavioral/perceptual levels in an effort to extend beyond the correlative analyses done thus far. Paradigms developed in awake and behaving animals allow for a direct assessment of neural and behavioral responses during performance of the same task, and should more directly link multisensory encoding processes to their striking behavioral benefits (e.g., see Chandrasekaran and Ghazanfar 2009). However, even these experiments provide only correlative evidence, and future work will seek to use powerful new methods such as optogenetic manipulation in animal models (e.g., see Cardin et al. 2009) and transcranial magnetic stimulation in humans (e.g., see Romei et al. 2007; Beauchamp et al. 2010; Pasalar et al. 2010) to selectively deactivate specific circuit components and then assess the causative impact on multisensory function.
REFERENCES Amlot, R., R. Walker, J. Driver, and C. Spence. 2003. Multimodal visual–somatosensory integration in saccade generation. Neuropsychologia, 41, 1–15. Bar-Yosef, O., Y. Rotman, and I. Nelken. 2002. Responses of neurons in cat primary auditory cortex to bird chirps: Effects of temporal and spectral context. Journal of Neuroscience, 22, 8619–8632. Beauchamp, M.S., A.R. Nath, and S. Pasalar. 2010. fMRI-guided transcranial magnetic stimulation reveals that the superior temporal sulcus is a cortical locus of the McGurk effect. Journal of Neuroscience, 30, 2414–2417. Bell, A.H., M.A. Meredith, A.J. Van Opstal, and D.P. Munoz. 2005. Crossmodal integration in the primate superior colliculus underlying the preparation and initiation of saccadic eye movements. Journal of Neurophysiology, 93, 3659–3673. Benedek, G., G. Eordegh, Z. Chadaide, and A. Nagy. 2004. Distributed population coding of multisensory spatial information in the associative cortex. European Journal of Neuroscience, 20, 525–529. Calvert, G.A., and T. Thesen. 2004. Multisensory integration: methodological approaches and emerging principles in the human brain. Journal of Physiology, Paris, 98, 191–205. Cardin, J.A., M. Carlen, K. Meletis, U. Knoblich, F. Zhang, K. Deisseroth, L.H. Tsai, and C.I. Moore. 2009. Driving fast-spiking cells induces gamma rhythm and controls sensory responses. Nature, 459, 663–667. Carriere, B.N., D.W. Royal, T.J. Perrault, S.P. Morrison, J.W. Vaughan, B.E. Stein, and M.T. Wallace. 2007. Visual deprivation alters the development of cortical multisensory integration. Journal of Neurophysiology, 98, 2858–2867. Carriere, B.N., D.W. Royal, and M.T. Wallace. 2008. Spatial heterogeneity of cortical receptive fields and its impact on multisensory interactions. Journal of Neurophysiology, 99, 2357–2368. Chandrasekaran, C., and A.A. Ghazanfar. 2009. Different neural frequency bands integrate faces and voices differently in the superior temporal sulcus. Journal of Neurophysiology, 101, 773–788. Ciesielski, K.T., J.E. Knight, R.J. Prince, R.J. Harris, and S.D. Handmaker. 1995. Event-related potentials in cross-modal divided attention in autism. Neuropsychologia, 33, 225–246. Clark, B., and A. Graybiel. 1966. Factors contributing to the delay in the perception of the oculogravic illusion. American Journal of Psychology, 79, 377–388. Colonius, H., and P. Arndt. 2001. A two-stage model for visual–auditory interaction in saccadic latencies. Perception & Psychophysics, 63, 126–147. Colonius, H., and A. Diederich. 2004. Multisensory interaction in saccadic reaction time: A time-window-ofintegration model. Journal of Cognitive Neuroscience, 16, 1000–1009. Colonius, H., and A. Diederich. 2010a. The optimal time window of visual–auditory integration: A reaction time analysis. Frontiers in Integrative Neuroscience, 4, 11. Colonius, H., and A. Diederich. 2010b. Optimal time windows of integration. Abstract Presented at 2010 International Multisensory Research Forum. Conrey, B., and D.B. Pisoni. 2006. Auditory–visual speech perception and synchrony detection for speech and nonspeech signals. Journal of the Acoustical Society of America, 119, 4065–4073. Corneil, B.D., and D.P. Munoz 1996. The influence of auditory and visual distractors on human orienting gaze shifts. Journal of Neuroscience, 16, 8193–8207.
Corneil, B.D., M. Van Wanrooij., D.P. Munoz, and A.J. Van Opstal. 2002. Auditory–visual interactions subserving goal-directed saccades in a complex scene. Journal of Neurophysiology, 88, 438–454. Crystal, T.H., and A.S. House. 1981. Segmental durations in connected speech signals. Journal of the Acoustical Society of America, 69, S82–S83. David, S.V., W.E. Vinje, and J.L. Gallant. 2004. Natural stimulus statistics alter the receptive field structure of v1 neurons. Journal of Neuroscience, 24, 6991–7006. Dhamala, M., C.G. Assisi, V.K. Jirsa, F.L. Steinberg, and J.A. Kelso. 2007. Multisensory integration for timing engages different brain networks. NeuroImage, 34, 764–773. Diederich, A., H. Colonius, D. Bockhorst, and S. Tabeling. 2003. Visual–tactile spatial interaction in saccade generation. Experimental Brain Research, 148, 328–337. Dixon, N.F., and L. Spitz. 1980. The detection of auditory visual desynchrony. Perception, 9, 719–721. Engel, G.R., and W.G. Dougherty. 1971. Visual–auditory distance constancy. Nature, 234, 308. Facoetti, A., A.N. Trussardi, M. Ruffino, M.L. Lorusso, C. Cattaneo, R. Galli, M. Molteni, and M. Zorzi. 2010. Multisensory spatial attention deficits are predictive of phonological decoding skills in developmental dyslexia. Journal of Cognitive Neuroscience, 22, 1011–1025. Fajen, B.R. 2007. Rapid recalibration based on optic flow in visually guided action. Experimental Brain Research, 183, 61–74. Forster, B., C. Cavina-Pratesi, S.M. Aglioti, and G. Berlucchi. 2002. Redundant target effect and intersensory facilitation from visual–tactile interactions in simple reaction time. Experimental Brain Research, 143, 480–487. Foss-Feig, J.H., L.D. Kwakye, C.J. Cascio, C.P. Burnette, H. Kadivar, W.L. Stone, and M.T. Wallace. 2010. An extended multisensory temporal binding window in autism spectrum disorders. Experimental Brain Research, 203, 381–389. Frassinetti, F., N. Bolognini, and E. Ladavas. 2002. Enhancement of visual perception by crossmodal visuoauditory interaction. Experimental Brain Research, 147, 332–343. Frens, M.A., and A.J. Van Opstal. 1998. Visual–auditory interactions modulate saccade-related activity in monkey superior colliculus. Brain Research Bulletin, 46, 211–224. Frens, M.A., A.J. Van Opstal, and R.F. van der Willigen. 1995. Spatial and temporal factors determine auditory–visual interactions in human saccadic eye movements. Perception & Psychophysics, 57, 802–816. Fujisaki, W., S. Shimojo, M. Kashino, and S. Nishida. 2004. Recalibration of audiovisual simultaneity. Nature Neuroscience, 7, 773–778. Furukawa, S., and J.C. Middlebrooks. 2002. Cortical representation of auditory space: Information-bearing features of spike patterns. Journal of Neurophysiology, 87, 1749–1762. Ghazanfar, A.A., C. Chandrasekaran, and N.K. Logothetis. 2008. Interactions between the superior temporal sulcus and auditory cortex mediate dynamic face/voice integration in rhesus monkeys. Journal of Neuroscience, 28, 4457–4469. Ghazanfar, A.A., C. Chandrasekaran, and R.J. Morrill. 2010. Dynamic, rhythmic facial expressions and the superior temporal sulcus of macaque monkeys: implications for the evolution of audiovisual speech. European Journal of Neuroscience, 31, 1807–1817. Gottlieb, G. 1971. Ontogenesis of sensory function in birds and mammals. In The biopsychology of development, ed. E. Tobach, L.R. Aronson, and E. Shaw. New York: Academic Press. Guest, S., C. Catmur, D. Lloyd, and C. Spence. 2002. Audiotactile interactions in roughness perception. 
Experimental Brain Research, 146, 161–171. Guitton, D., and D.P. Munoz 1991. Control of orienting gaze shifts by the tectoreticulospinal system in the head-free cat. I. Identification, localization, and effects of behavior on sensory responses. Journal of Neurophysiology, 66, 1605–1623. Haider, B., M.R. Krause, A. Duque, Y. Yu, J. Touryan, J.A. Mazer, and D.A. McCormick. 2010. Synaptic and network mechanisms of sparse and reliable visual cortical activity during nonclassical receptive field stimulation. Neuron, 65, 107–121. Hairston, W.D., J.H. Burdette, D.L. Flowers, F.B. Wood, and M.T. Wallace. 2005. Altered temporal profile of visual–auditory multisensory interactions in dyslexia. Experimental Brain Research, 166, 474–480. Hall, W.C., and A.K. Moschovakis. 2004. The superior colliculus: New approaches for studying sensorimotor integration. Boca Raton, FL: CRC Press. Hanson, J.V., J. Heron, and D. Whitaker. 2008. Recalibration of perceived time across sensory modalities. Experimental Brain Research, 185, 347–352. Harrington, L.K., and C.K. Peck. 1998. Spatial disparity affects visual–auditory interactions in human sensorimotor processing. Experimental Brain Research, 122, 247–252.
Hershenson, M. 1962. Reaction time as a measure of intersensory facilitation. Journal of Experimental Psychology, 63, 289–293. Hillock, A.R., and M.T. Wallace. 2011a. Changes in the multisensory temporal binding window persist into adolescence. In preparation. Hillock, A.R., and M.T. Wallace. 2011b. A developmental study of the temporal constraints for audiovisual speech binding. In preparation. Hillock, A.R., A.R. Powers 3rd, and M.T. Wallace. 2010. Binding of sights and sounds: Age-related changes in audiovisual temporal processing. (Submitted). Hughes, H.C., P.A. Reuter-Lorenz, G. Nozawa, and R. Fendrich. 1994. Visual–auditory interactions in sensorimotor processing: saccades versus manual responses. Journal of Experimental Psychology. Human Perception and Performance, 20, 131–53. Hughes, H.C., M.D. Nelson, and D.M. Aronchick. 1998. Spatial characteristics of visual–auditory summation in human saccades. Vision Research, 38, 3955–63. Kavounoudias, A., J.P. Roll, J.L. Anton, B. Nazarian, M. Roth, and R. Roll. 2008. Proprio-tactile integration for kinesthetic perception: An fMRI study. Neuropsychologia, 46, 567–575. Kayser, C., C.I. Petkov, and N.K. Logothetis. 2008. Visual modulation of neurons in auditory cortex. Cerebral Cortex, 18, 1560–74. Kayser, C., N.K. Logothetis, and S. Panzeri. 2010. Visual enhancement of the information representation in auditory cortex. Current Biology, 20, 19–24. Kern, J.K. 2002. The possible role of the cerebellum in autism/PDD: Disruption of a multisensory feedback loop. Medical Hypotheses, 59, 255–260. Kim, R.S., A.R. Seitz, and L. Shams. 2008. Benefits of stimulus congruency for multisensory facilitation of visual learning. PLoS One, 3, e1532. King, A. J. 2004. The superior colliculus. Current Biology, 14, R335–R338. Krueger, J., M.C. Fister, D.W. Royal, B.N. Carriere, and M.T. Wallace. 2008. A comparison of spatiotemporal receptive fields of multisensory superior colliculus neurons in awake and anesthetized cat. Society for Neuroscience Abstract, 457.17. Krueger, J., D.W. Royal, M.C. Fister, and M.T. Wallace. 2009. Spatial receptive field organization of multisensory neurons and its impact on multisensory interactions. Hearing Research, 258, 47–54. Laasonen, M., E. Service, and V. Virsu. 2001. Temporal order and processing acuity of visual, auditory, and tactile perception in developmentally dyslexic young adults. Cognitive, Affective & Behavioral Neuroscience, 1, 394–410. Laasonen, M., E. Service, and V. Virsu. 2002. Crossmodal temporal order and processing acuity in developmentally dyslexic young adults. Brain and Language, 80, 340–354. Lakatos, P., C.M. Chen, M.N. O’Connell, A. Mills, and C.E. Schroeder. 2007. Neuronal oscillations and multisensory interaction in primary auditory cortex. Neuron, 53, 279–292. Lewkowicz, D.J., and K.S. Kraebel. 2004. The value of multisensory redundancy in the development of intersensory perception. In The Handbook of Multisensory Processes, ed. G.A. Calvert, C. Spence, and B.E. Stein. Cambridge, MA: MIT Press. Lewkowicz, D.J., and A.A. Ghazanfar. 2009. The emergence of multisensory systems through perceptual narrowing. Trends in Cognitive Sciences, 13, 470–478. Lickliter, R., and L.E. Bahrick. 2004. Perceptual development and the origins of multisensory responsiveness. In The Handbook of Multisensory Processes, ed. G.A. Calvert, C. Spence, and B.E. Stein. Cambridge, MA: MIT Press. Lovelace, C.T., B.E. Stein, and M.T. Wallace. 2003. 
An irrelevant light enhances auditory detection in humans: a psychophysical analysis of multisensory integration in stimulus detection. Brain Research Cognitive Brain Research, 17, 447–453. Macaluso, E., N. George, R. Dolan, C. Spence, and J. Driver. 2004. Spatial and temporal factors during processing of audiovisual speech: A PET study. NeuroImage, 21, 725–732. Machens, C.K., M.S. Wehr, and A.M. Zador. 2004. Linearity of cortical receptive fields measured with natural sounds. Journal of Neuroscience, 24, 1089–1100. Manabe, K., and H. Riquimaroux. 2000. Sound controls velocity perception of visual apparent motion. Journal of the Acoustical Society of Japan, 21, 171–174. Massaro, D.W., M.M. Cohen, and P.M. Smeele. 1996. Perception of asynchronous and conflicting visual and auditory speech. Journal of the Acoustical Society of America, 100, 1777–1786. McGrath, M., and Q. Summerfield. 1985. Intermodal timing relations and audio-visual speech recognition by normal-hearing adults. Journal of the Acoustical Society of America, 77, 678–685.
Meredith, M.A., J.W. Nemitz, and B.E. Stein. 1987. Determinants of multisensory integration in superior colliculus neurons. I. Temporal factors. Journal of Neuroscience, 7, 3215–3229. Meredith, M.A., and B.E. Stein. 1983. Interactions among converging sensory inputs in the superior colliculus. Science, 221, 389–391. Meredith, M.A., and B.E. Stein. 1985. Descending efferents from the superior colliculus relay integrated multisensory information. Science, 227, 657–659. Meredith, M.A., and B.E. Stein. 1986. Spatial factors determine the activity of multisensory neurons in cat superior colliculus. Brain Research, 365, 350–354. Middlebrooks, J.C., and E.I. Knudsen. 1984. A neural code for auditory space in the cat’s superior colliculus. Journal of Neuroscience, 4, 2621–2634. Middlebrooks, J.C., L. Xu, A.C. Eddins, and D.M. Green. 1998. Codes for sound-source location in nontonotopic auditory cortex. Journal of Neurophysiology, 80, 863–881. Miyazaki, M., D. Nozaki, and Y. Nakajima. 2005. Testing Bayesian models of human coincidence timing. Journal of Neurophysiology, 94, 395–399. Miyazaki, M., S. Yamamoto, S., Uchida, and S. Kitazawa. 2006. Bayesian calibration of simultaneity in tactile temporal order judgment. Nature Neuroscience, 9, 875–877. Molholm, S., W. Ritter, M.M. Murray, D.C. Javitt, C.E. Schroeder, and J.J. Foxe. 2002. Multisensory auditory– visual interactions during early sensory processing in humans: A high-density electrical mapping study. Brain Research. Cognitive Brain Research, 14, 115–128. Munhall, K.G., P. Gribble, L. Sacco, and M. Ward. 1996. Temporal constraints on the McGurk effect. Perception & Psychophysics, 58, 351–362. Munoz, D.P., and D. Guitton. 1991. Control of orienting gaze shifts by the tectoreticulospinal system in the head-free cat: II. Sustained discharges during motor preparation and fixation. Journal of Neurophysiology, 66, 1624–1641. Munoz, D.P., D. Guitton, and D. Pelisson. 1991a. Control of orienting gaze shifts by the tectoreticulospinal system in the head-free cat: III. Spatiotemporal characteristics of phasic motor discharges. Journal of Neurophysiology, 66, 1642–1666. Munoz, D.P., D. Pelisson, and D. Guitton. 1991b. Movement of neural activity on the superior colliculus motor map during gaze shifts. Science, 251, 1358–1360. Murray, M.M., C.M. Michel, R. Grave De Peralta, S. Ortigue, D. Brunet, S. Gonzalez Andino, and A. Schnider. 2004. Rapid discrimination of visual and multisensory memories revealed by electrical neuroimaging. NeuroImage, 21, 125–135. Murray, M.M., J.J. Foxe, and G.R. Wylie. 2005. The brain uses single-trial multisensory memories to discriminate without awareness. NeuroImage, 27, 473–478. Nagy, A., G. Eordegh, and G. Benedek. 2003. Spatial and temporal visual properties of single neurons in the feline anterior ectosylvian visual area. Experimental Brain Research, 151, 108–114. Navarra, J., A. Vatakis, M. Zampini, S. Soto-Faraco, W. Humphreys, and C. Spence. 2005. Exposure to asynchronous audiovisual speech extends the temporal window for audiovisual integration. Brain Research. Cognitive Brain Research, 25, 499–507. Noesselt, T., J.W. Rieger, M.A. Schoenfeld, M. Kanowski, H. Hinrichs, H.J. Heinze, and J. Driver. 2007. Audiovisual temporal correspondence modulates human multisensory superior temporal sulcus plus primary sensory cortices. Journal of Neuroscience, 27, 11431–11441. Pandey, P.C., H. Kunov, and S.M. Abel. 1986. Disruptive effects of auditory signal delay on speech perception with lipreading. 
Journal of Auditory Research, 26, 27–41. Pasalar, S., T. Ro, and M.S. Beauchamp. 2010. TMS of posterior parietal cortex disrupts visual tactile multisensory integration. European Journal of Neuroscience, 31, 1783–1790. Populin, L.C. 2005. Anesthetics change the excitation/inhibition balance that governs sensory processing in the cat superior colliculus. Journal of Neuroscience, 25, 5903–5914. Powers 3rd, A.R., A.R. Hillock, and M.T. Wallace. 2009. Perceptual training narrows the temporal window of multisensory binding. Journal of Neuroscience, 29, 12265–12274. Powers 3rd, A.R., M.A. Hevey, and M.T. Wallace. 2010. Neural correlates of multisensory perceptual learning. In preparation. Qin, L., J.Y. Wang, and Y. Sato. 2008. Representations of cat meows and human vowels in the primary auditory cortex of awake cats. Journal of Neurophysiology, 99, 2305–2319. Romei, V., M.M. Murray, L.B. Merabet, and G. Thut. 2007. Occipital transcranial magnetic stimulation has opposing effects on visual and auditory stimulus detection: implications for multisensory interactions. Journal of Neuroscience, 27, 11465–11472.
Rouger, J., S. Lagleyre, B. Fraysse, S. Deneve, O. Deguine, and P. Barone. 2007. Evidence that cochlearimplanted deaf patients are better multisensory integrators. Proceedings of the National Academy of Sciences of the United States of America, 104, 7295–7300. Rowland, B.A., S. Quessy, T.R. Stanford, and B.E. Stein. 2007. Multisensory integration shortens physiological response latencies. Journal of Neuroscience, 27, 5879–5884. Royal, D.W., B.N. Carriere, and M.T. Wallace. 2009. Spatiotemporal architecture of cortical receptive fields and its impact on multisensory interactions. Experimental Brain Research, 198, 127–136. Schneider, T.R., A.K. Engel, and S. Debener. 2008. Multisensory identification of natural objects in a two-way crossmodal priming paradigm. Experimental Psychology, 55, 121–132. Schorr, E.A., N.A. Fox, V. van Wassenhove, and E.I. Knudsen. 2005. Auditory–visual fusion in speech perception in children with cochlear implants. Proceedings of the National Academy of Sciences of the United States of America, 102, 18748–18750. Seitz, A.R., R. Kim, and L. Shams. 2006. Sound facilitates visual learning. Current Biology, 16, 1422–1427. Sekuler, R., A.B. Sekuler, and R. Lau. 1997. Sound alters visual motion perception. Nature, 385, 308. Shams, L., Y. Kamitani, and S. Shimojo. 2000. Illusions. What you see is what you hear. Nature, 408, 788. Shams, L., Y. Kamitani, and S. Shimojo. 2002. Visual illusion induced by sound. Brain Research Cognitive Brain Research, 14, 147–152. Shore, D.I., C. Spence, and R.M. Klein. 2001. Visual prior entry. Psychological Science, 12, 205–212. Soto-Faraco, S., and A. Alsius. 2009. Deconstructing the McGurk–MacDonald illusion. Journal of Experimental Psychology. Human Perception and Performance, 35, 580–587. Soto-Faraco, S., A. Kingstone, and C. Spence. 2003. Multisensory contributions to the perception of motion. Neuropsychologia, 41, 1847–1862. Sparks, D.L. 1986. Translation of sensory signals into commands for control of saccadic eye movements: Role of primate superior colliculus. Physiological Reviews, 66, 118–171. Sparks, D.L., and Groh, J.M. 1995. The superior colliculus: A window for viewing issues in integrative neuroscience. In The Cognitive Sciences, ed. Gazzaniga, M.S. Cambridge, MA: MIT Press. Spence, C., D.I. Shore, and R.M. Klein. 2001. Multisensory prior entry. Journal of Experimental Psychology. General, 130, 799–832. Stanford, T.R., S. Quessy, and B.E. Stein. 2005. Evaluating the operations underlying multisensory integration in the cat superior colliculus. Journal of Neuroscience, 25, 6499–6508. Stein, B.E., and M.A. Meredith. 1993. The Merging of the Senses. Cambridge, MA: MIT Press. Stein, B.E., and M.T. Wallace. 1996. Comparisons of cross-modality integration in midbrain and cortex. Progress in Brain Research, 112, 289–299. Stein, B.E., E. Labos, and L. Kruger. 1973a. Determinants of response latency in neurons of superior colliculus in kittens. Journal of Neurophysiology, 36, 680–689. Stein, B.E., E. Labos, and L. Kruger. 1973b. Sequence of changes in properties of neurons of superior colliculus of the kitten during maturation. Journal of Neurophysiology, 36, 667–679. Stein, B.E., W.S. Huneycutt, and M.A. Meredith. 1988. Neurons and behavior: The same rules of multisensory integration apply. Brain Research, 448, 355–358. Stein, B.E., M.A. Meredith, W.S. Huneycutt, and L. McDade. 1989. Behavioral indices of multisensory integration: Orientation to visual cues is affected by auditory stimuli. Journal of Cognitive Neuroscience, 1, 12–24. 
Stein, B.E., N. London, L.K. Wilkinson, and D.D. Price. 1996. Enhancement of perceived visual intensity by auditory stimuli: A psychophysical analysis. Journal of Cognitive Neuroscience, 8, 497–506. Stetson, C., X. Cui, P.R. Montague, and D.M. Eagleman. 2006. Motor-sensory recalibration leads to an illusory reversal of action and sensation. Neuron, 51, 651–659. Stevenson, R.A., N.A. Altieri, S. Kim, D.B. Pisoni, and T.W. James. 2010. Neural processing of asynchronous audiovisual speech perception. NeuroImage, 49, 3308–3318. Stone, J.V., N.M. Hunkin, J. Porrill, R. Wood, V. Keeler, M. Beanland, M. Port, and N.R. Porter. 2001. When is now? Perception of simultaneity. Proceedings of the Royal Society of London. Series B. Biological Sciences, 268, 31–38. Sumby, W.H., and I. Pollack. 1954. Visual contribution to speech intelligibility in noise. Journal of the Acoustical Society of America, 26, 212–215. Ter-Mikaelian, M., D.H. Sanes, and M.N. Semple. 2007. Transformation of temporal properties between auditory midbrain and cortex in the awake Mongolian gerbil. Journal of Neuroscience, 27, 6091–6102. van Atteveldt, N.M., E. Formisano, L. Blomert, and R. Goebel. 2007. The effect of temporal asynchrony on the multisensory integration of letters and speech sounds. Cerebral Cortex, 17, 962–794.
van Wassenhove, V., K.W. Grant, and D. Poeppel. 2007. Temporal window of integration in auditory–visual speech perception. Neuropsychologia, 45, 598–607. van Wassenhove, V., D.V. Buonomano, S. Shimojo, and L. Shams. 2008. Distortions of subjective time perception within and across senses. PLoS One, 3, e1437. Von Kriegstein, K., and A.L. Giraud. 2006. Implicit multisensory associations influence voice recognition. PLoS Biology, 4, e326. Vroomen, J., M. Keetels, B. De Gelder, and P. Bertelson. 2004. Recalibration of temporal order perception by exposure to audio-visual asynchrony. Brain Research. Cognitive Brain Research, 22, 32–35. Wade, N.J., and R.H. Day. 1968. Development and dissipation of a visual spatial aftereffect from prolonged head tilt. Journal of Experimental Psychology, 76, 439–443. Wallace, M.T., and B.E. Stein. 1996. Sensory organization of the superior colliculus in cat and monkey. Progress in Brain Research, 112, 301–311. Wallace, M.T., and B.E. Stein. 1997. Development of multisensory neurons and multisensory integration in cat superior colliculus. Journal of Neuroscience, 17, 2429–2444. Wallace, M.T., and B.E. Stein. 2000. Onset of cross-modal synthesis in the neonatal superior colliculus is gated by the development of cortical influences. Journal of Neurophysiology, 83, 3578–3582. Wallace, M.T., and B.E. Stein. 2001. Sensory and multisensory responses in the newborn monkey superior colliculus. Journal of Neuroscience, 21, 8886–8894. Wallace, M.T., M.A. Meredith, and B.E. Stein. 1992. Integration of multiple sensory modalities in cat cortex. Experimental Brain Research, 91, 484–488. Wallace, M.T., L.K. Wilkinson, and B.E. Stein. 1996. Representation and integration of multiple sensory inputs in primate superior colliculus. Journal of Neurophysiology, 76, 1246–1266. Wallace, M.T., T.J. Perrault Jr., W.D. Hairston, and B.E. Stein. 2004. Visual experience is necessary for the development of multisensory integration. Journal of Neuroscience, 24, 9580–9584. Wallace, M.T., B.N. Carriere, T.J. Perrault Jr., J.W. Vaughan, and B.E. Stein. 2006. The development of cortical multisensory integration. Journal of Neuroscience, 26, 11844–11849. Wang, X., T. Lu, R.K. Snider, and L. Liang. 2005. Sustained firing in auditory cortex evoked by preferred stimuli. Nature, 435, 341–346. Xu, L., S. Furukawa, and J.C. Middlebrooks. 1999. Auditory cortical responses in the cat to sounds that produce spatial illusions. Nature, 399, 688–691. Ye, C.Q., M.M. Poo, Y. Dan, and X.H. Zhang. 2010. Synaptic mechanisms of direction selectivity in primary auditory cortex. Journal of Neuroscience, 30, 1861–1868. Zampini, M., D.I. Shore, and C. Spence. 2003. Audiovisual temporal order judgments. Experimental Brain Research, 152, 198–210. Zampini, M., S. Guest, D.I. Shore, and C. Spence. 2005a. Audio-visual simultaneity judgments. Perception & Psychophysics, 67, 531–544. Zampini, M., D.I. Shore, and C. Spence. 2005b. Audiovisual prior entry. Neurosci Letters, 381, 217–22.
12
Early Integration and Bayesian Causal Inference in Multisensory Perception
Ladan Shams
CONTENTS 12.1 Introduction........................................................................................................................... 217 12.2 Early Auditory–Visual Interactions in Human Brain............................................................ 218 12.3 Why Have Cross-Modal Interactions?................................................................................... 219 12.4 The Problem of Causal Inference.......................................................................................... 220 12.5 Spectrum of Multisensory Combinations..............................................................................220 12.6 Principles Governing Cross-Modal Interactions................................................................... 222 12.7 Causal Inference in Multisensory Perception........................................................................ 223 12.8 Hierarchical Bayesian Causal Inference Model.................................................................... 225 12.9 Relationship with Nonhierarchical Causal Inference Model................................................ 226 12.10 Hierarchical Causal Inference Model versus Human Data................................................. 226 12.11 Independence of Priors and Likelihoods............................................................................. 227 12.12 Conclusions.......................................................................................................................... 229 References....................................................................................................................................... 229
12.1 INTRODUCTION Brain function in general, and perception in particular, has been viewed as highly modular for more than a century. Although phrenology is considered obsolete, its general notion of the brain being composed of compartments each devoted to a single function and independent of other functions has been the dominant paradigm, especially in the context of perception (Pascual-Leone and Hamilton 2001). In the cerebral cortex, it is believed that the different sensory modalities are organized into separate pathways that are independent of each other, and process information almost completely in a self-contained manner until the “well digested” processed signals converge at some higher-order level of processing in the polysensory association cortical areas, wherein the unified perception of the environment is achieved. The notion of modularity of sensory modalities has been particularly strong as related to visual perception. Vision has been considered to be highly self-contained and independent of extramodal influences. This view owes its prevalence to many sources. Humans are considered to be “visual animals,” and this notion has been underscored in contemporary society with the ever-increasingly important role of text and images in our lives along with the advent of electricity (and light at night). The notion of visual dominance has been supported by the classic and well-known studies of cross-modal interactions in which a conflict was artificially imposed between vision and another modality and it was found that vision overrides the conflicting sensory modality. For example, in the ventriloquist illusion, vision captures the location of a discrepant auditory stimulus (Howard and Templeton 1966). Similarly, in the “visual capture” effect, vision captures the spatial location of a tactile or proprioceptive stimulus (Rock and Victor 1964). In the McGurk effect, vision strongly and
qualitatively alters the perceived syllable (McGurk and McDonald 1976). As a result, the influence of vision on other modalities has been acknowledged for some time. However, the influence of other modalities on vision has not been appreciated until very recently. There have been several reports of vision being influenced by another modality; however, most of these have involved quantitative effects (Gebhard and Mowbray 1959; Scheier et al. 1999; Walker and Scott 1981; McDonald et al. 2000; Spence and Driver 1997; Spence et al. 1998; Stein et al. 1996). Over the past few years, two studies have reported radical alterations of visual perception by auditory modality. In one case, the motion trajectory of two visual targets is sometimes changed from a streaming motion to a bouncing motion by a brief sound occurring at the time of visual coincidence (Sekuler et al. 1997). In this case, the motion of the visual stimuli is, in principle, ambiguous in the absence of sound, and one could argue that sound disambiguates this ambiguity. In another study, we found that the perceived number of pulsations of a visual flash (for which there is no obvious ambiguity) is often increased when paired with multiple beeps (Shams et al. 2000, 2002). This phenomenon demonstrates, in an unequivocal fashion, that visual perception can be altered by a nonvisual signal. The effect is also very robust and resistant to changes in the shape, pattern, intensity, and timing of the visual and auditory stimuli (Shams et al. 2001, 2002; Watkins et al. 2006). For this reason, this illusion known as “sound-induced flash illusion” appears to reflect a mainstream mechanism of auditory–visual interaction in the brain as opposed to some aberration in neural processing. Thus, we used the sound-induced flash illusion as an experimental paradigm for investigating auditory–visual interactions in the human brain.
12.2 EARLY AUDITORY–VISUAL INTERACTIONS IN HUMAN BRAIN The first question we asked was, at what level of processing do auditory–visual perceptual interactions occur? Do they occur at some higher-order polysensory area in the association cortex or do they involve the modulation of activation along the visual cortex? We examined whether visually evoked potentials, as recorded from three electrodes in the occipital regions of the scalp, are affected by sound. We recorded evoked potentials under visual-alone (1flash, or 2flashes), auditory-alone (2beeps), and auditory–visual (1flash2beeps) stimulus conditions. When comparing the pattern of activity associated with a second physical flash (2flash – 1flash) with that of an illusory second flash (i.e., 1flash2beeps – 1flash – 2beeps), we obtained a very similar temporal pattern of activity (Shams et al. 2001). Furthermore, for the 1flash2beep condition, comparing illusion and no-illusion trials revealed that the perception of illusion was associated with increased gamma-band activity in the occipital region (Bhattacharya et al. 2002). A magnetoencephalography (MEG) study of the flash illusion revealed the modulation of activity in occipital channels by sound as early as 35 to 65 ms poststimulus onset (Shams et al. 2005a). These results altogether indicated a mechanism of auditory–visual interaction with very short latency, and in the occipital cortex. However, to map the exact location of the interactions, we needed higher spatial resolution. Therefore, we performed functional MRI (fMRI) studies of the sound-induced flash illusion. In these studies (Watkins et al. 2006, 2007), the visual cortical areas were functionally mapped for each individual subject using retinotopic mapping. We contrasted auditory–visual conditions (1flash1beep, 2flash2beep) versus visual-alone conditions (1flash, 2flash). This contrast indicated auditory cortical areas, which is not surprising because in one condition, there is sound, and in another condition, there is no sound. But interestingly, the contrast also indicated areas V1, V2, and V3, which is surprising because the visual stimulus is identical in the contrasted conditions. Therefore, these results (Watkins et al. 2006) clearly demonstrated for the first time (but see Calvert et al. 2001) that activity in the human visual cortex as early as V1 can be modulated by nonvisual stimulation. The observed increase in activation was very robust and significant. We suspected that this increase in activity may reflect a possible general arousal effect caused by sound as opposed to auditory–visual integration per se. Indeed, attention has been previously shown to increase activity in early visual cortical areas. To address this question, we focused on the 1flash2beep condition which, in some trials, gave rise to
an illusory percept of two flashes (also referred to as a fission effect). We compared the illusion and no-illusion trials, reasoning that given that the physical stimuli are identical in both of these post hoc–defined conditions, the arousal level should also be equal. Contrasting illusion and nonillusion trials revealed increased activity in V1 in the illusion condition (Watkins et al. 2006), indicating that the perception of illusion is correlated with increased activity in V1. Although this contradicts the attention hypothesis laid out earlier, one could still argue that sound may only increase arousal in some trials and those trials happen to be the illusion trials. Although this argument confounds attention with integration, we could nevertheless address it using another experiment in which we included a 2flash1beep condition. On some trials of this condition, the two flashes are fused, leading to an illusory percept of a single flash (also referred to as a fusion effect), whereas in other trials, the observers correctly perceived two flashes. Contrasting the illusion and nonillusion trials, we again found a significant difference in the activation level of V1; however, this time, the perception of sound-induced visual illusion was correlated with decreased activity in V1 (Watkins et al. 2007), therefore ruling out the role of attention or arousal. As mentioned above, the event-related potential (ERP) study showed a similar temporal pattern of activity for the illusory and physical second flash. Here, we found a similar degree of V1 activation for physical and illusory double flash, and a similar degree of activation for the physical and illusory single flash (Watkins et al. 2007). These results altogether establish clearly that activity in early visual cortical areas, as early as in the primary visual cortex, is modulated by sound through cross-modal integration processes. What neural pathway could underlie these early auditory–visual interactions? Again, the last decade has witnessed the overturning of another dogma: the dogma of no connectivity among the sensory cortical areas. There has been mounting evidence for direct and indirect anatomical connectivity among the sensory cortical areas (e.g., Clavagnier et al. 2004; Falchier et al. 2002; Ghazanfar and Schroeder 2006; Rockland and Ojima 2003; Hackett et al. 2007). Of particular interest here are the findings of extensive projections from the auditory core and parabelt areas and from the multisensory superior temporal polysensory cortex to V1 and V2 in the monkey (Falchier et al. 2002; Rockland and Ojima 2003; Clavagnier et al. 2004). Intriguingly, these projections appear to be extensive only for the peripheral representations in V1, and not for the foveal representations (Falchier et al. 2002). This pattern is highly consistent with the much stronger behavioral and physiological auditory modulation of vision in the periphery compared with the fovea that we have observed (Shams et al. 2001). Interestingly, tactile modulation of visual processing also seems to be stronger in the periphery (Diederich and Colonius 2007). Therefore, it seems likely that a direct projection from A1 or a feedback projection from superior temporal sulcus (STS) could mediate the modulations we have observed.
We believe that the former may be more likely because although the activation in V1 was found to correlate with the perceived number of flashes, the activation of area STS was always increased with the perception of illusion regardless of the type of illusion (single or double flash; Watkins et al. 2006, 2007). Therefore, these results are more readily consistent with a direct modulation of V1 by projections from auditory areas.
12.3 WHY HAVE CROSS-MODAL INTERACTIONS? The findings discussed above, as well as those discussed in other chapters, make it clear that cross-modal interactions are prevalent, and can be very strong and robust. But why? At first glance, it may not be obvious why having cross-modal interactions would be advantageous or necessary for humans’ survival in the environment. Especially in the context of visual perception, one could argue that visual perception is highly precise and accurate in so many tasks that it may even be disadvantageous to “contaminate” it with other sensory signals that are not as reliable (which could then cause illusions or errors). Theory tells us, and experimental studies have confirmed, that even when a second source of information is not very reliable, combining two sources of information could result in superior estimation compared with using only the most reliable source. Maximum
likelihood estimation of an object property using two independent cues, for example, an auditory estimate and a visual estimate, results in an estimate that is more reliable (more precise) than either one of the individual estimates. Many studies of multisensory perception have confirmed that the human nervous system integrates two cross-modal estimates in a similar fashion (e.g., Alais and Burr 2004; Ernst and Banks 2002; van Beers et al. 1999; Ronsse et al. 2009). Therefore, integrating information across modalities is always beneficial. Interestingly, recent studies using single-cell recordings and behavioral measurements from macaque monkeys have provided a bridge between the behavioral manifestations of multisensory integration and neural activity, showing that the activity of multisensory (visual–vestibular) neurons is consistent with Bayesian cue integration (for a review, see Angelaki et al. 2009).
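As a concrete illustration of this precision-weighted combination rule, the short sketch below (in Python) combines an auditory and a visual estimate; the cue values and noise levels are invented for illustration and are not taken from any of the studies cited here.

```python
def ml_combine(x_a, var_a, x_v, var_v):
    """Precision-weighted (maximum likelihood) combination of two unbiased,
    conditionally independent Gaussian cues."""
    w_a = (1.0 / var_a) / (1.0 / var_a + 1.0 / var_v)   # auditory weight
    w_v = 1.0 - w_a                                     # visual weight
    s_hat = w_a * x_a + w_v * x_v                       # combined estimate
    var_hat = 1.0 / (1.0 / var_a + 1.0 / var_v)         # combined (reduced) variance
    return s_hat, var_hat

# Hypothetical cues: vision at 0 deg (sd = 1 deg), audition at 4 deg (sd = 4 deg).
# The combined estimate stays close to the more reliable visual cue, and its
# variance (~0.94 deg^2) is smaller than the variance of either cue alone.
print(ml_combine(x_a=4.0, var_a=16.0, x_v=0.0, var_v=1.0))
```

The key point the sketch makes is that the combined variance is always smaller than either unisensory variance, which is why adding even an unreliable cue can improve the estimate.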
12.4 THE PROBLEM OF CAUSAL INFERENCE Although it is beneficial to integrate information from different modalities if the signals correspond to the same object, one could see that integrating information from two different objects would not be advantageous. For example, while trying to cross the street on a foggy day, it would be beneficial to combine auditory and visual information to estimate the speed and direction of an approaching car. It could be a fatal mistake, on the other hand, to combine the information from the sound of a car moving behind us in the opposite direction with the image of another moving car in front of us. It should be noted that humans (as with most other organisms) are constantly surrounded by multiple objects and thus multiple sources of sensory stimulation. Therefore, at any given moment, the nervous system is engaged in processing multiple sensory signals across the senses, and not all of these signals are caused by the same object, and therefore not all of them should be bound and integrated. The problem of whether to combine two signals involves an (implicit or explicit) inference about whether the two signals are caused by the same object or by different objects, i.e., causal inference. This is not a trivial problem, and cannot be simply solved, for example, based on whether the two signals originate from the same coordinates in space. The different senses have different precisions in all dimensions, including the temporal and spatial dimensions, and even if the two signals are derived from the same object/event, the noise in the environment and in the nervous system makes the sensory signals somewhat inconsistent with each other most of the time. Therefore, the nervous system needs to use as much information as possible to solve this difficult problem. It appears that whether two sensory signals are perceptually bound together typically depends on a combination of spatial, temporal, and structural consistency between the signals as well as the prior knowledge derived from experience about the coupling of the signals in nature. For example, moving cars often make a frequency sweep sound, therefore, the prior probability for combining these two stimuli should be very high. On the other hand, moving cars do not typically create a bird song, therefore the prior bias for combining the image of a car and the sound of a bird is low. Unlike the problem of causal inference in cognition, which only arises intermittently, the problem of causal inference in perception has to be solved by the nervous system at any given moment, and is therefore at the heart of perceptual processing. In addition to solving the problem of causal inference, the perceptual system also needs to determine how to integrate signals that appear to have originated from the same source, i.e., to what extent, and in which direction (which modality should dominate which modality).
12.5 SPECTRUM OF MULTISENSORY COMBINATIONS To investigate these theoretical issues, we used two complementary experimental paradigms: a temporal numerosity judgment task (Shams et al. 2005b), and a spatial localization task (Körding et al. 2007). These two tasks are complementary in that the former is primarily a temporal task, whereas the latter is clearly a spatial task. Moreover, in the former, the auditory modality dominates, whereas in the latter, vision dominates. In both of these paradigms, there are strong illusions that occur under some stimulus conditions: the sound-induced flash illusion and the ventriloquist illusion.
In the temporal numerosity experiment, a variable number of flashes were presented in the periphery simultaneously with a variable number of beeps. The task of the observers was to judge the number of flashes and beeps in each trial. In the spatial localization experiment, a Gabor patch and/or a noise burst were briefly presented at one of several locations along a horizontal line and the task of the subject was to judge the location of both the visual and auditory stimuli in each trial. In both experiments, we observed a spectrum of interactions (Figure 12.1). When there was no discrepancy between the auditory and visual stimuli, the two stimuli were fused (Figure 12.1a, left). When the discrepancy was small between the two stimuli, they were again fused in a large fraction of trials (Figure 12.1a, middle and right). These trials are those in which an illusion occurred. For example, when one flash paired with two beeps was presented, in a large fraction of trials, the observers reported seeing two flashes (sound-induced flash illusion) and hearing two beeps. The reverse illusion occurred when two flashes paired with one beep were seen as a single flash in a large fraction of trials. Similarly, in the localization experiment, when the spatial gap between the flash and noise burst was small (5°), the flash captured the location of the sound in a large fraction of trials (ventriloquist illusion). At the other extreme, when the discrepancy between the auditory and visual stimuli was large, there was little interaction, if any, between the two. For example, in the 1flash4beep or 4flash1beep conditions in the numerosity judgment experiments, or in the conditions in which the flash was all the way to the left and noise all the way to the right or vice versa in the localization experiment, there was hardly any shift in the visual or auditory percepts relative to the unisensory conditions. We refer to this lack of interaction as segregation (Figure 12.1c) because it appears that the signals are kept separate from each other. Perhaps most interestingly, in conditions in which there was a moderate discrepancy between the two stimuli, sometimes there was a partial shift of the two modalities toward each other. We refer to this phenomenon as “partial integration” (Figure 12.1b). For example, in the 1flash3beep condition, the observers sometimes reported seeing two flashes and hearing three beeps. Or in the condition in which the flash is at –5° (left of fixation) and noise is at +5° (right of fixation), the observers sometimes reported hearing the noise at 0° and seeing the flash at –5°. Therefore, in summary, in both experiments, we observed a spectrum of interactions between the two modalities.
FIGURE 12.1 Range of cross-modal interactions. Horizontal axis in these panels represents a perceptual dimension such as space, time, number, etc. Light bulb and loudspeaker icons represent visual stimulus and auditory stimulus, respectively. Eye and ear icons represent visual and auditory percepts, respectively. (a) Fusion. Three examples of conditions in which fusion often occurs. Left: when stimuli are congruent and veridically perceived. Middle: when discrepancy between auditory and visual stimuli is small, and percept corresponds to a point in between two stimuli. Right: when discrepancy between two stimuli is small, and one modality (in this example, vision) captures the other modality. (b) Partial integration. Left: when discrepancy between two stimuli is moderate, and the less reliable modality (in this example, vision) gets shifted toward the other modality but does not converge. Right: when discrepancy is moderate and both modalities get shifted toward each other but not enough to converge. (c) Segregation. When conflict between two stimuli is large, and the two stimuli do not affect each other.
FIGURE 12.2 Interaction between auditory and visual modalities as a function of conflict. (a) Visual bias (i.e., influence of sound on visual perception) as a function of discrepancy between number of flashes and beeps in temporal numerosity judgment task. (b) Auditory bias (i.e., influence of vision on auditory perception) as a function of spatial gap between the two in spatial localization task.
When the discrepancy is zero or small, the two modalities tend to get fused. When the conflict is moderate, partial integration may occur, and when the conflict is large, the two signals tend to be segregated (Figure 12.1, right). In both experiments, the interaction between the two modalities gradually decreased as the discrepancy between the two increased (Figure 12.2). What would happen if we had more than two sensory signals, for example, a visual, an auditory, and a tactile signal, as is most often the case in nature? We investigated this scenario using the numerosity judgment task (Wozny et al. 2008). We presented a variable number of flashes paired with a variable number of beeps and a variable number of taps, providing unisensory, bisensory, and trisensory conditions pseudorandomly interleaved. The task of the participants was to judge the number of flashes, beeps, and taps on each trial. This experiment provided a rich set of data that replicated the sound-induced flash illusion (Shams et al. 2000) and the touch-induced flash illusion (Violentyev et al. 2005), as well as many previously unreported illusions. In fact, in every condition in which there was a small discrepancy between two or three modalities, we observed an illusion. This finding demonstrates that the interaction among these modalities is the rule rather than the exception, and the sound-induced flash illusions that have been previously reported are not “special” in the sense that they are not unusual or out of the ordinary, but rather, they are consistent with a general pattern of cross-modal interactions that cuts across modalities and stimulus conditions. We wondered whether these changes in perceptual reports reflect a change in response criterion as opposed to a change in perception per se. We calculated the sensitivity (d′) change between bisensory and unisensory conditions (and between trisensory and bisensory conditions) and found statistically significant changes in sensitivity as a result of the introduction of a second (or third) sensory signal in most of the cases despite the very conservative statistical criterion used. In other words, the observed illusions (both fission and fusion) reflect cross-modal integration processes, as opposed to response bias.
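As a side note on the sensitivity analysis mentioned above, the sketch below shows how such a d′ comparison can be computed using the standard equal-variance signal detection formula, d′ = z(hit rate) − z(false alarm rate). The hit and false-alarm rates are hypothetical placeholders, not values measured in the experiment.

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    """Equal-variance signal detection sensitivity: z(hits) - z(false alarms)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Hypothetical rates for reporting "two flashes" when two vs. one flashes were
# shown, with and without an accompanying beep. A change in d' (rather than only
# in the raw response rates) indicates a change in sensitivity, not merely a
# shift in response criterion.
d_visual_alone = d_prime(hit_rate=0.85, fa_rate=0.20)
d_with_one_beep = d_prime(hit_rate=0.70, fa_rate=0.10)
print(d_visual_alone, d_with_one_beep)
```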
12.6 PRINCIPLES GOVERNING CROSS-MODAL INTERACTIONS Is there anything surprising about the fact that there is a range of interactions between the senses? Let us examine that. Intuitively, it is reasonable for the brain to combine different sources of information to come up with the most informative guess about an object, if all the bits of information are about the same object. For example, if we are holding a mug in our hand, it makes sense that
we use both haptic and visual information to estimate the shape of the mug. It is also expected for the bits of information to be fairly consistent with each other if they arise from the same object. Therefore, it would make sense for the nervous system to fuse the sensory signals when there is little or no discrepancy between the signals. Similarly, as discussed earlier, it is reasonable for the nervous system not to combine the bits of information if they correspond to different objects. It is also expected for the bits of information to be highly disparate if they stem from different objects. Therefore, if we are holding a mug while watching TV, it would be best not to combine the visual and haptic information. Therefore, segregation also makes sense from a functional point of view. How about partial integration? Is there a situation in which partial integration would be beneficial? There is no intuitively obvious explanation for partial integration, as we do not encounter situations wherein two signals are only partially caused by the same object. Therefore, the phenomenon of partial integration is rather curious. Is there a single rule that can account for the entire range of cross-modal interactions including partial integration?
12.7 CAUSAL INFERENCE IN MULTISENSORY PERCEPTION The traditional model of cue combination (Ghahramani 1995; Yuille and Bülthoff 1996; Landy et al. 1995), which has been the dominant model for many years, assumes that the sensory cues all originate from the same object (Figure 12.3a) and therefore they should all be fused to obtain an optimal estimate of the object property in question. In this model, it is assumed that the sensory signals are corrupted by independent noise and, therefore, are conditionally independent of each other. The optimal estimate of the source is then a linear combination of the two sensory cues. If a Gaussian distribution is assumed for the distribution of the sensory cues, and no a priori bias, this linear combination would become a weighted average of the two sensory estimates, with each estimate weighted by its precision (or inverse of variance). This model has been very successful in accounting for the integration of sensory cues in various tasks and various combinations of sensory modalities (e.g., Alais and Burr 2004; Ernst and Banks 2002; Ghahramani 1995; van Beers et al. 1999). Although this model can account well for behavior when the conflict between the two signals is small (i.e., for situations of fusion, for obvious reasons), it fails to account for the rest of the spectrum (i.e., partial integration and segregation).
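As an aside, the precision weighting follows from a standard identity (stated here with generic symbols rather than the notation of any particular study): multiplying two unbiased Gaussian likelihoods under a flat prior gives

$$p(s \mid x_1, x_2) \propto \exp\!\left[-\frac{(x_1-s)^2}{2\sigma_1^2}\right]\exp\!\left[-\frac{(x_2-s)^2}{2\sigma_2^2}\right] \;\Rightarrow\; \hat{s} = \frac{x_1/\sigma_1^2 + x_2/\sigma_2^2}{1/\sigma_1^2 + 1/\sigma_2^2}, \qquad \sigma_{\hat{s}}^2 = \left(\frac{1}{\sigma_1^2}+\frac{1}{\sigma_2^2}\right)^{-1}.$$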
FIGURE 12.3 Generative model of different models of cue combination. (a) Traditional model of cue combination, in which two signals are assumed to be caused by one source. (b) Causal inference model of cue combination, in which each signal has a respective cause, and causes may or may not be related. (c) Generalization of model in (b) to three signals. (d) Hierarchical causal inference model of cue combination. There are two explicit causal structures, one corresponding to common cause and one corresponding to independent causes, and variable C chooses between the two. (b, Adapted from Shams, L. et al., Neuroreport, 16, 1923–1927, 2005b; c, adapted from Wozny, D.R. et al., J. Vis., 8, 1–11, 2008; d, Körding, K. et al., PLoS ONE, 2, e943, 2007.)
To come up with a general model that can account for the entire range of interactions, we abandoned the assumption of a single source, and allowed each of the sensory cues to have a respective source. By allowing the two sources to be either dependent or independent, we allowed for both conditions of a common cause and conditions of independent causes for the sensory signals (Figure 12.3b). We assume that the two sensory signals (xA and x V) are conditionally independent of each other. This follows from the assumption that up to the point where the signals get integrated, the sensory signals in different modalities are processed in separate pathways and thus are corrupted by independent noise processes. As mentioned above, this is a common assumption. The additional assumption made here is that the auditory signal is independent of the visual source (sV) given the auditory source (sA), and likewise for visual signal. This is based on the observation that either the two signals are caused by the same object, in which case, the dependence of auditory signal on the visual source is entirely captured by its dependence on the auditory source, or they are caused by different objects, in which case, the auditory signal is entirely independent of the visual source (likewise for visual signal). In other words, this assumption follows from the observation that there is either a common source or independent sources. This general model of bisensory perception (Shams et al. 2005b) results in a very simple inference rule:
$$P(s_A, s_V \mid x_A, x_V) = \frac{P(x_A \mid s_A)\,P(x_V \mid s_V)\,P(s_A, s_V)}{P(x_A, x_V)} \qquad (12.1)$$
where the probability of the auditory and visual sources, sA and sV, given the sensory signals xA and x V is a normalized product of the auditory likelihood (i.e., the probability of getting a signal xA given that there is a source sA out there) and visual likelihood (i.e., the probability of getting a signal x V given that there is a source sV) and the prior probability of sources sA and s V occurring jointly. The joint prior probability P(sA,s V) represents the implicit knowledge that the perceptual system has accumulated over the course of a lifetime about the statistics of auditory–visual events in the environment. In effect, it captures the coupling between the two modalities, and therefore, how much the two modalities will interact in the process of inference. If the two signals (e.g., the number of flashes and beeps) have always been consistent in one’s experience, then the expectation is that they will be highly consistent in the future, and therefore, the joint prior matrix would be diagonal (only the identical values of number of flashes and beeps are allowed, and the rest will be zero). On the other hand, if in one’s experience, the number of flashes and beeps are completely independent of each other, then P(sA,sV) would be factorizable (e.g., a uniform distribution or an isotropic Gaussian distribution) indicating that the two events have nothing to do with each other, and can take on any values independently of each other. Therefore, by having nonzero values for both sA = sV and sA ≠ sV in this joint probability distribution, both common cause and independent cause scenarios are allowed, and the relative strength of these probabilities would determine the prior expectation of a common cause versus independent causes. Other recent models of multisensory integration have also used joint prior probabilities to capture the interaction between two modalities, for example, in haptic–visual numerosity judgment tasks (Bresciani et al. 2006) and auditory–visual rate perception (Roach et al. 2006). The model of Equation 12.1 is simple, general, and readily extendable to more complex situations. For example, the inference rule for trisensory perception (Figure 12.3c) would be as follows:
$$P(s_A, s_V, s_T \mid x_A, x_V, x_T) = \frac{P(x_A \mid s_A)\,P(x_V \mid s_V)\,P(x_T \mid s_T)\,P(s_A, s_V, s_T)}{P(x_A, x_V, x_T)} \qquad (12.2)$$
To test the trisensory perception model of Equation 12.2, we modeled the three-dimensional joint prior P(sA,sV,sT) with a multivariate Gaussian function, and each of the likelihood functions with a univariate Gaussian function. The means of the likelihoods were assumed to be unbiased (i.e., on
average at the veridical number), and the standard deviation of the likelihoods was estimated using data from unisensory conditions. It was also assumed that the mean and variance for the prior of the three modalities were equal, and the three covariances (for three pairs of modalities) were also equal.* This resulted in a total of three free parameters (mean, variance, and covariance of the prior). These parameters were fitted to the data from the trisensory numerosity judgment experiment discussed earlier. The model accounted for 95% of variance in the data (676 data points) using only three free parameters. To test whether the three parameters rendered the model too powerful and able to account for any data set, we scrambled the data and found that the model badly failed to account for the arbitrary data (R2 < .01). In summary, the Bayesian model of Figure 12.3c could provide a remarkable account for the myriad of two-way and three-way interactions observed in the data.

* These assumptions were made to minimize the number of free parameters and maximize the parsimony of the model. However, the assumptions were verified by fitting a model with nine parameters (allowing different values for the mean, variance, and covariance across modalities) to the data, and finding almost equal values for all three means, all three variances, and all three covariances.
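To make the use of the inference rule of Equation 12.1 concrete, the following sketch evaluates the bisensory posterior on a small discrete grid of flash and beep numbers. The likelihood widths and the correlation of the joint prior are made-up values chosen only to reproduce the qualitative fission pattern; they are not the parameters estimated from the data described above.

```python
import numpy as np

def numerosity_posterior(x_v, x_a, sigma_v=1.0, sigma_a=0.5,
                         prior_mean=2.0, prior_sd=1.5, prior_corr=0.9,
                         n_max=4):
    """Posterior P(s_V, s_A | x_V, x_A) of Equation 12.1 on a discrete grid.

    Candidate sources s_V and s_A range over 1..n_max pulses, the likelihoods
    are Gaussian in the internal signals x, and the joint prior is a
    discretized bivariate Gaussian whose correlation encodes the expected
    coupling between the number of flashes and beeps."""
    s = np.arange(1, n_max + 1, dtype=float)
    sv, sa = np.meshgrid(s, s, indexing="ij")            # rows: s_V, columns: s_A
    like_v = np.exp(-(x_v - sv) ** 2 / (2 * sigma_v ** 2))
    like_a = np.exp(-(x_a - sa) ** 2 / (2 * sigma_a ** 2))
    z_v = (sv - prior_mean) / prior_sd
    z_a = (sa - prior_mean) / prior_sd
    prior = np.exp(-(z_v ** 2 - 2 * prior_corr * z_v * z_a + z_a ** 2)
                   / (2 * (1 - prior_corr ** 2)))
    post = like_v * like_a * prior
    return post / post.sum()          # the sum plays the role of P(x_V, x_A)

# One flash paired with two beeps: because audition is the more reliable
# modality here and the prior favors matching numbers, the posterior
# concentrates on s_V = 2 and s_A = 2, i.e., the fission (sound-induced
# flash) illusion.
post = numerosity_posterior(x_v=1.0, x_a=2.0)
print(post.round(3))
print(post.sum(axis=1).round(3))      # marginal over the visual source s_V
```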
12.8 HIERARCHICAL BAYESIAN CAUSAL INFERENCE MODEL The model described above can account for the entire range of interactions. However, it does not directly make predictions about the perceived causal structure. In order to be able to make predictions about the perceived causal structure, one needs a hierarchical model in which there is a variable (variable C in Figure 12.3d) that chooses between the different causal structures. We describe this model in the context of the spatial localization task as an example. In this model, the probability of a common cause (i.e., C = 1) is simply computed using Bayes rule as follows:
$$p(C=1 \mid x_V, x_A) = \frac{p(x_V, x_A \mid C=1)\,p(C=1)}{p(x_V, x_A)} \qquad (12.3)$$
According to this rule, the probability of a common cause is simply a product of two factors. The left term in the numerator—the likelihood that the two sensory signals occur if there is a common cause—is a function of how similar the two sensory signals are. The more dissimilar the two signals, the lower this probability will be. The right term in the numerator is the a priori expectation of a common cause, and is a function of prior experience (how often two signals are caused by the same source in general). The denominator again is a normalization factor. Given this probability of a common cause, the location of the auditory and visual stimulus can now be computed as follows:
$$\hat{s} = p(C=1 \mid x_V, x_A)\,\hat{s}_{C=1} + p(C=2 \mid x_V, x_A)\,\hat{s}_{C=2} \qquad (12.4)$$
where ŝ denotes the overall estimate of the location of sound (or visual stimulus), and ŝC=1 and ŝC=2 denote the optimal estimates of location for the scenario of a common cause or the scenario of independent causes, respectively. The inference rule is interesting because it is a weighted average of two optimal estimates, and it is nonlinear in xA and xV. What does this inference rule mean? Let us focus on auditory estimation of location for example, and assume Gaussian functions for prior and likelihood functions over space. If the task of the observer is to judge the location of sound, then if the observer knows for certain that the auditory and visual stimuli were caused by two independent sources (e.g., a puppeteer talking and a puppet moving), then the optimal estimate of the location of sound would be entirely based on the auditory
information and the prior: $\hat{s}_{A,C=2} = \dfrac{x_A/\sigma_A^2 + x_P/\sigma_P^2}{1/\sigma_A^2 + 1/\sigma_P^2}$, where σA and σP are the standard deviations of the auditory likelihood and the prior, respectively. On the other hand, if the observer knows for certain that the auditory and visual stimuli were caused by the same object (e.g., a puppet talking and moving), then the optimal estimate of the location of sound would take visual information into account: $\hat{s}_{A,C=1} = \dfrac{x_V/\sigma_V^2 + x_A/\sigma_A^2 + x_P/\sigma_P^2}{1/\sigma_V^2 + 1/\sigma_A^2 + 1/\sigma_P^2}$. In nature, the observer is hardly ever certain about the causal structure of the events in the environment, and in fact, it is the job of the nervous system to solve that problem. Therefore, in general, the nervous system would have to take both of these possibilities into account, thus, the overall optimal estimate of the location of sound happens to be a weighted average of the two optimal estimates each weighted by their respective probabilities as in Equation 12.3. It can now be understood how partial integration could result from this optimal scheme of multisensory perception. It should be noted that Equation 12.4 is derived assuming a mean squared error cost function. This is a common assumption, and roughly speaking, it means that the nervous system tries to minimize the average magnitude of error. The mean squared error function is minimized if the mean of the posterior distribution is selected as the estimate. The estimate shown in Equation 12.4 corresponds to the mean of the posterior distribution, and as it is a weighted average of the estimates of the two causal structures (i.e., ŝA,C=2 and ŝA,C=1), it is referred to as “model averaging.” If, on the other hand, the goal of the perceptual system is to minimize the number of times that an error is made, then the maximum of the posterior distribution would be the optimal estimate. In this scenario, the overall estimate of location would be the estimate corresponding to the causal structure with the higher probability, and thus, this strategy is referred to as “model selection.” Although the model averaging strategy of Equation 12.4 provides estimates that are never entirely consistent with either one of the two possible scenarios (i.e., with what occurs in the environment), this strategy does minimize the magnitude of error on average (the mean squared error) more than any other strategy, and therefore, it is optimal given the cost function.
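The following minimal sketch puts Equations 12.3 and 12.4 together for the spatial localization example, assuming Gaussian likelihoods and a zero-centered Gaussian prior over space, in which case the marginal likelihoods have standard closed forms. All parameter values are illustrative placeholders rather than values fitted to any data set discussed in this chapter.

```python
import numpy as np

def gauss(x, mu, var):
    return np.exp(-(x - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

def causal_inference(x_v, x_a, sig_v=2.0, sig_a=8.0, sig_p=15.0, p_common=0.5):
    """Bayesian causal inference for auditory-visual localization.

    Returns p(C=1 | x_V, x_A) as in Equation 12.3 and the model-averaged
    auditory and visual location estimates as in Equation 12.4."""
    var_v, var_a, var_p = sig_v ** 2, sig_a ** 2, sig_p ** 2

    # Likelihood of the two signals under a common cause (C=1): the single
    # source location is integrated out analytically.
    var_common = var_v * var_a + var_v * var_p + var_a * var_p
    p_x_c1 = (np.exp(-((x_v - x_a) ** 2 * var_p + x_v ** 2 * var_a
                       + x_a ** 2 * var_v) / (2 * var_common))
              / (2 * np.pi * np.sqrt(var_common)))

    # Likelihood under independent causes (C=2): each signal is explained by
    # its own source, integrated out separately.
    p_x_c2 = gauss(x_v, 0.0, var_v + var_p) * gauss(x_a, 0.0, var_a + var_p)

    # Equation 12.3: posterior probability of a common cause.
    post_c1 = p_x_c1 * p_common / (p_x_c1 * p_common + p_x_c2 * (1 - p_common))

    # Optimal estimates under each causal structure (precision-weighted means,
    # prior centered at straight ahead, 0 deg).
    s_av_c1 = (x_v / var_v + x_a / var_a) / (1 / var_v + 1 / var_a + 1 / var_p)
    s_a_c2 = (x_a / var_a) / (1 / var_a + 1 / var_p)
    s_v_c2 = (x_v / var_v) / (1 / var_v + 1 / var_p)

    # Equation 12.4: model averaging.
    s_a_hat = post_c1 * s_av_c1 + (1 - post_c1) * s_a_c2
    s_v_hat = post_c1 * s_av_c1 + (1 - post_c1) * s_v_c2
    return post_c1, s_a_hat, s_v_hat

# A moderate conflict (flash at -5 deg, noise burst at +5 deg) yields partial
# integration: the auditory estimate is pulled toward the flash by an amount
# that scales with the posterior probability of a common cause.
print(causal_inference(x_v=-5.0, x_a=5.0))
```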
12.9 RELATIONSHIP WITH NONHIERARCHICAL CAUSAL INFERENCE MODEL The hierarchical causal inference model of Equation 12.3 can be thought of as a special form of the nonhierarchical causal inference model of Equation 12.1. By integrating out the hidden variable C, the hierarchical model can be recast as $p(s_A, s_V \mid x_A, x_V) = \frac{p(x_A \mid s_A)\,p(x_V \mid s_V)\,p(s_A, s_V)}{p(x_A, x_V)}$, where $p(s_A, s_V) = p(C{=}1)\,p(s) + p(C{=}2)\,p(s_A)\,p(s_V)$. In other words, the hierarchical model is a special form of the nonhierarchical model in which the joint prior is a mixture of two priors, a prior corresponding to the independent sources, and a prior corresponding to common cause. The main advantage of the hierarchical model over the nonhierarchical model is that it performs causal inference explicitly and allows making direct predictions about perceived causal structure (C).
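This equivalence can be checked numerically on a discrete grid: construct the mixture joint prior, plug it into the nonhierarchical rule of Equation 12.1, and read off the posterior probability of a common cause. The grid, likelihood widths, and p(C = 1) below are arbitrary illustrative choices (the same placeholders as in the sketch above), not fitted values.

```python
import numpy as np

s = np.linspace(-40, 40, 161)          # candidate source locations (deg), step 0.5
ds = s[1] - s[0]
sig_v, sig_a, sig_p, p_c1 = 2.0, 8.0, 15.0, 0.5

def gauss(x, mu, sig):
    return np.exp(-(x - mu) ** 2 / (2 * sig ** 2)) / (np.sqrt(2 * np.pi) * sig)

# Mixture joint prior over (s_A, s_V): a diagonal component for the common-cause
# structure and a factorized component for independent causes.
p_s = gauss(s, 0.0, sig_p)
common = np.diag(p_s / ds)             # all prior mass on s_A == s_V
independent = np.outer(p_s, p_s)       # p(s_A) * p(s_V)
joint_prior = p_c1 * common + (1 - p_c1) * independent

# Nonhierarchical rule (Equation 12.1) applied with the mixture prior.
x_v, x_a = -5.0, 5.0
likelihood = np.outer(gauss(s, x_a, sig_a), gauss(s, x_v, sig_v))  # rows s_A, cols s_V
posterior = likelihood * joint_prior
posterior /= posterior.sum()

# The share of posterior mass contributed by the common-cause component is,
# up to discretization error, the p(C=1 | x_V, x_A) of the hierarchical
# Equation 12.3 for the same parameter values.
p_common_posterior = ((likelihood * p_c1 * common).sum()
                      / (likelihood * joint_prior).sum())
print(round(float(p_common_posterior), 3))

# Posterior mean of the auditory source; this matches the model-averaged
# estimate of Equation 12.4 (again up to discretization error).
print(float((posterior.sum(axis=1) * s).sum()))
```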
12.10 HIERARCHICAL CAUSAL INFERENCE MODEL VERSUS HUMAN DATA We tested whether the hierarchical causal inference model can account for human auditory–visual spatial localization (Körding et al. 2007). We modeled the likelihood and prior over space using Gaussian functions. We assumed that the likelihood functions are, on average, centered around the veridical location. We also assumed that there is a bias for the center (straight ahead) location. There were four free parameters that were fitted to the data: the prior probability of a common cause, the standard deviation of the visual likelihood (i.e., the visual sensory noise), the standard deviation of auditory likelihoods (i.e., the auditory sensory noise), and the standard deviation of the prior over
space (i.e., the strength of the bias for center). Because the width of the Gaussian prior over space is a free parameter, if there is no such bias for center position, the parameter will take on a large value, practically rendering this distribution uniform, and thus, the bias largely nonexistent. The model accounted for 97% of variance in human observer data (1225 data points) using only four free parameters (Körding et al. 2007). This is a remarkable fit, and as before, is not due to the degrees of freedom of the model, as the model cannot account for arbitrary data using the same number of free parameters. Also, if we set the value of the four parameters using some common sense values or the published data from other studies, and compare the data with the predictions of the model with no free parameters, we can still account for the data similarly well. We tested whether model averaging (Equation 12.4) or model selection (see above) explains the observers’ data better, and found that observers’ responses were highly more consistent with model averaging than model selection. In our spatial localization experiment, we did not ask participants to report their perceived causal structure on each trial. However, Wallace and colleagues did ask their subjects to report whether they perceive a unified source for the auditory and visual stimuli on each trial (Wallace et al. 2004). The hierarchical causal inference model can account for their published data; both for the data on judgments of unity, and the spatial localizations and interactions between the two modalities (Körding et al. 2007). We compared this model with other models of cue combination on the spatial localization data set. The causal inference model accounts for the data substantially better than the traditional forced fusion model of integration, and better than two recent models of integration that do not assume forced fusion (Körding et al. 2007). One of these models was a model developed by Bresciani et al. (2006) that assumes a Gaussian ridge distribution as the joint prior, and the other one was a model developed by Roach et al. (2006) that assumes the sum of a uniform distribution and a Gaussian ridge as the joint prior. We tested the hierarchical causal inference model on the numerosity judgment data described earlier. The model accounts for 86% of variance in the data (576 data points) using only four free parameters (Beierholm 2007). We also compared auditory–visual interactions and visual–visual interactions in the numerosity judgment task, and found that both cross-modal and within-modality interactions could be explained using the causal inference model, with the main difference between the two being in the a priori expectation of a common cause (i.e., Pcommon). The prior probability of a common cause for visual–visual condition was higher than that of the auditory–visual condition (Beierholm 2007). Hospedales and Vijayakumar (2009) have also recently shown that an adaptation of the causal inference model for an oddity detection task accounts well for both within-modality and cross-modal oddity detection of observers. Consistent with our results, they found the prior probability of a common cause to be higher for the within-modality task compared with the cross-modality task. 
In summary, we found that the causal inference model accounts well for two complementary sets of data (spatial localization and numerosity judgment), it accounts well for data collected by another group, it outperforms the traditional and other contemporary models of cue combination (on the tested data set), and it provides a unifying account of within-modality and cross-modality integration.
12.11 INDEPENDENCE OF PRIORS AND LIKELIHOODS These results altogether strongly suggest that human observers are Bayes-optimal in multisensory perceptual tasks. What does it exactly mean to be Bayes-optimal? The general understanding of Bayesian inference is that inference is based on two factors, likelihood and prior. Likelihood represents the sensory noise (in the environment or in the brain), whereas prior captures the statistics of the events in the environment, and therefore, the two quantities are independent of each other. Although this is the general interpretation of Bayesian inference, it is important to note that demonstrating that observers are Bayes-optimal under one condition does not necessarily imply that the
likelihoods and priors are independent of each other. It is quite possible that changing the likelihoods would result in a change in priors or vice versa. Given that we are able to estimate likelihoods and priors using the causal inference model, we can empirically investigate the question of independence of likelihoods and priors. Furthermore, it is possible that the Bayes-optimal performance is achieved without using Bayesian inference (Maloney and Mamassian 2009). For example, it has been described that an observer using a table-lookup mechanism can achieve near-optimal performance using reinforcement learning (Maloney and Mamassian 2009). Because the Bayes-optimal performance can be achieved by using different processes, it has been argued that comparing human observer performance with a Bayesian observer in one setting alone is not sufficient as evidence for Bayesian inference as a process model of human perception. For these reasons, Maloney and Mamassian (2009) have proposed transfer criteria as more powerful experimental tests of Bayesian decision theory as a process model of perception. The transfer criterion is to test whether a change in one component of the decision process (i.e., likelihood, prior, or decision rule) leaves the other components unchanged. The idea is that if the perceptual system indeed engages in Bayesian inference, a change in likelihoods, for example, would not affect the priors. However, if the system uses another process such as a table-lookup, then it would fail these kinds of transfer tests. We asked whether priors are independent of likelihoods (Beierholm et al. 2009). To address this question, we decided to induce a strong change in the likelihoods and examine whether this would lead to a change in priors. To induce a change in likelihoods, we manipulated the visual stimulus. We used the spatial localization task and tested participants under two visual conditions, one with a high-contrast visual stimulus (Gabor patch), and one with a low-contrast visual stimulus. The task, procedure, auditory stimulus, and all other variables were identical across the two conditions that were tested in two separate sessions. The two sessions were held 1 week apart, so that if the observers learned the statistics of the stimuli during the first session, the effect of this learning would disappear by the time of the second session. The change in visual contrast was drastic enough to cause the performance on visual-alone trials in the low-contrast condition to be lower than that in the high-contrast condition by as much as 41%. The performance on auditory-alone trials did not change significantly because the auditory stimuli were unchanged. The model accounts for both sets of data very well (R2 = .97 for high contrast, and R2 = .84 for the low-contrast session). Therefore, the performance of the participants appears to be Bayes-optimal in both the high-contrast and low-contrast conditions. Considering that the performances in the two sessions were drastically different (substantially worse in the low-contrast condition), and considering that the priors were estimated from the behavioral responses, there is no reason to believe that the priors in these two sessions would be equal (as they are derived from very different sets of data). Therefore, if the estimated priors do transpire to be equal between the two sessions, that would provide strong evidence for independence of priors from likelihoods.
If the priors are equal, then swapping them between the two sessions should not hurt the goodness of fit to the data. We tested this using the priors estimated from the low-contrast data to predict the high-contrast data, and the priors estimated from the high-contrast data to predict the low-contrast data. The results were surprising: the goodness of fit remained almost as good (R2 = .97 and R2 = .81) as when using priors from the same data set (Beierholm et al. 2009). Next, we directly compared the estimated parameters of the likelihood and prior functions for the two sessions. The model was fitted to each individual subject’s data, and the likelihood and prior parameters were estimated for each subject for each of the two sessions separately. Comparing the parameters across subjects (Figure 12.4) revealed a statistically significant (P < .0005) difference only for the visual likelihood (showing a higher degree of noise for the low-contrast condition). No other parameters (neither the auditory likelihood nor the two prior parameters) were statistically different between the two sessions. Despite a large difference between the two visual likelihoods (by >10 standard deviations), no change was detected in either the probability of a common cause or the prior over space. Therefore, these results suggest that priors are encoded independently of the likelihoods (Beierholm et al. 2009). These findings are consistent with the findings of a previous study showing that an experimentally induced change in a perceptual bias (prior) transfers qualitatively to other types of stimuli (Adams et al. 2004).
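A sketch of the logic of this swap test is given below, reusing the closed-form causal-inference predictions sketched in Section 12.8. All numbers (the per-session parameter estimates, the stimulus positions, and the "observed" responses) are invented placeholders intended only to show the bookkeeping of exchanging prior parameters between sessions while keeping the session-specific likelihoods.

```python
import numpy as np

def gauss(x, mu, var):
    return np.exp(-(x - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

def predicted_auditory_location(x_v, x_a, sig_v, sig_a, sig_p, p_c1):
    """Model-averaged auditory estimate (Equations 12.3 and 12.4, prior at 0 deg)."""
    vv, va, vp = sig_v ** 2, sig_a ** 2, sig_p ** 2
    vc = vv * va + vv * vp + va * vp
    lik_c1 = (np.exp(-((x_v - x_a) ** 2 * vp + x_v ** 2 * va + x_a ** 2 * vv) / (2 * vc))
              / (2 * np.pi * np.sqrt(vc)))
    lik_c2 = gauss(x_v, 0.0, vv + vp) * gauss(x_a, 0.0, va + vp)
    pc1 = lik_c1 * p_c1 / (lik_c1 * p_c1 + lik_c2 * (1 - p_c1))
    s_c1 = (x_v / vv + x_a / va) / (1 / vv + 1 / va + 1 / vp)
    s_c2 = (x_a / va) / (1 / va + 1 / vp)
    return pc1 * s_c1 + (1 - pc1) * s_c2

def r_squared(pred, data):
    """Proportion of variance in the observed responses explained by the model."""
    pred, data = np.asarray(pred), np.asarray(data)
    return 1.0 - np.sum((data - pred) ** 2) / np.sum((data - data.mean()) ** 2)

# Hypothetical per-session parameter estimates: the likelihood widths differ with
# contrast, while sigma_P and p_common are the prior parameters to be swapped.
high = dict(sig_v=1.5, sig_a=8.0, sig_p=15.0, p_c1=0.55)
low = dict(sig_v=6.0, sig_a=8.0, sig_p=14.0, p_c1=0.57)

# Transfer test: keep the low-contrast likelihoods but borrow the high-contrast priors.
low_with_high_priors = dict(low, sig_p=high["sig_p"], p_c1=high["p_c1"])

stimuli = [(-10, 10), (-5, 5), (0, 0), (5, -5), (10, -10)]   # (visual, auditory) deg
observed = [3.8, 0.2, 0.0, -0.2, -3.8]                        # invented mean responses
own = [predicted_auditory_location(v, a, **low) for v, a in stimuli]
swap = [predicted_auditory_location(v, a, **low_with_high_priors) for v, a in stimuli]
print(r_squared(own, observed), r_squared(swap, observed))
```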
[Figure 12.4 shows bar plots of the fitted parameters for the two sessions: σV, σA, and σP in degrees along azimuth (left axis) and Pcommon as a percentage (right axis), grouped into likelihoods and priors. Only the visual likelihood difference is marked as significant (***); all other comparisons are marked n.s.]
FIGURE 12.4 Mean prior and likelihood parameter values across participants in two experimental sessions differing only in contrast of visual stimulus. Black and gray denote values corresponding to session with high-contrast and low-contrast visual stimulus, respectively. Error bars correspond to standard error of mean. (From Beierholm, U. et al., J. Vis., 9, 1–9, 2009. With permission.)
12.12 CONCLUSIONS Together with a wealth of other accumulating findings, our behavioral findings suggest that cross-modal interactions are ubiquitous, strong, and robust in human perceptual processing. Even visual perception, which has traditionally been believed to be the dominant modality and highly self-contained, can be strongly and radically influenced by cross-modal stimulation. Our ERP, MEG, and fMRI findings consistently show that visual processing is affected by sound at the earliest levels of cortical processing, namely at V1. This modulation reflects a cross-modal integration phenomenon as opposed to attentional modulation. Therefore, multisensory integration can occur even at these early stages of sensory processing, in areas that have been traditionally held to be unisensory. Cross-modal interactions depend on a number of factors, namely the temporal, spatial, and structural consistency between the stimuli. Depending on the degree of consistency between the two stimuli, a spectrum of interactions may result, ranging from complete integration, to partial integration, to complete segregation. The entire range of cross-modal interactions can be explained by a Bayesian model of causal inference wherein the inferred causal structure of the events in the environment depends on the degree of consistency between the signals as well as the prior knowledge/bias about the causal structure. Indeed, given that humans are surrounded by multiple objects and hence multiple sources of sensory stimulation, the problem of causal inference is a fundamental problem at the core of perception. The nervous system appears to have implemented the optimal solution to this problem, as the perception of human observers appears to be Bayes-optimal in multiple tasks, and the Bayesian causal inference model of multisensory perception presented here can account in a unified and coherent fashion for an entire range of interactions in a multitude of tasks. Not only does the performance of observers appear to be Bayes-optimal in multiple tasks, but the priors also appear to be independent of likelihoods, consistent with the notion of priors encoding the statistics of objects and events in the environment independent of sensory representations.
REFERENCES Adams, W.J., E.W. Graf, and M.O. Ernst. 2004. Experience can change the ‘light-from-above’ prior. Nature Neuroscience, 7, 1057–1058. Alais, D., and D. Burr. 2004. The ventriloquist effect results from near-optimal bimodal integration. Current Biology, 14, 257–62.
Angelaki, D.E., Y. Gu, and G.C. Deangelis. 2009. Multisensory integration: Psychophysics, neurophysiology, and computation. Current Opinion in Neurobiology, 19, 452–458.
Beierholm, U. 2007. Bayesian modeling of sensory cue combinations. PhD Thesis, California Institute of Technology.
Beierholm, U., S. Quartz, and L. Shams. 2009. Bayesian priors are encoded independently of likelihoods in human multisensory perception. Journal of Vision, 9, 1–9.
Bhattacharya, J., L. Shams, and S. Shimojo. 2002. Sound-induced illusory flash perception: Role of gamma band responses. Neuroreport, 13, 1727–1730.
Bresciani, J.P., F. Dammeier, and M.O. Ernst. 2006. Vision and touch are automatically integrated for the perception of sequences of events. Journal of Vision, 6, 554–564.
Calvert, G., P.C. Hansen, S.D. Iversen, and M.J. Brammer. 2001. Detection of audio-visual integration sites in humans by application of electrophysiological criteria to the BOLD effect. NeuroImage, 14, 427–438.
Clavagnier, S., A. Falchier, and H. Kennedy. 2004. Long-distance feedback projections to area V1: Implications for multisensory integration, spatial awareness, and visual consciousness. Cognitive Affective Behavioral Neuroscience, 4, 117–126.
Diederich, A., and H. Colonius. 2007. Modeling spatial effects in visual-tactile saccadic reaction time. Perception & Psychophysics, 69, 56–67.
Ernst, M.O., and M.S. Banks. 2002. Humans integrate visual and haptic information in a statistically optimal fashion. Nature, 415, 429–433.
Falchier, A., S. Clavagnier, P. Barone, and H. Kennedy. 2002. Anatomical evidence of multimodal integration in primate striate cortex. Journal of Neuroscience, 22, 5749–5759.
Gebhard, J.W., and G.H. Mowbray. 1959. On discriminating the rate of visual flicker and auditory flutter. American Journal of Psychology, 72, 521–528.
Ghahramani, Z. 1995. Computation and psychophysics of sensorimotor integration. Ph.D. Thesis, Massachusetts Institute of Technology.
Ghazanfar, A., and C.E. Schroeder. 2006. Is neocortex essentially multisensory? Trends in Cognitive Sciences, 10, 278–285.
Hackett, T.A., J.F. Smiley, I. Ulbert, G. Karmos, P. Lakatos, L.A. De La Mothe, and C.E. Schroeder. 2007. Sources of somatosensory input to the caudal belt areas of auditory cortex. Perception, 36, 1419–1430.
Hospedales, T., and S. Vijayakumar. 2009. Multisensory oddity detection as Bayesian inference. PLoS ONE, 4, e4205.
Howard, I.P., and W.B. Templeton. 1966. Human Spatial Orientation. London: Wiley.
Körding, K., U. Beierholm, W.J. Ma, J.M. Tenenbaum, S. Quartz, and L. Shams. 2007. Causal inference in multisensory perception. PLoS ONE, 2, e943.
Landy, M.S., L.T. Maloney, E.B. Johnston, and M. Young. 1995. Measurement and modeling of depth cue combination: In defense of weak fusion. Vision Research, 35, 389–412.
Maloney, L.T., and P. Mamassian. 2009. Bayesian decision theory as a model of human visual perception: Testing Bayesian transfer. Visual Neuroscience, 26, 147–155.
McDonald, J.J., W.A. Teder-Sälejärvi, and S.A. Hillyard. 2000. Involuntary orienting to sound improves visual perception. Nature, 407, 906–908.
McGurk, H., and J. MacDonald. 1976. Hearing lips and seeing voices. Nature, 264, 746–748.
Pascual-Leone, A., and R. Hamilton. 2001. The metamodal organization of the brain. Progress in Brain Research, 134, 427–445.
Roach, N., J. Heron, and P. McGraw. 2006. Resolving multisensory conflict: A strategy for balancing the costs and benefits of audio-visual integration. Proceedings of the Royal Society B: Biological Sciences, 273, 2159–2168.
Rock, I., and J. Victor. 1964. Vision and touch: An experimentally created conflict between the two senses. Science, 143, 594–596.
Rockland, K.S., and H. Ojima. 2003. Multisensory convergence in calcarine visual areas in macaque monkey. International Journal of Psychophysiology, 50, 19–26.
Ronsse, R., C. Miall, and S.P. Swinnen. 2009. Multisensory integration in dynamical behaviors: Maximum likelihood estimation across bimanual skill learning. Journal of Neuroscience, 29, 8419–8428.
Scheier, C.R., R. Nijhawan, and S. Shimojo. 1999. Sound alters visual temporal resolution. Investigative Ophthalmology and Visual Science, 40, S4169.
Sekuler, R., A.B. Sekuler, and R. Lau. 1997. Sound alters visual motion perception. Nature, 385, 308.
Shams, L., Y. Kamitani, and S. Shimojo. 2000. What you see is what you hear. Nature, 408, 788.
Shams, L., Y. Kamitani, S. Thompson, and S. Shimojo. 2001. Sound alters visual evoked potentials in humans. Neuroreport, 12, 3849–3852.
Shams, L., Y. Kamitani, and S. Shimojo. 2002. Visual illusion induced by sound. Cognitive Brain Research, 14, 147–152.
Shams, L., S. Iwaki, A. Chawla, and J. Bhattacharya. 2005a. Early modulation of visual cortex by sound: An MEG study. Neuroscience Letters, 378, 76–81.
Shams, L., W.J. Ma, and U. Beierholm. 2005b. Sound-induced flash illusion as an optimal percept. Neuroreport, 16, 1923–1927.
Spence, C., and J. Driver. 1997. Audiovisual links in exogenous covert spatial orienting. Perception & Psychophysics, 59, 1–22.
Spence, C., M.E. Nicholls, N. Gillespie, and J. Driver. 1998. Cross-modal links in exogenous covert spatial orienting between touch, audition, and vision. Perception & Psychophysics, 60, 544–557.
Stein, B.E., N. London, L.K. Wilkinson, and D.D. Price. 1996. Enhancement of perceived visual intensity by auditory stimuli: A psychophysical analysis. Journal of Cognitive Neuroscience, 8, 497–506.
Van Beers, R.J., A.C. Sittig, and J.J. Denier van der Gon. 1999. Integration of proprioceptive and visual position information: An experimentally supported model. Journal of Neurophysiology, 81, 1355–1364.
Violentyev, A., S. Shimojo, and L. Shams. 2005. Touch-induced visual illusion. Neuroreport, 16, 1107–1110.
Walker, J.T., and K.J. Scott. 1981. Auditory–visual conflicts in the perceived duration of lights, tones, and gaps. Journal of Experimental Psychology: Human Perception and Performance, 7, 1327–1339.
Wallace, M.T., G.H. Roberson, W.D. Hairston, B.E. Stein, J.W. Vaughan, and J.A. Schirillo. 2004. Unifying multisensory signals across time and space. Experimental Brain Research, 158, 252–258.
Watkins, S., L. Shams, S. Tanaka, J.-D. Haynes, and G. Rees. 2006. Sound alters activity in human V1 in association with illusory visual perception. NeuroImage, 31, 1247–1256.
Watkins, S., L. Shams, O. Josephs, and G. Rees. 2007. Activity in human V1 follows multisensory perception. NeuroImage, 37, 572–578.
Wozny, D.R., U.R. Beierholm, and L. Shams. 2008. Human trimodal perception follows optimal statistical inference. Journal of Vision, 8, 1–11.
Yuille, A.L., and H.H. Bülthoff. 1996. Bayesian decision theory and psychophysics. In Perception as Bayesian Inference, ed. D.C. Knill and W. Richards. Cambridge, UK: Cambridge Univ. Press.
13 Characterization of Multisensory Integration with fMRI: Experimental Design, Statistical Analysis, and Interpretation
Uta Noppeney
CONTENTS
13.1 Functional Specialization: Mass-Univariate Statistical Approaches
13.1.1 Conjunction Analyses
13.1.2 Max and Mean Criteria
13.1.3 Interaction Approaches
13.1.3.1 Classical Interaction Design: 2 × 2 Factorial Design Manipulating Presence versus Absence of Sensory Inputs
13.1.3.2 Interaction Design: 2 × 2 Factorial Design Manipulating Informativeness or Reliability of Sensory Inputs
13.1.3.3 Elaborate Interaction Design: m × n Factorial Design (i.e., More than Two Levels)
13.1.3.4 Interaction Analyses Constrained by Maximum Likelihood Estimation Model
13.1.3.5 Combining Interaction Analyses with Max Criterion
13.1.4 Congruency Manipulations
13.1.5 fMRI Adaptation (or Repetition Suppression)
13.2 Multisensory Representations: Multivariate Decoding and Pattern Classifier Analyses
13.3 Functional Integration: Effective Connectivity Analyses
13.3.1 Data-Driven Effective Connectivity Analysis: Psychophysiological Interactions and Granger Causality
13.3.2 Hypothesis-Driven Effective Connectivity Analysis: Dynamic Causal Modeling
13.4 Conclusions and Future Directions
Acknowledgments
References
This chapter reviews the potential and limitations of functional magnetic resonance imaging (fMRI) in characterizing the neural processes underlying multisensory integration. The neural basis of multisensory integration can be characterized from two distinct perspectives. From the perspective of functional specialization, we aim to identify regions where information from different senses converges and/or is integrated. From the perspective of functional integration, we investigate how information from multiple sensory regions is integrated via interactions among brain regions. Combining these two perspectives, this chapter discusses experimental design, analysis approaches, and interpretational limitations of fMRI results. The first section describes univariate statistical analyses of fMRI data and emphasizes the interpretational ambiguities of various statistical criteria that are commonly used for the identification of multisensory integration sites. The second section explores the potential and limitations of multivariate and pattern classifier approaches in multisensory integration. The third section introduces effective connectivity analyses that investigate how multisensory integration emerges from distinct interactions among brain regions. The complementary strengths of data-driven and hypothesis-driven effective connectivity analyses will be discussed. We conclude by emphasizing that the combined potentials of these various analysis approaches may help us to overcome or at least ameliorate the interpretational ambiguities associated with each analysis when applied in isolation.
13.1 FUNCTIONAL SPECIALIZATION: MASS-UNIVARIATE STATISTICAL APPROACHES
Mass-univariate statistical analyses are used to identify regions where information from multiple senses converges or is integrated. Over the past decade, mass-univariate analyses formed the mainstay of fMRI research in multisensory integration. In the following section, we will discuss the pros and cons of the various analyses and statistical criteria that have been applied in the fMRI literature.
13.1.1 Conjunction Analyses
Conjunction analyses explicitly test whether a voxel or an area responds to several unisensory inputs. For instance, a brain area is implicated in audiovisual convergence if it responds to both auditory and visual inputs presented in isolation. Conjunction analyses are well motivated by the neurophysiological findings that unisensory cortical domains are separated from one another by transitional multisensory zones (Wallace et al. 2004) and by the proposed patchy sensory organization of higher-order association cortices such as the superior temporal sulcus (STS; Seltzer et al. 1996; Beauchamp et al. 2004). Given the location of multisensory integration in transition zones between unisensory regions, it seems rational to infer multisensory properties from responsiveness to multiple unisensory inputs. However, whereas conjunction analyses can identify candidate multisensory regions that respond to inputs from multiple senses, even when presented alone (see Figure 13.1b), they cannot capture integration processes in which one unisensory (e.g., visual) input in itself does not elicit a significant response, but rather modulates the response elicited by another unisensory (e.g., auditory) input (see Figure 13.1c). In fact, at the single neuron level, recent neurophysiological studies have demonstrated that these sorts of modulatory multisensory interactions seem to be a rather common phenomenon in both higher level regions such as STS (Barraclough et al. 2005; Avillac et al. 2007) and particularly in low level, putatively unisensory regions (Allman et al. 2009; Meredith and Allman 2009; Dehner et al. 2004; Kayser et al. 2008). Conjunction approaches are blind to these modulatory interactions that can instead be revealed by interaction analyses (see below). Even though, based on neurophysiological results, regions that respond to multiple unisensory inputs are likely to be involved in multisensory integration, conjunction analyses cannot formally dissociate (1) genuine multisensory integration from (2) regional convergence with independent sensory neuronal populations.
FIGURE 13.1 Conjunction design and analysis. (a) Experimental design. (1) Auditory: environmental sounds; (2) visual: pictures or video clips. Example stimuli are presented as visual images and corresponding sound spectrograms. (b and c) Data analysis and interpretation. (b) A region responding to auditory “and” visual inputs when presented in isolation is identified as multisensory in a conjunction analysis. (c) A region responding only to auditory but not visual inputs is identified as unisensory in a conjunction analysis. Therefore, conjunction analyses cannot capture modulatory interactions in which one sensory (e.g., visual) input in itself does not elicit a response, but significantly modulates response of another sensory input (e.g., auditory). Bar graphs represent effect for auditory (black) and visual (darker gray) stimuli, and “multisensory” (lighter gray) effect as defined by a conjunction.
(1) In the case of true multisensory integration, multisensory neurons would respond to unisensory inputs from multiple sensory modalities (e.g., AV neurons to A inputs and V inputs). (2) In the case of pure regional convergence, the blood oxygen level-dependent (BOLD) response is generated by independent populations of either auditory neurons or visual neurons (e.g., A neurons to A and V neurons to V inputs). Given the low spatial resolution of fMRI, both cases produce a “conjunction” BOLD response profile, i.e., regional activation that is elicited by unisensory inputs from multiple senses. Hence, conjunction analyses cannot unambiguously identify multisensory integration.
From a statistical perspective, it is important to note that the term “conjunction analysis” has previously been used to refer to two distinct classes of statistical tests, later termed (1) “global null conjunction analysis” (Friston et al. 1999, 2005) and (2) “conjunction null conjunction analysis” (Nichols et al. 2005). (1) A global null conjunction analysis generalizes the one-sided t-test to multiple dimensions (i.e., comparable to an F-test, but unidirectional) and enables inferences about k or more effects being present. Previous analyses based on minimum statistics have typically used the null hypothesis that k = 0 and hence tested whether one or more effects were present. In the context of multisensory integration, this sort of global null conjunction analysis tests whether “at least one” unisensory input significantly activates a particular region or voxel (with all unisensory inputs eliciting an effect greater than a particular minimum t value). (2) The more stringent conjunction null conjunction analysis (implemented in most software packages) explicitly tests whether a region is significantly activated by both classes of unisensory inputs. Hence, a conjunction null conjunction analysis forms a logical “and” operation of the two statistical comparisons. This second type of inference, i.e., a logical “and” operation, is needed when identifying multisensory convergence with
the help of conjunction analyses. Nevertheless, because conjunction analyses were used primarily in the early stages of fMRI multisensory research, when this distinction was not yet clearly drawn, most of the previous research is actually based on the more liberal and, in this context, inappropriate global null conjunction analysis. For instance, initial studies identified integration sites of motion information by performing a global null conjunction analysis on motion effects in the visual, tactile, and auditory domains (Bremmer et al. 2001). Future studies are advised to use the more stringent conjunction null conjunction approach to identify regional multisensory convergence.
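To make the distinction concrete, the sketch below (Python with NumPy/SciPy; the t-values, degrees of freedom, and threshold are illustrative assumptions rather than values from any particular study) contrasts conjunction-null and global-null inference on per-voxel t-statistics for the two unisensory contrasts.

    import numpy as np
    from scipy import stats

    # Illustrative per-voxel t-values for the contrasts A - fixation and V - fixation.
    t_A = np.array([4.2, 1.9, 3.5, 0.3])
    t_V = np.array([3.8, 3.1, 1.2, 0.2])

    df = 100                                 # assumed error degrees of freedom
    t_crit = stats.t.ppf(1 - 0.001, df)      # uncorrected height threshold (p < .001)
    t_min = np.minimum(t_A, t_V)             # minimum statistic across the two contrasts

    # Conjunction-null inference: both contrasts must individually exceed the
    # single-contrast threshold, i.e., a logical "and" of the two comparisons.
    conjunction_null = t_min > t_crit

    # Global-null inference: tests "at least one effect present"; under the global
    # null the p-value of the minimum of k independent t-statistics is the product
    # of the single-contrast p-values (sketch for k = 2, assuming independence).
    k = 2
    p_global = (1 - stats.t.cdf(t_min, df)) ** k
    global_null = p_global < 0.001

    print(conjunction_null)   # voxels surviving the logical "and"
    print(global_null)        # voxels surviving the more liberal global-null test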
13.1.2 Max and Mean Criteria
Whereas conjunction analyses look for commonalities in activations to unisensory inputs from multiple sensory modalities, fMRI studies based on the max criterion include unisensory and multisensory stimulation conditions. For statistical inference, the BOLD response evoked by a bisensory input is compared to the maximal BOLD response elicited by either of the two unisensory inputs. This max criterion is related, yet not identical, to the multisensory enhancement used in neurophysiological studies. fMRI studies quantify the absolute multisensory enhancement (e.g., AV – max(A,V); van Atteveldt et al. 2004). Neurophysiological studies usually evaluate the relative multisensory enhancement, i.e., the multisensory enhancement standardized by the maximal unisensory response, e.g., (AV – max(A,V))/max(A,V) (Stein and Meredith 1993; Stein et al. 2009). Despite the similarities in criterion, the interpretations and conclusions that can be drawn from neurophysiological and fMRI results differ. Although in neurophysiology multisensory enhancement or depression of activity in single neurons unambiguously indicates multisensory integration, multisensory BOLD enhancement does not compellingly prove multisensory integration within a region. For instance, if a region contains independent visual and auditory neuronal populations, the response to an audiovisual stimulus should be equal to the sum of the auditory and visual responses and, hence, exceed the maximal unisensory response (Calvert et al. 2001). Hence, like the conjunction analysis, the max criterion cannot dissociate genuine multisensory integration from regional convergence with independent unisensory populations. Nevertheless, it may be useful to further characterize the response profile of multisensory regions identified in interaction analyses using the max criterion (see Section 13.1.3.5).
In addition to the max criterion, some researchers have proposed or used a mean criterion, i.e., the response to the bisensory input should be greater than the mean response to the two unisensory inputs when presented in isolation (Beauchamp 2005). However, even in true unisensory (e.g., visual) regions, responses to audiovisual stimuli (equal to the visual response) are greater than the mean of the auditory and visual responses (equal to ½ the visual response). Hence, the mean criterion does not seem to be theoretically warranted and will therefore not be discussed further (Figure 13.2).
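As a minimal numerical illustration of the two criteria (plain Python; the condition estimates stand in for baseline-corrected GLM parameter estimates and are made up), the mean criterion is satisfied by a purely visual region, whereas the max criterion is not:

    # Illustrative condition estimates for one voxel/region, relative to fixation.
    A, V, AV = 0.0, 0.4, 0.4          # a purely visual region: AV equals V

    mse_absolute = AV - max(A, V)                # absolute enhancement (fMRI convention)
    mse_relative = (AV - max(A, V)) / max(A, V)  # relative enhancement (neurophysiology)

    passes_max  = AV > max(A, V)       # False: AV does not exceed the best unisensory response
    passes_mean = AV > (A + V) / 2.0   # True: AV exceeds the mean unisensory response

    print(mse_absolute, mse_relative, passes_max, passes_mean)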
13.1.3 Interaction Approaches
As demonstrated in the discussion of the conjunction and max criterion approaches, the limited spatial resolution of the BOLD response precludes dissociation of true multisensory integration from regional convergence—when the bisensory response is equal to the sum of the two unisensory responses. Given this fundamental problem of independent unisensory neuronal populations within a particular region, more stringent methodological approaches have therefore posed response additivity as the null hypothesis and identified multisensory integration through response nonlinearities, i.e., the interaction between, for example, visual and auditory inputs (Calvert et al. 2001; Calvert 2001).
13.1.3.1 Classical Interaction Design: 2 × 2 Factorial Design Manipulating Presence versus Absence of Sensory Inputs
In a 2 × 2 factorial design, multisensory integration is classically identified through the interaction between presence and absence of input from two sensory modalities, e.g., (A – fixation) ≠ (AV – V).
FIGURE 13.2 Max and mean criteria. (a) Experimental design. (1) Auditory: environmental sounds; (2) visual: pictures or video clips; (3) audiovisual: sounds + concurrent pictures. Example stimuli are presented as visual images and corresponding sound spectrograms. (b–d) Data analysis and interpretation. (b) A region where the audiovisual response is equal to the sum of auditory and visual responses is identified as potentially multisensory. However, this activation profile could equally well emerge in a region with independent auditory and visual neuronal populations. (c and d) A “unisensory” region responding equally to auditory and audiovisual inputs but not to visual inputs is identified as unisensory by the max criterion (c), but as multisensory by the mean criterion (d). Bar graphs represent the effect for auditory (black), visual (darker gray), and audiovisual (lighter gray) stimuli, and the “multisensory” (gray) effect as defined by the max (multisensory enhancement) or mean criteria.
For example, the interaction approach investigates whether the response to an auditory stimulus depends on the presence versus the absence of a visual stimulus. To relate the interaction approach to the classical neurophysiological criterion of superadditivity, we can rewrite this formula as (AV – fixation) ≠ (A – fixation) + (V – fixation) ↔ (AV + fixation) ≠ (A + V). In other words, the response to the bisensory stimulus is different from the sum of the two unisensory stimuli when presented alone (with each stimulus-evoked response being normalized relative to, e.g., prestimulus baseline activity; Stanford et al. 2005; Perrault et al. 2005). A positive interaction identifies regions where the bisensory response exceeds the sum of the unisensory responses—hence referred to as
a superadditive response. Similarly, subadditive (and even suppressive) effects can be identified by negative interactions. Although previous fMRI research has largely ignored and discarded subadditive interactions for methodological reasons (Beauchamp 2005), recent neurophysiological studies have clearly revealed the relevance of different, i.e., superadditive and subadditive, interaction profiles for multisensory integration (Stanford et al. 2005; Laurienti et al. 2005; Stanford and Stein 2007; Sugihara et al. 2006; Avillac et al. 2007). This emphasizes the need to develop methodological approaches in fMRI that enable the interpretation of subadditive interactions. A BOLD response profile consistent with a significant superadditive or subadditive interaction cannot be attributed to the summation of independent auditory and visual responses within a region and hence implicates a region in multisensory integration. Furthermore, in contradistinction to the conjunction analysis, the interaction approach does not require that a multisensory region respond to unisensory input from multiple sensory modalities. Therefore, it can also capture modulatory interactions in which auditory input modulates the processing of visual input even though the auditory input does not elicit a response when presented alone.
However, this classical interaction design gives rise to four major drawbacks. First, by definition, the interaction term can only identify nonlinear combinations of modality-specific inputs, leaving out additive multisensory integration effects that have been observed at the single-neuron level. Second, for the interaction term to be valid and unbiased, the use of “fixation” (the absence of auditory and visual information) precludes subjects from performing a task on the stimuli (Beauchamp 2005). This is because task-related activations are absent during the “fixation” condition, leading to an overestimation of the summed unisensory relative to the bisensory fMRI responses in the interaction term. Yet, even in the absence of a task, the interaction term may be unbalanced with respect to processes that are induced by stimuli but not during the fixation condition. For instance, stimulus-induced exogenous attention is likely to be enhanced for (A + V) relative to (AV + fixation). Third, subadditive interactions may arise from nonlinearities or ceiling effects not only in the neuronal but also in the BOLD response—rendering the interpretation ambiguous. Fourth, during the recognition of complex environmental stimuli such as speech, objects, or actions, multisensory interactions could emerge at multiple processing levels, ranging from the integration of low-level spatiotemporal information to higher-level object-related perceptual information. These different types of integration processes are all included in the statistical comparison (i.e., interaction) when a “fixation” condition is used (Werner and Noppeney 2010c). Hence, a selective dissociation of integration at multiple processing stages, such as spatiotemporal and object-related information, is not possible (Figure 13.3).
13.1.3.2 Interaction Design: 2 × 2 Factorial Design Manipulating Informativeness or Reliability of Sensory Inputs
Some of the drawbacks of the classical interaction design can, in part, be addressed in a 2 × 2 factorial design that manipulates (1) visual informativeness (intact = Vi, noise = Vn) and (2) auditory informativeness (intact = Ai, noise = An).
Even though the audiovisual noise stimulus does not provide visual or auditory object information, pure noise stimuli can be treated as a “degraded object stimulus” by subjects (Gosselin and Schyns 2003). Hence, in contrast to the classical interaction that manipulates the presence versus the absence of inputs, subjects can perform a task on the “noise” stimulus, rendering the interaction AiVi + AnVn ≠ AiVn + AnVi matched, at least to a certain degree, with respect to stimulus-evoked attention and response-selection processes. Obviously, conditions cannot be matched entirely with respect to task demands. However, performance differences in a multisensory integration study should generally not be considered a confound, but rather an interesting property of multisensory integration. Indeed, it is an important question how neural processes mediate multisensory benefits. Furthermore, as auditory and visual inputs are provided in all conditions, the audiovisual interaction focuses selectively on the integration of higher-order object features rather than low-level spatiotemporal information (Figure 13.4). Hence, this design is a first step toward dissociating multisensory integration at multiple processing stages (Werner and Noppeney 2010a).
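In practice, both interaction variants reduce to a contrast on the condition parameter estimates of a standard GLM. The following sketch (Python/NumPy; the condition ordering and the beta values are illustrative assumptions) spells out the classical contrast (AV + fixation) – (A + V) and the informativeness variant:

    import numpy as np

    # Classical 2 x 2 design: assumed condition order [A, V, AV, Fix].
    beta = np.array([0.45, 0.40, 0.70, 0.05])        # illustrative parameter estimates
    c_classical = np.array([-1.0, -1.0, 1.0, 1.0])   # (AV + Fix) - (A + V)
    msi = c_classical @ beta                         # > 0 superadditive, < 0 subadditive

    # Informativeness design: assumed condition order [AiVi, AiVn, AnVi, AnVn].
    beta_inf = np.array([0.80, 0.50, 0.45, 0.20])
    c_inf = np.array([1.0, -1.0, -1.0, 1.0])         # (AiVi + AnVn) - (AiVn + AnVi)
    msi_inf = c_inf @ beta_inf

    print(msi, msi_inf)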
FIGURE 13.3 Classical interaction design: 2 × 2 factorial design manipulating presence versus absence of sensory inputs. (a) Experimental design: 2 × 2 factorial design with the factors (1) auditory: present versus absent; (2) visual: present versus absent. Example stimuli are presented as visual images and corresponding sound spectrograms. (b–d) Data analysis and interpretation. Three activation profiles are illustrated. (b) Superadditive interaction as indexed by a positive MSI effect. (c) Subadditive interaction as indexed by a negative interaction term in context of audiovisual enhancement. (d) Subadditive interaction as indexed by a negative interaction term in context of audiovisual suppression. Please note that subadditive (yet not suppressive) interactions can also result from nonlinearities in BOLD response. Bar graphs represent effect for auditory (black), visual (darker gray), and audiovisual (lighter gray) stimuli, and “multisensory” (gray) effect as defined by audiovisual interaction (AV + Fix) – (A + V). To facilitate understanding, two additional bars are inserted indicating sums that enter into interaction, i.e., AV + Fix and A + V.
FIGURE 13.4 Interaction design: 2 × 2 factorial design manipulating reliability of sensory inputs. (a) Experimental design: 2 × 2 factorial design with the factors (1) auditory: reliable versus unreliable; (2) visual: reliable versus unreliable. Example stimuli are presented as visual images and corresponding sound spectrograms. Please note that manipulating stimulus reliability rather than presence evades the problem of the fixation condition. (b) Data analysis and interpretation. One activation profile is illustrated as an example: a superadditive interaction as indexed by a positive MSI effect.
13.1.3.3 Elaborate Interaction Design: m × n Factorial Design (i.e., More than Two Levels)
The drawbacks of the classical interaction design can be ameliorated further if the factorial design includes more than two levels. For instance, in a 3 × 3 factorial design, the auditory and visual modalities may each include three levels of sensory input: (1) sensory intact = Vi or Ai, (2) sensory degraded = Vd or Ad, or (3) sensory absent (Figure 13.5). This more elaborate interaction design enables the dissociation of audiovisual integration at multiple stages of information processing (Werner and Noppeney 2010b). The interaction approach can thus open up the potential for a fine-grained characterization of the neural processes underlying the integration of different types of audiovisual information. In addition to enabling the estimation of interactions, it also allows us to compare interactions across different levels. For instance, in a 3 × 3 factorial design, we can investigate whether an additive response combination for degraded stimuli turns into a subadditive response combination for intact stimuli by comparing the superadditivity for degraded stimuli with the superadditivity for intact stimuli (formally: AdVd + fixation – Vd – Ad > AiVi + fixation – Vi – Ai → AdVd – Vd – Ad > AiVi – Vi – Ai). Thus, an additive integration profile at one particular sensory input level becomes an interesting finding when it is statistically different from the integration profile (e.g., subadditive) at a different input level. In this way, the interaction approach that is initially predicated on response nonlinearities is rendered sensitive to additive combinations of unisensory responses. Testing for changes in superadditivity (or subadditivity) across different stimulus levels can also be used as a test for the principle of inverse effectiveness. According to the principle of inverse effectiveness, superadditivity is expected to decrease with stimulus efficacy as defined by, for instance, stimulus intensity or informativeness. A more superadditive or less subadditive integration profile would be expected for weak signal intensities (Stein and Stanford 2008).
FIGURE 13.5 “Elaborate” interaction design with more than two levels. (a) Experimental design: 3 × 3 factorial design with factors (1) auditory: (i) auditory intact = Ai, (ii) auditory degraded = Ad, and (iii) auditory absent = Aa; (2) visual: (i) visual intact = Vi, (ii) visual degraded = Vd, and (iii) visual absent = Va. Example stimuli are presented as visual images and corresponding sound spectrograms. (b–d) Data analysis and interpretation. This more elaborate design enables computation of (b) the interaction for intact stimuli (MSIi), (c) the interaction for degraded stimuli (MSId), and (d) the inverse effectiveness contrast, i.e., MSId – MSIi = (AdVd – Vd – Ad) – (AiVi – Vi – Ai), which does not depend on the fixation condition.
Finally, it should be emphasized that this more complex inverse effectiveness contrast does not depend on the “fixation” condition, as that condition is included on both sides of the inequality (and eliminated from the contrast). Thus, the inverse effectiveness contrast is an elegant way to circumvent the problems associated with the fixation condition mentioned above (Stevenson et al. 2009; Stevenson and James 2009; Werner and Noppeney 2010b; also, for a related approach in which audiovisual interactions are compared between intelligible and nonintelligible stimuli, see Lee and Noppeney 2010).
13.1.3.4 Interaction Analyses Constrained by Maximum Likelihood Estimation Model
A more elaborate interaction design also accommodates more sophisticated analyses developed from the maximum likelihood framework. Numerous psychophysics studies have shown that humans integrate information from multiple senses in a Bayes optimal fashion by forming a weighted average of the independent sensory estimates (maximum likelihood estimation, MLE; Ernst and Banks 2002; Knill and Saunders 2003). This multisensory percept is Bayes optimal in that it yields the most reliable percept (n.b., reliability is the inverse of variance). Combining fMRI and an elaborate interaction design, we can investigate the neural basis of Bayes optimal multisensory integration at the macroscopic scale as provided by the BOLD response. First, we can investigate whether regional activations are modulated by the relative reliabilities of the unisensory estimates as predicted by the MLE model. For instance, in visuo–tactile integration, we would expect the activation in the somatosensory cortex during visuo–tactile stimulation to increase when the reliability of the visual input is reduced and higher weight is attributed to the tactile input (Helbig et al. 2010). Second, we can investigate whether differential activations (i.e., bisensory–unisensory) in higher-order association cortices, for instance, reflect the increase in reliability during bisensory stimulation as predicted by the MLE model. This reliability increase for bisensory stimulation should be maximal when the reliabilities of the two unisensory inputs are equal. By cleverly manipulating the reliabilities of the two sensory inputs, we can thus independently test the two main MLE predictions within the same interaction paradigm: (1) the contributions of the sensory modalities to multisensory processing depend on the reliabilities of the unisensory estimates, and (2) the reliability of the multisensory estimate is greater than the reliability of each unisensory estimate.
13.1.3.5 Combining Interaction Analyses with Max Criterion
Interaction analyses can be used to refute the possibility of independent unisensory neuronal populations in a region. Nevertheless, a significant interaction is still open to many different functional interpretations. Further insights need to be gained from the activation profile of the unisensory and bisensory conditions that formed the interaction contrast. More formally, the activation profiles of superadditive and subadditive interactions can be further characterized according to the max criterion (for a related approach, see Avillac et al. 2007; Perrault et al. 2005; Werner and Noppeney 2010c). For instance, a subadditive interaction in which the audiovisual response is greater than the maximal unisensory response may simply reflect nonlinearities in the BOLD response (e.g., saturation effects) and needs to be interpreted with caution.
In contrast, a subadditive interaction in which the audiovisual response is smaller than the maximal unisensory response cannot easily be attributed to such nonlinearities in the BOLD response. Instead, suppressive interactions indicate that one sensory input modulates responses to the other sensory input (Sugihara et al. 2006). Finally, a subadditive interaction with equivalent responses for auditory, visual, and audiovisual conditions is most parsimoniously explained by amodal functional properties of a particular brain region. Rather than genuinely integrating inputs from multiple sensory modalities, an amodal region may be located further “upstream” and be involved in higher-order processing of already integrated inputs. For instance, in audiovisual speech integration, a region involved in amodal semantic processing may be equally activated via visual, auditory, or audiovisual inputs. These examples demonstrate that a significant interaction is not the end, but rather the starting point of analysis and interpretation. To reach conclusive interpretations, a careful characterization of the activation profile is required.
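The quantitative predictions of the MLE model referred to in Section 13.1.3.4 can be written down directly. The sketch below (Python; the unisensory estimates and variances are made-up numbers) computes the reliability-weighted fusion and the predicted bisensory variance, i.e., the two predictions that an elaborate interaction design can test:

    # Unisensory estimates and variances (illustrative values only).
    s_A, var_A = 10.0, 4.0      # auditory estimate and its variance
    s_V, var_V = 12.0, 1.0      # visual estimate and its variance

    # Prediction 1: cue weights are proportional to the cue reliabilities (1/variance).
    w_V = (1.0 / var_V) / (1.0 / var_A + 1.0 / var_V)
    w_A = 1.0 - w_V
    s_AV = w_A * s_A + w_V * s_V          # fused, Bayes optimal estimate

    # Prediction 2: the fused variance is smaller than either unisensory variance,
    # with the largest relative gain when the two reliabilities are equal.
    var_AV = (var_A * var_V) / (var_A + var_V)

    print(s_AV, var_AV)    # 11.6 and 0.8 for these illustrative numbers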
13.1.4 Congruency Manipulations
Congruency manipulations are based on the rationale that if a region distinguishes between congruent and incongruent component pairs, it needs to have access to both sensory inputs. Congruency manipulations can be used to focus selectively on different aspects of information integration. For instance, audiovisual stimuli can be rendered incongruent in terms of space (Fairhall and Macaluso 2009; Busse et al. 2005; Bonath et al. 2007), time (Noesselt et al. 2007; Lewis and Noppeney 2010), phonology (van Atteveldt et al. 2007a; Noppeney et al. 2008), or semantics (Doehrmann and Naumer 2008; Hein et al. 2007; Noppeney et al. 2008, 2010; Sadaghiani et al. 2009; Adam and Noppeney 2010). Thus, congruency manipulations seem to be ideal to dissociate multisensory integration at multiple processing stages. However, the interpretation of congruency results is impeded by the fact that incongruencies are usually artifactual and contradict natural environmental statistics. At the behavioral level, it is well known that multisensory integration breaks down and no unified multisensory percept is formed when the senses disagree. However, it is currently unknown how the human brain responds when it encounters discrepancies between the senses. Most of the previous fMRI research has adopted the view that integration processes are reduced for incongruent sensory inputs (Calvert et al. 2000; van Atteveldt et al. 2004; Doehrmann and Naumer 2008). Hence, comparing congruent to incongruent conditions was thought to reveal multisensory integration regions. However, the brain may also unsuccessfully attempt to integrate the discrepant sensory inputs. In this case, activations associated with multisensory integration may actually be enhanced for unfamiliar incongruent (rather than familiar congruent) sensory inputs. A similar argument has been put forward in the language processing domain, where activations associated with lexical retrieval were found to be enhanced for pseudowords relative to familiar words, even though pseudowords are supposedly not endowed with a semantic representation (Price et al. 1996). Finally, within the framework of predictive coding, the brain may act as a prediction device and generate a prediction error signal when presented with unpredictable incongruent sensory inputs. Again, in this case, increased activations would be expected for incongruent rather than congruent sensory inputs in brain areas that are involved in processing the specific stimulus attributes that define the incongruency (e.g., temporal, spatial, semantic, etc.). As fMRI activations are known to be very susceptible to top-down modulation and cognitive set, these inherent interpretational ambiguities limit the role of incongruency manipulations in the investigation of multisensory integration, particularly for fMRI (rather than neurophysiological) studies. In fact, a brief review of the literature seems to suggest that congruency effects strongly depend on the particular cognitive set and experimental paradigm. Under passive listening/viewing conditions, increased activations have been reported primarily for congruent relative to incongruent conditions (Calvert et al. 2000; van Atteveldt et al. 2004). In contrast, in selective attention paradigms, where subjects attend to one sensory modality and ignore sensory inputs from other modalities, the opposite pattern has been reported, i.e., increased activations are observed for incongruent relative to congruent inputs (Noppeney et al.
2008, 2010; Sadaghiani et al. 2009). Finally, when subjects perform a congruency judgment that requires access to and comparison of the two independent unisensory percepts, and hence precludes natural audiovisual integration, differences between congruent and incongruent stimulus pairs are attenuated (van Atteveldt et al. 2007b). This complex pattern of fMRI activations suggests that incongruency does not simply prevent the brain from integrating sensory inputs, but elicits a range of other cognitive effects and top-down modulations that need to be taken into account when interpreting fMRI results.
13.1.5 fMRI Adaptation (or Repetition Suppression)
fMRI adaptation (used here synonymously with repetition suppression) refers to the phenomenon that prior processing of stimuli (or stimulus attributes) decreases activation elicited by processing
subsequent stimuli with identical attributes. Repetition suppression has frequently been interpreted as the fMRI analogue of neuronal response suppression, i.e., a decrease in neuronal firing rate as recorded in nonhuman primates (Desimone 1996). Despite current uncertainties about its underlying neural mechanisms, fMRI repetition suppression has been widely used as a tool for dissociating and mapping the various stages of sensory and cognitive processing. These fMRI experiments are based on the rationale that the sensitivity of a brain region to variations in stimulus attributes determines the degree of repetition suppression: the more a brain region is engaged in processing and hence sensitive to a particular stimulus feature, the more it will adapt to stimuli that are identical with respect to this feature—even though they might vary with respect to other dimensions (Grill-Spector and Malach 2001; Grill-Spector et al. 2006). Repetition suppression can thus be used to define the response selectivity and invariance of neuronal populations within a region. Initial fMRI adaptation paradigms have used simple block designs, i.e., they presented alternating blocks of “same (adaptation)” versus “different (no adaptation)” stimuli. However, arrangement of the stimuli in blocks introduces a strong attentional confound that renders the interpretation of the adaptation effect difficult (even when attempts are made to maintain attention in a control task). More recent studies have therefore used randomized fMRI adaptation paradigms that reduce attentional top-down modulation at least to a certain degree. In addition to attentional confounds, task effects (e.g., response priming) need to be very tightly controlled in adaptation paradigms (for further discussion, see Henson and Rugg 2003; Henson 2003).
In the field of multisensory integration, fMRI adaptation may be used to identify “amodal” neural representations. Thus, despite the changes in sensory modality, a multisensory or amodal region should show fMRI adaptation when presented with identical stimuli in different sensory modalities. For instance, by presenting identical words successively in written and spoken format, this cross-modal adaptation effect was used to identify amodal or multisensory phonological representations (Noppeney et al. 2008; Hasson et al. 2007). fMRI adaptation paradigms may also be combined with the outlined interaction approach. Here, a 2 × 2 factorial design would manipulate the repetition of (1) visual and (2) auditory features. A region that integrates visual and auditory features is then expected to show an interaction between the auditory and visual repetition effects, e.g., increased visual adaptation if the auditory feature is also repeated (Tal and Amedi 2009). This experimental approach has recently been used to study form and motion integration within the visual domain (Sarkheil et al. 2008).
Most commonly, fMRI adaptation is used to provide insights into subvoxel neuronal representations. This motivation is based on the so-called fatigue model, which proposes that the fMRI adaptation effect is attributable to a “fatigue” (as indexed by decreased activity) of the neurons initially responding to a specific stimulus (Grill-Spector and Malach 2001). For instance, let us assume that a voxel contains populations of A and B neurons and responds equally to stimuli A and B, so that a standard paradigm would not be able to reveal selectivity for stimulus A.
Yet, repetitive presentation of stimulus A will only fatigue the A-responsive neurons. Therefore, subsequent presentation of stimulus B will lead to a rebound response of the “fresh” B neurons. Thus, it has been argued that fMRI adaptation can increase the spatial resolution to a subvoxel level. Along similar lines, fMRI adaptation could potentially be used to dissociate unisensory and multisensory neuronal populations. In the case of independent populations of visual and auditory neurons (no multisensory neurons), after adaptation to a specific visual stimulus, a rebound in activation should be observed when the same stimulus is presented in the auditory modality. This activation increase should be comparable to the rebound observed when presented with a new, unrelated stimulus. In contrast, if a region contains multisensory neurons, it will adapt when presented with the same stimulus irrespective of sensory modality. Thus, within the fatigue framework, fMRI adaptation may help us to dissociate unisensory and multisensory neuronal populations that evade standard analyses. However, it is likely that voxels containing visual and auditory neurons will also include audiovisual neurons. This mixture of multiple neuronal populations within a voxel may produce a more complex adaptation profile than illustrated in our toy example. Furthermore, given the diversity of multisensory enhancement and depression profiles for concurrently presented sensory inputs,
the adaptation profile for asynchronously presented inputs from multiple modalities is not yet well characterized—it may depend on several factors such as the temporal relationship, stimulus intensity, and a voxel’s responsiveness. Even in the “simple” unisensory case, the interpretation of fMRI adaptation results is impeded by our lack of understanding of the underlying neuronal mechanisms as well as the relationship between the decreased BOLD activation and neuronal response suppression (for review and discussion, see Henson and Rugg 2003; Henson 2003). In fact, multiple models and theories have been advanced to explain repetition suppression. (1) According to the fMRI adaptation approach (the “fatigue” model mentioned above), the number of neurons that are important for stimulus representation and processing remains constant, but these neurons show reductions in their firing rates for repeated stimuli (Grill-Spector and Malach 2001).
FIGURE 13.6 Cross-modal fMRI adaptation paradigm and BOLD predictions. Figure illustrates BOLD predictions for different stimulus pairs with (1) stimulus and/or (2) sensory modality being same or different for the two presentations. Please note that this simplistic toy example serves only to explain fundamental principles rather than characterizing the complexity of multisensory adaptation profiles (see text for further discussion). (a) Same stimulus, same sensory modality: decreased BOLD response is expected in unisensory, multisensory, and amodal areas. (b) Same stimulus, different sensory modality: decreased BOLD response is expected for higher-order “amodal” regions and not for unisensory regions. Given the complex interaction profiles for concurrently presented sensory inputs, prediction for multisensory regions is unclear. Different stimulus, same sensory modality (c) and different stimulus, different sensory modality (d). No fMRI adaptation is expected of unisensory, multisensory, or amodal regions.
(2) Repetition suppression has been attributed to a sharpening of the cortical stimulus representations, whereby neurons that are not essential for stimulus processing respond less for successive stimulus presentations (Wiggs and Martin 1998). (3) In neural network models, repetition suppression is thought to be mediated by synaptic changes that decrease the settling time of an attractor neural network (Becker et al. 1997; Stark and McClelland 2000). (4) Finally, hierarchical models of predictive coding have proposed that response suppression reflects reduced prediction error: as the brain learns to predict the stimulus attributes over successive exposures to identical stimuli, the firing rates of stimulus-evoked error units are suppressed by top-down predictions mediated by backward connections from higher-level cortical areas (Friston 2005).
The predictive coding model raises questions about the relationship between cross-modal congruency and adaptation effects. Both fMRI adaptation and congruency designs manipulate the “congruency” between two stimuli. The two approaches primarily differ in the (a)synchrony between the two sensory inputs. For instance, spoken words and the corresponding facial movements would be presented synchronously in a classical congruency paradigm and sequentially in an adaptation paradigm. The different latencies of the sensory inputs may induce distinct neural mechanisms for congruency and/or adaptation effects. Yet, events in the natural environment often produce temporal asynchronies between sensory signals. For instance, facial movements usually precede the auditory speech signal. Furthermore, the asynchrony between visual and auditory signals depends on the distance between signal source and observer because of the difference in the velocities of light and sound. Finally, the neural processing latencies for signals from different sensory modalities depend on the particular brain regions and stimuli, which will lead, in turn, to variations in the width and asymmetry of temporal integration windows as a function of stimulus and region. Collectively, the variability in latency and temporal integration window suggests a continuum between “synchronous” congruency effects and “asynchronous” adaptation effects that may rely on distinct and shared neural mechanisms (Figure 13.6).
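The fatigue-model predictions sketched in Figure 13.6 can be mimicked with a toy simulation (plain Python; the response values, the assumption that a voxel's signal is the sum of its populations' responses, and the fixed 50% fatigue are all illustrative assumptions). A voxel with independent auditory and visual populations shows a full cross-modal rebound, whereas a voxel with audiovisual neurons adapts across modalities; note that a downstream amodal region would behave like the latter, so this logic alone cannot separate the two.

    # Each population is described by the set of modalities it responds to;
    # a population responds at 1.0 when fresh and at 0.5 right after it has
    # responded to the same stimulus (toy fatigue assumption).
    unisensory_voxel   = [{"A"}, {"V"}]      # independent auditory and visual populations
    multisensory_voxel = [{"A", "V"}]        # a single audiovisual population

    def two_presentations(populations, first="V", second="A"):
        # Same stimulus shown twice: first visually, then auditorily.
        r1 = sum(1.0 for mods in populations if first in mods)
        r2 = sum(0.5 if first in mods else 1.0
                 for mods in populations if second in mods)
        return r1, r2

    print(two_presentations(unisensory_voxel))    # (1.0, 1.0): full rebound, no cross-modal adaptation
    print(two_presentations(multisensory_voxel))  # (1.0, 0.5): cross-modal adaptation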
13.2 MULTISENSORY REPRESENTATIONS: MULTIVARIATE DECODING AND PATTERN CLASSIFIER ANALYSES
All methodological approaches discussed thus far were predicated on encoding models using mass-univariate statistics. In other words, these approaches investigated how external variables or stimulus functions cause and are thus encoded by brain activations in a regionally specific fashion. This is a mass-univariate approach because a general linear model with the experimental variables as predictors is estimated independently for each voxel time course, followed by statistical inference (n.b., statistical dependencies are usually taken into account at the stage of statistical inference, using, e.g., Gaussian random field theory; Friston et al. 1995). Over the past 5 years, multivariate decoding models and pattern classifiers have progressively been used in functional imaging studies. In contrast to encoding models that infer a mapping from experimental variables to brain activations, these decoding models infer a mapping from brain activations to cognitive states. There are two main approaches: (1) canonical correlation analyses (and related models such as linear discriminant analyses, etc.) infer a mapping from data features (voxel activations) to cognitive states using classical multivariate statistics (based on Wilks’ lambda). Recently, an alternative Bayesian method, multivariate Bayesian decoding, has been proposed that uses a parametric empirical or hierarchical Bayesian model to infer the mapping from voxel activations to a target variable (Friston et al. 2008). (2) Pattern classifiers (e.g., using support vector machines) implicitly infer a mapping between voxel patterns and cognitive states via cross-validation schemes and classification performance on novel unlabeled feature vectors (voxel activation patterns). To this end, the data are split into two (or multiple) sets. In a cross-validation scheme, the classifier is trained on set 1 and its generalization performance is tested on set 2 (for a review, see Haynes and Rees 2006; Pereira et al. 2009). Linear classifiers are often used in functional imaging, as the voxel weights after training provide direct insights into the contribution of different voxels to the classification performance. Thus, even if the classifier is applied to the entire brain, the voxel weights
may indicate regional functional specialization. Furthermore, multivariate decoding approaches can also be applied locally (at each location in the brain) using searchlight procedures (Nandy and Cordes 2003; Kriegeskorte et al. 2006). Because multivariate decoding and pattern classifiers extract the discriminative signal from multiple voxels, they can be more sensitive than univariate encoding approaches and provide additional insights into the underlying distributed neural representations. By carefully designing training and test sets, pattern classifiers can also characterize the invariance of the neural representations within a region. Within the field of multisensory integration, future studies may, for instance, identify amodal representations by investigating whether a pattern classifier that is trained on visual stimuli generalizes to auditory stimuli. In addition, pattern classifiers trained on different categories of multisensory stimuli could be used to provide a more fine-grained account of multisensory representations in low-level, putatively unisensory and higher-order multisensory areas.
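As a concrete illustration of the cross-modal generalization logic (a sketch in Python assuming scikit-learn is available; the data arrays are simulated stand-ins for single-trial voxel patterns, and chance performance is expected here because they are pure noise), a linear classifier is trained on visual trials and tested on auditory trials of the same categories:

    import numpy as np
    from sklearn.svm import LinearSVC

    rng = np.random.default_rng(0)
    n_trials, n_voxels = 80, 200
    X_visual   = rng.normal(size=(n_trials, n_voxels))   # visual-trial patterns
    X_auditory = rng.normal(size=(n_trials, n_voxels))   # auditory-trial patterns
    y = np.repeat([0, 1], n_trials // 2)                  # object-category labels

    clf = LinearSVC(C=1.0, max_iter=10000)
    clf.fit(X_visual, y)                   # train within one sensory modality
    accuracy = clf.score(X_auditory, y)    # test generalization to the other modality

    # Above-chance cross-modal accuracy would point to a representation that is
    # at least partly shared across the senses; clf.coef_ gives the voxel weights.
    print(accuracy)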
13.3 FUNCTIONAL INTEGRATION: EFFECTIVE CONNECTIVITY ANALYSES
From the perspective of functional integration, effective connectivity analyses can be used to investigate how information from multiple senses is integrated via distinct interactions among brain regions. In contrast with functional connectivity analyses that simply characterize statistical dependencies between time series in different voxels or regions, effective connectivity analyses investigate the influence that one region exerts on another region. The aim of these analyses is to estimate and make inferences about the coupling among brain areas and how this coupling is influenced by experimental context (e.g., cognitive set, task). We will limit our discussion to approaches that have already been applied in the field of multisensory integration. From the experimenter’s perspective, the models are organized according to data-driven and hypothesis-driven approaches for effective connectivity, even though this is only one of many differences and possible classifications.
13.3.1 Data-Driven Effective Connectivity Analysis: Psychophysiological Interactions and Granger Causality
Early studies have used simple regression models to infer a context-dependent change in effective connectivity between brain regions. In psychophysiological interaction analyses, the activation time courses in each voxel within the brain are regressed on the time course in a particular seed voxel under two contexts (Friston et al. 1997). A change in coupling is inferred from a change in regression slopes under the two contexts. Based on a psychophysiological interaction analysis, for instance, visuo–tactile interactions in the lingual gyrus were suggested to be induced by increased connectivity from the parietal cortex (Macaluso et al. 2000). Similarly, a psychophysiological interaction analysis was used to demonstrate increased coupling between the left prefrontal cortex and the inferior temporal gyrus in blind, relative to sighted, subjects as a result of cross-modal plasticity (Noppeney et al. 2003). More recent approaches aim to infer directed connectivity based on Granger causality, that is, temporal precedence. A time series X is said to Granger cause Y if the history of X (i.e., the lagged values of X) provides statistically significant information about future values of Y, after taking into account the known history of Y. Inferences of Granger causality are based on multivariate autoregressive models or directed information transfer (a measure derived from mutual information; Roebroeck et al. 2005; Goebel et al. 2003; Harrison et al. 2003; Hinrichs et al. 2006). It is important to note that Granger causality does not necessarily imply true causality because a single underlying process may cause both signals X and Y, yet with different lags. Furthermore, temporal differences between regions in hemodynamic time series that result from variations in vascular architecture and hemodynamic response functions may be misinterpreted as causal influences. The second problem can be partly controlled for by comparing Granger causality across two conditions and by prior deconvolution to obtain an estimate of the underlying neuronal
248
The Neural Bases of Multisensory Processes
signals (Roebroeck et al. 2009; David et al. 2008). As a primarily data-driven approach, the analysis estimates the Granger causal influences of a seed region on all other voxels in the brain. Because this analysis approach does not require an a priori selection of regions of interest, it may be very useful to generate hypotheses that may then be further evaluated on new data in a more constrained framework. Recently, Granger causality has been used to investigate and reveal top-down influences from the STS on auditory cortex/planum temporale in the context of letter–speech sound congruency (multivariate autoregressive models; van Atteveldt et al. 2009) and temporal synchrony manipulations (directed information transfer; Noesselt et al. 2007). For instance, van Atteveldt et al. (2009) have suggested that activation increases for congruent relative to incongruent letter–sound pairs may be mediated via increased connectivity from the STS. Similarly, Granger causality has been used to investigate the influence of somatosensory areas on the lateral occipital complex during shape discrimination (Deshpande et al. 2010; Peltier et al. 2007).
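A minimal psychophysiological interaction regression can be written in a few lines (Python/NumPy sketch on simulated time series; in a real analysis the seed signal would first be deconvolved to the neuronal level before the interaction term is formed, and the model would be fitted at every voxel):

    import numpy as np

    rng = np.random.default_rng(1)
    n = 300
    seed    = rng.normal(size=n)                     # seed-region time course
    context = np.repeat([0.0, 1.0], n // 2)          # psychological factor (e.g., condition A vs. B)
    ppi     = seed * (context - context.mean())      # psychophysiological interaction term
    target  = 0.4 * seed + 0.6 * ppi + rng.normal(scale=0.5, size=n)   # simulated target voxel

    X = np.column_stack([np.ones(n), seed, context, ppi])
    beta, *_ = np.linalg.lstsq(X, target, rcond=None)

    # beta[3] estimates the context-dependent change in coupling between seed and
    # target (the PPI effect); its significance is what the analysis tests.
    print(beta)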
13.3.2 Hypothesis-Driven Effective Connectivity Analysis: Dynamic Causal Modeling

The basic idea of dynamic causal modeling (DCM) is to construct a reasonably realistic model of interacting brain regions that form the key players of the functional system under investigation (Friston et al. 2003). DCM treats the brain as a dynamic input–state–output system. The inputs correspond to conventional stimulus functions encoding experimental manipulations. The state variables are neuronal activities, and the outputs are the regional hemodynamic responses measured with fMRI. The idea is to model changes in the states, which cannot be observed directly, using the known inputs and outputs. Critically, changes in the states of one region depend on the states (i.e., activity) of others. This dependency is parameterized by effective connectivity. There are three types of parameters in a DCM: (1) input parameters that describe how much brain regions respond to experimental stimuli, (2) intrinsic parameters that characterize effective connectivity among regions, and (3) modulatory parameters that characterize changes in effective connectivity caused by experimental manipulation. This third set of parameters, the modulatory effects, allows us to explain context-sensitive activations by changes in coupling among brain areas. Importantly, this coupling (effective connectivity) is expressed at the level of neuronal states. DCM uses a forward model, relating neuronal activity to fMRI data, which can be inverted during the model fitting process. Put simply, the forward model is used to predict outputs from the inputs. During model fitting, the parameters are adjusted so that the predicted and observed outputs match. Thus, DCM differs from the (auto)regressive-like models discussed in the previous section in three important respects: (1) it is a hypothesis-driven approach that requires a priori selection of regions and specification of the model space in terms of potential connectivity structures, (2) the neuronal responses are driven by experimentally designed inputs rather than endogenous noise, and (3) the regional interactions emerge at the neuronal level and are transformed into the observable BOLD response using a biophysically plausible hemodynamic forward model. DCM can be used to make two sorts of inferences. First, we can compare multiple models that embody hypotheses about functional neural architectures; using Bayesian model selection, we infer the optimal model given the data (Penny et al. 2004; Stephan et al. 2009). Second, given the optimal model, we can make inferences about connectivity parameters (Friston et al. 2003). For instance, we can compare the strength of forward and backward connections or test whether attention modulates the connectivity between sensory areas. In the field of multisensory integration, DCM has been used to investigate whether incongruency effects emerge via forward or backward connectivity. Comparing DCMs in which audiovisual incongruency modulates either the forward or the backward connectivity, we suggested that increased activation for incongruent relative to congruent stimulus pairs is mediated via enhanced forward connectivity from low-level auditory areas to STS and IPS (Noppeney et al. 2008). More recently, we used DCM to address the question of whether audiovisual interactions in low-level auditory areas (superior temporal gyrus; Driver and Noesselt 2008; Schroeder and Foxe 2005) are mediated via direct connectivity from visual occipital areas or
FIGURE 13.7 Candidate dynamic causal models. (a) “Direct” influence DCM: audiovisual costimulation modulates direct connectivity between auditory and visual regions. (b) “Indirect” influence DCM: audiovisual costimulation modulates indirect connectivity between auditory and visual regions. STG, superior temporal gyrus; CaS, calcarine sulcus; A, auditory input; V, visual input; AV, audiovisual input.
indirect pathways via the STS. Partitioning the model space into "direct," "indirect," or "indirect + direct" models suggested that visual input may influence auditory processing in the superior temporal gyrus via direct and indirect connectivity from visual cortices (Lewis and Noppeney 2010; Noppeney et al. 2010; Werner and Noppeney 2010a; Figure 13.7).
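The input–state–output idea can be made concrete with the bilinear neuronal state equation that underlies DCM, ż = (A + Σj uj B(j)) z + C u. The sketch below is a toy simulation loosely patterned on the models in Figure 13.7; the region labels, parameter values, and input timing are illustrative assumptions, and the hemodynamic forward model and Bayesian model inversion that an actual DCM analysis requires are omitted.

```python
import numpy as np

# Regions: 0 = calcarine sulcus (CaS), 1 = superior temporal gyrus (STG), 2 = STS.
# Inputs:  u[0] = visual, u[1] = auditory, u[2] = audiovisual costimulation (modulator).
A = np.array([[-1.0,  0.0,  0.0],   # intrinsic connectivity, self-decay on the diagonal
              [ 0.2, -1.0,  0.3],
              [ 0.3,  0.3, -1.0]])
B_av = np.zeros((3, 3))
B_av[1, 0] = 0.4                     # "direct" model: AV input modulates CaS -> STG coupling
C = np.array([[1.0, 0.0, 0.0],       # visual input drives CaS
              [0.0, 1.0, 0.0],       # auditory input drives STG
              [0.0, 0.0, 0.0]])

def simulate(A, B_mod, C, inputs, dt=0.01):
    """Euler integration of dz/dt = (A + u_mod * B_mod) z + C u."""
    z = np.zeros(A.shape[0])
    trace = []
    for u in inputs:
        effective_A = A + u[2] * B_mod   # third input switches the modulatory connection on
        z = z + dt * (effective_A @ z + C @ u)
        trace.append(z.copy())
    return np.array(trace)

# One second of visual + auditory stimulation, with and without the AV modulator.
steps = 100
u_on = np.tile([1.0, 1.0, 1.0], (steps, 1))
u_off = np.tile([1.0, 1.0, 0.0], (steps, 1))
print("STG activity, AV modulation on :", simulate(A, B_av, C, u_on)[-1, 1].round(3))
print("STG activity, AV modulation off:", simulate(A, B_av, C, u_off)[-1, 1].round(3))
```

In practice, the "direct" and "indirect" models of Figure 13.7 differ only in which connections carry nonzero modulatory (B) parameters, and Bayesian model selection is used to decide between them.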
13.4 CONCLUSIONS AND FUTURE DIRECTIONS

Multisensory integration has been characterized with fMRI using a variety of experimental design and statistical analysis approaches. When applied in isolation, each approach provides only limited insights and can lead to misinterpretations. A more comprehensive picture may emerge by combining the strengths of multiple methodological approaches. For instance, pattern classifiers and fMRI adaptation may be used jointly to provide insights into subvoxel neuronal representations and to dissociate unisensory and multisensory neuronal populations. Amodal neural representations may then be identified if the classification performance and fMRI adaptation generalize across stimuli from different sensory modalities. Increased spatial resolution at higher field strength will enable us to characterize the response properties of individual regions more thoroughly. To go beyond structure–function mapping, we also need to establish the effective connectivity between regions using neurophysiologically plausible observation models. Understanding the neural mechanisms of multisensory integration will require an integrative approach combining computational modeling with the complementary strengths of fMRI, EEG/MEG, and lesion studies.
ACKNOWLEDGMENTS

We thank Sebastian Werner, Richard Lewis, and Johannes Tünnerhoff for helpful comments on a previous version of this manuscript and JT for his enormous help with preparing the figures.
REFERENCES

Adam, R., and U. Noppeney. 2010. Prior auditory information shapes visual category-selectivity in ventral occipitotemporal cortex. NeuroImage 52:1592–1602. Allman, B.L., L.P. Keniston, and M.A. Meredith. 2009. Not just for bimodal neurons anymore: The contribution of unimodal neurons to cortical multisensory processing. Brain Topography 21:157–167. Avillac, M., H.S. Ben, and J.R. Duhamel. 2007. Multisensory integration in the ventral intraparietal area of the macaque monkey. Journal of Neuroscience 27:1922–1932. Barraclough, N.E., D. Xiao, C.I. Baker, M.W. Oram, and D.I. Perrett. 2005. Integration of visual and auditory information by superior temporal sulcus neurons responsive to the sight of actions. Journal of Cognitive Neuroscience 17:377–391.
Beauchamp, M.S. 2005. Statistical criteria in FMRI studies of multisensory integration. Neuroinformatics 3:93–113. Beauchamp, M.S., B.D. Argall, J. Bodurka, J.H. Duyn, and A. Martin. 2004. Unraveling multisensory integration: Patchy organization within human STS multisensory cortex. Nature Neuroscience 7:1190–1192. Becker, S., M. Moscovitch, M. Behrmann, and S. Joordens. 1997. Long-term semantic priming: a computational account and empirical evidence. Journal of Experimental Psychology. Learning, Memory, and Cognition 23:1059–1082. Bonath, B., T. Noesselt, A. Martinez, J. Mishra, K. Schwiecker, H.J. Heinze, and S.A. Hillyard. 2007. Neural basis of the ventriloquist illusion. Current Biology 17:1697–1703. Bremmer, F., A. Schlack, N.J. Shah et al. 2001. Polymodal motion processing in posterior parietal and premotor cortex: A human fMRI study strongly implies equivalencies between humans and monkeys. Neuron 29:287–296. Busse, L., K.C. Roberts, R.E. Crist, D.H. Weissman, and M.G. Woldorff. 2005. The spread of attention across modalities and space in a multisensory object. Proceedings of the National Academy of Sciences of the United States of America 102:18751–18756. Calvert, G.A. 2001. Crossmodal processing in the human brain: Insights from functional neuroimaging studies. Cerebral Cortex 11:1110–1123. Calvert, G.A., R. Campbell, and M.J. Brammer. 2000. Evidence from functional magnetic resonance imaging of crossmodal binding in the human heteromodal cortex. Current Biology 10:649–657. Calvert, G.A., P.C. Hansen, S.D. Iversen, and M.J. Brammer. 2001. Detection of audio-visual integration sites in humans by application of electrophysiological criteria to the BOLD effect. NeuroImage 14: 427–438. David, O., I. Guillemain, S. Saillet et al. 2008. Identifying neural drivers with functional MRI: An electrophysiological validation. PLoS Biology 6:2683–2697. Dehner, L.R., L.P. Keniston, H.R. Clemo, and M.A. Meredith 2004. Cross-modal circuitry between auditory and somatosensory areas of the cat anterior ectosylvian sulcal cortex: A ‘new’ inhibitory form of multisensory convergence. Cerebral Cortex 14:387–403. Deshpande, G., X. Hu, S. Lacey, R. Stilla, and K. Sathian. 2010. Object familiarity modulates effective connectivity during haptic shape perception. NeuroImage 49:1991–2000. Desimone, R. 1996. Neural mechanisms for visual memory and their role in attention. Proceedings of the National Academy of Sciences of the United States of America 93:13494–13499. Doehrmann, O., and M.J. Naumer. 2008. Semantics and the multisensory brain: how meaning modulates processes of audio-visual integration. Brain Research 1242:136–150. Driver, J., and T. Noesselt 2008. Multisensory interplay reveals crossmodal influences on ‘sensory-specific’ brain regions, neural responses, and judgments. Neuron 57:11–23. Ernst, M.O., and M.S. Banks. 2002. Humans integrate visual and haptic information in a statistically optimal fashion. Nature 415:429–433. Fairhall, S.L., and E. Macaluso. 2009. Spatial attention can modulate audiovisual integration at multiple cortical and subcortical sites. European Journal of Neuroscience 29:1247–1257. Friston, K. 2005. A theory of cortical responses. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences 360:815–836. Friston, K., C. Chu, J. Mourao-Miranda, O. Hulme, G. Rees, W. Penny, and J. Ashburner. 2008. Bayesian decoding of brain images. NeuroImage 39:181–205. Friston, K.J., C. Buechel, G.R. Fink, J. Morris, E. Rolls, and R.J. Dolan. 1997. 
Psychophysiological and modulatory interactions in neuroimaging. NeuroImage 6:218–229. Friston, K.J., L. Harrison, and W. Penny. 2003. Dynamic causal modelling. NeuroImage 19:1273–1302. Friston, K.J., A. Holmes, K.J. Worsley, J.B. Poline, C.D. Frith, and R. Frackowiak. 1995. Statistical parametric mapping: A general linear approach. Human Brain Mapping 2:189–210. Friston, K.J., A.P. Holmes, C.J. Price, C. Buchel, and K.J. Worsley. 1999. Multisubject fMRI studies and conjunction analyses. NeuroImage 10:385–396. Friston, K.J., W.D. Penny, and D.E. Glaser. 2005. Conjunction revisited. NeuroImage 25:661–667. Goebel, R., A. Roebroeck, D.S. Kim, and E. Formisano. 2003. Investigating directed cortical interactions in time-resolved fMRI data using vector autoregressive modeling and Granger causality mapping. Magnetic Resonance Imaging 21:1251–1261. Gosselin, F., and P.G. Schyns. 2003. Superstitious perceptions reveal properties of internal representations. Psychological Science 14:505–509. Grill-Spector, K., and R. Malach. 2001. fMR-adaptation: A tool for studying the functional properties of human cortical neurons. Acta Psychologica 107:293–321.
Grill-Spector, K., R. Henson, and A. Martin. 2006. Repetition and the brain: neural models of stimulus-specific effects. Trends in Cognitive Sciences 10:14–23. Harrison, L., W.D. Penny, and K. Friston. 2003. Multivariate autoregressive modeling of fMRI time series. NeuroImage 19:1477–1491. Hasson, U., J.I. Skipper, H.C. Nusbaum, and S.L. Small. 2007. Abstract coding of audiovisual speech: Beyond sensory representation. Neuron 56:1116–1126. Haynes, J.D., and G. Rees. 2006. Decoding mental states from brain activity in humans. Nature Reviews. Neuroscience 7:523–534. Hein, G., O. Doehrmann, N.G. Muller, J. Kaiser, L. Muckli, and M.J. Naumer. 2007. Object familiarity and semantic congruency modulate responses in cortical audiovisual integration areas. Journal of Neuroscience 27:7881–7887. Helbig, H.B., M.O. Ernst, E. Ricciardi, P. Pietrini, A. Thielscher, K.M. Mayer, J. Schultz, and U. Noppeney. 2010. Reliability of visual information modulates tactile shape processing in primary somatosensory cortices (Submitted for publication). Henson, R.N. 2003. Neuroimaging studies of priming. Progress in Neurobiology 70:53–81. Henson, R.N., and M.D. Rugg. 2003. Neural response suppression, haemodynamic repetition effects, and behavioural priming. Neuropsychologia 41:263–270. Hinrichs, H., H.J. Heinze, and M.A. Schoenfeld. 2006. Causal visual interactions as revealed by an information theoretic measure and fMRI. NeuroImage 31:1051–1060. Kayser, C., C.I. Petkov, and N.K. Logothetis. 2008. Visual modulation of neurons in auditory cortex. Cerebral Cortex 18:1560–1574. Knill, D.C., and J.A. Saunders. 2003. Do humans optimally integrate stereo and texture information for judgments of surface slant? Vision Research 43:2539–2558. Kriegeskorte, N., R. Goebel, and P. Bandettini. 2006. Information-based functional brain mapping. Proceedings of the National Academy of Sciences of the United States of America 103:3863–3868. Laurienti, P.J., T.J. Perrault, T.R. Stanford, M.T. Wallace, and B.E. Stein. 2005. On the use of superadditivity as a metric for characterizing multisensory integration in functional neuroimaging studies. Experimental Brain Research 166:289–297. Lee, H., and U. Noppeney. Physical and perceptual factors shape the neural mechanisms that integrate audiovisual signals in speech comprehension (submitted for publication). Lewis, R., and U. Noppeney. 2010. Audiovisual synchrony improves motion discrimination via enhanced connectivity between early visual and auditory areas. Journal of Neuroscience 30:12329–12339. Macaluso, E., C.D. Frith, and J. Driver. 2000. Modulation of human visual cortex by crossmodal spatial attention. Science 289:1206–1208. Meredith, M.A., and B.L. Allman. 2009. Subthreshold multisensory processing in cat auditory cortex. Neuroreport 20:126–131. Nandy, R.R., and D. Cordes. 2003. Novel nonparametric approach to canonical correlation analysis with applications to low CNR functional MRI data. Magnetic Resonance in Medicine 50:354–365. Nichols, T., M. Brett, J. Andersson, T. Wager, and J.B. Poline. 2005 Valid conjunction inference with the minimum statistic. NeuroImage 25:653–660. Noesselt, T., J.W. Rieger, M.A. Schoenfeld et al. 2007. Audiovisual temporal correspondence modulates human multisensory superior temporal sulcus plus primary sensory cortices. Journal of Neuroscience 27:11431–11441. Noppeney, U., K. Friston, and C. Price. 2003. Effects of visual deprivation on the organisation of the semantic system. Brain 126:1620–1627. Noppeney, U., O. Josephs, J. Hocking, C.J. 
Price, and K.J. Friston. 2008. The effect of prior visual information on recognition of speech and sounds. Cerebral Cortex 18:598–609. Noppeney, U., D. Ostwald, S. Werner. 2010. Perceptual decisions formed by accumulation of audiovisual evidence in prefrontal cortex. Journal of Neuroscience 30:7434–7446. Peltier, S., R. Stilla, E. Mariola, S. LaConte, X. Hu, and K. Sathian. 2007. Activity and effective connectivity of parietal and occipital cortical regions during haptic shape perception. Neuropsychologia 45:476–483. Penny, W.D., K.E. Stephan, A. Mechelli, and K.J. Friston. 2004. Comparing dynamic causal models. NeuroImage 22:1157–1172. Pereira, F., T. Mitchell, and M. Botvinick. 2009. Machine learning classifiers and fMRI: A tutorial overview. NeuroImage 45:S199–S209. Perrault Jr., T.J., J.W. Vaughan, B.E. Stein, and M.T. Wallace. 2005. Superior colliculus neurons use distinct operational modes in the integration of multisensory stimuli. Journal of Neurophysiology 93: 2575–2586.
Price, C.J., R.J. Wise, and R.S. Frackowiak. 1996. Demonstrating the implicit processing of visually presented words and pseudowords. Cerebral Cortex 6:62–70. Roebroeck, A., E. Formisano, and R. Goebel. 2005. Mapping directed influence over the brain using Granger causality and fMRI. NeuroImage 25:230–242. Roebroeck, A., E. Formisano, and R. Goebel. 2009. The identification of interacting networks in the brain using fMRI: Model selection, causality and deconvolution. NeuroImage. Sadaghiani, S., J.X. Maier, and U. Noppeney. 2009. Natural, metaphoric, and linguistic auditory direction signals have distinct influences on visual motion processing. Journal of Neuroscience 29:6490–6499. Sarkheil, P., Q.C. Vuong, H.H. Bulthoff, and U. Noppeney. 2008. The integration of higher order form and motion by the human brain. NeuroImage 42:1529–1536. Schroeder, C.E., and J. Foxe. 2005. Multisensory contributions to low-level, ‘unisensory’ processing. Current Opinion in Neurobiology 15:454–458. Seltzer, B., M.G. Cola, C. Gutierrez, M. Massee, C. Weldon, and C.G. Cusick. 1996. Overlapping and nonoverlapping cortical projections to cortex of the superior temporal sulcus in the rhesus monkey: Double anterograde tracer studies. Journal of Comparative Neurology 370:173–190. Stanford, T.R., and B.E. Stein. 2007. Superadditivity in multisensory integration: Putting the computation in context. Neuroreport 18:787–792. Stanford, T.R., S. Quessy, B.E. Stein. 2005. Evaluating the operations underlying multisensory integration in the cat superior colliculus. Journal of Neuroscience 25:6499–6508. Stark, C.E., and J.L. McClelland. 2000. Repetition priming of words, pseudowords, and nonwords. Journal of Experimental Psychology. Learning, Memory, and Cognition 26:945–972. Stein, B.E., and M.A. Meredith. 1993. The Merging of the Senses. Cambridge, MA: MIT Press. Stein, B.E., and T.R. Stanford. 2008. Multisensory integration: current issues from the perspective of the single neuron. Nature Reviews. Neuroscience 9:255–266. Stein, B.E., T.R. Stanford, R. Ramachandran, T.J. Perrault Jr., and B.A. Rowland. 2009. Challenges in quantifying multisensory integration: alternative criteria, models, and inverse effectiveness. Experimental Brain Research 198(2–3):131–126. Stephan, K.E., W.D. Penny, J. Daunizeau, R.J. Moran, and K.J. Friston. 2009. Bayesian model selection for group studies. NeuroImage 46(4):1004–1017. Erratum in NeuroImage 48(1):311. Stevenson, R.A., and T.W. James. 2009. Audiovisual integration in human superior temporal sulcus: Inverse effectiveness and the neural processing of speech and object recognition. NeuroImage 44:1210–1223. Stevenson, R.A., S. Kim, and T.W. James. 2009. An additive-factors design to disambiguate neuronal and areal convergence: Measuring multisensory interactions between audio, visual, and haptic sensory streams using fMRI. Experimental Brain Research 198(2–3):183–194 Sugihara, T., M.D. Diltz, B.B. Averbeck, and L.M. Romanski. 2006. Integration of auditory and visual communication information in the primate ventrolateral prefrontal cortex. Journal of Neuroscience 26:11138–11147. Tal, N., and A. Amedi. 2009. Multisensory visual-tactile object related network in humans: insights gained using a novel crossmodal adaptation approach. Experimental Brain Research 198:165–182. van Atteveldt, N., E. Formisano, R. Goebel, and L. Blomert. 2004. Integration of letters and speech sounds in the human brain. Neuron 43:271–282. van Atteveldt, N.M., E. Formisano, L. Blomert, and R. Goebel. 2007a. 
The effect of temporal asynchrony on the multisensory integration of letters and speech sounds. Cerebral Cortex 17:962–974. van Atteveldt, N.M., E. Formisano, R. Goebel, and L. Blomert. 2007b. Top-down task effects overrule automatic multisensory responses to letter-sound pairs in auditory association cortex. NeuroImage 36:1345–1360. van Atteveldt, N., A. Roebroeck, and R. Goebel. 2009. Interaction of speech and script in human auditory cortex: Insights from neuro-imaging and effective connectivity. Hearing Research 258(1–2):152–164 Wallace, M.T., R. Ramachandran, and B.E. Stein. 2004. A revised view of sensory cortical parcellation. Proceedings of the National Academy of Sciences of the United States of America 101:2167–2172. Werner, S., and U. Noppeney. 2010a. Distinct functional contributions of primary sensory and association areas to audiovisual integration in object categorization. Journal of Neuroscience 30:2662–2675. Werner, S., and U Noppeney. 2010b. Superadditive responses in superior temporal sulcus predict audiovisual benefits in object categorization. Cerebral Cortex 20:1829–1842. Werner, S., and U. Noppeney. 2010c. The contributions of transient and sustained response codes to audiovisual integration. Cerebral Cortex 21(4):920–931. Wiggs, C.L., and A. Martin. 1998. Properties and mechanisms of perceptual priming. Current Opinion in Neurobiology 8:227–233.
14 Modeling Multisensory Processes in Saccadic Responses: Time-Window-of-Integration Model

Adele Diederich and Hans Colonius
CONTENTS 14.1 Summary............................................................................................................................... 253 14.2 Multisensory Processes Measured through Response Time.................................................254 14.3 TWIN Modeling.................................................................................................................... 255 14.3.1 Basic Assumptions..................................................................................................... 255 14.3.2 Quantifying Multisensory Integration in the TWIN Model...................................... 257 14.3.3 Some General Predictions of TWIN......................................................................... 257 14.4 TWIN Models for Specific Paradigms: Assumptions and Predictions................................. 258 14.4.1 Measuring Cross-Modal Effects in Focused Attention and Redundant Target Paradigms.................................................................................................................. 258 14.4.2 TWIN Model for the FAP......................................................................................... 259 14.4.2.1 TWIN Predictions for the FAP...................................................................260 14.4.3 TWIN Model for RTP............................................................................................... 263 14.4.3.1 TWIN Predictions for RTP.........................................................................264 14.4.4 Focused Attention versus RTP.................................................................................. 265 14.5 TWIN Model for Focused Attention: Including a Warning Mechanism..............................266 14.5.1 TWIN Predictions for FAP with Warning................................................................268 14.6 Conclusions: Open Questions and Future Directions............................................................ 270 Appendix A..................................................................................................................................... 271 A.1 Deriving the Probability of Interaction in TWIN................................................................. 271 A.1.1 Focused Attention Paradigm..................................................................................... 271 A.1.2 Redundant Target Paradigm...................................................................................... 272 A.1.3 Focused Attention and Warning................................................................................ 273 References....................................................................................................................................... 274
14.1 SUMMARY

Multisensory research within experimental psychology has led to the emergence of a number of lawful relations between response speed and various empirical conditions of the experimental setup (spatiotemporal stimulus configuration, intensity, number of modalities involved, type of instruction, and so forth). This chapter presents a conceptual framework to account for the effects of
cross-modal stimulation on response speed. Although our framework applies to measures of cross-modal response speed in general, here we focus on modeling saccadic reaction time as a measure of orientation performance toward cross-modal stimuli. The central postulate is the existence of a critical "time-window-of-integration" (TWIN) controlling the combination of information from different modalities. It is demonstrated that a few basic assumptions about this timing mechanism imply a remarkable number of empirically testable predictions. After introducing a general version of the TWIN model framework, we present various specifications and extensions of the original model that are geared toward more specific experimental paradigms. Our emphasis will be on predictions and empirical testability of these model versions, but for experimental data, we refer the reader to the original literature.
14.2 MULTISENSORY PROCESSES MEASURED THROUGH RESPONSE TIME

For more than 150 years, response time (RT) has been used in experimental psychology as a ubiquitous measure to investigate hypotheses about the mental and motor processes involved in simple cognitive tasks (Van Zandt 2002). Interpreting RT data, in the context of some specific experimental paradigm, is subtle and requires a high level of technical skill. Fortunately, over the years, many sophisticated mathematical and statistical methods for response time analysis and corresponding processing models have been developed (Luce 1986; Schweickert et al., in press). One reason for the sustained popularity of RT as a measure of mental processes may be the simple fact that these processes always have to unfold over time. A similar rationale, of course, is valid for other methods developed to investigate mental processes, such as electrophysiological and related brain-imaging techniques, and it may be one reason why we are currently witnessing some transfer of concepts and techniques from RT analysis into these domains (e.g., Sternberg 2001). Here, we focus on the early, dynamic aspects of simultaneously processing cross-modal stimuli—combinations of vision, audition, and touch—as they are revealed by a quantitative stochastic analysis of response times. One of the first psychological studies on cross-modal interaction using RT to measure the effect of combining stimuli from different modalities and of varying their intensities is the classic article by Todd (1912). A central finding, supported by subsequent research, is that the occurrence of cross-modal effects critically depends on the temporal arrangement of the stimulus configuration. For example, the speedup of response time to a visual stimulus resulting from presenting an accessory auditory stimulus typically becomes most pronounced when the visual stimulus precedes the auditory by an interval that equals the difference in RT between responses to the visual alone and the auditory alone (Hershenson 1962). The rising interest in multisensory research in experimental psychology over the past 20 years has led to the emergence of a number of lawful relations between response speed, on the one hand, and properties of the experimental setting, such as (1) spatiotemporal stimulus configuration, (2) stimulus intensity levels, (3) number of modalities involved, (4) type of instruction, and (5) semantic congruity, on the other. In the following, rather than reviewing the abundance of empirical results, we present a modeling framework within which a number of specific quantitative models have been developed and tested. Although such models can certainly not reflect the full complexity of the underlying multisensory processes, their predictions are sufficiently specific to be rigorously tested through experiments. For a long time, the ubiquitous mode of assessing response speed has been to measure the time it takes to press a button, or to release it, by moving a finger or foot. With the advance of modern eye movement registration techniques, the measurement of gaze shifts has become an important additional technique to assess multisensory effects. In particular, saccadic reaction time, i.e., the time from the presentation of a target stimulus to the beginning of the eye movement, is ideally suited for studying both the temporal and spatial rules of multisensory integration.
Although participants can be asked to move their eyes to visual, auditory, or somatosensory targets, because the ocular system is geared to the visual system the saccadic RT characteristics will be specific to each modality. For example, it is well known that saccades to visual targets have a higher level of accuracy than
those to auditory or somatosensory stimuli. Note also that, as the superior colliculus is an important site of oculomotor control (e.g., Munoz and Wurtz 1995), measuring saccadic responses is an obvious choice for studying the behavioral consequences of multisensory integration.
14.3 TWIN MODELING

We introduce a conceptual framework to account for the effects of cross-modal stimulation as measured by changes in response speed.* The central postulate is the existence of a critical TWIN controlling the integration of information from different modalities. The starting idea is simply that a visual and an auditory stimulus must not be presented too far away from each other in time for bimodal integration to occur. As we will show, this seemingly innocuous assumption has a number of nontrivial consequences that any multisensory integration model of response speed has to satisfy. Most prominently, it imposes a process consisting of—at least—two serial stages: one early stage, before the outcome of the time window check has occurred, and a later one, in which the outcome of the check may affect further processing. Although the TWIN framework applies to measures of cross-modal response speed in general, the focus is on modeling saccadic reaction time. First, a general version of the TWIN model and its predictions, introduced by Colonius and Diederich (2004), will be described. Subsequently, we present various extensions of the original model that are geared toward more specific experimental paradigms. Our emphasis will again be on the predictions and empirical testability of these model versions but, because of space limitations, no experimental data will be presented here.

* See Section 14.6 for possible extensions to other measures of performance.
14.3.1 Basic Assumptions

A classic explanation for a speedup of responses to cross-modal stimuli is that subjects are merely responding to the first stimulus detected. Taking these detection times to be random variables and glossing over some technical details, observed reaction time would then become the minimum of the reaction times to the visual, auditory, or tactile signal, leading to a purely statistical facilitation effect (also known as probability summation) in response speed (Raab 1962). Over time, numerous studies have shown that this race model was not sufficient to explain the observed speedup in saccadic reaction time (Harrington and Peck 1998; Hughes et al. 1994, 1998; Corneil and Munoz 1996; Arndt and Colonius 2003). Using Miller's inequality as a benchmark test (cf. Colonius and Diederich 2006; Miller 1982), saccadic responses to bimodal stimuli have been found to be faster than predicted by statistical facilitation, in particular when the stimuli were spatially aligned. Moreover, in the race model, there is no natural explanation for the decrease in facilitation observed with variations in many cross-modal stimulus properties, e.g., increasing spatial disparity between the stimuli. Nevertheless, the initial anatomic separation of the afferent pathways for different sensory modalities suggests that an early stage of peripheral processing exists, during which no intermodal interaction may occur. For example, a study by Whitchurch and Takahashi (2006) collecting (head) saccadic reaction times in the barn owl lends support to the notion of a race between early visual and auditory processes depending on the relative intensity levels of the stimuli. In particular, their data suggest that the faster modality initiates the saccade, whereas the slower modality remains available to refine the saccade trajectory. Thus, there are good reasons for retaining the construct of an—albeit very peripheral—race mechanism. Even under invariant experimental conditions, observed responses typically vary from one trial to the next, presumably because of an inherent variability of the underlying neural processes in both ascending and descending pathways. In analogy to the classic race model, this is taken into account in the TWIN framework by assuming any processing duration to be a random variable. In particular, the peripheral processing times for visual, auditory, and somatosensory stimuli are
assumed to be stochastically independent random variables. This leads to the first postulate of the TWIN model:

(B1) First Stage Assumption: The first stage consists in a (stochastically independent) race among the peripheral processes in the visual, auditory, and/or somatosensory pathways triggered by a cross-modal stimulus complex.
The existence of a critical "spatiotemporal window" for multisensory integration to occur has been suggested by several authors, based on both neurophysiological and behavioral findings in humans, monkey, and cat (e.g., Bell et al. 2005; Meredith 2002; Corneil et al. 2002; Meredith et al. 1987; see Navarra et al. 2005 for a recent behavioral study). This integration may manifest itself in the form of an increased firing rate of a multisensory neuron (relative to unimodal stimulation), an acceleration of saccadic reaction time (Frens et al. 1995; Diederich et al. 2003), an effective audiovisual speech integration (Van Wassenhove et al. 2007), or in an improved or degraded judgment of temporal order of bimodal stimulus pairs (cf. Spence and Squire 2003). One of the basic tenets of the TWIN framework, however, is the priority of temporal proximity over any other type of proximity: rather than assuming a joint spatiotemporal window of integration permitting interaction to occur only for both spatially and temporally neighboring stimuli, the TWIN model allows for cross-modal interaction to occur, for example, even for spatially rather distant stimuli of different modalities as long as they fall within the time window.

(B2) TWIN Assumption: Multisensory integration occurs only if the peripheral processes of the first stage all terminate within a given temporal interval, the TWIN.
In other words, a visual and an auditory stimulus may occur at the same spatial location, or the lip movements of a speaker may be perfectly consistent with the utterance, yet no intersensory interaction effect will be possible if the data from the two sensory channels are registered too distant from each other in time. Thus, the window acts like a filter determining whether afferent information delivered from different sensory organs is registered close enough in time to allow for multisensory integration. Note that passing the filter is a necessary, but not sufficient, condition for multisensory integration to occur. The reason is that the amount of multisensory integration also depends on other aspects of the stimulus set, such as the spatial configuration of the stimuli. For example, response depression may occur with nearly simultaneous but distant stimuli, making it easier for the organism to focus attention on the more important event. In other cases, multisensory integration may fail to occur—despite near-simultaneity of the unisensory events—because the a priori probability for a cross-modal event is very small (e.g., Körding et al. 2007). Although the priority of temporal proximity seems to afford more flexibility for an organism in a complex environment, the next assumption delimits the role of temporal proximity to the first processing stage:

(B3) Assumption of Temporal Separability: The amount of interaction manifesting itself in an increase or decrease of second stage processing time is a function of cross-modal stimulus features, but it does not depend on the presentation asynchrony (stimulus onset asynchrony, SOA) of the stimuli.
This assumption is based on a distinction between intra- and cross-modal stimulus properties, where the properties may refer to both subjective and physical properties. Cross-modal properties are defined when stimuli of more than one modality are present, such as spatial distance of target to nontarget, or subjective similarity between stimuli of different modalities. Intramodal properties, on the other hand, refer to properties definable for a single stimulus, regardless of whether this property is definable in all modalities (such as intensity) or in only one modality (such as wavelength for color or frequency for pitch). Intramodal properties can affect the outcome of the race in the first stage and, thereby, the probability of an interaction. Cross-modal properties may affect the amount of cross-modal interaction occurring in the second stage. Note that cross-modal features cannot influence first stage processing time because the stimuli are still being processed in separate pathways.
(B4) Second Stage Assumption: The second stage comprises all processes after the first stage including preparation and execution of a response.
The assumption of only two stages is certainly an oversimplification. Note, however, that the second stage is defined here by default: it includes all subsequent, possibly overlapping, processes that are not part of the peripheral processes in the first stage (for a similar approach, see Van Opstal and Munoz 2004). Thus, the TWIN model retains the classic notion of a race mechanism as an explanation for cross-modal interaction but restricts it to the very first stage of stimulus processing.
14.3.2 Quantifying Multisensory Integration in the TWIN Model

To derive empirically testable predictions from the TWIN framework, its assumptions must be put into more precise form. According to the two-stage assumption, total saccadic reaction time in the cross-modal condition can be written as a sum of two nonnegative random variables defined on a common probability space:
RTcross-modal = S1 + S2,    (14.1)
where S1 and S2 refer to first and second stage processing time, respectively (a base time would also be subsumed under S2). Let I denote the event that multisensory integration occurs, having probability P(I). For the expected reaction time in the cross-modal condition it then follows that

E[RTcross-modal] = E[S1] + E[S2]
                 = E[S1] + P(I) · E[S2|I] + (1 − P(I)) · E[S2|Ic]
                 = E[S1] + E[S2|Ic] − P(I) · (E[S2|Ic] − E[S2|I]),

where E[S2|I] and E[S2|Ic] denote the expected second stage processing time conditioned on interaction occurring (I) or not occurring (Ic), respectively. Putting Δ ≡ E[S2|Ic] − E[S2|I], this becomes

E[RTcross-modal] = E[S1] + E[S2|Ic] − P(I) · Δ.    (14.2)
That is, mean RT to cross-modal stimuli equals the mean first stage processing time, plus the mean second stage processing time when no interaction occurs, minus the term P(I) · Δ, which is a measure of the expected amount of intersensory interaction in the second stage, with positive Δ values corresponding to facilitation and negative values corresponding to inhibition. This factorization of expected intersensory interaction into the probability of interaction P(I) and the amount and sign of interaction (Δ) is an important feature of the TWIN model. According to Assumptions B1 to B4, the first factor, P(I), depends on the temporal configuration of the stimuli (SOA), whereas the second factor, Δ, depends on nontemporal aspects, in particular their spatial configuration. Note that this separation of temporal and nontemporal factors is in accordance with the definition of the window of integration: the incidence of multisensory integration hinges on the stimuli occurring in temporal proximity, whereas the amount and sign of interaction (Δ) is modulated by nontemporal aspects, such as semantic congruity or spatial proximity, reaching, in the latter case, from enhancement for neighboring stimuli to possible inhibition for distant stimuli (cf. Diederich and Colonius 2007b).
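The factorization in Equation 14.2 can be checked with a small Monte Carlo simulation. The sketch below uses exponential peripheral processing times and purely illustrative parameter values; the event I is written here in a focused-attention form (the nontarget wins the race and the target terminates within the window), anticipating the paradigm-specific definitions taken up in Section 14.4.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# Illustrative parameters (not estimates from any of the cited experiments).
mean_V, mean_A = 150.0, 110.0   # mean peripheral times for target (V) and nontarget (A), ms
tau = 40.0                      # SOA: nontarget presented 40 ms after the target
omega = 200.0                   # width of the time window of integration, ms
mu_S2 = 100.0                   # mean second stage time without integration, ms
delta = 20.0                    # facilitation on integration trials, ms

V = rng.exponential(mean_V, n)          # first stage: peripheral processing of the target
A = rng.exponential(mean_A, n) + tau    # peripheral processing of the nontarget, shifted by SOA
I = (A < V) & (V < A + omega)           # assumed focused-attention integration event

S2 = mu_S2 - delta * I                  # second stage is delta ms faster when I occurs
RT = V + S2                             # target's peripheral time plus second stage

lhs = RT.mean()
rhs = mean_V + mu_S2 - I.mean() * delta  # Equation 14.2 with E[S1] = E[V]
print(f"simulated E[RT] = {lhs:.1f} ms, Eq. 14.2 prediction = {rhs:.1f} ms, P(I) = {I.mean():.2f}")
```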
14.3.3 Some General Predictions of TWIN

In the next section, more specific assumptions on first stage processing time, S1, and probability of interaction P(I) will be introduced to derive detailed quantitative predictions for specific
experimental cross-modal paradigms. Nonetheless, even at the general level of the framework introduced thus far, a number of qualitative empirical predictions of TWIN are possible.

SOA effects. The amount of cross-modal interaction should depend on the SOA between the stimuli because the probability of integration, P(I), changes with SOA. Let us assume that two stimuli from different modalities differ considerably in their peripheral processing times. If the faster stimulus is delayed (in terms of SOA) so that the arrival times of both stimuli have a high probability of falling into the window of integration, then the amount of cross-modal interaction should be largest for that value of SOA (see, e.g., Frens et al. 1995; Colonius and Arndt 2001).

Intensity effects. Stimuli of high intensity have relatively fast peripheral processing times. Therefore, for example, if a stimulus from one modality has a high intensity compared to a stimulus from the other modality, the chance that both peripheral processes terminate within the time window will be small, assuming simultaneous stimulus presentations. The resulting low value of P(I) is in line with the empirical observation that a very strong signal will effectively rule out any further reduction of saccadic RT by adding a stimulus from another modality (e.g., Corneil et al. 2002).

Cross-modal effects. The amount of multisensory integration (Δ) and its sign (facilitation or inhibition) occurring in the second stage depend on cross-modal features of the stimulus set, for example, spatial disparity and laterality (laterality here refers to whether all stimuli appear in the same hemisphere). Cross-modal features cannot have an influence on first stage processing time because the modalities are being processed in separate pathways. Conversely, because parameter Δ does not depend on SOA, it cannot change its sign as a function of SOA and, therefore, the model cannot simultaneously predict facilitation to occur for some SOA values and inhibition for others. Some empirical evidence against this prediction has been observed (Diederich and Colonius 2008).

In the classic race model, the addition of a stimulus from a modality not yet present will increase (or, at least, not decrease) the amount of response facilitation. This follows from the fact that—even without assuming stochastic independence—the probability of the fastest of several processes terminating processing before time t will increase with the number of "racers" (e.g., Colonius and Vorberg 1994). In the case of TWIN, both facilitation and inhibition are possible under certain conditions, as follows:

Number of modalities effect. The addition of a stimulus from a modality not yet present will increase (or, at least, not decrease) the expected amount of interaction if the added stimulus is not "too fast" and the time window is not "too small." The latter restrictions are meant to guarantee that the added stimulus will fall into the time window, thereby increasing the probability of interaction to occur.
14.4 TWIN MODELS FOR SPECIFIC PARADIGMS: ASSUMPTIONS AND PREDICTIONS

In a cross-modal experimental paradigm, the individual modalities may either be treated as being on an equal footing, or one modality may be singled out as a target modality, whereas stimuli from the remaining modalities may be ignored by the participant as nontargets. Cross-modal effects are assessed in different ways, depending on task instruction. As shown below, the TWIN model can take these different paradigms into account simply by modifying the conditions that lead to an opening of the time window.
14.4.1 Measuring Cross-Modal Effects in Focused Attention and Redundant Target Paradigms

In the redundant target paradigm (RTP; also known as the divided attention paradigm), stimuli from different modalities are presented simultaneously or with a certain SOA, and the participant is instructed to respond to the stimulus detected first. Typically, the time to respond in the cross-
modal condition is faster than in either of the unimodal conditions. In the focused attention paradigm (FAP), cross-modal stimulus sets are presented in the same manner, but now participants are instructed to respond only to the onset of a stimulus from a specifically defined target modality, such as the visual, and to ignore the remaining nontarget stimulus (the tactile or the auditory). In the latter setting, when a stimulus of a nontarget modality, for example, a tone, appears before the visual target at some spatial disparity, there is no overt response to the tone if the participant is following the task instructions. Nevertheless, the nontarget stimulus has been shown to modulate the saccadic response to the target: depending on the exact spatiotemporal configuration of target and nontarget, the effect can be a speedup or an inhibition of saccadic RT (see, e.g., Amlôt et al. 2003; Diederich and Colonius 2007b), and the saccadic trajectory can be affected as well (Doyle and Walker 2002). Some striking similarities to human data have been found in a detection task utilizing both paradigms. Stein et al. (1988) trained cats to orient to visual or auditory stimuli, or both. In one paradigm, the target was a visual stimulus (a dimly illuminating LED) and the animal learned that although an auditory stimulus (a brief, low-intensity broadband noise) would be presented periodically, responses to it would never be rewarded, and the cats learned to "ignore" it (FAP). Visual–auditory stimuli were always presented spatially coincident, but their location varied from trial to trial. The weak visual stimulus was difficult to detect and the cats' performance was 0.

14.4.3.1 TWIN Predictions for RTP

In this paradigm, both stimuli are on an equal footing and, therefore, negative SOA values need not be introduced. Each SOA value now indicates the time between the stimulus presented first and the one presented second, regardless of modality.

SOA effects. The probability of cross-modal interaction decreases with increasing SOA: the later the second stimulus is presented, the less likely it is to win the race and to open the window of integration; alternatively, if the window has already been opened by the first stimulus, the less likely it is to fall into that window with increasing SOA. For large enough SOA values, mean saccadic RT in the cross-modal condition approaches the mean for the stimulus presented first. To fix ideas, we now assume, without loss of generality, that a visual stimulus of constant intensity is presented first and that an auditory stimulus is presented second, or simultaneously with the visual, and at different intensities. Predictions then depend on the relative intensity difference between both stimuli. Note that the unimodal means constitute upper bounds for bimodal mean RT.

Intensity effects. For a visual stimulus presented first, increasing the intensity of the auditory stimulus (presented second) increases the amount of facilitation.
SOA and intensity effects predicted by a parametric TWIN version. Figure 14.1 (right panels) shows the quantitative predictions of TWIN for SOA and intensity variations under exponential distributions for the peripheral processing times. Parameters are the same as for the FAP predictions (left panels). Panels 1 and 2 show mean RT and P(I) as a function of SOA for various intensity levels (λ parameters) of the auditory stimulus. Both panels exhibit the predicted monotonicity in SOA and intensity. The third panel, depicting MRE, reveals some nonmonotonic behavior in both SOA and intensity. Without going into numerical details, this nonmonotonicity of MRE can be seen to be due to a subtle interaction between two mechanisms, both being involved in the generation of MRE: (1) statistical facilitation occurring in the first stage and (2) opening of the time window. The former is maximal if presentation of the stimulus processed faster is delayed by an SOA equal to the difference in mean RT between the unimodal stimulus conditions, that is, when the peripheral processing times are in physiological synchrony; for example, if mean RT to an auditory stimulus is 110 ms and mean RT to a visual stimulus is 150 ms, the maximal amount of statistical facilitation is expected when the auditory stimulus is presented 150 ms − 110 ms = 40 ms after the visual stimulus. The SOA value that is "optimal" for statistical facilitation, however, need not be the one producing the highest probability of opening the time window, which was shown to decrease with SOA. Moreover, the nonmonotonicity in intensity becomes plausible if one realizes that variation in intensity results in a change in mean processing time analogous to an SOA effect: for example, lowering auditory stimulus intensity has an effect on statistical facilitation and on the probability of opening the time window that is comparable to increasing SOA.
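Mechanism (1), statistical facilitation, is easy to reproduce numerically. The following sketch uses the unimodal means from the worked example above (150 and 110 ms) and exponential peripheral processing times to estimate the facilitation produced by the first stage race alone; the amount of facilitation peaks near the 40-ms SOA that brings the two processes into physiological synchrony. All values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500_000
mean_V, mean_A = 150.0, 110.0      # unimodal mean peripheral times from the example in the text (ms)

V = rng.exponential(mean_V, n)
A = rng.exponential(mean_A, n)

for soa in (0, 20, 40, 60, 80):    # auditory stimulus presented soa ms after the visual one
    race_mean = np.minimum(V, A + soa).mean()     # mean of the race winner in the first stage
    unimodal_best = min(mean_V, mean_A + soa)     # faster of the two unimodal expectations
    print(f"SOA = {soa:3d} ms   statistical facilitation = {unimodal_best - race_mean:6.1f} ms")
```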
14.4.4 Focused Attention versus RTP

Top-down versus bottom-up. The distinction between RTP and FAP is not only an interesting experimental variation as such but may also provide an important theoretical aspect. In fact, because physically identical stimuli can be presented under the same spatiotemporal configuration in both paradigms, any differences observed in the corresponding reaction times would have to be because of the instructions being different, thereby pointing to a possible separation of top-down from bottom-up processes in the underlying multisensory integration mechanism.

Probability of integration. Moreover, comparing both paradigms yields some additional insight into the mechanics of TWIN. Note that under equivalent stimulus conditions, IFAP ⊂ IRTP; this relation follows from the observation that
IFAP = IRTP ∩ {A + τ is the winner of the race}.
It means that any realization of the peripheral processing times that leads to an opening of the time window under the focused attention instruction also leads to the same event under the redundant target instruction. Thus, the probability of integration under redundant target instructions cannot be smaller than that under focused attention instructions: P(IFAP) ≤ P(IRTP), given identical stimulus conditions (see also Figure 14.1).

Inverse effectiveness. It is instructive to consider the effect of varying stimulus intensity in both paradigms when both stimuli are presented simultaneously (SOA = 0) and at intensity levels producing the same mean peripheral speed, i.e., with the same intensity parameters, λV = λA. Assuming exponential distributions, Figure 14.2 depicts the probability of integration (upper panels) and MRE (lower panels) as a function of time window width (ω) for both paradigms, with each curve representing a specific intensity level. The probability of integration increases monotonically from zero (for ω = 0) toward 0.5 for the focused attention paradigm, and toward 1 for the RTP. For the former, the probability of integration cannot surpass 0.5 because, for any given window width, the target process has the same chance of winning as the nontarget process under the given λ parameters. For both paradigms, P(I), as a function of ω, is ordered with respect to intensity level: it increases monotonically
FIGURE 14.2 TWIN predictions for FAP (left panels) and RTP (right panels) as a function of time window width (ω) at SOA = 0. Upper panels depict probability of integration P(I), whereas lower panels show MRE. Each curve corresponds to a specific intensity parameter of stimuli. Peripheral processing times for auditory and visual stimuli are 1/λA = 1/λV equal to 30 ms (dashed line), 50 ms (solid), 70 ms (dash-dotted), and 90 ms (black dotted). Mean second stage processing time is μ = 100 ms. Interaction parameter is Δ = 20 ms.
with the mean processing time of both stimuli* (upper panels of Figure 14.2). The same ordering is found for MRE in the FAP; somewhat surprisingly, however, the ordering is reversed for MRE in the RTP: increasing intensity implies less enhancement, i.e., it exhibits the "inverse effectiveness" property often reported in empirical studies (Stein and Meredith 1993; Rowland and Stein 2008). Similar to the above discussion of intensity effects for RTP, this is because of an interaction generated by increasing intensity: it weakens statistical facilitation in first stage processing but simultaneously increases the probability of integration.

* This is because of a property of the exponential distribution: mean and SD are identical.
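The ordering P(IFAP) ≤ P(IRTP) and its dependence on the window width can be reproduced directly. The sketch below assumes equal exponential peripheral processing times for the two stimuli at SOA = 0 (as in Figure 14.2) and writes the integration events in the form implied by the window assumption, with the FAP event additionally requiring the nontarget to win the race; parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500_000
mean_V = mean_A = 50.0               # equal mean peripheral times, as in one curve of Figure 14.2 (ms)

V = rng.exponential(mean_V, n)       # target (visual) peripheral time
A = rng.exponential(mean_A, n)       # nontarget/second stimulus peripheral time, SOA = 0

for omega in (50, 100, 200, 300):    # width of the time window of integration (ms)
    p_rtp = np.mean(np.abs(V - A) < omega)        # RTP: the two processes end within omega of each other
    p_fap = np.mean((A < V) & (V - A < omega))    # FAP: nontarget wins AND target ends within the window
    print(f"omega = {omega:3d} ms   P(I_RTP) = {p_rtp:.2f}   P(I_FAP) = {p_fap:.2f}")
```

With equal rates, the FAP probability is exactly half the RTP probability and levels off at 0.5, matching the limiting behavior described in the text.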
14.5 TWIN MODEL FOR FOCUSED ATTENTION: INCLUDING A WARNING MECHANISM

Although estimates for the TWIN vary somewhat across subjects and task specifics, a 200-ms width showed up in several studies (e.g., Eimer 2001; Sinclair and Hammond 2009). In a focused attention task, when the nontarget occurs at an early point in time (i.e., 200 ms or more before the target), a substantial decrease of RT compared to the unimodal condition has been observed by Diederich
and Colonius (2007a). This decrease, however, no longer depended on whether target and nontarget appeared at ipsilateral or contralateral positions, thus supporting the hypothesis that the nontarget plays the role of a spatially unspecific alerting cue, or warning signal, for the upcoming target whenever the SOA is large enough. The hypothesis of increased cross-modal processing triggered by an alerting cue had already been advanced by Nickerson (1973), who called it "preparation enhancement." In the eye movement literature, the effects of a warning signal have been studied primarily in the context of explaining the "gap effect," i.e., the latency to initiate a saccade to an eccentric target is reduced by extinguishing the fixation stimulus approximately 200 ms before target onset (Reuter-Lorenz et al. 1991; Klein and Kingston 1993). An early study on the effect of auditory or visual warning signals on saccade latency, but without considering multisensory integration effects, was conducted by Ross and Ross (1981). Here, the dual role of the nontarget—inducing multisensory integration that is governed by the above-mentioned spatiotemporal rules, on the one hand, and acting as a spatially unspecific cross-modal warning cue, on the other—will be taken into account by an extension of TWIN that yields an estimate of the relative contribution of either mechanism for any specific SOA value.

(W) Assumption on warning mechanism: If the nontarget wins the processing race in the first stage by a margin wide enough for the TWIN to be closed again before the arrival of the target, then subsequent processing will be facilitated or inhibited ("warning effect") without dependence on the spatial configuration of the stimuli.*
The time margin by which the nontarget may win against the target will be called the head start, denoted by γ. The assumption stipulates that the head start must be at least as large as the width of the time window for a warning effect to occur. That is, the warning mechanism of the nontarget is triggered whenever the nontarget wins the race by a head start γ ≥ ω ≥ 0. Taking, for concreteness, the auditory as the nontarget modality, occurrence of a warning effect corresponds to the event:
W = {A + τ + γ < V}.
The probability of warning to occur, P(W), is a function of both τ and γ. Because γ ≥ ω ≥ 0, this precludes the simultaneous occurrence of both warning and multisensory interaction within one and the same trial and, therefore, P(I ∩ W) = 0 (because no confusion can arise, we write I for IFAP throughout this section). The actual value of the head start criterion is a parameter to be estimated in fitting the model under Assumption W. The expected saccadic reaction time in the cross-modal condition in the TWIN model with the warning assumption can then be shown to be

E[RTcross-modal] = E[S1] + E[S2]
                 = E[S1] + E[S2|Ic ∩ Wc] − P(I) · {E[S2|Ic ∩ Wc] − E[S2|I]}
                                          − P(W) · {E[S2|Ic ∩ Wc] − E[S2|W]},
* In the study of Diederich and Colonius (2008), an alternative version of this assumption was considered as well (version B): If the nontarget wins the processing race in the first stage by a wide enough margin, then subsequent processing will in part be facilitated or inhibited without dependence on the spatial configuration of the stimuli. This version is less restrictive: all that is needed for the nontarget to act as a warning signal is a "large enough" head start against the target in the race, and P(I ∩ W) can be larger than 0. Assuming that the effects on RT of the two events I and W, integration and warning, combine additively, it can then be shown that the cross-modal interaction prediction of this model version is captured by the same equation as under the original version, i.e., Equation 14.17 below. The only difference is in the order restriction for the parameters, γ ≥ ω. Up to now, no empirical evidence has been found in favor of one of the two versions over the other.
where E[S2|I], E[S2|W], and E[S2|Ic ∩ Wc] denote the expected second stage processing time conditioned on interaction occurring (I), warning occurring (W), or neither of them occurring (Ic ∩ Wc), respectively (Ic, Wc stand for the complements of events I, W). Setting

Δ ≡ E[S2|Ic ∩ Wc] – E[S2|I],
κ ≡ E[S2|Ic ∩ Wc] – E[S2|W],

where κ denotes the amount of the warning effect (in milliseconds), this becomes
E[RTcross-modal] = E[S1] + E[S2|Ic ∩ Wc] – P(I) · Δ – P(W) · κ.    (14.15)
In the unimodal condition, neither integration nor warning is possible. Thus,

E[RTunimodal] = E[S1] + E[S2|Ic ∩ Wc],    (14.16)
and we arrive at a simple expression for the combined effect of multisensory integration and warning, cross-modal interaction (CI),

CI ≡ E[RTunimodal] – E[RTcross-modal] = P(I) · Δ + P(W) · κ.    (14.17)
Recall that the basic assumptions of TWIN imply that, for a given spatial configuration and nontarget modality, there are no sign reversals or changes in magnitude of Δ across all SOA values. The same holds for κ. Note, however, that Δ and κ can separately take on positive or negative values (or zero) depending on whether multisensory integration and warning have a facilitative or an inhibitory effect. Furthermore, like the probability of integration P(I), the probability of warning P(W) does change with SOA.
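For illustration only (these numbers are hypothetical, not estimates from any of the studies cited here), suppose that at some particular SOA, P(I) = 0.6, Δ = 20 ms, P(W) = 0.2, and κ = 5 ms. Equation 14.17 then gives CI = 0.6 · 20 + 0.2 · 5 = 13 ms, of which 12 ms is attributable to multisensory integration and 1 ms to the warning mechanism.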
14.5.1 TWIN Predictions for FAP with Warning

The occurrence of a warning effect depends on intramodal characteristics of the target and the nontarget, such as modality or intensity. Assuming that increasing stimulus intensity goes along with decreased reaction time (for auditory stimuli, see, e.g., Frens et al. 1995; Arndt and Colonius 2003; for tactile stimuli, see Diederich and Colonius 2004b), TWIN makes specific predictions regarding the effect of nontarget intensity variation.

Intensity effects. An intense (auditory) nontarget may have a higher chance to win the race with a head start compared to a weak nontarget. In general, increasing the intensity of the nontarget (1) increases the probability of it functioning as a warning signal, and (2) makes it more likely for the nontarget to win the peripheral race against the target process.

SOA effects. The probability of warning P(W) decreases monotonically with SOA: the later the nontarget is presented, the smaller its chances to win the race against the target with some head start γ. This differs from the nonmonotonic relationship predicted between P(IFAP) and SOA (see above). It is interesting to note that the difference in how P(I) and P(W) should depend on SOA is, in principle, empirically testable without any distributional assumptions by manipulating the conditions of the experiment. Specifically, if target and nontarget are presented in two distinct spatial conditions, for example, ipsilateral and contralateral, one would expect Δ to take on two different values, Δi and Δc, whereas P(W) · κ, the expected nonspatial warning effect, should remain the same under both conditions. Subtracting the corresponding cross-modal interaction terms then gives, after canceling the warning effect terms (Equation 14.17),
CIi – CIc = P(I) · (Δi – Δc).    (14.18)
This expression is an observable function of SOA and, because the factor Δi – Δc does not depend on SOA by Assumption B3, it should exhibit the same functional form as P(I): increasing and then decreasing (see Figure 14.1, middle left panel).

Context effects. The magnitude of the warning effect may be influenced by the experimental design. Specifically, if nontargets from different modalities are presented in two distinct presentation modes, e.g., blocking or mixing the modality of the auditory and tactile nontargets within an experimental block of trials, such that supposedly no changes in the expected amount of multisensory integration occur, then subtraction of the corresponding CI values yields, after canceling the integration effect terms,

CIblocked – CImixed = P(W) · (κmixed – κblocked),    (14.19)

a quantity that should decrease monotonically with SOA because P(W) does.

The extension of the model to include warning effects has been probed for both auditory and tactile nontargets. Concerning the warning assumptions, no clear superiority of version A over version B was found in the data.
FIGURE 14.3 TWIN predictions for FAP when only warning occurs (left panels) and when both integration and warning occur (right panels). Parameters are chosen as before: 1/λV = 50 and μ = 100, resulting in a mean RT for visual stimulus of 150 ms. Peripheral processing times for auditory stimuli are 1/λA = 10 ms (dashed line), 1/λA = 30 ms (solid), 1/λA = 70 ms (dash-dotted), and 1/λA = 90 ms (black dotted).
For detailed results on all of the tests described above, we refer the reader to Diederich and Colonius (2008).

SOA and intensity: quantitative predictions. To illustrate the predictions of TWIN with warning for mean SRT, we choose the following set of parameters. As before, the intensity parameter for the visual modality is set to 1/λV = 50 (ms) and to 1/λA = 10, 30, 70, or 90 (ms) for the (auditory) nontarget; the parameter for second stage processing time when no integration and no warning occurs, μ ≡ E[S2|Ic ∩ Wc], is set to 100 ms; and the width of the time window of integration is set to ω = 200 ms. The parameter for multisensory integration is set to Δi = 20 ms for bimodal stimuli presented ipsilaterally, and κ is set to 5 ms (Figure 14.3).
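To make these quantitative predictions concrete, here is a minimal simulation sketch (Python; not from the original chapter). It implements the two-stage logic directly: peripheral processing times are drawn from the exponential distributions assumed by TWIN, the events I (integration) and W (warning) are evaluated according to the window and head-start criteria, and the mean of the second stage is adjusted by Δ or κ as in Equation 14.15. The head-start criterion γ is not specified in this excerpt, so the value used below (250 ms) is a purely illustrative choice satisfying γ ≥ ω.

```python
import random

def mean_srt_fap(tau, mean_v=50.0, mean_a=30.0, mu=100.0, omega=200.0,
                 gamma=250.0, delta=20.0, kappa=5.0, n=200_000, seed=0):
    """Monte Carlo estimate of mean saccadic RT in the focused attention task
    with a visual target and an auditory nontarget (TWIN with warning).

    tau   : SOA of the nontarget relative to the target (ms); negative values
            mean the nontarget is presented before the target.
    gamma : head-start criterion for the warning mechanism (assumed value,
            chosen here only to satisfy gamma >= omega)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        v = rng.expovariate(1.0 / mean_v)   # peripheral processing time, target
        a = rng.expovariate(1.0 / mean_a)   # peripheral processing time, nontarget
        integration = a + tau < v < a + tau + omega   # event I
        warning = a + tau + gamma < v                  # event W (excludes I)
        # By linearity it suffices to add the conditional mean of the second stage.
        total += v + mu - delta * integration - kappa * warning
    return total / n

unimodal = 50.0 + 100.0            # E[RT_V] = 1/lambda_V + mu  (Equation 14.16)
for tau in (-300, -200, -100, 0, 100):
    cross = mean_srt_fap(tau)
    print(f"SOA = {tau:4d} ms   E[RT_cross] = {cross:6.1f} ms   "
          f"CI = {unimodal - cross:5.1f} ms")
```

The resulting cross-modal interaction values can be checked against the closed-form expressions for P(IFAP) and P(W) derived in Appendix A.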
14.6 CONCLUSIONS: OPEN QUESTIONS AND FUTURE DIRECTIONS

The main contribution of the TWIN framework thus far is to provide an estimate of the multisensory integration effect (and, for the extended model, also of a possible warning effect) that is "contaminated" neither by a specific SOA nor by intramodal stimulus properties such as intensity. This is achieved through factorizing* expected cross-modal interaction into the probability of interaction in a given trial, P(I), times the amount of interaction Δ (cf. Equation 14.2), the latter being measured in milliseconds. Some potential extensions of the TWIN framework are discussed next.

Although the functional dependence of P(I) on SOA and stimulus parameters is made explicit in the rules governing the opening and closing of the time window, the TWIN model framework as such does not stipulate a mechanism for determining the actual amount of interaction. By Assumption B4, Δ depends on cross-modal features like, for example, spatial distance between the stimuli of different modalities, and by systematically varying the spatial configuration, some insight into the functional dependence can be gained (e.g., Diederich and Colonius 2007b). Given the diversity of intersensory interaction effects, however, it would be presumptuous to aim at a single universal mechanism for predicting the amount of Δ. This does not preclude incorporating multisensory integration mechanisms into the TWIN framework within a specific context such as a spatial orienting task. Such an approach, which includes stipulating distributional properties of second stage processing time in a given situation, would bring along the possibility of a stronger quantitative model test, namely at the level of the entire observable reaction time distribution rather than at the level of means only. In line with the framework of modeling multisensory integration as (nearly) optimal decision making (Körding et al. 2007), we have recently suggested a decision rule that determines an optimal window width as a function of (1) the prior odds in favor of a common multisensory source, (2) the likelihood of arrival time differences, and (3) the payoff for making correct or wrong decisions (Colonius and Diederich 2010).

Another direction is to extend the TWIN framework to account for additional experimental paradigms. For example, in many studies, a subject's task is not simply to detect the target but to perform a speeded discrimination task between two stimuli (Driver and Spence 2004). Modeling this task implies not only a prediction of reaction time but also of the frequency of a correct or incorrect discrimination response. Traditionally, such data have been accommodated by assuming an evidence accumulation mechanism that sequentially samples information from the stimulus display, favoring either response option A or B, for example, and stops as soon as a criterion threshold for one or the other alternative has been reached. A popular subclass of these models comprises the diffusion models, which have been considered as models of multisensory integration early on (Diederich 1995, 2008). At this point, however, it is an open question how this approach can be reconciled with the TWIN framework.
* Strictly speaking, this only holds for the focused attention version of TWIN; for the redundant target version, an estimate of the amount of statistical facilitation is required and can be attained empirically (cf. Colonius and Diederich 2006).
One of the most intriguing neurophysiological findings has been the suppression of multisensory integration ability of superior colliculus neurons by a temporary suspension of corticotectal inputs from the anterior ectosylvian sulcus and the lateral suprasylvian sulcus (Clemo and Stein 1986; Jiang et al. 2001). A concomitant effect on multisensory orientation behavior observed in the cat (Jiang et al. 2002) suggests the existence of more general cortical influences on multisensory integration. Currently, there is no explicit provision of a top-down mechanism in the TWIN framework. Note, however, that the influence of task instruction (FAP vs. RTP) is implicitly incorporated in TWIN because the probability of integration is supposed to be computed differently under otherwise identical stimulus conditions (cf. Section 14.4.4). It is a challenge for future development to demonstrate that the explicit incorporation of top-down processes can be reconciled with the two-stage structure of the TWIN framework.
APPENDIX A

A.1 DERIVING THE PROBABILITY OF INTERACTION IN TWIN

The peripheral processing times V for the visual and A for the auditory stimulus have an exponential distribution with parameters λV and λA, respectively. That is,

$$ f_V(t) = \lambda_V e^{-\lambda_V t}, \qquad f_A(t) = \lambda_A e^{-\lambda_A t} $$

for t ≥ 0, and fV(t) = fA(t) ≡ 0 for t < 0. The corresponding distribution functions are referred to as FV(t) and FA(t).
A.1.1 Focused Attention Paradigm

The visual stimulus is the target and the auditory stimulus is the nontarget. By definition,

$$
P(I_{\mathrm{FAP}}) = \Pr(A + \tau < V < A + \tau + \omega)
= \int_0^{\infty} f_A(x)\,\bigl\{F_V(x+\tau+\omega) - F_V(x+\tau)\bigr\}\,\mathrm{d}x,
$$

where τ denotes the SOA value and ω is the width of the integration window. Computing the integral expression requires that we distinguish between three cases for the signs of τ and τ + ω.

(1) τ < τ + ω < 0:

$$
P(I_{\mathrm{FAP}})
= \int_{-\tau-\omega}^{-\tau} \lambda_A e^{-\lambda_A x}\bigl\{1 - e^{-\lambda_V(x+\tau+\omega)}\bigr\}\,\mathrm{d}x
+ \int_{-\tau}^{\infty} \lambda_A e^{-\lambda_A x}\bigl\{e^{-\lambda_V(x+\tau)} - e^{-\lambda_V(x+\tau+\omega)}\bigr\}\,\mathrm{d}x
= \frac{\lambda_V}{\lambda_V+\lambda_A}\, e^{\lambda_A\tau}\bigl(-1 + e^{\lambda_A\omega}\bigr);
$$

(2) τ < 0 < τ + ω:

$$
P(I_{\mathrm{FAP}})
= \int_0^{-\tau} \lambda_A e^{-\lambda_A x}\bigl\{1 - e^{-\lambda_V(x+\tau+\omega)}\bigr\}\,\mathrm{d}x
+ \int_{-\tau}^{\infty} \lambda_A e^{-\lambda_A x}\bigl\{e^{-\lambda_V(x+\tau)} - e^{-\lambda_V(x+\tau+\omega)}\bigr\}\,\mathrm{d}x
= \frac{1}{\lambda_V+\lambda_A}\Bigl\{\lambda_A\bigl(1 - e^{-\lambda_V(\omega+\tau)}\bigr) + \lambda_V\bigl(1 - e^{\lambda_A\tau}\bigr)\Bigr\};
$$

(3) 0 < τ < τ + ω:

$$
P(I_{\mathrm{FAP}})
= \int_0^{\infty} \lambda_A e^{-\lambda_A x}\bigl\{e^{-\lambda_V(x+\tau)} - e^{-\lambda_V(x+\tau+\omega)}\bigr\}\,\mathrm{d}x
= \frac{\lambda_A}{\lambda_V+\lambda_A}\Bigl\{e^{-\lambda_V\tau} - e^{-\lambda_V(\omega+\tau)}\Bigr\}.
$$

The mean RT for cross-modal stimuli is

$$
E[\mathrm{RT}_{VA,\tau}] = E[V] + E[S_2 \mid I^{c}_{\mathrm{FAP}}] - P(I_{\mathrm{FAP}})\cdot\Delta
= \frac{1}{\lambda_V} + \mu - P(I_{\mathrm{FAP}})\cdot\Delta,
$$

and the mean RT for the visual target is

$$
E[\mathrm{RT}_V] = \frac{1}{\lambda_V} + \mu,
$$

where 1/λV, the mean of the exponential distribution, is the mean RT of the first stage and μ is the mean RT of the second stage when no interaction occurs.
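As an informal numerical check of these closed-form results (not part of the original appendix), the following sketch compares the analytic expression with a direct Monte Carlo estimate of Pr(A + τ < V < A + τ + ω); the parameter values below are arbitrary.

```python
import random
from math import exp

def p_ifap_closed(tau, lam_v, lam_a, omega):
    """Three-case closed-form solution for P(I_FAP) derived above."""
    s = lam_v + lam_a
    if tau + omega < 0:                      # case (1)
        return lam_v / s * (exp(lam_a * (tau + omega)) - exp(lam_a * tau))
    if tau < 0:                              # case (2)
        return (lam_a * (1.0 - exp(-lam_v * (omega + tau)))
                + lam_v * (1.0 - exp(lam_a * tau))) / s
    return lam_a / s * (exp(-lam_v * tau) - exp(-lam_v * (omega + tau)))  # case (3)

def p_ifap_mc(tau, lam_v, lam_a, omega, n=500_000, seed=1):
    """Monte Carlo estimate of Pr(A + tau < V < A + tau + omega)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        v = rng.expovariate(lam_v)
        a = rng.expovariate(lam_a)
        if a + tau < v < a + tau + omega:
            hits += 1
    return hits / n

lam_v, lam_a, omega = 1 / 50, 1 / 30, 200.0    # arbitrary rates (1/ms) and window (ms)
for tau in (-300.0, -100.0, 50.0):             # one SOA value per case
    print(tau, round(p_ifap_closed(tau, lam_v, lam_a, omega), 4),
          round(p_ifap_mc(tau, lam_v, lam_a, omega), 4))
```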
A.1.2 Redundant Target Paradigm

The visual stimulus is presented first and the auditory stimulus second. By definition,

$$
P(I_{\mathrm{RTP}}) = \Pr\bigl\{\max(V, A+\tau) < \min(V, A+\tau) + \omega\bigr\}.
$$

If the visual stimulus wins the race, two cases arise.

(1) 0 ≤ τ ≤ ω:

$$
P(I_{\mathrm{RTP}_V})
= \int_0^{\tau} \lambda_V e^{-\lambda_V x}\bigl(1 - e^{-\lambda_A(x+\omega-\tau)}\bigr)\,\mathrm{d}x
+ \int_{\tau}^{\infty} \lambda_V e^{-\lambda_V x}\Bigl\{\bigl(1 - e^{-\lambda_A(x+\omega-\tau)}\bigr) - \bigl(1 - e^{-\lambda_A(x-\tau)}\bigr)\Bigr\}\,\mathrm{d}x
= \frac{1}{\lambda_V+\lambda_A}\Bigl\{\lambda_V\bigl(1 - e^{\lambda_A(-\omega+\tau)}\bigr) + \lambda_A\bigl(1 - e^{-\lambda_V\tau}\bigr)\Bigr\};
$$

(2) 0 < ω ≤ τ:

$$
P(I_{\mathrm{RTP}_V})
= \int_{\tau-\omega}^{\tau} \lambda_V e^{-\lambda_V x}\bigl(1 - e^{-\lambda_A(x+\omega-\tau)}\bigr)\,\mathrm{d}x
+ \int_{\tau}^{\infty} \lambda_V e^{-\lambda_V x}\Bigl\{\bigl(1 - e^{-\lambda_A(x+\omega-\tau)}\bigr) - \bigl(1 - e^{-\lambda_A(x-\tau)}\bigr)\Bigr\}\,\mathrm{d}x
= \frac{\lambda_A}{\lambda_V+\lambda_A}\, e^{-\lambda_V\tau}\bigl(-1 + e^{\lambda_V\omega}\bigr).
$$

If the auditory stimulus wins the race, then 0 < τ ≤ τ + ω and

$$
P(I_{\mathrm{RTP}_A})
= \int_0^{\infty} \lambda_A e^{-\lambda_A x}\bigl\{e^{-\lambda_V(x+\tau)} - e^{-\lambda_V(x+\tau+\omega)}\bigr\}\,\mathrm{d}x
= \frac{\lambda_A}{\lambda_V+\lambda_A}\bigl\{e^{-\lambda_V\tau} - e^{-\lambda_V(\omega+\tau)}\bigr\}.
$$

The probability of interaction, with either the visual or the auditory stimulus winning the race, is therefore

$$
P(I_{\mathrm{RTP}}) = P(I_{\mathrm{RTP}_V}) + P(I_{\mathrm{RTP}_A}).
$$

The mean RT for cross-modal stimuli is

$$
E[\mathrm{RT}_{VA,\tau}] = E[\min(V, A+\tau)] + E[S_2 \mid I^{c}_{\mathrm{RTP}}] - P(I_{\mathrm{RTP}})\cdot\Delta
= \frac{1}{\lambda_V} - e^{-\lambda_V\tau}\left(\frac{1}{\lambda_V} - \frac{1}{\lambda_V+\lambda_A}\right) + \mu - P(I_{\mathrm{RTP}})\cdot\Delta,
$$

and the mean RTs for the unimodal visual and auditory stimuli are

$$
E[\mathrm{RT}_V] = \frac{1}{\lambda_V} + \mu
\qquad\text{and}\qquad
E[\mathrm{RT}_A] = \frac{1}{\lambda_A} + \mu.
$$
A.1.3 Focused Attention and Warning

By definition,

$$
P(W) = \Pr(A + \tau + \gamma_A < V)
= \int_0^{\infty} f_A(x)\bigl\{1 - F_V(x+\tau+\gamma_A)\bigr\}\,\mathrm{d}x
= 1 - \int_0^{\infty} f_A(x)\,F_V(x+\tau+\gamma_A)\,\mathrm{d}x.
$$

Again, we need to consider different cases.

(1) τ + γA < 0:

$$
P(W) = 1 - \int_{-\tau-\gamma_A}^{\infty} \lambda_A e^{-\lambda_A x}\bigl\{1 - e^{-\lambda_V(x+\tau+\gamma_A)}\bigr\}\,\mathrm{d}x
= 1 - \frac{\lambda_V}{\lambda_V+\lambda_A}\, e^{\lambda_A(\tau+\gamma_A)};
$$

(2) τ + γA ≥ 0:

$$
P(W) = 1 - \int_0^{\infty} \lambda_A e^{-\lambda_A x}\bigl\{1 - e^{-\lambda_V(x+\tau+\gamma_A)}\bigr\}\,\mathrm{d}x
= \frac{\lambda_A}{\lambda_V+\lambda_A}\, e^{-\lambda_V(\tau+\gamma_A)}.
$$

The mean RT for cross-modal stimuli is

$$
E[\mathrm{RT}_{VA,\tau}] = E[V] + E[S_2 \mid I^{c}_{\mathrm{FAP}} \cap W^{c}] - P(I_{\mathrm{FAP}})\cdot\Delta - P(W)\cdot\kappa
= \frac{1}{\lambda_V} + \mu - P(I_{\mathrm{FAP}})\cdot\Delta - P(W)\cdot\kappa,
$$

where 1/λV is the mean RT of the first stage, μ is the mean RT of the second stage when no interaction occurs, P(IFAP) · Δ is the expected amount of intersensory interaction, and P(W) · κ is the expected amount of warning.
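A brief sanity check (again, not part of the original appendix): the two cases agree at the boundary τ + γA = 0, where both reduce to

$$
P(W)\big|_{\tau+\gamma_A=0} \;=\; \frac{\lambda_A}{\lambda_V+\lambda_A},
$$

which is simply the probability that the nontarget's peripheral process finishes before the target's when both start at the same time.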
REFERENCES Amlôt, R., R. Walker, J. Driver, and C. Spence. 2003. Multimodal visual-somatosensory integration in saccade generation. Neuropsychologia 41:1–15. Anastasio, T.J., P.E. Patton, and K. Belkacem-Boussaid. 2000. Using Bayes’ rule to model multisensory enhancement in the superior colliculus. Neural Computation 12:1165–1187. Arndt, A., and H. Colonius. 2003. Two separate stages in crossmodal saccadic integration: Evidence from varying intensity of an auditory accessory stimulus. Experimental Brain Research 150:417–426. Bell, A.H., A. Meredith, A.J. Van Opstal, and D.P. Munoz. 2005. Crossmodal integration in the primate superior colliculus underlying the preparation and initiation of saccadic eye movements. Journal of Neurophysiology 93:3659–3673. Clemo, H.R., and B.E. Stein. 1986. Effects of cooling somatosensory corticotectal influences in cat. Journal of Neurophysiology 55:1352–1368. Colonius, H., and P. Arndt. 2001. A two-stage model for visual-auditory interaction in saccadic latencies. Perception & Psychophysics, 63:126–147. Colonius, H., and A. Diederich. 2002. A maximum-likelihood approach to modeling multisensory enhancement. In Advances in Neural Information Processing Systems 14, T.G. Ditterich, S. Becker, and Z. Ghahramani (eds.). Cambridge, MA: MIT Press. Colonius, H., and A. Diederich. 2004. Multisensory interaction in saccadic reaction time: A time-window-ofintegration model. Journal of Cognitive Neuroscience 16:1000–1009. Colonius, H., and A. Diederich. 2006. Race model inequality: Interpreting a geometric measure of the amount of violation. Psychological Review 113(1):148–154. Colonius, H., and A. Diederich. 2010. The optimal time window of visual–auditory integration: A reaction time analysis. Frontiers in Integrative Neuroscience, 4:11. doi:10.3389/fnint.2010.00011.
Colonius, H., and D. Vorberg. 1994. Distribution inequalities for parallel models with unlimited capacity. Journal of Mathematical Psychology 38:35–58. Colonius, H., A. Diederich, and R. Steenken. 2009. Time-window-of-integration (TWIN) model for saccadic reaction time: Effect of auditory masker level on visual-auditory spatial interaction in elevation. Brain Topography 21:177–184. Corneil, B.D., and D.P. Munoz. 1996. The influence of auditory and visual distractors on human orienting gaze shifts. Journal of Neuroscience 16:8193–8207. Corneil, B.D., M. Van Wanrooij, D.P. Munoz, A.J. Van Opstal. 2002. Auditory-visual interactions subserving goal-directed saccades in a complex scene. Journal of Neurophysiology 88:438–454. Diederich, A. 1995. Intersensory facilitation of reaction time: Evaluation of counter and diffusion coactivation models. Journal of Mathematical Psychology 39:197–215. Diederich, A. 2008. A further test on sequential sampling models accounting for payoff effects on response bias in perceptual decision tasks. Perception & Psychophysics 70(2):229–256. Diederich, A., and H. Colonius. 2004a. Modeling the time course of multisensory interaction in manual and saccadic responses. In Handbook of multisensory processes, ed. G. Calvert, C. Spence, and B.E. Stein, 395–408. Cambridge, MA: MIT Press. Diederich, A., and H. Colonius. 2004b. Bimodal and trimodal multisensory enhancement: Effects of stimulus onset and intensity on reaction time. Perception & Psychophysics 66(8):1388–1404. Diederich, A., and H. Colonius. 2007a. Why two “distractors” are better than one: Modeling the effect of nontarget auditory and tactile stimuli on visual saccadic reaction time. Experimental Brain Research 179:43–54. Diederich, A., and H. Colonius. 2007b. Modeling spatial effects in visual–tactile saccadic reaction time. Perception & Psychophysics 69(1):56–67. Diederich, A., and H. Colonius. 2008. Crossmodal interaction in saccadic reaction time: Separating multisensory from warning effects in the time window of integration model. Experimental Brain Research 186:1–22. Diederich, A., H. Colonius, D. Bockhorst, and S. Tabeling. 2003. Visual–tactile spatial interaction in saccade generation. Experimental Brain Research 148:328–337. Diederich, A., H. Colonius, and A. Schomburg. 2008. Assessing age-related multisensory enhancement with the time-window-of-integration model. Neuropsychologia 46:2556–2562. Doyle, M.C., and R. Walker. 2002. Multisensory interactions in saccade target selection: Curved saccade trajectories Experimental Brain Research 142:116–130. Driver, J., and C. Spence. 2004. Crossmodal spatial attention: Evidence from human performance. In Crossmodal space and crossmodal attention, ed. C. Spence and J. Driver, 179–220. Oxford: Oxford Univ. Press. Eimer, M. 2001. Crossmodal links in spatial attention between vision, audition, and touch: Evidence from event-related brain potentials. Neuropsychologia 39:1292–1303. Frens, M.A., A.J. Van Opstal, and R.F. Van der Willigen. 1995. Spatial and temporal factors determine auditory– visual interactions in human saccadic eye movements. Perception & Psychophysics 57:802–816. Harrington, L.K., and C.K. Peck. 1998. Spatial disparity affects visual–auditory interactions in human sensorimotor processing. Experimental Brain Research 122:247–252. Hershenson, M. 1962. Reaction time as a measure of intersensory facilitation. Journal of Experimental Psychology 63:289–293. Hughes, H.C., P.-A. Reuter-Lorenz, G. Nozawa, and R. Fendrich. 1994. 
Visual–auditory interactions in sensorimotor processing: Saccades versus manual responses. Journal of Experimental Psychology: Human Perception and Performance 20:131–153. Hughes, H.C., M.D. Nelson, and D.M. Aronchick. 1998. Spatial characteristics of visual–auditory summation in human saccades. Vision Research 38:3955–3963. Jiang, W., M.T. Wallace, H. Jiang, J.W. Vaughan, and B.E. Stein. 2001. Two cortical areas mediate multisensory integration in superior colliculus neurons. Journal of Neurophysiology 85:506–522. Jiang, W., H. Jiang, and B.E. Stein. 2002. Two cortical areas facilitate multisensory orientation behaviour. Journal of Cognitive Neuroscience 14:1240–1255. Körding, K.P., U. Beierholm, W.J. Ma, S. Quartz, J.B. Tenenbaum et al. 2007. Causal inference in multisensory perception. PLoS ONE 2(9):e943, doi:10.1371/journal.pone.0000943. Klein, R., and A. Kingstone. 1993. Why do visual offsets reduce saccadic latencies? Behavioral and Brain Sciences 16(3):583–584. Luce, R.D. 1986. Response times: Their role in inferring elementary mental organization. New York: Oxford Univ. Press. Meredith, M.A. 2002. On the neural basis for multisensory convergence: A brief overview. Cognitive Brain Research 14:31–40.
Meredith, M.A., and B.E. Stein. 1986. Visual, auditory, and somatosensory convergence on cells in superior colliculus results in multisensory integration. Journal of Neurophysiology 56:640–662. Meredith, M.A., J.W. Nemitz, and B.E. Stein. 1987. Determinants of multisensory integration in superior colliculus neurons. I. Temporal factors. Journal of Neuroscience 10:3215–3229. Miller, J.O. 1982. Divided attention: Evidence for coactivation with redundant signals. Cognitive Psychology 14:247–279. Munoz, D.P., and R. H. Wurtz. 1995. Saccade-related activity in monkey superior colliculus. I. Characteristics of burst and buildup cells. Journal of Neurophysiology 73:2313–2333. Navarra, J., A. Vatakis, M. Zampini, S. Soto-Faraco, W. Humphreys, and C. Spence. 2005. Exposure to asynchronous audiovisual speech extends the temporal window for audiovisual integration. Cognitive Brain Research 25:499–507. Nickerson, R.S. 1973. Intersensory facilitation of reaction time: Energy summation or preparation enhancement. Psychological Review 80:489–509. Raab, D.H. 1962. Statistical facilitation of simple reaction times. Transactions of the New York Academy of Science 24:574–590. Reuter-Lorenz, P.A., H.C. Hughes, and R. Fendrich. 1991. The reduction of saccadic latency by prior offset of the fixation point: An analysis of the gap effect. Perception & Psychophysics 49(2):167–175. Ross, S.M., and L.E. Ross. 1981. Saccade latency and warning signals: Effects of auditory and visual stimulus onset and offset. Perception & Psychophysics 29(5):429–437. Rowland, B.A., and B.E. Stein. 2008. Temporal profiles of response enhancement in multisensory integration. Frontiers in Neuroscience 2:218–224. Schweickert, R., D.L. Fisher, and K. Sung. Discovering Cognitive Architecture by Selectively Influencing Mental Processes. London: World Scientific Publishing (in press). Sinclair, C., and G.R. Hammond. 2009. Excitatory and inhibitory processes in primary motor cortex during the foreperiod of a warned reaction time task are unrelated to response expectancy. Experimental Brain Research 194:103–113. Spence, C., and S. Squire. 2003. Multisensory integration: Maintaining the perception of synchrony. Current Biology 13:R519–R521. Stein, B.E., and Meredith M.A. 1993. The Merging of the Senses. Cambridge, MA: MIT Press. Stein, B.E., W.S. Huneycutt, and M.A. Meredith. 1988. Neurons and behavior: The same rules of multisensory integration apply. Brain Research 448:355–358. Stein, B.E., W. Jiang, and T.R. Stanford. 2004. Multisensory integration in single neurons in the midbrain. In Handbook of multisensory processes, ed. G. Calvert, C. Spence, and B.E. Stein, 243–264. Cambridge, MA: MIT Press. Sternberg, S. 2001. Separate modifiability, mental modules, and the use of pure and composite measures to reveal them. Acta Psychologica 106:147–246. Todd, J.W. 1912. Reaction to multiple stimuli, in Archives of Psychology, No. 25. Columbia contributions to philosophy and psychology, ed. R.S. Woodworth, Vol. XXI, No. 8, New York: The Science Press. Townsend, J.T., and G. Nozawa. 1995. Spatio-temporal properties of elementary perception: An investigation of parallel, serial, and coactive theories. Journal of Mathematical Psychology 39:321–359. Van Opstal, A.J., and D.P. Munoz. 2004. Auditory–visual interactions subserving primate gaze orienting. In Handbook of multisensory processes, ed. G. Calvert, C. Spence, and B.E. Stein, 373–393. Cambridge, MA: MIT Press. Van Wassenhove, V., K.W. Grant, and D. Poeppel. 2007. 
Temporal window of integration in auditory–visual speech perception. Neuropsychologia 45:598–607. Van Zandt, T. 2002. Analysis of response time distributions. In Stevens’ handbook of experimental psychology, vol. 4, 3rd edn, ed. H. Pashler. New York: Wiley & Sons, Inc. Whitchurch, E.A., and T.T. Takahashi. 2006. Combined auditory and visual stimuli facilitate head saccades in the barn owl (Tyto alba). Journal of Neurophysiology 96:730–745.
Section IV Development and Plasticity
15 The Organization and Plasticity of Multisensory Integration in the Midbrain

Thomas J. Perrault Jr., Benjamin A. Rowland, and Barry E. Stein
CONTENTS

15.1 Impact of Multisensory Integration
15.2 Organization of Multisensory Organization in Adult SC
15.3 SC Multisensory Integration Depends on Influences from Cortex
15.4 Ontogeny of SC Multisensory Integration
    15.4.1 Impact of Developing in Absence of Visual–Nonvisual Experience
    15.4.2 Altering Early Experience with Cross-Modal Cues by Changing Their Spatial Relationships
    15.4.3 Role of Cortical Inputs during Maturation
    15.4.4 Ontogeny of Multisensory Integration in Cortex
    15.4.5 Ontogeny of SC Multisensory Integration in a Primate
Acknowledgments
References

A great deal of attention has been paid to the physiological processes through which the brain integrates information from different senses. This reflects the substantial impact of this process on perception, cognitive decisions, and overt behavior. Yet, less attention has been given to the postnatal development, organization, and plasticity associated with this process. In the present chapter we examine what is known about the normal development of multisensory integration and how early alterations in postnatal experience disrupt, change, and dramatically alter the fundamental properties of multisensory integration. The focus here is on the multisensory layers of the cat superior colliculus (SC), a system that has served as an excellent model for understanding multisensory integration at the level of the single neuron and at the level of overt orientation behavior. Before discussing this structure's normal development and its capacity to change, it is important to examine what has been learned about multisensory integration and the functional role of the SC in this process.
15.1 IMPACT OF MULTISENSORY INTEGRATION

The ability of the brain to integrate information from different sources speeds and enhances its ability to detect, locate, and identify external events as well as the higher-order and behavioral processes necessary to deal with these events (Corneil and Munoz 1996; Frens et al. 1995a; Hughes et al. 1994; Marks 2004; Newell 2004; Sathian et al. 2004; Shams et al. 2004; Stein et al. 1989; Stein and Meredith 1993; Woods et al. 2004). All brains engage in this process of multisensory integration, and do so at multiple sites within the nervous system (Calvert et al. 2004a). The proper identification of an event includes the ability to disambiguate potentially confusing signals, including those associated with speech and animal communication (Bernstein et al. 2004; Busse et al. 2005; Corneil
and Munoz 1996; Frens et al. 1995b; Ghazanfar et al. 2005; Ghazanfar and Schroeder 2006; Grant et al. 2000; Hughes et al. 1994; King and Palmer 1985; Lakatos et al. 2007; Liotti et al. 1998; Marks 2004; Massaro 2004; Newell 2004; Partan 2004; Recanzone 1998; Sathian 2000, 2005; Sathian et al. 2004; Schroeder and Foxe 2004; Senkowski et al. 2007; Shams et al. 2004; Stein et al. 1989; Sugihara et al. 2006; Sumby and Pollack 1954; Talsma et al. 2006, 2007; Wallace et al. 1996; Weisser et al. 2005; Woldorff et al. 2004; Woods and Recanzone 2004a, 2004b; Zangaladze et al. 1999). The facilitation of these capabilities has enormous survival value, so its retention and elaboration in all extant species is no surprise. What is surprising is that despite the frequent discussion of this phenomenon in adults (see Calvert et al. 2004b; Ghazanfar and Schroeder 2006; Spence and Driver 2004; Stein and Meredith 1993), there is much less effort directed to understanding how this process develops, and how it adapts to the environment in which it will be used. The multisensory neuron in the cat SC is an excellent model system to explore the organization and plasticity of multisensory integration. This is because it is not only the primary site of converging inputs from different senses (Fuentes-Santamaria et al. 2008; Stein et al. 1993; Wallace et al. 1993), but because it is involved in well-defined behaviors (orientation and localization), thereby providing an opportunity to relate physiology to behavior. Furthermore, we already know a good deal about the normal development of the unisensory properties of SC neurons (Kao et al. 1994; Stein 1984) and SC neurons have been one of the richest sources of information about the ontogeny and organization of multisensory integration (Barth and Brett-Green 2004; Calvert et al. 2004b; Groh and Sparks 1996a, 1996b; Gutfreund and Knudsen 2004; Jay et al. 1987a, 1987b; King et al. 2004; Lakatos et al. 2007; Peck 1987b; Sathian et al. 2004; Senkowski et al. 2007; Stein 1984; Stein and Arigbede 1972; Stein and Clamann 1981; Stein and Meredith 1993; Stein et al. 1973, 1976, 1993; Wallace 2004; Woods et al. 2004a). Of the most interest in the present context are two experimental observations. The first is that influences from the cortex are critical for the maturation of SC multisensory integration, the second is that experience during early postnatal life guides the nature of that integrative process. These are likely to be interrelated observations given the well-known plasticity of neonatal cortex. One reasonable possibility is that experience is coded in the cortex and in the morphology and functional properties of its connections with the SC.
15.2 ORGANIZATION OF MULTISENSORY ORGANIZATION IN ADULT SC Traditionally, the seven-layered structure of the SC has been subdivided into two functional sets of laminae: the superficial laminae (I–III) are exclusively visual, and the deeper laminae (IV–VII) contain unisensory (visual, auditory, and somatosensory) and multisensory neurons of all possible combinations (Stein and Meredith 1993). Visual, auditory, and somatosensory representations in the SC are all arranged in a similar map-like fashion so that they are all in register with each other (see Figure 15.1; Meredith and Stein 1990; Meredith et al. 1991; Middlebrooks and Knudsen 1984; Stein and Clamann 1981; Stein et al. 1976, 1993). The frontal regions of sensory space (forward visual and auditory space, and the face), are represented in the anterior aspect of the structure, whereas more temporal space (and the rear of the body) are represented in the posterior SC. Superior sensory space is represented in the medial aspect of the structure, and inferior space in the more lateral aspect of the structure. As a consequence, the neurons in a given region of the SC represent the same region of sensory space. These sensory maps are in register with the premotor map in the SC. This is a convenient way of matching incoming sensory information with the outgoing signals that program an orientation to the initiating event (Grantyn and Grantyn 1982; Groh et al. 1996a, 1996b; Guitton and Munoz 1991; Harris 1980; Jay and Sparks 1984, 1987a, 1987b; Munoz and Wurtz 1993a, 1993b; Peck 1987b; Sparks 1986; Sparks and Nelson 1987; Stein and Clamann 1981; Wurtz and Goldberg 1971; Wurtz and Albano 1980). Each multisensory SC neuron has multiple receptive fields, one for each of the modalities to which it responds. As would be expected from the structure’s map-like representations of the senses,
281
The Organization and Plasticity of Multisensory Integration in the Midbrain
Nas al
Nasal Temporal
ora
l
Sup erio r Infe rior
Medial
Caudal
Multisensory
Face
r erio Sup rior e Inf
Body Dor sal Ven tral
Tem p
Visual Auditory
Somatosensory
FIGURE 15.1 Correspondence of visual, auditory, and somatosensory representations in SC. Horizontal and vertical meridians of different sensory representations in SC suggest a common coordinate system representing multisensory space. (From Stein, B.E., and Meredith, M.A., The merging of the senses, MIT Press, Cambridge, 1993. With permission.)
these receptive fields are in spatial coincidence with each other (King et al. 1996; Meredith and Stein 1990; Meredith et al. 1991, 1992). Cross-modal stimuli that are in spatial and temporal coincidence with one another and fall within the excitatory receptive fields of a given neuron function synergistically. They elicit more vigorous responses (more impulses) than are evoked by the strongest of them individually. This is called “multisensory enhancement” and is illustrated in Figure 15.2. However, when these same stimuli are disparate in space, such that one falls within its excitatory receptive Response enhancement
Response depression
Auditory RF
S
94% 100
Sum
50
4
I
N Ao
V Ai
2 0
V Ai VAi
0
Mean impulses
Mean impulses
6
*
100 10 8 6 4 2 0 Visual RF
50
Sum *
V AoVAo
–47%
% Interaction
8
% Interaction
10
0
50
FIGURE 15.2 Multisensory enhancement and depression. Middle: visual (dark gray) and auditory (light gray) receptive fields (RF) of this SC neuron are plotted on hemispheres representing visual and auditory space. Each concentric circle represents 10° of space with right caudal aspect of auditory space represented by the half hemisphere. White bar labeled V represents a moving visual stimulus, whereas speakers labeled A0 and Ai represent auditory stimuli. Left: response enhancement occurred when visual and auditory stimuli were placed in spatial congruence (VAi). Note, in plot to the left, multisensory response exceeded sum of visual and auditory responses (horizontal dotted line) and was 94% greater than response to the most effective component stimulus (visual). Right: response depression occurred when visual and auditory stimuli were spatially disparate (VA0) so that multisensory response was 47% less than response to visual stimulus.
282
The Neural Bases of Multisensory Processes
field and the other falls within the inhibitory portion of its receptive field, the result is “multisensory depression.” Now the response consists of fewer impulses than that evoked by the most effective individual component stimulus. This ubiquitous phenomenon of enhancement and depression has been described in the SC and cortex for a number of organisms ranging from the rat to the human (Barth and Brett-Green 2004; Calvert et al. 2004b; DeGelder et al. 2004; Fort and Giard 2004; Ghazanfar and Schroeder 2006; King and Palmer 1985; Lakatos et al. 2007; Laurienti et al. 2002; Lovelace et al. 2003; Macaluso and Driver 2004; Meredith and Stein 1983, 1986a, 1986b, 1996; Morgan et al. 2008; Romanski 2007; Sathian et al. 2004; Schroeder et al. 2001; Schroeder and Foxe 2002, 2004; Wallace and Stein 1994; Wallace et al. 1992, 1993, 1998, 2004b). The clearest indicator that a neuron can engage in multisensory integration is its ability to show multisensory enhancement because multisensory depression occurs only in a subset of neurons that show multisensory enhancement (Kadunce et al. 2001). The magnitude of response enhancement will vary dramatically, both among neurons across the population as well as within a particular neuron throughout its dynamic range. This variation is in part due to differences in responses to different cross-modal stimulus combinations. When spatiotemporally aligned cross-modal stimuli are poorly effective, multisensory response enhancement magnitudes are often proportionately greater than those elicited when stimuli are robustly effective. Single neurons have demonstrated that multisensory responses are capable of exceeding predictions based on the simple addition of the two unisensory responses. These superadditive interactions generally occur at the lower end of a given neuron’s dynamic range and as stimulus effectiveness increases, multisensory responses tend to exhibit more additive or subadditive interactions (Alvarado et al. 2007b; Perrault et al. 2003, 2005; Stanford and Stein 2007; Stanford et al. 2005), a series of transitions that are consistent with the concept of “inverse effectiveness” (Meredith and Stein 1986b), in which the product of an enhanced multisensory interaction is proportionately largest when the effectiveness of the cross-modal stimuli are weakest. Consequently, the proportionate benefits that accrue to performance based on this neural process will also be greatest. This makes intuitive sense because highly effective cues are generally easiest to detect, locate, and identify. Using the same logic, the enhanced magnitude of a multisensory response is likely to be proportionately largest at its onset, because it is at this point when the individual component responses would be just beginning, and thus, weakest. Recent data suggests this is indeed the case (Rowland et al. 2007a, 2007b; see Figure 15.3). This is of substantial interest because it means that individual responses often, if not always, involve multiple underlying computations: superadditivity at their onset and additivity (and perhaps subadditivity) as the response evolves. In short, the superadditive multisensory computation may be far more common than previously thought, rendering the initial portion of the response of far greater impact than would otherwise be the case and markedly increasing its likely role in the detection and localization of an event. 
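For readers who want to quantify enhancement in their own data, the following minimal sketch (Python; the response values are invented for illustration) computes the conventional index used in this literature, i.e., the percentage difference between the multisensory response and the response to the most effective component stimulus, together with a comparison against the sum of the two unisensory responses (the additive prediction).

```python
def enhancement_index(multisensory, best_unisensory):
    """Percent enhancement (positive) or depression (negative) relative to the
    most effective unisensory response, the convention used for the
    percentages reported in Figure 15.2."""
    return (multisensory - best_unisensory) / best_unisensory * 100.0

# Hypothetical mean impulse counts per trial (not data from this chapter)
visual, auditory, combined = 4.0, 2.5, 7.8

index = enhancement_index(combined, max(visual, auditory))
excess_over_sum = combined - (visual + auditory)   # > 0: superadditive, < 0: subadditive

print(f"multisensory enhancement: {index:+.0f}%")              # +95% for these numbers
print(f"difference from additive prediction: {excess_over_sum:+.1f} impulses")
```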
Regarding computational modes, one should be cautious when interpreting multisensory response enhancements from pooled samples of neurons. As noted earlier, the underlying computation varies among neurons as a result of their inherent properties and the specific features of the cross-modal stimuli with which they are evaluated. Many of the studies cited above yielded significant population enhancements that appear “additive,” yet one cannot conclude from these data that this was their default computation (e.g., Alvarado et al. 2007b; Perrault et al. 2005; Stanford et al. 2005). This is because they were examined with a battery of stimuli whose individual efficacies were disproportionately high. Because of inverse effectiveness, combinations of such stimuli would, of course, be expected to produce less robust enhancement and a high incidence of additivity (Stanford and Stein 2007). If those same neurons were tested with minimally effective stimuli exclusively, the incidence of superadditivity would have been much higher. Furthermore, most neurons, regardless of the computation that best describes their averaged response, exhibit superadditive computations at their onset, when activity is weakest (Rowland and Stein 2007). It is important to consider that this initial portion of a multisensory response may have the greatest impact on behavior (Rowland et al. 2007a).
FIGURE 15.3 Temporal profile of multisensory enhancement. Left: impulse rasters illustrating responses of a multisensory SC neuron to visual (V), auditory (A), and combined visual–auditory (VA) stimulation. Right: two different measures of response show the same basic principle of “initial response enhancement.” Multisensory responses are enhanced from their very onset and have shorter latencies than either of individual unisensory responses. Upper right: measure is mean stimulus-driven cumulative impulse count (qsum), reflecting temporal evolution of enhanced response. Bottom right: an instantaneous measure of response efficacy using event estimates. Event estimates use an appropriate kernel function that convolves impulse spike trains into spike density functions that differentiate spontaneous activity from stimulus-driven activity using a mutual information measure. Spontaneous activity was then subtracted from stimulus-driven activity and a temporal profile of multisensory integration was observed. (From Rowland, B.A., and Stein, B.E., Frontiers in Neuroscience, 2, 218–224, 2008. With permission.)
This process of integrating information from different senses is computationally distinct from the integration of information within a sense. This is likely to be the case, in large part, because the multiple cues in the former provide independent estimates of the same initiating event whereas the multiple cues in the latter contain substantial noise covariance (Ernst and Banks 2002). Using this logic, one would predict that a pair of within-modal stimuli would not yield the same response enhancement obtained with a pair of cross-modal stimuli even if both stimulus pairs were positioned at the same receptive field locations. On the other hand, one might argue that equivalent results would be likely because, in both cases, the effect reflects the amount of environmental energy. This latter argument posits that multiple, redundant stimuli explain the effect, rather than some unique underlying computation (Gondan et al. 2005; Leo et al. 2008; Lippert et al. 2007; Miller 1982; Sinnett et al. 2008). The experimental results obtained by Alvarado and colleagues (Figure 15.4) argue for the former explanation. The integration of cross-modal cues produced significantly greater response products than did the integration of within-modal cues. The two integration products also reflected very different underlying neural computations, with the latter most frequently reflecting subadditivity—a computation that was rarely observed with cross-modal cues (Alvarado et al. 2007b). Gingras et al. (2009) tested the same assumption and came to the same conclusions using an overt behavioral measure in which cats performed a detection and localization task in response to cross-modal (visual– auditory) and within-modal (visual–visual or auditory–auditory) stimulus combinations (Gingras et al. 2009; Figure 15.5).
FIGURE 15.4 Physiological comparisons of multisensory and unisensory integration. (a) Magnitude of response evoked by a cross-modal stimulus (y-axis) is plotted against magnitude of largest response evoked by component unisensory stimuli (x-axis). Most of observations show multisensory enhancement (positive deviation from solid line of unity). (b) The same cannot be said for response magnitudes evoked by two withinmodal stimuli. Here, typical evoked response is not statistically better than that evoked by largest response to a component stimulus. Within-modal responses are similar in both multisensory and unisensory neurons (insets on right). (From Alvarado, J.C. et al., Journal of Neurophysiology 97, 3193–205, 2007b. With permission.)
Because the SC is a site at which modality-specific inputs from the different senses converge (Meredith and Stein 1986b; Stein and Meredith 1993; Wallace et al. 1993), it is a primary site of their integration, and is not a reflection of multisensory integration elsewhere in the brain. The many unisensory structures from which these inputs are derived have been well-described (e.g., see Edwards et al. 1979; Huerta and Harting 1984; Stein and Meredith 1993; Wallace et al. 1993). Most multisensory SC neurons send their axons out of the structure to target motor areas of the brainstem and spinal cord. It is primarily via this descending route that the multisensory responses of SC neurons effect orientation behaviors (Moschovakis and Karabelas 1985; Peck 1987a; Stein and Meredith 1993; Stein et al. 1993). Thus, it is perhaps no surprise that the principles found governing the multisensory integration at the level of the individual SC neuron also govern SC-mediated overt behavior (Burnett et al. 2004, 2007; Jiang et al. 2002, 2007; Stein et al. 1989; Wilkinson et al. 1996).
FIGURE 15.5 Multisensory integration was distinct from unisensory visual–visual integration. (a) At every spatial location, multisensory integration produced substantial performance enhancements (94–168%; mean, 137%), whereas unisensory visual integration produced comparatively modest enhancements (31–79%; mean, 49%). Asterisks indicate comparisons that were significantly different (χ2 test; P < 0.05). (b) Pie charts to left show performance in response to modality-specific auditory (A1) and visual (V1 and V2 are identical) stimuli. Figures within the bordered region show performance to cross-modal (V1A1) and within-modal (V1V2) stimulus combinations. No-Go errors (NG; gray) and Wrong Localization errors (W; white) were significantly decreased as a result of multisensory integration, but only No-Go errors were significantly reduced as a result of unisensory integration. (c) Differential effect of multisensory and unisensory integration was reasonably constant, regardless of effectiveness of best component stimulus, and both showed an inverse relationship, wherein benefits were greatest when effectiveness of component stimuli was lowest. V, visual; A, auditory; C, correct. (From Gingras, G. et al., Journal of Neuroscience, 29, 4897–902, 2009. With permission.)
FIGURE 15.6 SC multisensory integration depends on influences from association cortex. SC responses to auditory (A), visual (V), and multisensory (AV) stimuli were recorded before (left) and after (right) deactivation of association cortex. Visual stimulus was presented at multiple (five) levels of effectiveness. At the top of the figure are individual stimulus traces, impulse rasters, and peristimulus time histograms for each response. Graphs at bottom summarize these data showing mean response levels (lines) and percentage of multisensory enhancement (bars) observed for each of stimulus pairings. Before cortical deactivation, enhanced responses showed characteristic “inverse effectiveness” profile with larger unisensory responses associated with smaller multisensory enhancements. However, after cortical deactivation (shaded region of inset), multisensory enhancements were eliminated at each of stimulus effectiveness levels tested so that multisensory and unisensory responses were no longer significantly different. (From Jiang, W. et al., Journal of Neurophysiology, 85, 506–22, 2001. With permission.)
15.3 SC MULTISENSORY INTEGRATION DEPENDS ON INFLUENCES FROM CORTEX Although, as noted above, SC neurons become multisensory as a result of receiving converging inputs from multiple visual, auditory, and somatosensory sources, this does not automatically render them capable of integrating these multiple sensory inputs. Rather, a specific component of the circuit must be operational: the projection from the association cortex. As shown in Figure 15.6, deactivating this input renders SC neurons incapable of multisensory integration. Their multisensory responses now approximate those elicited by the most effective modality-specific component stimulus, a result that is paralleled at the level of overt behavior (Alvarado et al. 2007a; Jiang and Stein 2003; Jiang et al. 2001, 2002, 2006; Stein and Meredith 1993a; Stein et al. 2002; Wallace and Stein 1994, 1997). This association cortical area in the cat is the anterior ectosylvian sulcus (AES), and an adjacent area, the rostral aspect of the lateral suprasylvian sulcus (rLS). The homologue in other species has not yet been determined. These two areas appear to be unique in this context (Burnett et al. 2004; Jiang et al. 2003, 2006, 2007; Wilkinson et al. 1996). Thus, when one of them is damaged during early life, the other can take on its role, but when both are damaged, no other cortical areas seem capable of substituting for them. In the normal animal, they generally function together in mediating SC multisensory integration, but the AES is the more important of the two, as many more neurons in the SC are dependent on AES influences than on rLS influences for this capability (Jiang et al. 2001). The intense experimental scrutiny on the influences of AES over SC multisensory integration has helped us understand the nature of these descending influences. First, their projections to the SC are derived from unisensory neurons; second, they converge from different subregions of the AES (visual, AEV; auditory, FAES; and somatosensory, SIV) onto a given SC neuron in a pattern that matches the convergence pattern from non-AES input sources (Fuentes-Santamaria et al. 2008; Wallace et al. 1992). For example, an individual multisensory SC neuron that receives converging visual input from the retina and auditory input from the inferior colliculus, will also likely receive convergent input from AEV and FAES.
FIGURE 15.7 (See color insert.) SC neurons receive converging input from different sensory subregions of anterior ectosylvian (association) cortex. Fluorescent tracers were deposited in auditory (FAES; green) and somatosensory (SIV; red) subregions. Axons of these cortical neurons often had boutons in contact with SC neurons, and sometimes could be seen converging onto the same target neurons. Presumptive contact points are indicated by arrows. (From Fuentes-Santamaria, V. et al., Cerebral Cortex, 18, 1640–52, 2008. With permission.)
Rowland et al. (2007b) used these convergence patterns as the basis for an explanatory model in which AES inputs and other inputs have different convergence patterns on the dendrites of their SC target neurons (Rowland et al. 2007b; Figure 15.7). The model assumption of N-methyl-d-aspartate (NMDA) (and 2-amino-3-(5-methyl-3-oxo-1,2-oxazol-4-yl)propanoic acid (AMPA)) receptors at every dendritic region provides the possibility of producing nonlinear interaction between inputs that cluster in the same region. These clustering inputs are selectively those from AES, and are preferentially on proximal dendrites. The currents they introduce affect one another, and produce a nonlinear amplification through the NMDA receptors, something that the inputs from non-AES areas cannot do because they are more computationally segregated from one another. All inputs also contact a population of inhibitory interneurons, and these also contact SC multisensory neurons, so that the output of the SC neuron depends on the relative balance of excitatory inputs from the direct projecting inputs and the shunting inhibition via the inhibitory interneurons.
15.4 ONTOGENY OF SC MULTISENSORY INTEGRATION The multisensory properties of SC neurons described above are not characteristic of the neonate. This is evident from studies of the cat SC. The cat is an excellent model for exploring the ontogeny of sensory information processing because it is an altricial species, so that a good deal of its development is observable after birth. At this time, its eyelids are still fused and its ear canals have not yet opened. Most SC neurons are unresponsive to sensory stimuli at this time, and the few that do respond to external stimulation are activated by tactile stimuli, often on the perioral region. This is a condition that is already evident in late fetal stages (Stein et al. 1973) and has been thought to help prepare the infant for finding the nipple and suckling (Larson and Stein 1984). The first neurons that respond to auditory stimulation are encountered at approximately 5 days postnatal, but neurons responsive to visual neurons in the multisensory (i.e., deep) layers are not evident until approximately 3 weeks postnatal, long after their overlying superficial layer counterparts have been active (Kao et al. 1994; Stein et al. 1973, 1984; Wallace and Stein 1997). Just as the appearance of multisensory neurons is delayed relative to their unisensory counterparts, so is the maturation of their most characteristic property, multisensory integration. This may be because they, compared with their unisensory neighbors, have to accommodate a more complex task: determining which signals from different senses should be coupled, and which should be segregated. The first multisensory neurons that appear are those responsive to somatosensory and auditory stimuli. They become active at about postnatal day 10, several days after auditory responsiveness appears. Visual–auditory, visual–somatosensory, and trisensory neurons become active at about 3 weeks, as soon as deep-layer visual responsiveness is evident. Yet, the capacity to integrate a neuron’s multiple sensory inputs does not appear until approximately 5 weeks of age, and at this time, very few neurons are capable of this feat (Figure 15.8a). During this time, the characteristic response properties of these neurons change dramatically, exhibiting substantially reduced receptive fields and decreased response latencies (Figure 15.8b and c). Achieving the normal complement of multisensory neurons capable of multisensory integration requires months of development, a period of maturation during which inputs from the association cortex also become functional (Stein and Gallagher 1981b; Stein et al. 2002; Wallace and Stein 1997, 2000). The observation that this ontogenetic process is so gradual was taken to suggest that this period is one in which experience plays a substantial role in guiding the maturation of multisensory integration. One possibility considered was that the brain is learning to expect that certain physical properties of cues from different senses are linked to common events, specifically their timing and location. This would provide the brain with a way of crafting the principles that govern multisensory integration to adapt to the environment in which it will be used. To examine this possibility, animals were reared without the opportunity to obtain experience with visual and nonvisual cues (i.e.,
in darkness), and also in situations in which the spatial cues associated with common events were perturbed. The first experimental condition tests the notion that in the absence of such experience, multisensory integration would not develop, and the second tests the possibility that the specific features of experience guide the formation of the principles governing multisensory integration.

FIGURE 15.8 Developmental chronology of SC multisensory neurons. (a) Percentage of multisensory neurons as a proportion of sensory-responsive neurons in deep SC is shown as a function of postnatal age. Each closed circle represents a single age, and increasing proportion of such neurons is also shown on pie charts. (b) Rapid decrease in size of different receptive fields (as a percentage of mean adult value) of multisensory neurons is shown as a function of postnatal age. (c) Decrease in response latencies of multisensory neurons to each modality-specific stimulus is shown as a function of postnatal age. (From Wallace, M.T., and Stein, B.E., Journal of Neuroscience, 17, 2429–44, 1997. With permission.)
15.4.1 Impact of Developing in Absence of Visual–Nonvisual Experience
In this experimental series, animals were reared in darkness until they were 6 months of age, a time at which most of the physiological properties of SC neurons appear mature, or near-mature. These animals developed a near-normal set of visual, auditory, and somatosensory neurons that were highly responsive to natural physiological stimuli (Wallace et al. 2001, 2004a). That these neurons were atypical, however, was indicated by their abnormally large receptive fields, receptive fields that
were more characteristic of a neonate than of an adult animal. These neurons were also unable to integrate their multiple sensory inputs, as evidenced by the absence of visual–auditory integration (Figure 15.9a). This too made them appear more like neonatal animals, or like adults in which the association cortex had been removed, than like normal adults (Jiang et al. 2006). These observations are consistent with the idea that experience with cross-modal cues is necessary for integrating those cues.

FIGURE 15.9 Early experience influences receptive field and response properties of SC multisensory neurons. Impact of dark rearing (a) and disparity rearing (b) on properties of adult multisensory neurons are shown using two exemplar neurons. Rearing in absence of visual experience was characterized by large visual and auditory receptive fields (a) that were more characteristic of neonates than adults. This neuron was typical of population of neurons from dark-reared animals. It was responsive to visual and auditory stimuli, but its inexperience with visual–auditory stimuli was evident in its lack of ability to integrate those cross-modal stimuli to produce an enhanced response. Responses from neuron depicted in panel (b) were characteristic of those affected by a rearing environment in which visual and auditory stimuli were always spatially disparate. Its visual and auditory receptive fields did not develop normal spatial register, but were completely out of alignment. It was also incapable of "normal" multisensory integration as indicated by absence of enhanced responses to spatiotemporally aligned cross-modal stimuli (B1 and B2). Nevertheless, it did show multisensory enhancement to spatially disparate stimuli (B3), revealing that its multisensory integrative properties had been crafted to adapt them to the presumptive environment in which they would be used. (Adapted from Wallace, M.T. et al., Journal of Neuroscience, 24, 9580–4, 2004a; Wallace, M.T. et al., Proceedings of the National Academy of Sciences of the United States of America, 101, 2167–72, 2004b; Wallace, M.T., and Stein, B.E., Journal of Neurophysiology, 97, 921–6, 2007.)
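The "% change (MSI)" values shown in Figure 15.9 follow the enhancement index that is conventional in this literature: the mean multisensory response is compared with the most effective unisensory response. A minimal sketch of that computation is given below; the spike counts are invented, and the variable names are ours rather than anything from the original studies.

```python
# Conventional multisensory enhancement index (percent change of the
# multisensory response relative to the best unisensory response), as used
# for the "% change (MSI)" values in Figure 15.9. Spike counts are invented.
import numpy as np


def enhancement_index(multisensory_trials, visual_trials, auditory_trials):
    """Return 100 * (CM - SMmax) / SMmax, where CM is the mean response to the
    combined stimulus and SMmax is the larger of the mean unisensory responses."""
    cm = np.mean(multisensory_trials)
    sm_max = max(np.mean(visual_trials), np.mean(auditory_trials))
    return 100.0 * (cm - sm_max) / sm_max


# Hypothetical example: a neuron from a normally reared animal vs. a dark-reared one.
normal = enhancement_index(multisensory_trials=[9, 11, 12, 10],
                           visual_trials=[4, 5, 3, 4],
                           auditory_trials=[3, 4, 4, 3])
dark_reared = enhancement_index(multisensory_trials=[5, 4, 5, 4],
                                visual_trials=[4, 5, 4, 5],
                                auditory_trials=[3, 3, 4, 2])
print(f"normally reared: {normal:+.0f}%   dark-reared: {dark_reared:+.0f}%")
```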
15.4.2 Altering Early Experience with Cross-Modal Cues by Changing Their Spatial Relationships
If early experience does indeed craft the principles governing multisensory integration, changes in those experiences should produce corresponding changes in those principles. Under normal circumstances, cross-modal events provide cues that have a high degree of spatial and temporal fidelity. In short, the different sensory cues come from the same event, so they come from about the same place at about the same time. Presumably, with extensive experience, the brain links stimuli from the two senses by their temporal and spatial relationships. In that way, similar concordances among cross-modal stimuli that are later encountered facilitate the detection, localization, and identification of those initiating events. Given those assumptions, any experimental change in the physical relationships of the cross-modal stimuli that are experienced during early life should be reflected in adaptations of the principles governing multisensory integration. In short, those principles should become appropriate for that "atypical" environment and inappropriate for the normal environment. To examine this expectation, a group of cats was reared in a darkroom from birth to 6 months of age and was periodically presented with visual and auditory cues that were simultaneous but derived from different locations in space (Wallace and Stein 2007). This was accomplished by fixing speakers and light-emitting diodes to different locations on the walls of the cages. When SC neurons were then examined, many had developed visual–auditory responsiveness. Most of them looked similar to those found in animals reared in the dark: they had very large receptive fields and were unable to integrate their visual–auditory inputs. The retention of these neonatal properties was not surprising in light of the fact that these stimuli, presented in an otherwise dark room, required no response and were not associated with any consequence. However, a substantial number of SC neurons in these animals did appear to reflect their visual–auditory experience. Their visual and auditory receptive fields had contracted, as would be expected with sensory experience, but they had also developed poor alignment. A number of neurons had no overlap between their receptive fields (see Figure 15.9b), a relationship almost never seen in animals reared in illuminated conditions or in animals reared in the dark. It did, however, reflect their unique rearing condition. Most significant in the present context is that these neurons could engage in multisensory integration, but only when the cross-modal stimuli were disparate in space, because only then could the stimuli fall simultaneously within their respective visual and auditory receptive fields. In this case, the magnitude of the response to the cross-modal stimulus was significantly enhanced, just as in normally reared animals presented with spatially aligned visual–auditory stimuli. Conversely, spatially coincident cross-modal stimulus configurations failed to fall within the neuron's corresponding receptive fields, and the result was response depression or no integration (see Kadunce et al. 2001; Meredith and Stein 1996). These observations are consistent with the prediction above, and reveal that early experience with the simple temporal coincidence of the two cross-modal stimuli was sufficient for the brain to link them and initiate multisensory integration.
15.4.3 Role of Cortical Inputs during Maturation
The data from the above experiments did not reveal where in the multisensory SC circuitry these early sensory experiences were exerting their greatest effects. Nevertheless, the fact that the cortex is known to be highly dependent on early experience for its development made it a prime candidate
for this role. To test this idea, Rowland and colleagues (Stein and Rowland 2007) reversibly deactivated both AES and rLS during the period (25–81 days postnatal) in which multisensory integration normally develops (see Wallace and Stein 1997), so that their neurons were unable to participate in these sensory experiences. This was accomplished by implanting a drug-infused polymer over these cortical areas. The polymer would gradually release its store of muscimol, a gamma-aminobutyric acid A (GABAa) receptor agonist that blocked neuronal activity. Once the stores of muscimol were depleted over many weeks, or the polymer was physically removed, these cortical areas would once again become active and responsive to external stimulation. As predicted, SC neurons in these animals were unable to integrate their visual and auditory inputs to enhance their responses. Rather, their responses were no greater to the cross-modal combination of stimuli than they were to the most effective of its component stimuli. Furthermore, comparable deficits were apparent in overt behavior. Animals were no better at localizing a cross-modal stimulus than they were at localizing the most effective of its individual component stimuli. Although these data do not prove the point, they do suggest that the cortical component of the SC multisensory circuit is a critical site for incorporating the early sensory experiences required for the development of SC multisensory integration.
15.4.4 Ontogeny of Multisensory Integration in Cortex
The development of the cortex is believed to lag that of the midbrain, and this principle would be expected to extend to the maturation of sensory response properties. Consequently, the inability of SC neurons in the neonatal cat brain to exhibit multisensory integration before 4 postnatal weeks suggests that the property would develop even later in the cortex. To evaluate this issue, multisensory neurons were studied in the developing AES. Although, as discussed above, neurons from the AES that project to the SC are unisensory, there are multisensory neurons scattered along the AES and concentrated at the borders between its three largely modality-specific zones. The visual–auditory neurons in this "SC-independent" multisensory group were the target of this study. They, like their counterparts in the SC, share many fundamental characteristics of an integrated response, such as response enhancement and depression (Wallace et al. 1992), and significant alterations in their temporal response profile (Royal et al. 2009). Neurons in the AES can therefore serve as a good maturational referent for the SC. As predicted, multisensory neurons in the neonatal AES were unable to integrate their visual and auditory inputs. They too developed their capacity for multisensory integration only gradually, and did so within a time window that began and ended later in ontogeny than does the time window for SC neurons (Wallace et al. 2006). The data not only support the contention that cortical sensory processes lag those of the midbrain during development, but also raise the possibility that, just as in the SC, experience with visual and auditory stimuli in cross-modal configurations is required for the maturation of multisensory integration. The likelihood of this possibility was strengthened using the same rearing strategy discussed earlier. Animals were raised in the dark to preclude visual–nonvisual experience. As a result, AES neurons failed to develop the capacity to integrate their visual and auditory inputs. Once again, this rearing condition did not impair the development of visually responsive, auditory responsive, and even visual–auditory neurons, which were common. It simply prevented AES multisensory neurons from developing the ability to use these inputs synergistically (Carriere et al. 2007).
15.4.5 Ontogeny of SC Multisensory Integration in a Primate
The multisensory properties of SC neurons discussed above are not unique to the cat. Although their incidence is somewhat lower, multisensory neurons in the rhesus monkey SC have properties very similar to those described above (Wallace et al. 1996). They have multiple, overlapping receptive fields and show multisensory enhancement and multisensory depression, respectively, to
spatially aligned and spatially disparate cross-modal stimuli. Although there may seem to be no a priori reason to assume that their maturation would depend on different factors than those of the cat, the monkey, unlike the cat, is a precocial species. Its SC neurons have comparatively more time to develop in utero than do those of the cat. Of course, they also have to do so in the dark, making one wonder if the late in utero visual-free experiences of the monkey have some similarity to the visual-free environment of the dark-reared cat. Wallace and Stein (2001) examined the multisensory properties of the newborn monkey SC and found that, unlike the SC of the newborn cat, there were already multisensory neurons present (Figure 15.10). However, as in the cat SC, these multisensory neurons were unable to integrate visual–nonvisual inputs. Their responses to combinations of coincident visual and auditory or somatosensory cues were no better than their responses to the most effective of these component stimuli individually. Although there are no data regarding when they develop this capacity, and whether dark-rearing would preclude its appearance, it seems highly likely that the monkey shares the same developmental antecedents for the maturation of multisensory integration as the cat. Recent reports in humans suggest that this may be a general mammalian plan. People who have experienced early visual deprivation due to dense congenital cataracts were examined many years after surgery to remove those cataracts. The observations are consistent with predictions that would be made from the animal studies. Specifically, their vision appeared to be normal, but their ability to integrate visual–nonvisual information was significantly less well developed than in normal subjects. This ability was compromised in a variety of tasks, including those that involved speech and those that did not (Putzar et al. 2007). Whether neurons in the human SC, like those in the SC of cat and monkey, are incapable of multisensory integration is not yet known. However, human infants do poorly on tasks requiring the integration of visual and auditory information to localize events before 8 months of age (Neil
FIGURE 15.10 Modality convergence patterns in SC of newborn and adult (inset) monkey. Pie charts show distributions of all recorded sensory-responsive neurons in multisensory laminas (IV–VII) of SC. (From Wallace, M.T., and Stein, B.E., Journal of Neuroscience, 21, 8886–94, 2001. With permission.)
et al. 2006), and do poorly on tasks requiring the integration of visual and haptic information before 8 years of age (Gori et al. 2008). These data indicate that multisensory capabilities develop over far longer periods in the human brain than in the cat brain, an observation consistent with the long period of postnatal life devoted to human brain maturation. These observations, coupled with those indicating that early sensory deprivation has a negative effect on multisensory integration even far later in life, suggest that early experience with cross-modal cues is essential for normal multisensory development in all higher-order species. If so, we can only wonder how well the human brain can adapt its multisensory capabilities to the introduction of visual or auditory input later in life via prosthetic devices. Many people who had congenital hearing impairments, and later received cochlear implants, have shown remarkable accommodation to them. They learn to use their newly found auditory capabilities with far greater precision than one might have imagined when these devices were first introduced. Nevertheless, it is not yet known whether they can use them in concert with other sensory systems. Although the population of people with retinal implants is much smaller, there are very encouraging reports among them as well. However, the same questions apply: Are they able to acquire the ability to engage in some forms of multisensory integration after experience with visual–auditory cues later in life and, if so, how much experience and what kinds of experiences are necessary for them to develop this capability? These issues remain to be determined.
ACKNOWLEDGMENTS
The research described here was supported in part by NIH grants NS36916 and EY016716.
REFERENCES Alvarado, J.C., T.R. Stanford, J.W. Vaughan, and B.E. Stein. 2007a. Cortex mediates multisensory but not unisensory integration in superior colliculus. Journal of Neuroscience 27:12775–86. Alvarado, J.C., J.W. Vaughan, T.R. Stanford, and B.E. Stein. 2007b. Multisensory versus unisensory integration: Contrasting modes in the superior colliculus. Journal of Neurophysiology 97:3193–205. Barth, D.S., and B. Brett-Green. 2004. Multisensory-Evoked Potentials in Rat Cortex. In The handbook of multisensory processes, ed. G.A. Calvert, C. Spence, and B.E. Stein, 357–70. Cambridge, MA: MIT Press. Bernstein, L.E., J. Edward, T. Auer, and J.K. Moore. 2004. Audiovisual Speech Binding: Convergence or Association. In Handbook of multisensory processes, ed. G.A. Calvert, C. Spence, and B.E. Stein, 203–23. Cambridge, MA: MIT Press. Burnett, L.R., B.E. Stein, D. Chaponis, and M.T. Wallace. 2004. Superior colliculus lesions preferentially disrupt multisensory orientation. Neuroscience 124:535–47. Burnett, L.R., B.E. Stein, T.J. Perrault Jr., and M.T. Wallace. 2007. Excitotoxic lesions of the superior colliculus preferentially impact multisensory neurons and multisensory integration. Experimental Brain Research 179:325–38. Busse, L., K.C. Roberts, R.E. Crist, D.H. Weissman, and M.G. Woldorff. 2005. The spread of attention across modalities and space in a multisensory object. Proceedings of the National Academy of Sciences of the United States of America 102:18751–6. Calvert, G., C. Spence, and B.E. Stein. 2004a. The handbook of multisensory processes. Cambridge, MA: MIT Press. Calvert, G. A., and J. Lewis, W. 2004b. Hemodynamic Studies of Audiovisual Interactions. In The Handbook of Multisensory Processes, ed. G. A. Calvert, C. Spence, and B.E. Stein, 483–502. Cambridge, MA: MIT Press. Carriere, B.N., D.W. Royal, T.J. Perrault et al. 2007. Visual deprivation alters the development of cortical multisensory integration. Journal of Neurophysiology 98:2858–67. Corneil, B.D., and D.P. Munoz. 1996. The influence of auditory and visual distractors on human orienting gaze shifts. Journal of Neuroscience 16:8193–207. DeGelder, B., J. Vroomen, and G. Pourtois. 2004. Multisensory Perception of Emotion, Its Time Course, and Its Neural Basis. In The handbook of multisensory processes, ed. G.A. Calvert, C. Spence, and B.E. Stein, 581–96. Cambridge, MA: MIT Press. Edwards, S.B., C.L. Ginsburgh, C.K. Henkel, and B.E. Stein. 1979. Sources of subcortical projections to the superior colliculus in the cat. Journal of Comparative Neurology 184:309–29.
Ernst, M.O., and M.S. Banks. 2002. Humans integrate visual and haptic information in a statistically optimal fashion. Nature 415:429–33. Fort, A., and M.-H. Giard. 2004. Multiple Electrophysiological Mechanisms of Audiovisual Integration in Human Perception. In The handbook of multisensory processes, ed. G.A. Calvert, C. Spence, and B.E. Stein, 503–13. Cambridge, MA: MIT Press. Frens, M.A., and A.J. Van Opstal. 1995a. A quantitative study of auditory-evoked saccadic eye movements in two dimensions. Experimental Brain Research 107:103–17. Frens, M.A., A.J. Van Opstal, and R.F. Van der Willigen. 1995b. Spatial and temporal factors determine auditoryvisual interactions in human saccadic eye movements. Perception & Psychophysics 57:802–16. Fuentes-Santamaria, V., J.C., Alvarado, B.E., Stein, and J.G. McHaffie. 2008. Cortex contacts both output neurons and nitrergic interneurons in the superior colliculus: Direct and indirect routes for multisensory integration. Cerebral Cortex 18:1640–52. Ghazanfar, A.A., and C.E. Schroeder. 2006. Is neocortex essentially multisensory? Trends in Cognitive Sciences 10:278–285. Ghazanfar, A.A., J.X. Maier, K.L. Hoffman, and N.K. Logothetis. 2005. Multisensory integration of dynamic faces and voices in rhesus monkey auditory cortex. Journal of Neuroscience 25:5004–12. Gingras, G., B.A. Rowland, and B.E. Stein. 2009. The differing impact of multisensory and unisensory integration on behavior. Journal of Neuroscience 29:4897–902. Gondan, M., B., Niederhaus, F. Rosler, and B. Roder. 2005. Multisensory processing in the redundant-target effect: A behavioral and event-related potential study. Perception & Psychophysics 67:713–26. Gori, M., M. Del Viva, G. Sandini, and D.C. Burr. 2008. Young children do not integrate visual and haptic form information. Current Biology 18:694–8. Grant, A.C., M.C. Thiagarajah, and K. Sathian. 2000. Tactile perception in blind Braille readers: A psychophysical study of acuity and hyperacuity using gratings and dot patterns. Perception & Psychophysics 62:301–12. Grantyn, A., and R. Grantyn. 1982. Axonal patterns and sites of termination of cat superior colliculus neurons projecting in the tecto-bulbo-spinal tract. Experimental Brain Research 46:243–56. Groh, J.M., and D.L. Sparks. 1996a. Saccades to somatosensory targets: II. Motor convergence in primate superior colliculus. Journal of Neurophysiology 75:428–38. Groh, J.M., and D.L. Sparks. 1996b. Saccades to somatosensory targets: III. Eye-position-dependent somatosensory activity in primate superior colliculus. Journal of Neurophysiology 75:439–53. Guitton, D., and D.P. Munoz. 1991. Control of orienting gaze shifts by the tectoreticulospinal system in the head-free cat: I. Identification, localization, and effects of behavior on sensory responses. Journal of Neurophysiology 66:1605–23. Gutfreund, Y., and E.I. Knudsen. 2004. Visual Instruction of the Auditory Space Map in the Midbrain. In The handbook of multisensory processes, ed. G.A. Calvert, C. Spence and B.E. Stein, 613–24. Cambridge, MA: MIT Press. Harris, L.R. 1980. The superior colliculus and movements of the head and eyes in cats. Journal of Physiology 300:367–91. Huerta, M.F., and J.K. Harting. 1984. The mammalian superior colliculus: Studies of its morphology and connections. In Comparative neurology of the optic tectum, ed. H. Vanegas, 687–773. New York: Plenum Publishing Corporation. Hughes, H.C., P.A. Reuter-Lorenz, G. Nozawa, and R. Fendrich. 1994. 
Visual–auditory interactions in sensorimotor processing: Saccades versus manual responses. Journal of Experimental Psychology. Human Perception and Performance 20:131–53. Jay, M.F., and D.L. Sparks. 1984. Auditory receptive fields in primate superior colliculus shift with changes in eye position. Nature 309:345–7. Jay, M.F., and D.L. Sparks. 1987a. Sensorimotor integration in the primate superior colliculus: I. Motor convergence. Journal of Neurophysiology 57:22–34. Jay, M.F., and D.L. Sparks. 1987b. Sensorimotor integration in the primate superior colliculus: II. Coordinates of auditory signals. Journal of Neurophysiology 57:35–55. Jiang, W., and B.E. Stein. 2003. Cortex controls multisensory depression in superior colliculus. Journal of Neurophysiology 90:2123–35. Jiang, W., M.T. Wallace, H. Jiang, J.W. Vaughan, and B.E. Stein. 2001. Two cortical areas mediate multisensory integration in superior colliculus neurons. Journal of Neurophysiology 85:506–22. Jiang, W., H. Jiang, and B.E. Stein. 2002. Two corticotectal areas facilitate multisensory orientation behavior. Journal of Cognitive Neuroscience 14:1240–55. Jiang, H., B.E. Stein, and J.G. McHaffie. 2003. Opposing basal ganglia processes shape midbrain visuomotor activity bilaterally. Nature 423:982–6.
Jiang, W., H. Jiang, B.A. Rowland, and B.E. Stein. 2007. Multisensory orientation behavior is disrupted by neonatal cortical ablation. Journal of Neurophysiology 97:557–62. Jiang, W., H. Jiang, and B.E. Stein. 2006. Neonatal cortical ablation disrupts multisensory development in superior colliculus. Journal of Neurophysiology 95:1380–96. Kadunce, D.C., J.W. Vaughan, M.T. Wallace, and B.E. Stein. 2001. The influence of visual and auditory receptive field organization on multisensory integration in the superior colliculus. Experimental Brain Research 139:303–10. Kao, C.Q., B.E. Stein, and D.A. Coulter. 1994. Postnatal development of excitatory synaptic function in deep layers of SC. Society of Neuroscience Abstracts. King, A.J., T.P. Doubell, and I. Skaliora. 2004. Epigenetic factors that align visual and auditory maps in the ferret midbrain. In The handbook of multisensory processes, ed. G.A. Calvert, C. Spence, and B.E. Stein, 599–612. Cambridge, MA: MIT Press. King, A.J., and A.R. Palmer. 1985. Integration of visual and auditory information in bimodal neurones in the guinea-pig superior colliculus. Experimental Brain Research. 60:492–500. King, A.J., J.W. Schnupp, S. Carlile, A.L. Smith, and I.D. Thompson. 1996. The development of topographically-aligned maps of visual and auditory space in the superior colliculus. Progress in Brain Research 112:335–50. Lakatos, P., C.M. Chen, M.N. O’Connell, A. Mills, and C.E. Schroeder. 2007. Neuronal oscillations and multisensory interaction in primary auditory cortex. Neuron 53:279–92. Larson, M.A., and B.E. Stein. 1984. The use of tactile and olfactory cues in neonatal orientation and localization of the nipple. Developmental Psychobiology 17:423–36. Laurienti, P.J., J.H. Burdette, M.T. Wallace et al. 2002. Deactivation of sensory-specific cortex by cross-modal stimuli. Journal of Cognitive Neuroscience 14:420–9. Leo, F., N. Bolognini, C. Passamonti, B.E. Stein, and E. Ladavas. 2008. Cross-modal localization in hemianopia: New insights on multisensory integration. Brain 131: 855–65. Liotti, M., K. Ryder, and M.G. Woldorff. 1998. Auditory attention in the congenitally blind: Where, when and what gets reorganized? Neuroreport 9:1007–12. Lippert, M., N.K. Logothetis, and C. Kayser. 2007. Improvement of visual contrast detection by a simultaneous sound. Brain Research 1173:102–9. Lovelace, C.T., B.E. Stein, and M.T. Wallace. 2003. An irrelevant light enhances auditory detection in humans: A psychophysical analysis of multisensory integration in stimulus detection. Cognitive Brain Research 17:447–453. Macaluso, E., and J. Driver. 2004. Functional imaging evidence for multisensory spatial representations and cross-modal attentional interactions in the human brain. In The handbook of multisensory processes, ed. G.A. Calvert, C. Spence, and B.E. Stein, 529–48. Cambridge, MA: MIT Press. Marks, L.E. 2004. Cross-modal interactions in speeded classification. In The handbook of multisensory processes, ed. G.A. Calvert, C. Spence, and B.E. Stein, 85–106. Cambridge, MA: MIT Press. Massaro, D.W. 2004. From multisensory integration to talking heads and language learning. In The handbook of multisensory processes, ed. G.A. Calvert, C. Spence, and B.E. Stein, 153–76. Cambridge, MA: MIT Press. Meredith, M.A., and B.E. Stein. 1983. Interactions among converging sensory inputs in the superior colliculus. Science 221:389–91. Meredith, M.A., and B.E. Stein. 1986a. Spatial factors determine the activity of multisensory neurons in cat superior colliculus. 
Brain Research 365:350–4. Meredith, M.A., and B.E. Stein. 1986b. Visual, auditory, and somatosensory convergence on cells in superior colliculus results in multisensory integration. Journal of Neurophysiology 56:640–62. Meredith, M.A., and B.E. Stein. 1990. The visuotopic component of the multisensory map in the deep laminae of the cat superior colliculus. Journal of Neuroscience 10:3727–42. Meredith, M.A., and B.E. Stein. 1996. Spatial determinants of multisensory integration in cat superior colliculus neurons. Journal of Neurophysiology 75:1843–57. Meredith, M.A., H.R. Clemo, and B.E. Stein. 1991. Somatotopic component of the multisensory map in the deep laminae of the cat superior colliculus. Journal of Comparative Neurology 312:353–70. Meredith, M.A., M.T. Wallace, and B.E. Stein. 1992. Visual, auditory and somatosensory convergence in output neurons of the cat superior colliculus: Multisensory properties of the tecto-reticulo-spinal projection. Experimental Brain Research 88:181–6. Middlebrooks, J.C., and E.I. Knudsen. 1984. A neural code for auditory space in the cat’s superior colliculus. Journal of Neuroscience 4:2621–34. Miller, J. 1982. Divided attention: Evidence for coactivation with redundant signals. Cognitive Psychology 14:247–79.
Morgan, M.L., G.C. Deangelis, and D.E. Angelaki. 2008. Multisensory integration in macaque visual cortex depends on cue reliability. Neuron 59:662–73. Moschovakis, A.K., and A.B. Karabelas. 1985. Observations on the somatodendritic morphology and axonal trajectory of intracellularly HRP-labeled efferent neurons located in the deeper layers of the superior colliculus of the cat. Journal of Comparative Neurology 239:276–308. Munoz, D.P., and R.H. Wurtz. 1993a. Fixation cells in monkey superior colliculus. I. Characteristics of cell discharge. Journal of Neurophysiology 70:559–75. Munoz, D.P., and R.H. Wurtz. 1993b. Fixation cells in monkey superior colliculus: II. Reversible activation and deactivation. Journal of Neurophysiology 70:576–89. Neil, P.A., C. Chee-Ruiter, C. Scheier, D.J. Lewkowicz, and S. Shimojo. 2006. Development of multisensory spatial integration and perception in humans. Developmental Science 9:454–64. Newell, F.N. 2004. Cross-modal object recognition. In The handbook of multisensory processes, ed. G.A. Calvert, C. Spence, and B.E. Stein, 123–39: Cambridge, MA: MIT Press. Partan, S.R. 2004. Multisensory animal communication. In The handbook of multisensory processes, ed. G.A. Calvert, C. Spence, and B.E. Stein, 225–40. Cambridge, MA: MIT Press. Peck, C.K. 1987a. Saccade-related burst neurons in cat superior colliculus. Brain Research 408:329–33. Peck, C.K. 1987b. Visual–auditory interactions in cat superior colliculus: Their role in the control of gaze. Brain Research 420:162–6. Perrault Jr., T.J., J.W. Vaughan, B.E. Stein, and M.T. Wallace. 2003. Neuron-specific response characteristics predict the magnitude of multisensory integration. Journal of Neurophysiology 90:4022–6. Perrault Jr., T.J., J.W. Vaughan, B.E. Stein, and M.T. Wallace. 2005. Superior colliculus neurons use distinct operational modes in the integration of multisensory stimuli. Journal of Neurophysiology 93:2575–86. Putzar, L., I. Goerendt, K. Lange, F. Rosler, and B. Roder. 2007. Early visual deprivation impairs multisensory interactions in humans. Nature Neuroscience 10:1243–5. Recanzone, G.H. 1998. Rapidly induced auditory plasticity: The ventriloquism aftereffect. Proceedings of the National Academy of Sciences of the United States of America 95:869–75. Romanski, L.M. 2007. Representation and integration of auditory and visual stimuli in the primate ventral lateral prefrontal cortex. Cerebral Cortex 17(Suppl 1):i61–9. Rowland, B.A., and B.E. Stein. 2007. Multisensory integration produces an initial response enhancement. Frontiers in Integrative Neuroscience 1:4. Rowland, B.A., and B.E. Stein. 2008. Temporal profiles of response enhancement in multisensory integration. Frontiers in Neuroscience 2:218–24. Rowland, B.A., S. Quessy, T.R. Stanford, and B.E. Stein. 2007a. Multisensory integration shortens physiological response latencies. Journal of Neuroscience 27:5879–84. Rowland, B.A., T.R. Stanford, and B.E. Stein. 2007b. A model of the neural mechanisms underlying multisensory integration in the superior colliculus. Perception 36:1431–43. Royal, D.W., B.N. Carriere, and M.T. Wallace. 2009. Spatiotemporal architecture of cortical receptive fields and its impact on multisensory interactions. Experimental Brain Research 198:127–36. Sathian, K. 2000. Practice makes perfect: Sharper tactile perception in the blind. Neurology 54:2203–4. Sathian, K. 2005. Visual cortical activity during tactile perception in the sighted and the visually deprived. Developmental Psychobiology 46:279–86. Sathian, K., S.C. 
Prather, and M. Zhang. 2004. Visual cortical involvement in normal tactile perception. In The handbook of multisensory processes, ed. G.A. Calvert, C. Spence, and B.E. Stein, 703–9. Cambridge, MA: MIT Press. Schroeder, C.E., and J.J. Foxe. 2002. The timing and laminar profile of converging inputs to multisensory areas of the macaque neocortex. Brain Research. Cognitive Brain Research 14:187–98. Schroeder, C. E., and J.J. Foxe. 2004. Multisensory convergence in early cortical processing. In The handbook of multisensory processes, ed. G.A. Calvert, C. Spence, and B.E. Stein, 295–309. Cambridge, MA: MIT Press. Schroeder, C.E., R.W. Lindsley, C. Specht et al. 2001. Somatosensory input to auditory association cortex in the macaque monkey. Journal of Neurophysiology 85:1322–7. Senkowski, D., D. Talsma, M. Grigutsch, C.S. Herrmann, and M.G. Woldorff. 2007. Good times for multisensory integration: Effects of the precision of temporal synchrony as revealed by gamma-band oscillations. Neuropsychologia 45:561–71. Shams, L., Y. Kamitani, and S. Shimojo. 2004. Modulations of visual perception by sound. In The handbook of multisensoty processes, ed. G.A. Calvert, C. Spence, and B.E. Stein, 27–33. Cambridge, MA: MIT Press. Sinnett, S., S. Soto-Faraco, and C. Spence. 2008. The co-occurrence of multisensory competition and facilitation. Acta Psychologica 128:153–61.
Sparks, D.L. 1986. Translation of sensory signals into commands for control of saccadic eye movements: Role of primate superior colliculus. Physiological Reviews 66:118–71. Sparks, D.L., and J.S. Nelson. 1987. Sensory and motor maps in the mammalian superior colliculus. Trends in Neuroscience 10:312–7. Spence, C., and J. Driver. 2004. Crossmodal space and crossmodal attention. Oxford: Oxford Univ. Press. Stanford, T.R., and B.E. Stein. 2007. Superadditivity in multisensory integration: Putting the computation in context. Neuroreport 18:787–92. Stanford, T.R., S. Quessy, and B.E. Stein. 2005. Evaluating the operations underlying multisensory integration in the cat superior colliculus. Journal of Neuroscience 25:6499–508. Stein, B.E. 1984. Development of the superior colliculus. Annual Review of Neuroscience 7:95–125. Stein, B.E., and M.O. Arigbede. 1972. Unimodal and multimodal response properties of neurons in the cat’s superior colliculus. Experimental Neurology 36:179–96. Stein, B.E., and H.P. Clamann. 1981. Control of pinna movements and sensorimotor register in cat superior colliculus. Brain, Behavior and Evolution 19:180–92. Stein, B.E., and H.L. Gallagher. 1981. Maturation of cortical control over superior colliculus cells in cat. Brain Research 223:429–35. Stein, B.E., and M.A. Meredith. 1993. The merging of the senses. Cambridge, MA: MIT Press. Stein, B.E., and B.A. Rowland. 2007. The critical role of cortico-collicular interactions in the development of multisensory integration. Paper presented at the Society for Neuroscience. Stein, B.E., E. Labos, and L. Kruger. 1973. Sequence of changes in properties of neurons of superior colliculus of the kitten during maturation. Journal of Neurophysiology 36:667–79. Stein, B.E., B. Magalhaes-Castro, and L. Kruger. 1976. Relationship between visual and tactile representations in cat superior colliculus. Journal of Neurophysiology 39:401–19. Stein, B.E., R.F. Spencer, and S.B. Edwards. 1984. Efferent projections of the neonatal cat superior colliculus: Facial and cerebellum-related brainstem structures. Journal of Comparative Neurology 230:47–54. Stein, B.E., M.A. Meredith, W.S. Huneycutt, and L. McDade. 1989. Behavioral indices of multisensory integration: Orientation to visual cues is affected by auditory stimuli. Journal of Cognitive Neuroscience 1:12–24. Stein, B.E., M.A. Meredith, and M.T. Wallace. 1993. The visually responsive neuron and beyond: Multisensory integration in cat and monkey. Progress in Brain Research 95:79–90. Stein, B.E., M.W. Wallace, T.R. Stanford, and W. Jiang. 2002. Cortex governs multisensory integration in the midbrain. Neuroscientist 8:306–14. Sugihara, T., M.D. Diltz, B.B. Averbeck, and L.M. Romanski. 2006. Integration of auditory and visual communication information in the primate ventrolateral prefrontal cortex. Journal of Neuroscience 26:11138–47. Sumby, W.H., and I. Pollack. 1954. Visual contribution to speech intelligibillity in noise. Journal of the Acoustical Society of America 26:212–5. Talsma, D., T.J. Doty, R. Strowd, and M.G. Woldorff. 2006. Attentional capacity for processing concurrent stimuli is larger across sensory modalities than within a modality. Psychophysiology 43:541–9. Talsma, D., T.J. Doty, and M.G. Woldorff. 2007. Selective attention and audiovisual integration: Is attending to both modalities a prerequisite for early integration? Cerebral Cortex 17:679–90. Wallace, M.T. 2004. The development of multisensory integration. In The handbook of multisensory processes, ed. G.A. Calvert, C. 
Spence, and B.E. Stein, 625–42. Cambridge, MA: MIT Press. Wallace, M.T., and B.E. Stein. 1994. Cross-modal synthesis in the midbrain depends on input from cortex. Journal of Neurophysiology 71:429–32. Wallace, M.T., and B.E. Stein. 1997. Development of multisensory neurons and multisensory integration in cat superior colliculus. Journal of Neuroscience 17:2429–44. Wallace, M.T., and B.E. Stein. 2000. Onset of cross-modal synthesis in the neonatal superior colliculus is gated by the development of cortical influences. Journal of Neurophysiology 83:3578–82. Wallace, M.T., and B.E. Stein. 2001. Sensory and multisensory responses in the newborn monkey superior colliculus. Journal of Neuroscience 21:8886–94. Wallace, M.T., and B.E. Stein. 2007. Early experience determines how the senses will interact. Journal of Neurophysiology 97:921–6. Wallace, M.T., M.A. Meredith, and B.E. Stein. 1992. Integration of multiple sensory modalities in cat cortex. Experimental Brain Research 91:484–8. Wallace, M.T., M.A. Meredith, and B.E. Stein. 1993. Converging influences from visual, auditory, and somatosensory cortices onto output neurons of the superior colliculus. Journal of Neurophysiology 69:1797–809.
Wallace, M.T., L.K. Wilkinson, and B.E. Stein. 1996. Representation and integration of multiple sensory inputs in primate superior colliculus. Journal of Neurophysiology 76:1246–66. Wallace, M.T., M.A. Meredith, and B.E. Stein. 1998. Multisensory integration in the superior colliculus of the alert cat. Journal of Neurophysiology 80:1006–10. Wallace, M.T., W.D. Hairston, and B.E. Stein. 2001. Long-term effects of dark-rearing on multisensory processing. Paper presented at the Society for Neuroscience. Wallace, M.T., T.J. Perrault Jr., W.D., Hairston, and B.E. Stein. 2004a. Visual experience is necessary for the development of multisensory integration. Journal of Neuroscience 24:9580–4. Wallace, M.T., R. Ramachandran, and B.E. Stein. 2004b. A revised view of sensory cortical parcellation. Proceedings of the National Academy of Sciences of the United States of America 101:2167–72. Wallace, M.T., B.N. Carriere, T.J. Perrault Jr., J.W. Vaughan, and B.E. Stein. 2006. The development of cortical multisensory integration. Journal of Neuroscience 26:11844–9. Weisser, V., R. Stilla, S. Peltier, X. Hu, and K. Sathian. 2005. Short-term visual deprivation alters neural processing of tactile form. Experimental Brain Research 166:572–82. Wilkinson, L.K., M.A. Meredith, and B.E. Stein. 1996. The role of anterior ectosylvian cortex in cross-modality orientation and approach behavior. Experimental Brain Research 112:1–10. Woldorff, M.G., C.J. Hazlett, H.M. Fichtenholtz et al. 2004. Functional parcellation of attentional control regions of the brain. Journal of Cognitive Neuroscience 16:149–65. Woods, T.M., and G.H. Recanzone. 2004a. Cross-modal interactions evidenced by the ventriloquism effect in humans and monkeys. In The handbook of multisensory processes, ed. G.A. Calvert, C. Spence, and B.E. Stein, 35–48. Cambridge, MA: MIT Press. Woods, T.M., and G.H. Recanzone. 2004b. Visually induced plasticity of auditory spatial perception in macaques. Current Biology 14:1559–64. Wurtz, R.H., and J.E. Albano. 1980. Visual–motor function of the primate superior colliculus. Annual Review of Neuroscience 3:189–226. Wurtz, R.H., and M.E. Goldberg. 1971. Superior colliculus cell responses related to eye movements in awake monkeys. Science 171:82–4. Zangaladze, A., C.M. Epstein, S.T. Grafton, and K. Sathian. 1999. Involvement of visual cortex in tactile discrimination of orientation. Nature 401:587–90.
16 Effects of Prolonged Exposure to Audiovisual Stimuli with Fixed Stimulus Onset Asynchrony on Interaction Dynamics between Primary Auditory and Primary Visual Cortex

Antje Fillbrandt and Frank W. Ohl
CONTENTS
16.1 Introduction
  16.1.1 Speed of Signal Transmission Is Modality-Specific
  16.1.2 Simultaneity Constancy
  16.1.3 Temporal Recalibration
  16.1.4 Mechanisms of Temporal Recalibration
    16.1.4.1 Are There Any Indications for Recalibration at Early Levels of Stimulus Processing?
    16.1.4.2 To What Extent Does Temporal Recalibration Need Attentional Resources?
    16.1.4.3 Is Recalibration Stimulus-Specific?
    16.1.4.4 Is Recalibration Modality-Specific?
    16.1.4.5 Does Recalibration Occur at Decision Level?
  16.1.5 Outlook on Experiments
16.2 Methods
  16.2.1 Animals
  16.2.2 Electrodes
  16.2.3 Animal Preparation and Recording
  16.2.4 Stimuli
  16.2.5 Experimental Protocol
  16.2.6 Data Preprocessing
  16.2.7 DTF: Mathematical Definition
  16.2.8 Estimation of Autoregressive Models
  16.2.9 Normalization of DTF
  16.2.10 Statistical Testing
16.3 Results
  16.3.1 Stimulus-Induced Changes in Single-Trial nDTF, Averaged across All Trials from All Sessions
    16.3.1.1 Animals Receiving Light Followed by Tone Stimulus (VA-Animals)
    16.3.1.2 Animals Receiving Tone Followed by Light Stimulus (AV-Animals)
  16.3.2 Development of Amplitude of nDTFA→V and nDTFV→A within Sessions
    16.3.2.1 VA-Animals
    16.3.2.2 AV-Animals
  16.3.3 Development of the Amplitude of nDTFA→V and nDTFV→A across Sessions
16.4 Discussion
  16.4.1 Interpretation of DTF-Amplitudes
  16.4.2 Development of nDTF-Amplitude within Sessions
  16.4.3 Audiovisual Stimulus Association as a Potential Cause of Observed Changes in nDTF-Amplitudes
  16.4.4 Changes in Lag Detection as a Potential Cause of Observed Changes in DTF-Amplitudes
  16.4.5 Mechanisms of Recalibration: Some Preliminary Restrictions
    16.4.5.1 Expectation and Lag Detection
    16.4.5.2 Processes after the Second Stimulus
    16.4.5.3 Speed of Processing
16.5 Conclusions
References

Temporal congruity between auditory and visual stimuli has frequently been shown to be an important factor in audiovisual integration, but information about temporal congruity is blurred by the different speeds of transmission in the two sensory modalities. Compensating for the differences in transmission times is challenging for the brain because at each step of transmission, from the production of the signal to its arrival at higher cortical areas, the speed of transmission can be affected in various ways. One way to deal with this complexity could be for the compensation mechanisms to remain plastic throughout life so that they can flexibly adapt to the typical transmission delays of new types of stimuli. Temporal recalibration to new values of stimulus asynchrony has been demonstrated in several behavioral studies. This study seeks to explore the potential mechanisms underlying such recalibration at the cortical level. Toward this aim, tone and light stimuli were presented repeatedly to awake, passively listening Mongolian gerbils at the same constant lag.
During stimulation, the local field potential was recorded from electrodes implanted into the auditory and visual cortices. The interaction dynamics between the auditory and visual cortices were examined using the directed transfer function (DTF; Kaminski and Blinowska 1991). With an increasing number of stimulus repetitions, the amplitude of the DTF showed characteristic changes at specific time points between and after the stimuli. Our findings support the view that repeated presentation of audiovisual stimuli at a constant delay alters the interactions between the auditory and visual cortices.
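For readers unfamiliar with the method, the sketch below illustrates the standard normalized DTF of Kaminski and Blinowska (1991) on simulated two-channel data: a multivariate autoregressive (MVAR) model is fitted by least squares, its spectral transfer matrix H(f) is obtained from the Fourier-transformed coefficients, and the normalized DTF from channel j to channel i is |H_ij(f)|² divided by the total inflow to channel i. The model order, sampling rate, and simulated coupling are illustrative choices and are not those used in the experiments reported in this chapter.

```python
# Illustrative computation of the normalized directed transfer function (DTF)
# for two channels (e.g., auditory and visual cortex LFPs). Simulated data,
# model order, and sampling rate are arbitrary choices for demonstration only.
import numpy as np


def fit_mvar(x, order):
    """Least-squares fit of an MVAR model x[t] = sum_k A[k] x[t-k] + e[t].
    x has shape (n_samples, n_channels); returns A with shape (order, nch, nch)."""
    n, nch = x.shape
    rows = [np.hstack([x[t - k] for k in range(1, order + 1)])
            for t in range(order, n)]
    design = np.asarray(rows)                      # (n - order, order * nch)
    target = x[order:]                             # (n - order, nch)
    coef, *_ = np.linalg.lstsq(design, target, rcond=None)
    return coef.T.reshape(nch, order, nch).transpose(1, 0, 2)


def normalized_dtf(a_coefs, freqs, fs):
    """Normalized DTF gamma_ij(f): inflow from channel j to channel i,
    normalized over all inflows to channel i."""
    order, nch, _ = a_coefs.shape
    dtf = np.empty((len(freqs), nch, nch))
    for fi, f in enumerate(freqs):
        af = np.eye(nch, dtype=complex)
        for k in range(order):
            af -= a_coefs[k] * np.exp(-2j * np.pi * f * (k + 1) / fs)
        h = np.linalg.inv(af)                      # spectral transfer matrix
        power = np.abs(h) ** 2
        dtf[fi] = power / power.sum(axis=1, keepdims=True)
    return dtf


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fs, n = 1000, 5000
    x = np.zeros((n, 2))
    noise = rng.standard_normal((n, 2))
    for t in range(2, n):
        # channel 0 is an oscillator; channel 1 is driven by channel 0
        x[t, 0] = 1.2 * x[t - 1, 0] - 0.8 * x[t - 2, 0] + noise[t, 0]
        x[t, 1] = 0.6 * x[t - 1, 0] + 0.5 * x[t - 1, 1] + noise[t, 1]
    a = fit_mvar(x, order=5)
    gamma = normalized_dtf(a, freqs=np.arange(1, 100), fs=fs)
    # The influence 0 -> 1 should dominate the influence 1 -> 0.
    print("mean DTF 0->1:", gamma[:, 1, 0].mean().round(2))
    print("mean DTF 1->0:", gamma[:, 0, 1].mean().round(2))
```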
16.1 INTRODUCTION
Listening to a concert is also enjoyable while watching the musicians play. Under normal circumstances, we are not confused by seeing the drumstick movement or the lip movement of the singer after hearing the beat and the vocals. When, in our conscious experience of the world, the senses appear to be united, this also seems to imply that stimulus processing in the different modalities must have reached consciousness at about the same time.
Apparently, the task of judging which stimuli have appeared simultaneously is quite challenging for the brain: during the past decade, an increasing number of studies have been published indicating that temporal perception remains plastic throughout the lifetime. These studies demonstrated that when stimuli from different sensory modalities are presented repeatedly at a small, constant temporal onset asynchrony, their temporal disparity is, after a while, perceived as diminished in conscious experience. This chapter describes electrophysiological results on interaction processes between the auditory and visual cortices during constant asynchronous presentation of audiovisual stimuli in a rodent preparation designed to mimic relevant aspects of classic human experiments on the recalibration of temporal order judgment.
16.1.1 Speed of Signal Transmission Is Modality-Specific
From the point in time at which a single event causes an auditory and a visual signal to the point in time at which a certain brain area is activated by these signals, information about temporal congruity is blurred in various ways by the different speeds of transmission of the two signals. The first temporal disparities in signal propagation arise outside the brain from the different velocities of sound and light. At the receptor level, sound transduction in the ear is faster than phototransduction in the retina (see Fain 2003 for a detailed review). The minimum response latency for a bright flash, approximately 7 ms, is nearly the same in rods and cones (Cobbs and Pugh 1987; Hestrin and Korenbrot 1990; Robson et al. 2003), but with low light intensities, the rod-driven response might take as long as 300 ms (Baylor et al. 1984, 1987). In contrast, transduction by the hair cells of the inner ear is effectively instantaneous via direct mechanical linkage (~10 µs; Corey and Hudspeth 1979, 1983; Crawford and Fettiplace 1985; Crawford et al. 1991). Next, the duration of the transmission of auditory and visual signals depends on the length of the nerves used for their transmission (Von Békésy 1963; Harrar and Harris 2005). The relationship of transmission delays between sensory modalities is further complicated by the fact that, in each modality, processing speed seems to be modulated by detailed physical stimulus characteristics, such as stimulus intensity (Wilson and Anstis 1969) and visual eccentricity (Nickalls 1996; Kopinska and Harris 2004), as well as by subjective factors, such as attention (e.g., Posner et al. 1980).
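To make these numbers concrete, the back-of-the-envelope sketch below combines the physical travel times of sound and light with the approximate transduction latencies cited above (about 7 ms for a bright flash in photoreceptors versus about 10 µs in cochlear hair cells). Neural conduction and downstream processing delays are deliberately ignored, so the values are illustrative only.

```python
# Back-of-the-envelope estimate of the audiovisual arrival asynchrony at the
# receptor output as a function of viewing distance. Only physical travel time
# and the approximate transduction latencies quoted in the text are included;
# neural conduction and later processing stages are ignored.
SPEED_OF_SOUND = 343.0            # m/s, in air
SPEED_OF_LIGHT = 3.0e8            # m/s
PHOTOTRANSDUCTION_S = 0.007       # ~7 ms for a bright flash (much longer when dim)
HAIR_CELL_TRANSDUCTION_S = 10e-6  # ~10 microseconds


def arrival_asynchrony_ms(distance_m):
    """Positive values mean the auditory signal lags the visual signal."""
    visual = distance_m / SPEED_OF_LIGHT + PHOTOTRANSDUCTION_S
    auditory = distance_m / SPEED_OF_SOUND + HAIR_CELL_TRANSDUCTION_S
    return (auditory - visual) * 1000.0


for d in (1, 3, 10, 30):
    print(f"{d:>3} m: sound lags light by {arrival_asynchrony_ms(d):6.1f} ms")
```

Under these simplifying assumptions the auditory signal actually leads at very short distances (because phototransduction is slow) and lags by tens of milliseconds beyond roughly 10 m, which illustrates why no single fixed offset can compensate for all viewing distances.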
16.1.2 Simultaneity Constancy
The ability to perceive stimuli as simultaneous despite their different transmission delays has been termed simultaneity constancy (Kopinska and Harris 2004). Several studies demonstrated that human beings are able to compensate for temporal lags caused by variances in spatial distance (Engel and Dougherty 1971; Sugita and Suzuki 2003; Kopinska and Harris 2004; Alais and Carlile 2005). Interestingly, the compensation also worked when distance cues were presented only to a single modality. In the study by Sugita and Suzuki (2003), only visual distance cues were used. Alais and Carlile (2005) varied only cues for auditory distance perception. The question of which cues are essential to induce a lag compensation is still a matter of ongoing debate, as there are also several studies that failed to find evidence for a similar perceptual compensation (Stone 2001; Lewald and Guski 2004; Arnold et al. 2005; Heron et al. 2007).
16.1.3 Temporal Recalibration
Because the transmission delays of auditory and visual signals depend on many factors, they cannot be described by simple rules. One way to deal with this complexity could be for the compensation mechanisms to remain plastic throughout life so that they can flexibly adapt to new sets of stimuli and their typical transmission delays.
The existence of temporal recalibration to new stimuli has been demonstrated in several studies (Fujisaki et al. 2004; Vroomen et al. 2004; Navarra et al. 2005; Heron et al. 2007; Keetels and Vroomen 2007). In these studies, experimental paradigms typically start with an adaptation phase in which auditory and visual stimuli are presented repeatedly over several minutes, consistently at a slight onset asynchrony of about 0 to 250 ms. In a subsequent behavioral testing phase, auditory and visual stimuli are presented at various temporal delays and their perceived temporal distance is usually assessed by a simultaneity judgment task (subjects have to indicate whether the stimuli are simultaneous or not) or a temporal order judgment task (subjects have to indicate which of the stimuli they perceived first). Using these procedures, temporal recalibration could be demonstrated repeatedly: the average time one stimulus had to lead the other for the two to be judged as occurring simultaneously, the point of subjective simultaneity (PSS), was shifted in the direction of the lag used in the adaptation phase (Fujisaki et al. 2004; Vroomen et al. 2004). For example, if sound was presented before light in the adaptation phase, then in the testing phase the sound stimulus had to be presented earlier in time than before adaptation to be regarded as having occurred simultaneously with the light stimulus. In addition, several studies observed an increase in the just noticeable difference (JND), the smallest temporal interval between two stimuli needed for participants in a temporal order task to judge correctly which stimulus was presented first in 75% of the trials (Fujisaki et al. 2004; Navarra et al. 2005).
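As a concrete illustration of how PSS and JND are typically estimated, the sketch below fits a cumulative Gaussian to hypothetical "light first" judgments from a temporal order task: the PSS is the asynchrony at which the fitted curve crosses 50%, and the JND is taken as the offset from the PSS at which it reaches 75%. The response proportions and SOA values are invented for illustration.

```python
# Illustration of how PSS and JND are estimated from temporal order judgments:
# fit a cumulative Gaussian to the proportion of "light first" responses as a
# function of audiovisual onset asynchrony. All response data are invented.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# SOA convention: positive = light onset precedes sound onset (ms).
soa_ms = np.array([-240, -160, -80, -40, 0, 40, 80, 160, 240])
p_light_first = np.array([0.02, 0.06, 0.15, 0.28, 0.42, 0.60, 0.79, 0.94, 0.99])


def psychometric(soa, pss, sigma):
    """Cumulative Gaussian: probability of reporting 'light first'."""
    return norm.cdf(soa, loc=pss, scale=sigma)


(pss, sigma), _ = curve_fit(psychometric, soa_ms, p_light_first, p0=(0.0, 80.0))
jnd = norm.ppf(0.75) * sigma   # offset from PSS yielding 75% correct order

# With these invented data the PSS is slightly positive, i.e., the light would
# have to lead the sound by a few tens of milliseconds to appear simultaneous.
print(f"PSS = {pss:6.1f} ms, JND = {jnd:5.1f} ms")
```

Recalibration is then quantified as the shift of the fitted PSS (and any change in the JND) between the pre- and post-adaptation testing phases.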
16.1.4 Mechanisms of Temporal Recalibration
The neural mechanisms underlying temporal recalibration have not yet been investigated in detail. In the following, we review current psychophysical data with respect to cognitive processes hypothetically involved in recalibration, in order to develop first ideas about the neural levels at which recalibration might operate. The idea that temporal recalibration works at an early level of processing is quite attractive: more accurate temporal information is available at the early stages because the different processing delays of later stages have not yet been added. However, there are also reasons to believe that recalibration works at later levels: recalibration effects are usually observed in conscious perception, and it is plausible to assume that the conscious percept is also shaped by the results of later processing stages. For recalibration to operate correctly, it should therefore also compensate for the delays of these later stages.

16.1.4.1 Are There Any Indications for Recalibration at Early Levels of Stimulus Processing?
There are indications that recalibration does not occur at the very periphery. Fujisaki et al. (2004) presented sound stimuli during the testing phase to a different ear than during the adaptation phase and found clear evidence for recalibration. They concluded that recalibration occurs at least at stages of processing where information from both ears had already been combined. To investigate the possible neuronal mechanism of temporal recalibration, the neuronal sites at which temporal onset asynchronies are represented might be of interest. There are indications that neurons are tuned to different onset asynchronies of multimodal stimuli at the level of the superior colliculus (Meredith et al. 1987), but there are also first findings of neural correlates of onset asynchrony detection at the cortical level (Bushara et al. 2001; Senkowski et al. 2007).

16.1.4.2 To What Extent Does Temporal Recalibration Need Attentional Resources?
An increasing number of results indicate that processes of synchrony detection require attentional resources (Fujisaki and Nishida 2005, 2008; Fujisaki et al. 2006). Recalibration is often measured by a change in the perception of synchrony, but preliminary results suggest that the mechanisms
of recalibration and attention might be independent: Fujisaki and colleagues found no interaction between the shift in the PSS caused by attention and the shift in the PSS caused by adaptation in a recalibration experiment (Fujisaki et al. 2004).

16.1.4.3 Is Recalibration Stimulus-Specific?
Several studies have demonstrated that lag adaptation can easily generalize to stimuli not presented during the adaptation phase, suggesting that temporal recalibration occurs at a level of processing that abstracts from the details of the specific stimuli (Fujisaki et al. 2004; Navarra et al. 2005; Vatakis et al. 2007, 2008).

16.1.4.4 Is Recalibration Modality-Specific?
Also fundamental for understanding the basics of recalibration is the question of whether it is a supramodal process. As, in conscious experience, the information from all senses usually appears to be temporally aligned, a hypothetical compensatory process should take into account the various temporal delays of all modalities. If there were separate compensatory mechanisms for all combinations of modality pairs, this might cause conflicts between the different compensatory mechanisms. Recalibration experiments involving modality pairs other than the audiovisual one have yielded variable results (Miyazaki et al. 2006; Navarra et al. 2006; Hanson et al. 2008; Harrar and Harris 2008). If there were a single compensatory mechanism, we should be able to observe a transfer of recalibration across modality pairings. In the study of Harrar and Harris (2008), exposure to asynchronous visuotactile stimuli in the adaptation phase shifted the PSS when participants subsequently performed an audiovisual temporal order judgment task, and adaptation to asynchronous audiotactile stimuli caused an increase in the JND in an audiovisual temporal order judgment task. However, the effects do not seem to be simple because, in this study, no recalibration effects were found when audiotactile and visuotactile pairings were used in the testing phase.

16.1.4.5 Does Recalibration Occur at the Decision Level?
Fujisaki et al. (2004) advanced the hypothesis that recalibration might occur as late as at the decision level. According to this hypothesis, the effect of recalibration could be explained by a change in the response bias in the temporal order task. Fujisaki and colleagues tested this hypothesis by assessing their participants' perception of simultaneity indirectly with an auditory-induced visual illusion. As the perception of this illusion changed after the lag adaptation phase, they concluded that recalibration does not occur at the response level.
16.1.5 Outlook on Experiments
This short review of studies addressing questions about the mechanisms of recalibration makes it clear that it is still too early to deduce any precise hypothesis about the neural level at which recalibration might operate. In the current exploratory study, we began by searching for neural mechanisms of recalibration at the level of the primary sensory cortex. In the past decade, the primary sensory cortices have repeatedly been demonstrated to be involved in multisensory interactions (e.g., Cahill et al. 1996; Brosch et al. 2005; Bizley et al. 2007; Kayser et al. 2008; Musacchia and Schroeder 2009). The experimental paradigm for rodents resembled the previously described human studies on temporal recalibration: auditory and visual stimuli were presented repeatedly at a constant intermodal temporal onset asynchrony of 200 ms. We implanted one electrode into the primary auditory cortex and one electrode into the visual cortex of Mongolian gerbils, and during stimulation, local field potentials were recorded in the awake animal. Our main question of interest was whether the interaction patterns between auditory
and visual cortices change during the course of continuous asynchronous presentation of auditory and visual stimuli. There is accumulating evidence that the synchronization dynamics between brain areas might reflect their mode of interaction (Bressler 1995, 1996). We examined directional influences between auditory and visual cortices by analyzing the local field potential data using the DTF (Kaminski and Blinowska 1991).
16.2 METHODS

16.2.1 Animals
Data were obtained from eight adult male Mongolian gerbils (Meriones unguiculatus). All animal experiments were reviewed and approved by the animal care committee of the Land Sachsen-Anhalt.
16.2.2 Electrodes
Electrodes were made of stainless steel wire (diameter, 185 µm) and were deinsulated only at the tip. The tip of the reference electrodes was bent into a small loop (diameter, 0.6 mm). The impedance of the recording electrodes was 1.5 MΩ (at 1 kHz).
16.2.3 Animal Preparation and Recording
Electrodes were chronically implanted under deep ketamine–xylazine anesthesia (xylazine, 2 mg/100 g body weight, i.p.; ketamine, 20 mg/100 g body weight, i.p.). One recording electrode was inserted into the right primary auditory cortex and one into the right visual cortex, at depths of 300 µm, using a microstepper. Two reference electrodes were positioned onto the dura mater over the region of the parietal and the frontal cortex, electrically connected, and served as a common frontoparietal reference. After the operation, animals were allowed to recover for 1 week before the recording sessions began. During the measurements, the animal was allowed to move freely in the recording box (20 × 30 cm). The measured local field potentials from auditory and visual cortices were digitized at a rate of 1000 Hz.
16.2.4 Stimuli
Auditory and visual stimuli were presented at a constant intermodal stimulus onset asynchrony of 200 ms. The duration of both the auditory and the visual stimuli was 50 ms, and the intertrial interval varied randomly between 1 and 2 s with a rectangular distribution of intervals in that range. Acoustic stimuli were tones presented from a loudspeaker located 30 cm above the animal. The tone frequency was chosen for each individual animal to match the frequency that had evoked, in preparatory experiments, the strongest local field potential amplitude at the recording site within the tonotopic map of primary auditory cortex (Ohl et al. 2000, 2001). The frequencies used ranged from 250 Hz to 4 kHz, with the peak level of the tone stimuli varying between 60 dB (low frequencies) and 48 dB (high frequencies), measured with a Brüel & Kjær sound level meter. Visual stimuli were flashes presented from an LED lamp (9.6 cd/m²) located at the height of the eyes of the animal.
16.2.5 Experimental Protocol
To be able to examine both short-term and long-term adaptation effects, animals were presented with asynchronous stimuli for 10 sessions, with 750 stimulus presentations in each session. For four animals the auditory stimuli were presented first; for the remaining four animals the visual stimuli were presented first.
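To make the stimulation protocol concrete, the following sketch generates stimulus onset times for one session under the parameters stated above (200-ms onset asynchrony, 750 presentations, 1–2 s uniform intertrial interval). The function name and the assumption that the intertrial interval separates successive leading-stimulus onsets are illustrative choices, not taken from the original experiment code.

```python
import numpy as np

def session_onsets(n_trials=750, soa=0.2, iti_range=(1.0, 2.0),
                   auditory_first=True, rng=None):
    """Return (auditory_onsets, visual_onsets) in seconds for one session.

    Each pair's leading stimulus follows the previous pair by a random 1-2 s
    interval (rectangular distribution); the trailing stimulus follows the
    leading one by the fixed 200-ms stimulus onset asynchrony.
    """
    rng = np.random.default_rng(rng)
    itis = rng.uniform(*iti_range, size=n_trials)   # rectangular distribution of ITIs
    first = np.cumsum(itis)                         # onsets of the leading stimulus
    second = first + soa                            # trailing stimulus at fixed SOA
    return (first, second) if auditory_first else (second, first)
```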
16.2.6 Data Preprocessing
The local field potential of each trial was analyzed from 1 s before to 1 s after the first stimulus. The data of this time period were detrended separately for each trial and each channel. In addition, the temporal mean and the temporal standard deviation of the time period were determined for each trial and each channel and used for z-standardization. Amplifier clippings, which resulted from movements of the animals, were identified by visual inspection. Only artifact-free trials were included in the analysis (~70–90% of the trials).
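As a concrete illustration of this preprocessing, the short sketch below detrends and z-standardizes one trial's two-channel local field potential segment. The function name and array layout are assumptions made for illustration; "detrended" is read here as removal of a linear trend.

```python
import numpy as np
from scipy.signal import detrend

def preprocess_trial(lfp_segment):
    """Detrend and z-standardize one trial (channels x samples), as in 16.2.6.

    lfp_segment is assumed to cover 1 s before to 1 s after the first stimulus.
    """
    seg = detrend(lfp_segment, axis=-1)        # remove linear trend per channel
    mu = seg.mean(axis=-1, keepdims=True)      # temporal mean per channel
    sd = seg.std(axis=-1, keepdims=True)       # temporal standard deviation per channel
    return (seg - mu) / sd                     # z-standardized trial
```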
16.2.7 DTF: Mathematical Definition
Directional influences between the auditory and the visual cortex were analyzed in single trials by estimating the DTF (Kaminski and Blinowska 1991; Kaminski et al. 2001; for comparison of the performance of the DTF with other spectral estimators, see Kus et al. 2004; Astolfi et al. 2007). The DTF is based on the concept of Granger causality. According to this concept, one time series can be called causal to a second one if its values can be used for the prediction of values of the second time series measured at later time points. This basic principle is typically mathematically represented in the formalism of autoregressive models (AR models). Let X_1(t) be the time series data from a selectable channel 1, and X_2(t) the data from a selectable channel 2:

X_1(t) = \sum_{j=1}^{p} A_{1\to 1}(j)\,X_1(t-j) + \sum_{j=1}^{p} A_{2\to 1}(j)\,X_2(t-j) + E \qquad (16.1)

X_2(t) = \sum_{j=1}^{p} A_{1\to 2}(j)\,X_1(t-j) + \sum_{j=1}^{p} A_{2\to 2}(j)\,X_2(t-j) + E \qquad (16.2)

Here, the A(j) are the autoregressive coefficients at time lag j, p is the order of the autoregressive model, and E the prediction error. According to the concept of Granger causality, in Equation 16.1, the channel X_2 is said to have a causal influence on channel X_1 if the prediction error E can be reduced by including past measurements of channel X_2 (for the influence of the channel X_1 on the channel X_2, see Equation 16.2). To investigate the spectral characteristics of interchannel interaction, the autoregressive coefficients in Equation 16.1 were Fourier-transformed; the transfer matrix was then obtained by matrix inversion:

\begin{pmatrix} A_{1\to 1}(f) & A_{2\to 1}(f) \\ A_{1\to 2}(f) & A_{2\to 2}(f) \end{pmatrix}^{-1} = \begin{pmatrix} H_{1\to 1}(f) & H_{2\to 1}(f) \\ H_{1\to 2}(f) & H_{2\to 2}(f) \end{pmatrix} \qquad (16.3)

where the components of the A(f) matrix are

A_{l\to m}(f) = 1 - \sum_{j=1}^{p} A_{l\to m}(j)\,e^{-i 2\pi f j} \quad \text{when } l = m \qquad (16.4)

A_{l\to m}(f) = 0 - \sum_{j=1}^{p} A_{l\to m}(j)\,e^{-i 2\pi f j} \quad \text{otherwise,} \qquad (16.5)

with l being the number of the transmitting channel and m the number of the receiving channel. The DTF for the influence from a selectable channel 1 to a selectable channel 2, DTF_{1\to 2}, is defined as

DTF_{1\to 2}(f) = |H_{1\to 2}(f)|^2 \qquad (16.6)
In the case of only two channels, the DTF measures the predictability of the frequency response of one channel from the other channel measured earlier in time. When, for example, X_1 describes the local field potential from the auditory cortex and X_2 the local field potential from the visual cortex, and the amplitude of the nDTF_{1→2} has high values in the beta band, this means that we are able to predict the beta response of the visual cortex from the beta response of the auditory cortex measured earlier in time. There are several possible situations of cross-cortical interaction that might underlie the modulation of DTF amplitudes (see, e.g., Kaminski et al. 2001; Cassidy and Brown 2003; Eichler 2006). See Section 16.4 for more details.
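To make Equations 16.3 through 16.6 concrete, the sketch below computes DTF amplitudes from an already estimated set of bivariate AR coefficients. The array layout (rows index the receiving channel, columns the transmitting channel, as in Equation 16.3) and the function name are assumptions made for this illustration.

```python
import numpy as np

def dtf_from_ar(A, freqs, fs):
    """DTF amplitudes |H_{l->m}(f)|^2 from bivariate AR coefficients.

    A: array of shape (p, 2, 2); A[j-1][m, l] is A_{l->m}(j), i.e., the
       coefficient from transmitting channel l to receiving channel m at lag j.
    Returns dtf with dtf[k, m, l] = |H_{l->m}(freqs[k])|**2.
    """
    p = A.shape[0]
    j = np.arange(1, p + 1)
    dtf = np.empty((len(freqs), 2, 2))
    for k, f in enumerate(freqs):
        # A(f) = I - sum_j A(j) exp(-i 2 pi (f/fs) j)   (Eqs. 16.4 and 16.5)
        phases = np.exp(-2j * np.pi * (f / fs) * j)
        Af = np.eye(2, dtype=complex) - np.tensordot(phases, A, axes=(0, 0))
        Hf = np.linalg.inv(Af)                  # transfer matrix (Eq. 16.3)
        dtf[k] = np.abs(Hf) ** 2                # DTF_{l->m}(f) = |H_{l->m}(f)|^2 (Eq. 16.6)
    return dtf

# Example: with the auditory cortex as channel 0 and the visual cortex as
# channel 1, dtf[:, 1, 0] is the auditory-to-visual DTF across frequencies.
```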
16.2.8 Estimation of Autoregressive Models
We fitted bivariate autoregressive models to the local field potential time series from auditory and visual cortices using the Burg method, as this algorithm has been shown to provide accurate results (Marple 1987; Kay 1988; Schlögl 2006). We partitioned the time series data of single trials into 100-ms time windows that were stepped in intervals of 5 ms through each trial, from 1 s before the first stimulus to 1 s after the first stimulus. Models were estimated separately for each time window of the single trials. Occasionally, the covariance matrix used for estimation of the AR coefficients turned out to be singular or close to singular; in these rare cases, the whole trial was not analyzed any further. In the present study, we used a model order of 8, and the sampling rate of 1000 Hz was used for model estimation. The model order was determined by the Akaike Information Criterion (Akaike 1974). After model estimation, the adequacy of the model was tested by analyzing the residuals (Lütkepohl 1993). Using this model order, the auto- and crosscovariance of the residuals was found to have values between 0.001% and 0.005% of the auto- and crosscovariance of the original data (data averaged from two animals here). In other words, the model was able to capture most of the covariance structure contained in the data. When DTFs were computed from the residuals, the single-trial spectra were almost flat, indicating that the noise contained in the residuals was close to white noise. The estimation of AR models requires normality of the process. To analyze the extent to which the normality assumption was fulfilled in our data, the residuals were inspected by plotting them as histograms and, in addition, a Lilliefors test was computed separately for the residuals of the single data windows. In about 80% of the data windows, the Lilliefors test confirmed the normality assumption. A second requirement for the estimation of autoregressive models is the stationarity of the time series data. Generally, this assumption is better fulfilled with small data windows (Ding et al. 2000), although it is impossible to tell in advance at which data window a complex system like the brain will move to another state (Freeman 2000). A further reason why the use of small data windows is recommendable is that changes in the local field potential are captured at a higher temporal resolution. The spectral resolution of low frequencies does not seem to be a problem for small data windows when the spectral estimates are based on AR models (for a mathematical treatment of this issue, see, e.g., Marple 1987, p. 199f).
Using a high sampling rate ensures that the number of data points contained in the small time windows is sufficient for model estimation. For example, when we used a sampling rate of 500 Hz instead of 1000 Hz to estimate models from our time windows of 100 ms, the covariance of the residuals increased, signaling that the estimation had become worse (the autocovariance of the residuals of the auditory and visual channels at 1000 Hz were about 10% of the auto- and crosscovariance of the auditory and visual channels at 500 Hz). Importantly, when inspecting the spectra visually, they seemed to be quite alike, indicating that AR models were robust, to an extent, to a change in sampling rate. When using a data window of 200 ms with the same sampling rate of 500 Hz, the model estimation improved (the covariance of the residuals was 20–40% of the covariance of a model with a window of 100 ms), but at the expense of the temporal resolution.
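A rough sketch of the sliding-window estimation described in this section is given below. For simplicity it fits the bivariate AR model by ordinary least squares rather than the Burg algorithm used by the authors; window and step sizes follow the values stated in the text (100-ms windows stepped in 5-ms increments, model order 8, 1000-Hz sampling), and it reuses dtf_from_ar from the earlier sketch. Function names are illustrative.

```python
import numpy as np

def fit_var_ols(x, p):
    """Least-squares fit of a bivariate AR(p) model to x (2 x n_samples).

    Returns A with A[j-1][m, l] = A_{l->m}(j), plus the residuals.
    (A simple stand-in for the Burg estimator used in the chapter.)
    """
    n_ch, n = x.shape
    Y = x[:, p:]                                               # targets
    Z = np.vstack([x[:, p - j:n - j] for j in range(1, p + 1)])  # stacked lagged data
    B = Y @ np.linalg.pinv(Z)                                  # (n_ch, n_ch*p) coefficients
    A = B.reshape(n_ch, p, n_ch).transpose(1, 0, 2)            # -> A[j-1][m, l]
    resid = Y - B @ Z                                          # residuals for model checks
    return A, resid

def sliding_window_dtf(trial, fs=1000, win_ms=100, step_ms=5, p=8, freqs=None):
    """DTF time course for one preprocessed trial (2 x n_samples)."""
    freqs = np.arange(1, 101) if freqs is None else freqs
    win, step = int(win_ms * fs / 1000), int(step_ms * fs / 1000)
    out = []
    for start in range(0, trial.shape[1] - win + 1, step):
        A, _ = fit_var_ols(trial[:, start:start + win], p)
        out.append(dtf_from_ar(A, freqs, fs))                  # sketch from Section 16.2.7
    return np.stack(out)                                       # (n_windows, n_freqs, 2, 2)
```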
16.2.9 Normalization of DTF
Kaminski and Blinowska (1991) suggested normalization of the DTF relative to the structure that sends the signal, i.e., for the case of the directed transfer from the auditory channel to the visual channel:

\mathrm{nDTF}_{A\to V}(f) = \frac{|H_{A\to V}(f)|^2}{\sum_{M=1}^{k} |H_{M\to V}(f)|^2} \qquad (16.7)
In the two-channel case, the DTF_{A→V} is divided by the sum of itself and the spectral autocovariance of the visual channel. Thus, when using this normalization, the amplitude of the nDTF_{A→V} depends on the influence of the auditory channel on itself and, reciprocally, the amplitude of the nDTF_{V→A} is dependent on the influence of the visual channel on itself. This is problematic in two ways: first, we cannot tell whether differences between the amplitude of the nDTF_{A→V} and the amplitude of the nDTF_{V→A} are due to differences in normalization or to differences in the strengths of cross-cortical influences. Second, analysis of our data has shown that the auditory and the visual stimuli influenced both the amplitude of the local field potential and the spectral autocovariance of both the auditory and the visual channels. Thus, it is not clear whether changes in the amplitude of the nDTF after stimulation signal changes in the cross-cortical interaction or changes in the spectral autocovariance of the single channels. As the nonnormalized DTF is difficult to handle because of the large differences in the amplitudes at different frequencies, we normalized the DTF in the following way:

\mathrm{nDTF}_{A\to V}(f) = \frac{\mathrm{DTF}_{A\to V}(f)}{\sum_{1}^{n\_session}\,\sum_{1}^{n\_trials}\,\sum_{1}^{n\_windows} \mathrm{DTF}_{A\to V}(f)\,/\,(n\_windows \cdot n\_trials \cdot n\_session)} \qquad (16.8)
with n_windows being the number of time windows of the prestimulus interval per trial, n_trials the number of trials per session, and n_session the number of sessions. Hence, the amplitude of the DTF estimated for each single time window of the single trials was divided by the average of the DTF of all time windows taken from the 1 s prestimulus interval of the single trials of all sessions.
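In code, the normalization of Equation 16.8 amounts to dividing every single-window DTF value by the grand average of the DTF over all prestimulus windows of all trials and sessions. The sketch below assumes the DTF values for one transfer direction have already been collected into a single array; names and array layout are illustrative.

```python
import numpy as np

def normalize_dtf(dtf_trials, prestim_mask):
    """Normalize single-window DTF amplitudes by the prestimulus average (Eq. 16.8).

    dtf_trials: (n_trials, n_windows, n_freqs) DTF amplitudes for one direction,
                pooled over all sessions.
    prestim_mask: boolean array over windows marking the 1-s prestimulus interval.
    """
    baseline = dtf_trials[:, prestim_mask, :].mean(axis=(0, 1))  # average over trials and windows
    return dtf_trials / baseline                                  # nDTF per window, trial, frequency
```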
16.2.10 Statistical Testing
We assessed the statistical significance of differences in the amplitude of the nDTF using the bootstrap technique (e.g., Efron and Tibshirani 1993) to avoid being bound to assumptions about the
empirical statistical error distribution of the nDTF (but see Eichler 2006 for an investigation of the statistical properties of the DTF). The general procedure was as follows: first, bootstrap samples were drawn from the real data under the assumption that the null hypothesis was true. Then, for each bootstrap sample, a chosen test statistic was computed. The values of the test statistic from all bootstrap samples formed a distribution of values of the test statistic under the assumption of the null hypothesis. Next, we determined from the bootstrap distribution of the test statistic the probability of finding values equal to or larger than the empirically observed one by chance. If this probability was less than the preselected significance level, the null hypothesis was rejected. More specifically, in our first bootstrap test, we wanted to test whether the nDTF has higher amplitude values in the poststimulus interval than in the prestimulus interval. Under the assumption of the null hypothesis, the nDTF amplitude values of the prestimulus and the poststimulus interval should not differ from each other. Thus, pairs of bootstrap samples were generated by taking single-trial nDTF amplitude values at random but with replacement from the prestimulus and from the poststimulus interval. For each of the sample pairs, the amplitudes were averaged across trials, and the difference between the averages was computed separately for each pair. This procedure of drawing samples was repeated 1000 times, yielding a distribution of differences between the average amplitudes. The resulting bootstrap distribution was then used to determine the probability of the real amplitude difference of the averages between the prestimulus and the poststimulus interval under the assumption of the null hypothesis. In a second bootstrap test, we assessed the significance of the slope of a line fitted to the data by linear regression analysis. We used the null hypothesis that the predictor variable (here, the number of stimulus presentations) and the response variable (here, the nDTF amplitude) are independent of each other. We generated bootstrap samples by randomly pairing the values of the predictor and response variables. For each of these samples, a line was fitted by linear regression analysis and the slope was computed, obtaining a distribution of slope values under the null hypothesis.
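The first bootstrap test can be sketched as follows. The sketch pools the prestimulus and poststimulus values before resampling to embody the null hypothesis, which is one way to read the resampling procedure described above; the function name is illustrative. The slope test proceeds analogously, with the pairing of predictor and response values shuffled before each regression fit.

```python
import numpy as np

def bootstrap_pre_post(pre, post, n_boot=1000, rng=None):
    """One-sided bootstrap test: is the mean nDTF amplitude higher post- than prestimulus?

    pre, post: 1-D arrays of single-trial nDTF amplitudes at one frequency and time window.
    Returns the p-value of the observed difference of means under the null hypothesis.
    """
    rng = np.random.default_rng(rng)
    observed = post.mean() - pre.mean()
    pooled = np.concatenate([pre, post])        # under H0, pre and post are exchangeable
    diffs = np.empty(n_boot)
    for b in range(n_boot):
        a = rng.choice(pooled, size=pre.size, replace=True)    # bootstrap "prestimulus" sample
        c = rng.choice(pooled, size=post.size, replace=True)   # bootstrap "poststimulus" sample
        diffs[b] = c.mean() - a.mean()
    return np.mean(diffs >= observed)           # probability of a difference at least as large
```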
16.3 RESULTS

16.3.1 Stimulus-Induced Changes in Single-Trial nDTF, Averaged across All Trials from All Sessions
For a first inspection of the effect the audiovisual stimulation had on the nDTF from the auditory to the visual cortex (nDTFA→V) and from the visual to the auditory cortex (nDTFV→A), we averaged nDTF amplitudes across all single trials of all sessions, separately for each time window from 1 s before to 1 s after the first stimulus. Figure 16.1 shows time-frequency plots of the nDTFA→V (left), which describes the predictability of the frequency response of the visual cortex based on the frequency response of the auditory cortex, and the nDTFV→A (right), which describes the predictability of the frequency response of the auditory cortex based on the frequency response of the visual cortex. Results from animals receiving the light stimulus first are presented in the upper two graphs, and results from animals receiving the tone stimulus first are shown in the lower two graphs. Data from 200 ms before the first stimulus to 1 s after the first stimulus are shown here. Note that the abscissa indicates the start of a time window (window duration: 100 ms), so the data from time windows starting 100 ms before the first stimulus are already influenced by effects occurring after the presentation of the first stimulus. The significance of the observed changes in the nDTF amplitude was assessed separately for each animal using Student's t-test based on the bootstrap technique (see Methods). More precisely, we tested whether the amplitudes of the nDTF averaged across trials at different time points after the presentation of the first stimulus were significantly different from the nDTF amplitude of the prestimulus interval, averaged across trials and time from 1000 to 100 ms before the first stimulus.
[Figure 16.1: time–frequency plots (Frequency [Hz] versus Time [s]) of the nDTFA→V, the nDTFV→A, and their difference.]
FIGURE 16.1 (a and b) nDTFA→V (left) and nDTF V→A (right), averaged across all trials from all sessions, separately for time windows from –0.2 to 0.9 s after start of first stimulus. (a) Animal receiving light first. (b) Animal receiving tone first. (c) Difference between averages (nDTFA→V – nDTFV→A). Animal receiving light first (left). Animal receiving tone first (right).
To compare the relative amplitudes of the nDTFA→V and the nDTFV→A, we tested whether the difference of the amplitudes of the nDTFA→V and the nDTFV→A averaged across trials at different time points after the presentation of the first stimulus was significantly different from the difference of the amplitudes of the nDTFA→V and the nDTFV→A in the prestimulus interval. In the following, we describe only those peaks of the nDTF amplitude that deviated significantly (P < 0.01) from the average amplitude of the prestimulus interval.

16.3.1.1 Animals Receiving Light Followed by Tone Stimulus (VA-Animals)
At first sight, the response of the nDTFA→V closely resembled the response of the nDTFV→A. In animals receiving first the light stimulus and then the tone stimulus, we observed two prominent positive peaks in both the nDTFA→V (Figure 16.1a, left) and the nDTFV→A (Figure 16.1a, right), the first one, following the light stimulus, starting at about –20 ms, and the second one, following the tone stimulus, starting at about 151 ms. After the second peak, the amplitudes of the nDTFA→V and the nDTFV→A dropped to slightly less than the prestimulus baseline values and returned very slowly to the prestimulus values within the next second.
Even though the temporal development and the frequency spectra were roughly similar in the nDTFA→V and the nDTFV→A, there were small but important differences. First, there were stimulus-evoked differences in the amplitudes of the nDTFA→V and the nDTF V→A (Figure 16.1c, left, and the line plots in Figure 16.2, top). After the visual stimulus, the nDTF amplitude was significantly higher in the nDTFV→A than in the nDTFA→V, whereas after the auditory stimulus, the nDTFA→V reached higher values, but only at frequencies exceeding 30 Hz. Second, even though the peaks could be found at all frequency bands in the nDTF V→A, the first peak was strongest at a frequency of 1 Hz and at about 32 Hz, and the second peak at frequencies of 1 Hz and at about 40 Hz. In the nDTFA→V, the highest amplitude values after the first peak could be observed at 1 Hz and at about 35 Hz and after the second peak at 1 Hz and at about 45 Hz.
16.3.1.2 Animals Receiving Tone Followed by Light Stimulus (AV-Animals)
In animals receiving first the tone stimulus and then the light stimulus, three positive peaks developed after stimulation. As in the VA animals, the nDTFA→V and nDTFV→A were similar to each other (Figure 16.1b and the line plots in Figure 16.2, bottom).
[Figure 16.2: nDTF time courses from –200 to 800 ms, plotted separately for frequencies from 5 to 95 Hz.]
FIGURE 16.2 Top: representative nDTFV→A (dashed) and nDTFA→V (solid), averaged across all trials from all sessions, separately for all time windows from –200 to 900 ms after start of first stimulus, from an animal receiving light first, followed by tone stimulus. Bottom: data from an animal receiving tone first, followed by light stimulus.
The first peak could be found between the tone and the light stimulus, at about –40 ms. The second and the third peaks occurred after the light stimulus, at about 170 ms and 330 ms, respectively. As in the VA animals, after the auditory stimulus (here the first stimulus), the amplitude of the nDTFA→V significantly exceeded the amplitude of the nDTFV→A for frequencies above 20 Hz, whereas after the visual stimulus, amplitudes were significantly higher in the nDTFV→A (Figure 16.1c, right). Thus, the sign of the difference between the nDTFA→V and the nDTFV→A depended on the type of the stimulus (auditory or visual) and not on the order of stimulus presentation. The peaks ran through all frequencies from 0 to 100 Hz. The first peak of the nDTFA→V was most pronounced at 1 Hz and at about 42 Hz, the second peak at 1 Hz, at about 32 Hz, and at 100 Hz. The first peak of the nDTFV→A reached its highest values at 1 Hz and at 35 Hz, and the second peak had its highest amplitude at 1 Hz and at 28 Hz. For the third peak, the amplitude was most prominent at 1 Hz.
16.3.2 Development of Amplitude of nDTFA→V and nDTFV→A within Sessions
To investigate the development of the effects within the sessions, we divided the 750 trials of each session into windows of 125 trials from the start to the end of each session. Averaging was done across the trials of each trial window, but separately for the time windows within the course of each trial. Trials from all sessions were included in the average. As the nDTF amplitude increased or decreased fairly smoothly within the sessions for the majority of the animals, we decided to characterize the effects by linear regression analysis. The slope of the regression line fitted to the observed data points was subjected to statistical testing using the bootstrap technique (for details, see Methods).

16.3.2.1 VA-Animals
In Figure 16.3a and b, the development of the nDTF amplitude of the first and the second peaks within the sessions is depicted, averaged across all four animals that received the light stimulus first. Most of the effects could roughly be observed over the whole range of frequencies tested (in Figure 16.3, we selected nDTF peaks at a frequency of 40 Hz for illustration). Nevertheless, effects did not always reach significance at all frequencies tested (see Tables 16.1 and 16.2 for more detailed information on the development of peaks at other frequencies). After the first (visual) stimulus, the amplitude of the first peak increased in the nDTFA→V and decreased in the nDTFV→A (Figure 16.3a, left). At the beginning of the session, the amplitude was higher in the nDTFV→A than in the nDTFA→V; thus, the amplitude difference between the nDTFA→V and the nDTFV→A decreased significantly over the session (Figure 16.3a, right). After the second (auditory) stimulus, the amplitude of the second peak increased both in the nDTFA→V and the nDTFV→A (Figure 16.3b, left). Importantly, the increase in the nDTFA→V exceeded the increase in the nDTFV→A, gradually increasing the difference between the nDTFA→V and the nDTFV→A (Figure 16.3b, right).

16.3.2.2 AV-Animals
Similar to the nDTF development in VA-animals after the second (auditory) stimulus, in the AV-animals the amplitude after the first (auditory) stimulus increased both in the nDTFA→V and the nDTFV→A (Figure 16.3c, left). The increase was more pronounced in the nDTFA→V, further increasing the difference between the nDTFA→V and the nDTFV→A (Figure 16.3c, right). Interestingly, after the second (visual) stimulus, the behavior of the nDTF in the AV-animals did not resemble the behavior of the nDTF after the first (visual) stimulus in the VA-animals. In the AV-animals, the amplitude of the nDTFV→A increased after the visual stimulus, whereas the amplitude of the nDTFA→V decreased slightly in some animals and increased in others (Figure 16.3d, left; Table 16.1).
[Figure 16.3: development of nDTF peak amplitudes (nDTF versus number of trial interval) for VA animals (peaks 1 and 2) and AV animals (peaks 1 and 2), showing the A→V-DTF, the V→A-DTF, and their difference.]
FIGURE 16.3 Development of nDTF peaks at 40 Hz within sessions averaged across nonoverlapping windows of 125 trials stepped through all sessions. (a and b) Animals receiving light first. (c and d) Animals receiving tone first. Left: development of average amplitude peak after first stimulus in nDTFA→V and nDTF V→A (a and c). Development of average amplitude peak after second stimulus in nDTFA→V and nDTFV→A (b and d). Right: amplitude of nDTFV→A peak subtracted from amplitude of nDTFA→V peak shown in left. Error bars denote standard error of mean, averaged across animals.
After the visual stimulus, the amplitude of the nDTFV→A was already higher than the amplitude of the nDTFA→V at the beginning of the sessions, and the difference between the nDTFA→V and the nDTFV→A further increased during the course of the sessions (Figure 16.3d, right).
16.3.3 Development of the Amplitude of nDTFA→V and nDTFV→A across Sessions To examine the effects of long-term adaptation, the nDTF amplitude of the first 100 trials was averaged separately for each session. The development of the amplitude averages across sessions
TABLE 16.1 P Values of Slope of a Regression Line Fitted to Peak Amplitudes of nDTF Averaged across Nonoverlapping Windows of 125 Trials Stepped through All Sessions
[Table body: p values for each animal (AV090, AV091, AV106, AV125, VA099, VA100, VA107, VA124) at individual frequencies, for the first and second nDTF peaks, with each value marked a, b, or c as defined in the note.]
Note: Left, first nDTF peak. Right, second nDTF peak. Animal notation: AV, animals receiving tone first; VA, animals receiving the light first.
a Slope is positive. b Slope is negative. c Nonsignificant results.
16.4 DISCUSSION
The repeated presentation of pairs of auditory and visual stimuli, with random intervals between stimulus pairs but constant audiovisual stimulus onset asynchrony within each pair, led to robust changes in the interaction dynamics between the primary auditory and the primary visual cortex. Independent of the stimulus order, when an auditory stimulus was presented, the amplitude of the nDTFA→V exceeded the amplitude of the nDTFV→A, whereas after the visual stimulus, the amplitude of the nDTFV→A reached higher values. Moreover, within adaptation sessions, some of the observed changes in nDTF amplitudes showed clear dynamic trends, whereas across adaptation sessions, no coherent development could be observed. In the following we will discuss which processes might be evoked by the repeated asynchronous presentation of audiovisual stimuli and whether they might offer suitable explanations for the amplitude changes in the nDTF we observed. As paired-stimulus adaptation protocols, similar to the one used in the present study, have been shown to induce recalibration of temporal order judgment in humans (e.g., Fujisaki et al. 2004; Vroomen et al. 2004), we want to discuss whether some of the described effects on the directed information transfer could possibly underlie such recalibration functions. To prepare the discussion, some general considerations of the interpretation of nDTF amplitudes seem appropriate.
16.4.1 Interpretation of DTF-Amplitudes
Long-range interaction processes have frequently been associated with coherent oscillatory activity between the cortices (Bressler 1995; Bressler et al. 1993; Roelfsema et al. 1997; Rodriguez et al. 1999; Varela et al. 2001). Moreover, it has been shown that the oscillatory activity in one cortical area can be predicted from earlier measurements of another cortical area using the DTF (Kaminski et al. 1997, 2001; Korzeniewska et al. 1997, 2003; Franaszczuk and Bergey 1998; Medvedev and Willoughby 1999; Liang et al. 2000), indicating that the oscillatory activity might signal directional influences between the cortices.
However, as Cassidy and Brown (2003) have demonstrated in a series of simulation studies, there is no straightforward way to infer the underlying cross-cortical interactions from the information provided by the DTF. Specifically, from DTF amplitudes alone, we cannot tell whether the information flow is unidirectional, bidirectional, or even multidirectional, including additional brain areas. Let us consider the situation after the presentation of the auditory stimulus, when the amplitude of the nDTFA→V attains higher values than the amplitude of the nDTFV→A. First, this result might indicate that there is a unidirectional influence from the auditory to the visual cortex, with the size of the amplitude difference positively correlating with the delay in the information transfer. Second, this finding could also reflect a reciprocal influence between the auditory and visual cortices, but with the influence from the auditory cortex either larger in amplitude or lagged relative to the influence from the visual cortex. Third, additional unobserved structures might be involved, sending input slightly earlier to the auditory cortex than to the visual cortex.
16.4.2 Development of nDTF-Amplitude within Sessions
The development of the nDTF after the auditory stimulus did not seem to depend strongly on the order of stimulus presentation. Independent of whether an auditory or a visual stimulus was presented first, after the auditory stimulus the peak amplitude of both the nDTFA→V and the nDTFV→A increased. Notably, the increase was more pronounced in the nDTFA→V than in the nDTFV→A, further increasing the difference between the amplitudes of the nDTFA→V and the nDTFV→A. Using the interpretation scheme introduced above, under the assumption of unidirectional interaction, not only did the influence from the auditory to the visual cortex increase in strength, but the lag with which the input is sent also became larger with increasing number of stimulus repetitions. In the case of bidirectional interaction, influences from both sides increased, but the influence from the auditory cortex became stronger relative to the influence from the visual cortex. Finally, in the case of multidirectional interaction, the influence of a third structure on both the auditory and the visual cortex might have become more pronounced, while at the same time the temporal delay of the input sent to the visual cortex relative to the input sent to the auditory cortex increased even further. All three interpretations have in common that not only the strength of the interaction but also the mode of the interaction changed. In contrast to the development of the nDTF after the auditory stimulus, the development of the nDTF after the visual stimulus clearly depended on the order of stimulus presentation. When the visual stimulus was presented first, contrary to expectations, the amplitude of the nDTFV→A decreased with increasing number of stimulus repetitions, whereas the amplitude of the nDTFA→V increased in the majority of the animals. Thus, assuming that a unidirectional influence underlies our data, this finding might reflect that the visual cortex sends influences to the auditory cortex at increasingly shorter delays. In the case of bidirectional interaction, the input from the visual cortex decreases whereas the input from the auditory cortex increases. Finally, under the assumption of multidirectional interaction, a hypothetical third structure might still send its input earlier to the visual cortex, but the delay becomes smaller with increasing number of stimulus repetitions. When the visual stimulus was presented as the second stimulus, the nDTF behaved similarly to how it behaved after the auditory stimulus. More precisely, both the peak amplitude of the nDTFA→V and that of the nDTFV→A increased within the sessions. Importantly, however, the increase was now stronger in the nDTFV→A. To summarize, the characteristic developmental trend after the second stimulus was an increase in both the nDTFA→V and the nDTFV→A, with the increase being stronger in the nDTF describing the influence from the cortex of the stimulated modality, namely in the nDTFV→A after the visual stimulus and in the nDTFA→V after the auditory stimulus. After the first stimulus, no typical development of the nDTF can be outlined: the behavior of the nDTF clearly depended on the stimulus modality, as the difference in the nDTFA→V and nDTFV→A amplitudes increased for an auditory stimulus but decreased for a visual stimulus.
16.4.3 Audiovisual Stimulus Association as a Potential Cause of Observed Changes in nDTF-Amplitudes
The cross-cortical interaction between auditory and visual cortices reflected in the peaks of the nDTF could simply be an indication that information is spread among the sensory cortices during the course of stimulus processing. However, we also have to take into account that the nDTF amplitudes increased within the sessions, signaling that the interaction between the auditory and the visual cortex intensified. In addition, after the visual stimulus, the behavior of the DTF differed strongly with the order of stimulus presentation. Each of these observations might be a sign that the auditory and the visual information became associated. This hypothesis is in accordance with the unity assumption (e.g., Bedford 2001; Welch 1999; Welch and Warren 1980), which states that two stimuli from different sensory modalities will more likely be regarded as deriving from the same event when they are presented, for example, in close temporal congruence. The increase in the nDTF after the second stimulus might indicate that stimuli are integrated after the second stimulus has been presented. The increase in the nDTF before the second stimulus might indicate the expectation of the second stimulus. Several other studies have demonstrated increases in coherent activity associated with anticipatory processing (e.g., Roelfsema et al. 1998; Von Stein et al. 2000; Fries et al. 2001; Liang et al. 2002). On the other hand, our results on the development of the nDTF after the first stimulus varied strongly with the stimulus order, and it seems strange that the effect the expectation of an auditory stimulus has on the nDTF should be quite different from the effect the expectation of a visual stimulus might have. To clarify whether the observed changes are related to stimulus association or expectation processes, repeating this experiment with anesthetized animals might be helpful. To explore whether the nDTF amplitude is influenced by anticipatory processing, it might also be interesting to vary the likelihood with which a stimulus of a first modality is followed by a stimulus of a second modality (see Sutton et al. 1965 for an experiment examining the effect of stimulus uncertainty on evoked potentials).
16.4.4 Changes in Lag Detection as a Potential Cause of Observed Changes in DTF-Amplitudes
As we presented our stimuli constantly at the same lag, it does not seem far-fetched to assume that our stimulation alerted hypothetical lag detectors. There are already some studies on the neural correlates of synchronous and asynchronous stimulus presentation (Meredith et al. 1987; Bushara et al. 2001; Senkowski et al. 2007). Senkowski et al. (2007) examined the oscillatory gamma-band responses in the human EEG for different stimulus onset asynchronies of auditory and visual stimuli. They found clear evidence for multisensory interactions in the gamma-band response when stimuli were presented in very close temporal synchrony. In addition, they also found a very specific interaction effect over occipital areas when auditory inputs were leading visual input by 100 ± 25 ms, indicating that cortical responses could be specific for certain asynchronies.
16.4.5 Mechanisms of Recalibration: Some Preliminary Restrictions

16.4.5.1 Expectation and Lag Detection
Experiments on the recalibration of temporal order judgment typically demonstrate a shift of the entire psychometric function (i.e., at many stimulus onset asynchronies), despite the fact that only a single intermodal lag value has been used in the adaptation phase. In other words, a specific stimulus order does not seem to be necessary to be able to observe the change in temporal order perception, indicating that expectation processes are unlikely to play a major role in evoking recalibration
effects. In a similar way, a specific stimulus onset asynchrony between the stimuli does not seem to be required, speaking against a dominant role for lag-specific detection processes underlying the recalibration effect.

16.4.5.2 Processes after the Second Stimulus
Even though the presentation of stimuli at a specific lag or in a specific order does not seem to be necessary to make the recalibration of temporal perception observable in behavior, it is still possible that the presentation of both an auditory and a visual stimulus is required. Under this hypothesis, the mechanisms of recalibration should come into play only after stimuli of both modalities have been presented. After the second stimulus, we observed an increase in the difference of the amplitudes of the nDTFs in both AV and VA animals. We hypothesized that this increase might reflect an ongoing stimulus association. Vatakis and Spence (2007) demonstrated that subjects showed decreased temporal sensitivity, as measured by the JND, when an auditory and a visual speech stimulus belonged to the same speech event. Also, in some experiments on recalibration, an increase in the JND was observed (Navarra et al. 2005, 2006; Fujisaki et al. 2004). However, it is premature to conclude that stimulus association plays a role in recalibration experiments. First, an increase in JND after stimulus association could not be observed under different experimental conditions (Vatakis and Spence 2008). Second, as already discussed in the Introduction, recalibration does not seem to be stimulus-specific (Fujisaki et al. 2004; Navarra et al. 2005).

16.4.5.3 Speed of Processing
The observation that neither a specific lag nor a specific stimulus order seems to be required to observe a recalibration effect points to a further possibility: to observe a change in temporal perception, the presentation of a second stimulus might not be necessary at all. Temporal perception in the different modalities would then not be recalibrated relative to each other; rather, perception would simply be speeded up or slowed down in one modality. In our data, we did not find any indication of an increase in the speed of stimulus processing. The latencies of the nDTF peaks did not change with increasing number of stimulus presentations, but one has to keep in mind that there might not be a direct relationship between the speed of processing and the speed of perception measured in recalibration experiments. Fujisaki et al. (2004) investigated the role of the speed of sensory processing in recalibration. Specifically, they advanced the hypothesis that processing in one modality might be speeded up by drawing attention to that modality, but based on the results of their experiments, they concluded that attention and recalibration were independent. If there were a general increase in the speed of perception, a transfer of recalibration effects to modality pairs not presented in the adaptation phase should be easy to detect. Preliminary results indicate that the effects and mechanisms of recalibration are not that simple. In the study by Harrar and Harris (2008), after adaptation with visuotactile stimulus pairs, the visual stimulus was perceived to occur later relative to an auditory stimulus, but surprisingly, there were no changes in the perception of temporal disparities when the visual stimulus was presented with a tactile stimulus during the testing phase.
16.5 CONCLUSIONS
The repeated presentation of paired auditory and visual stimuli with constant intermodal onset asynchrony is known to recalibrate audiovisual temporal order judgment in humans. The aim of this study was to identify potential neural mechanisms that could underlie this recalibration in an animal model amenable to detailed electrophysiological analysis of neural mass activity. Using Mongolian gerbils, we found that prolonged presentation of paired auditory and visual stimuli
caused characteristic changes in the neuronal interaction dynamics between the primary auditory cortex and the primary visual cortex, as evidenced by changes in the amplitude of the nDTF estimated from local field potentials recorded in both cortices. Specifically, changes in both the DTF from auditory to visual cortex (nDTFA→V) and from visual to auditory cortex (nDTF V→A) dynamically developed over the course of the adaptation trials. We discussed three types of processes that might have been induced by the repeated stimulation: stimulus association processes, lag detection processes, and changes in the speed of stimulus processing. Although all three processes could potentially have contributed to the observed changes in nDTF amplitudes, their relative roles for mediating psychophysical recalibration of temporal order judgment must remain speculative. Further clarification of this issue would require a behavioral test of the recalibration of temporal order judgment in combination with the electrophysiological analysis.
REFERENCES Akaike, H. 1974. A new look at statistical model identification. Transactions on Automatic Control 19:716–723. Alais, D. and S. Carlile. 2005. Synchronizing to real events: Subjective audiovisual alignment scales with perceived auditory depth and speed of sound. Proceedings of the National Academy of Science of the United States of America 102(6):2244–2247. Arnold, D.H., A. Johnston, and S. Nishida. 2005. Timing sight and sound. Vision Research 45:1275–1284. Astolfi, L., F. Cincotti, D. Mattia, M.G. Marciani, L.A. Baccala, F. de Vico Fallani, S. Salinari, M. Ursino, M. Zavaglia, L. Ding, J.C. Edgar, G.A. Miller, B. He, and F. Babiloni. 2007. Comparison of different cortical connectivity estimators for high-tech resolution EEG Recordings. Human Brain Mapping 28:143–157. Baylor, D.A., B.J. Nunn, and J.L. Schnapf. 1984. The photocurrent, noise and spectral sensitivity of rods of the monkey Macaca fascicularis. Journal of Physiology 357:575–607. Baylor, D.A., B.J. Nunn, and J.L. Schnapf. 1987. Spectral sensitivity of cones of the monkey Macaca fascicularis. Journal of Physiology 390:124–160. Bedford, F.L. 2001. Toward a general law of numerical/object identity. Current Psychology of Cognition 20(3–4):113–175. Bizley, J.K., F.R. Nodal, V.M. Bajo, I. Nelken, and A.J. King. 2007. Physiological and anatomical evidence for multisensory interactions in auditory cortex. Cerebral Cortex 17:2172–2198. Bressler, S.L. 1995. Large scale cortical networks and cognition. Brain Research Reviews 20:288–304. Bressler, S.L. 1996. Interareal synchronization in the visual cortex. Behavioral Brain Research 76:37–49. Bressler, S.L., R. Coppola, and R. Nakamura. 1993. Episodic multiregional cortical coherence at multiple frequencies during visual task performance. Nature 366:153–156. Brosch, M., E. Selezneva, and H. Scheich. 2005. Nonauditory events of a behavioral procedure activate auditory cortex of highly trained monkeys. Journal of Neuroscience 25(29):6796–6806. Bushara, K.O., J. Grafman, and M. Hallet. 2001. Neural correlates of audio-visual stimulus onset asynchrony detection. The Journal of Neuroscience 21(1):300–304. Cahill, L., F.W. Ohl, and H. Scheich. 1996. Alternation of auditory cortex activity with a visual stimulus through conditioning: A 2-deoxyglucose analysis. Neurobiology of Learning and Memory 65(3):213–222. Cassidy, M., and P. Brown. 2003. Spectral phase estimates in the setting of multidirectional coupling. Journal of Neuroscience Methods 127:95–103. Cobbs, E.H., and E.N. Pugh Jr. 1987. Kinetics and components of the flash photocurrent of isolated retinal rods of the larval salamander, Ambystoma tigrinum. Journal of Physiology 394:529–572. Corey, D.P., and A.J. Hudspeth. 1979. Response latency of vertebrate hair cells. Biophysical Journal 26:499–506. Corey, D.P., and A.J. Hudspeth. 1983. Analysis of the microphonic potential of the bullfrog’s sacculus. Journal of Neuroscience 3:942–961. Crawford, A.C., and R. Fettiplace. 1985. The mechanical properties of ciliary bundles of turtle cochlear hair cells. Journal of Physiology 364:359–379. Crawford, A.C., M.G. Evans, and R. Fettiplace. 1991. The actions of calcium on the mechanoelectrical transducer current of turtle hair cells. Journal of Physiology 491:405–434. Ding, M., S.L. Bressler, W. Yang, and H. Liang. 2000. Short-window spectral analysis of cortical event-related potentials by adaptive autoregressive modelling: Data preprocessing, model validation, variability assessment. Biological Cybernetics 83:35–45.
Eichler, M. 2006. On the evaluation of information flow in multivariate systems by the directed transfer function. Biological Cybernetics 94:469–482. Engel, G.R., and W.G. Dougherty 1971. Visual-auditory distance constancy. Nature 234:308. Efron, B., and R.J. Tibshirani 1993. An Introduction to the Bootstrap. Boca Raton, FL: Chapman and Hall/ CRC. Fain, G.L. 2003. Sensory Transduction. Sunderland: Sinauer Associates. Franaszczuk, P.J., and G.K. Bergey. 1998. Application of the directed transfer function method to mesial and lateral onset temporal lobe seizures. Brain Topography 11:13–21. Freeman,W.J. 2000. Neurodynamics: An Exploration in Mesoscopic Brain Dynamics. London: Springer Verlag. Fries, P., J.H. Reynolds, A.E. Rorie, and R. Desimone. 2001. Modulation of oscillatory neuronal synchronization by selective visual attention. Science 291:1560–1563. Fujisaki, W., and S. Nishida. 2005. Temporal frequency characteristics of synchrony-asynchrony discrimination of audio-visual signals. Experimental Brain Research 166:455–464. Fujisaki, W., and S. Nishida. 2008. Top-down feature based selection of matching feature for audio-visual synchrony discrimination. Neuroscience Letters 433:225–230. Fujisaki, W., S. Shinsuke, K. Makio, and S. Nishida. 2004. Recalibration of audiovisual simultaneity. Nature Neuroscience 7(7):773. Fujisaki, W., A. Koene, D. Arnold, A. Johnston and S. Nishida. 2006. Visual search for a target changing in synchrony with an auditory signal. Proceedings of the Royal Society of London. Series B. Biological Sciences 273:865–874. Harrar, V., and L.R. Harris. 2005. Simultaneity constancy: Detecting events with touch and vision. Experimental Brain Research 166:465–473. Harrar, V., and L.R. Harris. 2008. The effects of exposure to asynchronous audio, visual, and tactile stimulus combination on the perception of simultaneity. Experimental Brain Research 186:517–524. Hanson, J.V.M., J. Heron, and D. Whitaker. 2008. Recalibration of perceived time across sensory modalities. Experimental Brain Research 185:347–352. Heron, J., D. Whitaker, P. McGraw, and K.V. Horoshenkov. 2007. Adaptation minimizes distance-related audiovisual delays. Journal of Vision 7(13):1–8. Hestrin, S., and J.I. Korenbrot. 1990. Activation kinetics of retinal cones and rods: Response to intense flashes of light. Journal of Neuroscience 10:1967–1973. Kaminski, M., and K.J. Blinowska. 1991. A new method for the description of the information flow in the brain structures. Biological Cybernetics 65:203–210. Kaminski, M., K.J. Blinowska, and W. Szelenberger. 1997. Topographic analysis of coherence and propagation of EEG activity during sleep and wakefulness. Electroencephalography Clinical Neurophysiology 102:216–277. Kaminski, M., M. Ding, W.A. Trucculo, and S.L. Bressler. 2001. Evaluating causal relations in neural systems: Granger causality, directed transfer function and statistical assessment of significance. Biological Cybernetics 85:145–157. Kay, S.M. 1987. Modern Spectral Estimation. Englewood Cliffs, NJ: Prentice Hall. Kayser, C., C. Petkov, and N.K. Logothetis. 2008. Visual modulation of neurons in auditory cortex. Cerebral Cortex 18:1560–1574. Keetels, M., and J. Vroomen. 2007. No effect of auditory–visual spatial disparity on temporal recalibration. Experimental Brain Research 182:559–565. Kopinska, A., and L.R. Harris. 2004. Simultaneity constancy. Perception 33:1049–1060. Korzeniewska, A., S. Kasicki, M. Kaminski, and K.J. Blinowska. 1997. 
Information flow between hippocampus and related structures during various types of rat’s behavior. Journal of Neuroscience Methods 73:49–60. Korzeniewska, A., M. Manczak, M. Kaminski, K.J. Blinowska, and S. Kasicki. 2003. Determination of information flow direction among brain structures by a modified directed transfer function (dDTF) method. Journal of Neuroscience Methods 125:195–207. Kus, R., M. Kaminski, and K.J. Blinowska. 2004. Determination of EEG activity propagation: Pairwise versus multichannel estimate. IEEE Transactions on Bio-Medical Engineering 51:1501–1510. Lewald, J., and R. Guski. 2004. Auditory-visual temporal integration as a function of distance: No compensation for sound-transmission time in human perception. Neuroscience Letters 357:119–122. Liang, H., M. Ding, R. Nakamura, and S.L. Bressler. 2000. Causal influences in primate cerebral cortex. Neuroreport 11(13):2875–2880.
Liang, H., S.L. Bressler, M. Ding, W.A. Truccolo, and R. Nakamura. 2002. Synchronized activity in prefrontal cortex during anticipation of visuomotor processing. Neuroreport 13(16):2011–2015.
Lütkepohl, H. 1993. Introduction to Multiple Time Series Analysis, 2nd ed. Berlin: Springer.
Marple, S.L. 1987. Digital Spectral Analysis with Applications. Englewood Cliffs, NJ: Prentice Hall.
Medvedev, A., and J.O. Willoughby. 1999. Autoregressive modeling of the EEG in systemic kainic acid-induced epileptogenesis. International Journal of Neuroscience 97:149–167.
Meredith, M.A., J.W. Nemitz, and B.E. Stein. 1987. Determinants of multisensory integration in superior colliculus neurons. I. Temporal factors. The Journal of Neuroscience 7(10):3212–3229.
Miyazaki, M., S. Yamamoto, S. Uchida, and S. Kitazawa. 2006. Bayesian calibration of simultaneity in tactile temporal order judgment. Nature Neuroscience 9:875–877.
Musacchia, G., and C.E. Schroeder. 2009. Neural mechanisms, response dynamics and perceptual functions of multisensory interactions in auditory cortex. Hearing Research 285:72–79.
Navarra, J., A. Vatakis, M. Zampini, S. Soto-Faraco, W. Humphreys, and C. Spence. 2005. Exposure to asynchronous audiovisual speech extends the temporal window for audiovisual integration. Cognitive Brain Research 25:499–507.
Navarra, J., S. Soto-Faraco, and C. Spence. 2006. Adaptation to audiovisual asynchrony. Neuroscience Letters 431:72–76.
Nickalls, R.W.D. 1996. The influence of target angular velocity on visual latency difference determined using the rotating Pulfrich effect. Vision Research 36:2865–2872.
Ohl, F.W., H. Scheich, and W.J. Freeman. 2000. Topographic analysis of epidural pure-tone-evoked potentials in gerbil auditory cortex. Journal of Neurophysiology 83:3123–3132.
Ohl, F.W., H. Scheich, and W.J. Freeman. 2001. Change in pattern of ongoing cortical activity with auditory learning. Nature 412:733–736.
Posner, M.I., C.R.R. Snyder, and B.J. Davidson. 1980. Attention and the detection of signals. Journal of Experimental Psychology: General 109(2):160–174.
Robson, J.G., S.M. Saszik, J. Ahmed, and L.J. Frishman. 2003. Rod and cone contributions to the a-wave of the electroretinogram of the macaque. Journal of Physiology 547:509–530.
Rodriguez, E., N. George, J.P. Lachaux, J. Martinerie, B. Renault, and F.J. Varela. 1999. Perception's shadow: Long-distance synchronization of human brain activity. Nature 397:430–433.
Roelfsema, P.R., A.K. Engel, P. König, and W. Singer. 1997. Visuomotor integration is associated with zero time-lag synchronization among cortical areas. Nature 385:157–161.
Roelfsema, P.R., V.A.F. Lamme, and H. Spekreijse. 1998. Object-based attention in the primary visual cortex of the macaque monkey. Nature 395:377–381.
Schlögl, A. 2006. A comparison of multivariate autoregressive estimators. Signal Processing 86:2426–2429.
Senkowski, D., D. Talsma, M. Grigutsch, C.S. Herrmann, and M.G. Woldorff. 2007. Good times for multisensory integration: Effects of the precision of temporal synchrony as revealed by gamma band oscillations. Neuropsychologia 45:561–571.
Stone, J.V. 2001. When is now? Perception of simultaneity. Proceedings of the Royal Society of London. Series B. Biological Sciences 268:31–38.
Sugita, Y., and Y. Suzuki. 2003. Audiovisual perception: Implicit evaluation of sound arrival time. Nature 421:911.
Sutton, S., M. Braren, J. Zubin, and E.R. John. 1965. Evoked potential correlates of stimulus uncertainty. Science 150:1187–1188.
Varela, F., J.P. Lachaux, E. Rodriguez, and J. Martinerie. 2001. The brainweb: Phase synchronization and large-scale integration. Nature Reviews Neuroscience 2:229–239.
Vatakis, A., and C. Spence. 2007. Crossmodal binding: Evaluating the influence of the 'unity assumption' using audiovisual speech stimuli. Perception & Psychophysics 69(5):744–756.
Vatakis, A., and C. Spence. 2008. Evaluating the influence of the 'unity assumption' on the temporal perception of realistic audiovisual stimuli. Acta Psychologica 127:12–23.
Vatakis, A., J. Navarra, S. Soto-Faraco, and C. Spence. 2007. Temporal recalibration during asynchronous audiovisual speech perception. Experimental Brain Research 181:173–181.
Vatakis, A., J. Navarra, S. Soto-Faraco, and C. Spence. 2008. Audiovisual temporal adaptation of speech: Temporal order versus simultaneity judgments. Experimental Brain Research 185:521–529.
Von Békésy, G. 1963. Interaction of paired sensory stimuli and conduction of peripheral nerves. Journal of Applied Physiology 18:1276–1284.
Von Stein, A., C. Chiang, and P. König. 2000. Top-down processing mediated by interarea synchronization. Proceedings of the National Academy of Sciences of the United States of America 97:14748–14753.
Vroomen, J., M. Keetels, B. de Gelder, and P. Bertelson. 2004. Recalibration of temporal order perception by exposure to audio-visual asynchrony. Cognitive Brain Research 22:32–35.
Welch, R.B. 1999. Meaning, attention and the unity assumption in the intersensory bias of spatial and temporal perceptions. In Cognitive contributions to the perception of spatial and temporal events, ed. G. Aschersleben, T. Bachmann, and J. Müsseler, 371–387. Amsterdam: Elsevier.
Welch, R.B., and D.H. Warren. 1980. Immediate perceptual response to intersensory discrepancy. Psychological Bulletin 88:638–667.
Wilson, J.A., and S.M. Anstis. 1996. Visual delay as a function of luminance. American Journal of Psychology 82:350–358.
17 Development of Multisensory Temporal Perception
David J. Lewkowicz
CONTENTS

17.1 Introduction ................ 325
17.2 Perception of Multisensory Temporal Information and Its Coherence ................ 326
17.3 Developmental Emergence of Multisensory Perception: General Patterns and Effects of Experience ................ 327
17.4 Perception of Temporal Information in Infancy ................ 330
17.5 Perception of A–V Temporal Synchrony ................ 331
    17.5.1 A–V Temporal Synchrony Threshold ................ 331
    17.5.2 Perception of A–V Speech Synchrony and Effects of Experience ................ 332
    17.5.3 Binding of Nonnative Faces and Vocalizations ................ 334
17.6 Perception of Multisensory Temporal Sequences in Infancy ................ 336
17.7 Speculations on Neural Mechanisms Underlying the Development of Multisensory Perception ................ 338
References ................ 339
17.1 INTRODUCTION

The objects and events in our external environment provide us with a constant flow of multisensory information. Such an unrelenting flow of information might be potentially confusing if no mechanisms were available for its integration. Fortunately, however, sophisticated multisensory integration* mechanisms have evolved across the animal kingdom to solve this problem (Calvert et al. 2004; Ghazanfar and Schroeder 2006; Maier and Schneirla 1964; Marks 1978; Partan and Marler 1999; Rowe 1999; Stein and Meredith 1993; Stein and Stanford 2008; Welch and Warren 1980).

* Historically, the term "integration," when used in the context of work on multisensory processing, has been used to refer to different processes by different researchers (Stein et al. 2010). For some, this term is reserved for cases in which sensory input in one modality changes the qualitative experience that one has in response to stimulation in another modality, as is the case in the McGurk effect (McGurk and MacDonald 1976). For others, it has come to be associated with neural and behavioral responsiveness to near-threshold stimulation in one modality being either enhanced or suppressed by stimulation in another modality (Stein and Stanford 2008). Finally, for some investigators, integration has simply meant the process that enables perceivers to detect and respond to the relational nature of multisensory stimulation, with no assumptions made about underlying perceptual or neural mechanisms. It is this last meaning that is used here.

These mechanisms enable mature organisms to integrate multisensory inputs and, in the process, make it possible for them to perceive the coherent nature of their multisensory world. The other chapters in this volume discuss the structural and functional characteristics of multisensory processing and integration mechanisms in adults. Here, I address the developmental question by asking (1) when do multisensory response mechanisms begin to emerge in development, and (2) what specific processes underlie their emergence? To answer these questions, I discuss our work on the development of multisensory processing of temporal information and focus primarily on human infants. I show that processing of multisensory temporal information, as well as the
processing of other types of multisensory information, emerges gradually during the first year of life, argue that the rudimentary multisensory processing abilities found at the beginning of life reflect neural/behavioral immaturities and the relative lack of perceptual and sensorimotor experience, and provide evidence that the gradual improvement in multisensory processing ability reflects the interaction between behavioral and (implied) neural maturation and perceptual experience.
17.2 PERCEPTION OF MULTISENSORY TEMPORAL INFORMATION AND ITS COHERENCE

The temporal dimension of our everyday experience is an inescapable and fundamental part of our perceptual and cognitive existence (Fraisse 1982; Greenfield 1991; Lashley 1951; Martin 1972; Nelson 1986, 2007). The temporal flow of stimulation provides a host of perceptual cues that observers can use to detect the coherence, global structure, and even the hidden meanings inherent in multisensory events. For example, when people speak, they produce a series of mouth movements and vocalizations. At a basic level, the onsets and offsets of mouth movements and the accompanying vocalizations are precisely synchronized. This allows observers to determine that the movements and vocalizations are part of a coherent multisensory event. Of course, detecting that multisensory inputs correspond in terms of their onsets and offsets is not terribly informative because it does not provide any information about the other key and overlapping characteristics of multisensory events. For example, in the case of audiovisual speech, synchrony does not provide information about the invariant durations of the audible and visible utterances nor about the correlated dynamic temporal and spectral patterns across audible and visible articulations that are normally available and used by adults (Munhall and Vatikiotis-Bateson 2004; Yehia et al. 1998). These latter perceptual cues inform the observer about the amodal invariance of the event and, thus, serve as another important basis for the perception of multisensory coherence.*

* There is a functionally important distinction between intersensory cues such as duration, tempo, and rhythm, on the one hand, and intersensory temporal synchrony cues, on the other. The former are all amodal stimulus attributes because they can be specified independently in different modalities and, as a result, can be perceived even in the absence of temporal synchrony cues (e.g., even if the auditory and visual attributes of a speech utterance are not presented together, their equal duration can be perceived). In contrast, temporal synchrony is not an amodal perceptual cue because it cannot be specified independently in a single sensory modality; an observer must have access to the concurrent information in the different modalities to perceive it. Moreover, and especially important for developmental studies, infants might be able to perceive intersensory synchrony relations without being able to perceive the amodal cues that characterize the multisensory attributes (e.g., an infant might be able to perceive that a talking face and the vocalizations that it produces belong together but may not be able to detect the equal duration of the visible and audible articulations).

Finally, at an even more global level, the temporal patterning (i.e., rhythm) of the audible and visible attributes of a continuous utterance (i.e., a string of words) not only can provide another important basis for the perception of multisensory coherence but can also provide cues to "hidden" meanings. The hidden meanings derive from the particular ordering of the different constituents (e.g., syllables, words, and phrases) and when those constituents are specified by multisensory attributes, their extraction can be facilitated by multisensory redundancy effects (Bahrick et al. 2004; Lewkowicz and Kraebel 2004). As might be expected, adults are highly sensitive to multisensory temporal information. This is evident from the results of studies showing that adults can perceive temporally based multisensory coherence (Gebhard and Mowbray 1959; Handel and Buffardi 1969; Myers et al. 1981; Shipley 1964; Welch et al. 1986). It is also evident from the results of other studies showing that adults' responsiveness to the temporal aspects of stimulation in one sensory modality can influence their responsiveness to the temporal aspects of stimulation in another modality. For example, when adults hear a fluttering sound, their perception of a flickering light changes as a function of the frequency of the flutter; the flutter "drives" the flicker (Myers et al. 1981). Particularly interesting are findings showing that some forms of temporal intersensory interaction can produce illusions (Sekuler et al. 1997; Shams et al. 2000) or can influence the strength of illusions (Slutsky and Recanzone 2001).
For example, when adults see a single flash and hear two tones, they report two flashes even though they know that there is only a single flash (Shams et al. 2000). Similarly, when two identical objects are seen moving toward and then through each other and a brief sound is presented at the point of their coincidence, adults report that the objects bounce against each other rather than pass through one another (Sekuler et al. 1997). This "bounce" illusion emerges in infancy in that starting at 6 months of age, infants begin to exhibit evidence that they experience it as well (Scheier et al. 2003). Even though the various amodal and invariant temporal attributes are natural candidates for the perception of multisensory coherence, there are good a priori theoretical reasons to expect that intersensory temporal synchrony might play a particularly important role during the earliest stages of development (Gibson 1969; Lewkowicz 2000a; Thelen and Smith 1994) and that young infants may not perceive the kinds of higher-level amodal invariants mentioned earlier. One reason for this may be the fact that, unlike in the case of the detection of higher-level amodal invariants, it is relatively easy to detect multisensory temporal synchrony relations. All that is required is the detection of the concurrent onsets and offsets of stimulus energy across modalities. In contrast, the detection of amodal cues requires the ability to perceive the equivalence of some of the higher-level types of correlated patterns of information discussed earlier. Moreover, observers are sometimes required to detect such patterns even when they are not available concurrently, and they can do so (Kamachi et al. 2003). Infants also exhibit this ability but, thus far, evidence indicates that they can do so only starting at 6 months of age (Pons et al. 2009) and no studies have shown that they can perform this kind of task earlier. Although young infants' presumed inability to perceive amodal cues might seem like a serious limitation, it has been argued by some that developmental limitations actually serve an important function (Oppenheim 1981). With specific regard to multisensory functions, Turkewitz has argued that sensory limitations help infants organize their perceptual world in an orderly fashion while at the same time not overwhelming their system (Turkewitz 1994; Turkewitz and Kenny 1982). From this perspective, the ability to detect temporal synchrony cues very early in life makes it possible for young, immature, and inexperienced infants to first discover that multisensory inputs cohere, albeit at a very low level. This, in turn, gives them an entrée into a multisensory world composed not only of the various higher-level amodal invariants mentioned earlier but other higher-level nontemporal multisensory attributes such as gender, affect, and identity. Most theorists agree that the general processes that mediate this gradual improvement in multisensory processing ability are perceptual learning and differentiation in concert with infants' everyday experience and sensorimotor interactions with their multisensory world. Extant empirical findings are generally consistent with the theoretical developmental pattern described above. For instance, young infants can detect the synchronous onsets of inanimate visual and auditory stimuli (Lewkowicz 1992a, 1992b, 1996) and rely on synchrony cues to perceive the amodal property of duration (Lewkowicz 1986).
Likewise, starting at birth and thereafter, infants can detect the synchronous relationship between the audible and visible attributes of vocalizing faces (Lewkowicz 2000b, 2010; Lewkowicz and Ghazanfar 2006; Lewkowicz et al. 2010). Interestingly, however, when the multisensory temporal task is too complex (i.e., when it requires infants to detect which of two objects that are moving at different tempos corresponds to a synchronous sound) synchrony cues are not sufficient for the perception of multisensory coherence (Lewkowicz 1992a, 1994). Similarly, when the relationship between two moving objects and a sound that occurs during their coincidence is ambiguous (as is the case in the bounce illusion), 6- and 8-month-old infants perceive this relationship but 4-month-olds do not.
17.3 DEVELOPMENTAL EMERGENCE OF MULTISENSORY PERCEPTION: GENERAL PATTERNS AND EFFECTS OF EXPERIENCE

As indicated above, data from studies of infant response to multisensory temporal information suggest that multisensory processing abilities improve during the first year of life. If that is the case,
do these findings reflect a general developmental pattern? The answer is that the same pattern holds for infant perception of other types of multisensory perceptual cues. To make theoretical sense of the overall body of findings on the developmental emergence of multisensory perceptual abilities in infancy, it is helpful to first ask what the key theoretical questions are in this area. If, as indicated earlier, infants’ initial immaturity and relative lack of experience imposes serious limitations on their ability to integrate the myriad inputs that constantly bombard their perceptual systems, how do they go about integrating those inputs and how does this process get bootstrapped at the start of postnatal life? As already suggested, one possible mechanism is a synchrony detection mechanism that simply detects synchronous stimulus onsets and offsets across different modalities. This, in turn, presumably provides developing infants with the opportunity to gradually discover increasingly more complex multisensory coherence cues. Although the detection of multisensory synchrony is one possible specific mechanism that can mediate developmental change, other more general processes probably contribute to developmental change as well. Historically, these more general processes have been proposed in what appear to be two diametrically opposed theoretical views concerning the development of multisensory functions. One of these views holds that developmental differentiation is the process underlying developmental change, whereas the other holds that developmental integration is the key process. More specifically, the first, known as the developmental differentiation view, holds that infants come into the world prepared to detect certain amodal invariants and that this ability improves and broadens in scope as they grow (Gibson 1969; Thelen and Smith 1994; Werner 1973). According to the principal proponent of this theoretical view (Gibson 1969), the improvement and broadening is mediated by perceptual differentiation, learning, and the emergence of increasingly better stimulus detection abilities. The second, known as the developmental integration view, holds that infants come into the world with their different sensory systems essentially disconnected and that the senses gradually become functionally connected as a result of children’s active interaction with their world (Birch and Lefford 1963, 1967; Piaget 1952). One of the most interesting and important features of each of these theoretical views is that both assign central importance to developmental experience. A great deal of empirical evidence has been amassed since the time that the two principal theoretical views on the development of multisensory functions were proposed. It turns out that some of this evidence can be interpreted as consistent with the developmental differentiation view whereas some of it can be interpreted as consistent with the developmental integration view. Overall, then, it seems that both processes play a role in the developmental emergence of multisensory functions. The evidence that is consistent with the developmental differentiation view comes from studies showing that despite the fact that the infant nervous system is highly immature and, despite the fact that infants are perceptually inexperienced, infants exhibit some multisensory perceptual abilities from birth onward (Gardner et al. 1986; Lewkowicz et al. 2010; Lewkowicz and Turkewitz 1980, 1981; Slater et al. 1997, 1999). 
Importantly, however, and as indicated earlier, these abilities are relatively rudimentary. For example, newborns can detect multisensory synchrony cues and do so by detecting nothing more than stimulus energy onsets and offsets (Lewkowicz et al. 2010). In addition, newborns are able to detect audiovisual (A–V) intensity equivalence (Lewkowicz and Turkewitz 1980) and can associate arbitrary auditory and visual object attributes on the basis of their synchronous occurrence (Slater et al. 1997, 1999). Although impressive, these kinds of findings are not surprising given that there are ample opportunities for intersensory interactions—especially those involving the co-occurrence of sensations in different modalities—during fetal life and that these interactions are likely to provide the foundation for the kinds of rudimentary multisensory perceptual abilities found at birth (Turkewitz 1994). Other evidence from the body of empirical work amassed to date is consistent with the developmental integration view by indicating that multisensory perceptual abilities improve as infants grow and acquire perceptual experience (Bremner et al. 2008; Lewkowicz 1994, 2000a, 2002; Lickliter and Bahrick 2000; Walker-Andrews 1997). This evidence shows that older infants possess more sophisticated multisensory processing abilities than do younger infants. For example, young infants
can perceive multisensory synchrony cues (Bahrick 1983; Bahrick and Lickliter 2000; Lewkowicz 1992a,b, 1996, 2000b, 2003, 2010), amodal intensity (Lewkowicz and Turkewitz 1980), amodal duration (Lewkowicz 1986), and the multisensory invariance of isolated audible and visible phonemes (Brookes et al. 2001; Kuhl and Meltzoff 1982, 1984; Patterson and Werker 2003). In contrast, older infants (roughly older than 6 months of age), but not younger infants, also exhibit the ability to perceive amodal affects produced by strangers (Walker-Andrews 1986) and amodal gender (Patterson and Werker 2002; Walker-Andrews et al. 1991), bind arbitrary modality-specific cues (Bahrick 1994; Reardon and Bushnell 1988), integrate auditory and visual spatial cues in an adult-like manner (Neil et al. 2006), and integrate multisensory spatial bodily and external cues (Bremner et al. 2008). Considered together, this latter body of findings clearly shows that multisensory perceptual abilities improve over the first year of life. Thus, when all the extant empirical evidence is considered together, it is clear that developmental differentiation and developmental integration processes operate side-by-side in early human development and that both contribute to the emergence of multisensory perceptual abilities in infancy and probably beyond. If developmental differentiation and integration both contribute to the development of multisensory perception, what role might experience play in this process? As might be expected (Gibson 1969), evidence from studies of human infants indicates that experience plays a critical role in the development of multisensory functions. Until now, however, very little direct evidence for the effects of early experience has been available at the human level, except for two studies that together demonstrated that infant response to amodal affect information depends on the familiarity of the information. Thus, in the first study, Walker-Andrews (1986) found that 7-month-olds but not 5-month-olds detected amodal affect when the affect was produced by a stranger. In the second study, Kahana-Kalman and Walker-Andrews (2001) found that when the affect was produced by the infant's own mother, infants as young as 3.5 months of age detected it.
The failure of the older Spanish-learning infants to make the matches is correlated with the fact that the /ba/ – /va/ phonetic distinction is not phonemically functional in Spanish. This means that when older Spanish-learning infants have to choose between a face mouthing a /ba/ and a face mouthing a /va/ after having listened to one of these phonemes, they cannot choose the matching face because the phonemes are no longer distinct for them. Together, our findings on multisensory perceptual narrowing indicate that as infants grow and gain experience with vocalizing human faces and with native language audiovisual phonology, their ability to perceive cross-species and cross-language multisensory coherence declines because nonnative multisensory information is not relevant for everyday functioning. We have also explored the possible evolutionary origins of multisensory perceptual narrowing and, thus far, have found that it seems to be restricted to the human species. We tested young vervet monkeys, at ages when they are old enough to be past the point of narrowing, with the same vocalizing rhesus monkey faces that we presented in our initial infant studies and found that vervets do not exhibit multisensory perceptual narrowing (Zangenehpour et al. 2009). That is, the vervets matched rhesus monkey visible and audible vocalizations even though they were past the point when
narrowing should have occurred. We interpreted this finding as reflecting the fact that monkey brains mature four times as fast as human brains do and that, as a result, young vervets are less open to the effects of early experience than are human infants. This interpretation suggests that experience interacts with the speed of neural growth and differentiation and that slower brain growth and differentiation is highly advantageous because it provides for greater developmental plasticity. The vervet monkey study demonstrates that the rate of neural growth plays an important role in the development of behavioral functions and provides yet another example illustrating this key developmental principle (Turkewitz and Kenny 1982). What about neural and experiential immaturity, especially at the beginning of postnatal and/or posthatching life? Do other organisms, besides humans, manifest relatively poor and immature multisensory processing functions? The answer is that they do. A number of studies have found that the kinds of immaturities and developmental changes observed in human infants are also found in the young of other species. Together, these studies have found that rats, cats, and monkeys exhibit relatively poor multisensory responsiveness early in life, that its emergence follows a pattern of gradual improvement, and that early experience plays a critical role in this process. For example, Wallace and Stein (1997, 2001) have found that multisensory cells in the superior colliculus of cats and rhesus monkeys, which normally integrate auditory and visual spatial cues in the adult, do not integrate in newborn cats and monkeys, and that integration only emerges gradually over the first weeks of life. Moreover, Wallace et al. (2006) have found that the appropriate alignment of the auditory and visual maps in the superior colliculus of the rat depends on their normal spatial coregistration. The same kinds of effects have been found in barn owls and ferrets, in which calibration of the precise spatial tuning of the neural map of auditory space depends critically on concurrent visual input (King et al. 1988; Knudsen and Brainard 1991). Finally, in bobwhite quail hatchlings, the ability to respond to the audible and visible attributes of the maternal hen after hatching depends on prehatching and posthatching experience with the auditory, visual, and tactile stimulation arising from the embryo’s own vocalizations, the maternal hen, and broodmates (Lickliter and Bahrick 1994; Lickliter et al. 1996). Taken together, the human and animal data indicate that the general developmental pattern consists of an initial emergence of low-level multisensory abilities, a subsequent experience-dependent improvement of emerging abilities, and finally, the emergence of higher-level multisensory abilities. This developmental pattern, especially in humans, appears to be due to the operation of developmental differentiation and developmental integration processes. Moreover, and most intriguing, our recent discovery of multisensory perceptual narrowing indicates that even though young infants possess relatively crude and low-level types of multisensory perceptual abilities (i.e., sensitivity to A–V synchrony relations), these abilities imbue them with much broader multisensory perceptual tuning than is the case in older infants. 
As indicated earlier, the distinct advantage of this kind of tuning is that it provides young infants with a way of bootstrapping their multisensory perceptual abilities at a time when they are too immature and inexperienced to extract higher-level amodal attributes. In the remainder of this chapter, I review results from our studies on infant response to multisensory temporal information as an example of the gradual emergence of multisensory functions. Moreover, I review additional evidence of the role of developmental differentiation and integration processes as well as of early experience in the emergence of multisensory responsiveness. Finally, I speculate on the neural mechanisms that might underlie the developmental emergence of multisensory perception and highlight the importance of studying the interaction between neural and behavioral growth and experience.
17.4 PERCEPTION OF TEMPORAL INFORMATION IN INFANCY

As indicated earlier, the temporal dimension of stimulation is the multisensory attribute par excellence because it provides observers with various types of overlapping patterns of multisensory information. For infants, this means that they have a ready-made and powerful basis for coherent
and cognitively meaningful multisensory experiences. This, of course, assumes that they are sensitive to the temporal flow of information in each modality. Indeed, evidence indicates that infants are sensitive to temporal information at both the unisensory and multisensory levels. For example, it has been found that infants as young as 3 months of age can predict the occurrence of a visual stimulus at a particular location based on their prior experience with a temporally predictable pattern of spatiotemporally alternating visual stimuli (Canfield and Haith 1991; Canfield et al. 1997). Similarly, it has been found that 4-month-old infants can quickly learn to detect a "missing" visual stimulus after adaptation to a regular and predictable visual stimulus regimen (Colombo and Richman 2002). In the auditory modality, studies have shown that newborn infants (1) exhibit evidence of temporal anticipation when they hear a tone that is not followed by glucose—after the tone (CS) and the glucose (UCS) were paired during an initial conditioning phase (Clifton 1974) and (2) can distinguish between different classes of linguistic input on the basis of the rhythmic attributes of the auditory input (Nazzi and Ramus 2003). Finally, in the audiovisual domain, it has been found that 7-month-old infants can anticipate the impending presentation of an audiovisual event when they first hear a white noise stimulus that has previously reliably predicted the occurrence of the audiovisual event (Donohue and Berg 1991), and that infants' duration discrimination improves between 6 and 10 months of age (Brannon et al. 2007). Together, these findings indicate that infants are generally sensitive to temporal information in the auditory and visual modalities.
17.5 PERCEPTION OF A–V TEMPORAL SYNCHRONY

Earlier, it was indicated that the multisensory world consists of patterns of temporally coincident and amodally invariant information (Gibson 1966) and that infants are likely to respond to A–V temporal synchrony relations from an early age. There are two a priori reasons why this is the case. The perceptual basis for this has already been mentioned, namely, that the detection of temporal A–V synchrony is relatively easy because it only requires perception of synchronous energy onsets and offsets in different modalities. In addition, the neural mechanisms underlying the detection of intersensory temporal synchrony cues in adults are relatively widespread in the brain and are largely subcortical (Bushara et al. 2001). Given that at least some of these mechanisms are subcortical, it is likely that such mechanisms are also present and operational in the immature brain. Consistent with these expectations, results from behavioral studies have shown that, starting early in life, infants respond to A–V temporal synchrony and that this cue is primary for them. These results have revealed (1) that 6- and 8-month-old infants can match pulsing auditory and flashing static visual stimuli on the basis of their duration but only if the matching pair is also synchronous (Lewkowicz 1986); (2) that 4- and 8-month-old infants can match an impact sound to one of two bouncing visual stimuli on the basis of synchrony but not on the basis of tempo, regardless of whether the matching tempos are synchronous or not (Lewkowicz 1992a, 1994); (3) that 4- to 8-month-old infants can perceive A–V synchrony relations inherent in simple audiovisual events consisting of bouncing/sounding objects (Lewkowicz 1992b) as well as those inherent in vocalizing faces (Lewkowicz 2000b, 2003); and (4) that newborns (Lewkowicz et al. 2010) and 4- to 6-month-old infants (Lewkowicz and Ghazanfar 2006) can rely on A–V synchrony to match other species' facial and vocal expressions.
17.5.1 A–V Temporal Synchrony Threshold

Given the apparently primary importance of A–V temporal synchrony, Lewkowicz (1996) conducted a series of studies to investigate the threshold for the detection of A–V temporal asynchrony in 2-, 4-, 6-, and 8-month-old infants and compared it to that in adults tested in a similar manner. Infants were first habituated to a two-dimensional object that could be seen bouncing up and down on a computer monitor and an impact sound that occurred each time the object changed direction at the bottom of the monitor. They were then given a set of separate test trials during which the
impact sound was presented 150, 250, and 350 ms before the object's visible bounce (sound-first group) or 250, 350, or 450 ms after the visible bounce (sound-second group). Infants in the sound-first group detected the 350 ms asynchrony, whereas infants in the sound-second group detected the 450 ms asynchrony (no age effects were found). Adults, who were tested in a similar task and with the same stimuli, detected an asynchrony of 80 ms in the sound-first condition and 112 ms in the sound-second condition. Conceptualized in terms of an intersensory temporal contiguity window (ITCW), these results indicate that the ITCW is wider in infants than in adults and that it narrows during development.
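To make the ITCW construct concrete, the following sketch (an illustration added here, not part of the original studies) treats the window as an asymmetric tolerance around physical synchrony; the specific bounds and the class and function names are assumptions chosen only to echo the thresholds reported above.

```python
# Illustrative sketch only: an asymmetric intersensory temporal contiguity
# window (ITCW), with bounds loosely based on the thresholds reported above
# (infants: ~350 ms sound-first / ~450 ms sound-second; adults: ~80 / ~112 ms).
# This formalization is an assumption for exposition, not the authors' model.

from dataclasses import dataclass

@dataclass
class ITCW:
    sound_first_ms: float   # largest tolerated lag when the sound leads the visual event
    sound_second_ms: float  # largest tolerated lag when the sound follows the visual event

    def is_fused(self, audio_onset_ms: float, visual_onset_ms: float) -> bool:
        """Return True if the A-V onset pair falls inside the window (treated as synchronous)."""
        lag = audio_onset_ms - visual_onset_ms
        if lag < 0:                          # sound precedes the visual event
            return -lag <= self.sound_first_ms
        return lag <= self.sound_second_ms   # sound follows the visual event

infant_window = ITCW(sound_first_ms=350, sound_second_ms=450)
adult_window = ITCW(sound_first_ms=80, sound_second_ms=112)

# A 250 ms sound-lead lag falls inside the (wider) infant window but outside the adult one.
print(infant_window.is_fused(0, 250))  # True
print(adult_window.is_fused(0, 250))   # False
```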
17.5.2 Perception of A–V Speech Synchrony and Effects of Experience In subsequent studies, we found that the ITCW is substantially larger for multisensory speech than for abstract nonspeech events. In the first of these studies (Lewkowicz 2000b), we habituated 4-, 6-, and 8-month-old infants to audiovisually synchronous syllables (/ba/ or /sha/) and then tested their response to audiovisually asynchronous versions of these syllables (sound-first condition only) and found that, regardless of age, infants only detected an asynchrony of 666 ms (in pilot work, we tested infants with much lower asynchronies but did not obtain discrimination). We then replicated the finding of such a high discrimination threshold in a subsequent study (Lewkowicz 2003) in which we found that 4- to 8-month-old infants detected an asynchrony of 633 ms. It should be noted that other than our pilot work, these two studies only tested infants with one degree of A–V asynchrony. In other words, we did not formally investigate the size of the ITCW for audiovisual speech events until more recently. We investigated the size of the ITCW in our most recent studies (Lewkowicz 2010). In addition, in these studies, we examined the effects of short-term experience on the detection of A–V temporal synchrony relations and the possible mechanism underlying the detection of A–V synchrony relations. To determine the size of the ITCW, in Experiment 1, we habituated 4- to 10-month-old infants to an audiovisually synchronous syllable and then tested for their ability to detect three increasingly greater levels of asynchrony (i.e., 366, 500, and 666 ms). Infants exhibited response recovery to the 666 ms asynchrony but not to the other two asynchronies, indicating that the threshold was located between 500 and 666 ms (see Figure 17.1). Prior studies in adults have shown that when they are first tested with audiovisually asynchronous events, they perceive them as asynchronous. If, however, they are first given short-term exposure to an asynchronous event and are tested again for detection of asynchrony, they now respond to such events as if they are synchronous (Fujisaki et al. 2004; Navarra et al. 2005; Vroomen et al. 2004). In other words, short-term adaptation to audiovisually asynchronous events appears to widen the ITCW in adults. One possible explanation for this adaptation effect is that it is partly due to an experience-dependent synchrony bias that develops during adults’ lifetime of experience with exclusively synchronous audiovisual events. This bias presumably leads to the formation of an audiovisual “unity assumption” (Welch and Warren 1980). If that is the case, then it might be that infants may not exhibit an adaptation effect because of their relatively lower overall experience with synchronous multisensory events and the absence of a unity assumption. More specifically, infants may not exhibit a widening of their ITCW after habituation to an asynchronous audiovisual event. If so, rather than fail to discriminate between the asynchronous event and those that are physically less synchronous, infants may actually exhibit a decrease in the size of the ITCW and exhibit even better discrimination. To test this possibility, in Experiment 2, we habituated a new group of 4- to 10-month-old infants to an asynchronous syllable (A–V asynchrony was 666 ms) and then tested them for the detection of decreasing levels of asynchrony (i.e., 500, 366, and 0 ms). 
As predicted, this time, infants not only discriminated between the 666 ms asynchrony and temporal synchrony (0 ms), but they also discriminated between the 666 ms asynchrony and an asynchrony of 366 ms (see Figure 17.2). That is, short-term adaptation with a discriminable A–V asynchrony produced a decrease, rather than an increase, in the size of the ITCW. These results show that in the absence
[Bar graph: mean duration of looking (s) during the test trials Fam-0 ms, Nov-366 ms, Nov-500 ms, and Nov-666 ms.]
FIGURE 17.1 Mean duration of looking during test trials in response to each of three different A–V temporal asynchronies after habituation to a synchronous audiovisual syllable. Error bars indicate standard error of mean and asterisk indicates that response recovery in that particular test trial was significantly higher than response obtained in the familiar test trial (Fam-0 ms.).
of a unity assumption, short-term exposure to an asynchronous multisensory event does not cause infants to treat it as synchronous but rather focuses their attention on the event’s temporal attributes and, in the process, sharpens their perception of A–V temporal relations. Finally, to investigate the mechanisms underlying A–V asynchrony detection, in Experiment 3, we habituated infants to a synchronous audiovisual syllable and then tested them again for the detection of asynchrony with audiovisual asynchronies of 366, 500, and 666 ms. This time, however, the test stimuli consisted of a visible syllable and a 400 Hz tone rather than the audible syllable.
[Bar graph: mean duration of looking (s) during the test trials Fam-666 ms, Nov-500 ms, Nov-366 ms, and Nov-0 ms.]
FIGURE 17.2 Mean duration of looking during test trials in response to each of three different A–V temporal asynchronies after habituation to an asynchronous audiovisual syllable. Error bars indicate standard error of mean and asterisks indicate that response recovery in those particular test trials was significantly higher than response obtained in the familiar test trial (Fam-666 ms.).
[Bar graph: mean duration of looking (s) during the test trials Fam-0 ms, Nov-366 ms, Nov-500 ms, and Nov-666 ms.]
FIGURE 17.3 Mean duration of looking during test trials in response to each of three different A–V temporal asynchronies after habituation to an audiovisual stimulus consisting of a visible syllable and a synchronous tone. Error bars indicate standard error of mean and asterisk indicates that response recovery in that particular test trial was significantly higher than response obtained in the familiar test trial (Fam-0 ms.).
Substituting the tone for the acoustic part of the syllable was done to determine whether the dynamic variations in the spectral energy inherent in the acoustic part of the audiovisual speech signal and/or their correlation with the dynamic variations in gestural information contribute to infant detection of A–V speech synchrony relations. Once again, infants detected the 666 ms asynchrony but not the two lower ones (see Figure 17.3). The fact that these findings replicated the findings from Experiment 1 indicates that infants rely neither on acoustic spectral energy nor on its correlation with the dynamic variations in the gestural information to detect A–V speech synchrony relations. Rather, it appears that infants attend primarily to energy onsets and offsets when processing A–V speech synchrony relations, suggesting that detection of such relations is not likely to require the operation of higher-level neural mechanisms.
17.5.3 Binding of Nonnative Faces and Vocalizations Given that energy onsets and offsets provide infants with sufficient information regarding the temporal alignment of auditory and visual inputs, the higher-level perceptual features of the stimulation in each modality are probably irrelevant to them. This is especially likely early in life where the nervous system and the sensory systems are highly immature and inexperienced. As a result, it is possible that young infants might perceive the faces and vocalizations of other species as belonging together as long as they are synchronous. We tested this idea by showing side-by-side videos of the same monkey’s face producing two different visible calls on each side to groups of 4-, 6-, 8-, and 10-month-old infants (Lewkowicz and Ghazanfar 2006). During the two initial preference trials, infants saw the faces in silence, whereas during the next two trials, infants saw the same faces and heard the audible call that matched one of the two visible calls. The different calls (a coo and a grunt) differed in their durations and, as a result, the matching visible and audible calls corresponded in terms of their onsets and offsets as well as their durations. In contrast, the nonmatching ones only corresponded in terms of their onsets. We expected that infants would look longer at the visible call that matched the audible call if they perceived the temporal synchrony that bound them. Indeed, we found that the two younger groups of infants matched the corresponding faces and vocalizations but that the two older groups did not. These results indicate that young infants can rely on A–V
synchrony relations to perceive even nonnative facial gestures and accompanying vocalizations as coherent entities. The older infants no longer do so for two related reasons. First, they gradually shift their attention to higher-level perceptual features as a function of increasing neural growth, maturation of their perceptual systems, and increasing perceptual experience all acting together to make it possible for them to extract such features. Second, their exclusive and massive experience with human faces and vocalizations narrows their perceptual expertise to ecologically relevant signals. In other words, as infants grow and as they acquire experience with vocalizing faces, they learn to extract more complex features (e.g., gender, affect, and identity), rendering low-level synchrony relations much less relevant. In addition, as infants grow, they acquire exclusive experience with human faces and vocalizations and, as a result, become increasingly more specialized. As they specialize, they stop responding to the faces and vocalizations of other species. Because the matching faces and vocalizations corresponded not only in terms of onset and offset synchrony but in terms of duration as well, the obvious question is whether amodal duration might have contributed to multisensory matching. To investigate this question, we repeated the Lewkowicz and Ghazanfar (2006) procedures in a subsequent study (Lewkowicz et al. 2008), except that this time, we presented the monkey audible calls out of synchrony with respect to both visible calls. This meant that the corresponding visible and audible calls were now only related in terms of their duration. Results yielded no matching in either the 4- to 6-month-old or the 8- to 10-month-old infants, indicating that A–V temporal synchrony mediated successful matching in the younger infants. The fact that the younger infants did not match in this study, despite the fact that the corresponding faces and vocalizations corresponded in their durations, shows that duration did not mediate matching in the original study. This is consistent with previous findings that infants do not match equal-duration auditory and visual inputs unless they are also synchronous (Lewkowicz 1986). If A–V temporal synchrony mediates intersensory matching in young infants, and if responsiveness to this multisensory cue depends on a basic and relatively low-level process, then it is possible that cross-species multisensory matching emerges very early in development. To determine if that is the case, we asked whether newborns also might be able to match monkey faces and vocalizations (Lewkowicz et al. 2010). In Experiment 1 of this study, we used the identical stimulus materials and testing procedures used by Lewkowicz and Ghazanfar (2006), and found that newborns also matched visible and audible monkey calls. We then investigated whether successful matching reflected matching of the synchronous onsets and offsets of the audible and visible calls. If so, then newborns should be able to make the matches even when some of the identity information is removed. Thus, we repeated Experiment 1, except that rather than present the natural call, we presented a complex tone in Experiment 2. To preserve the critical temporal features of the audible call, we ensured that the tone had the same duration as the natural call and that its onsets and offsets were synchronous with the matching visible call. 
Despite the absence of acoustic identity information and the absence of a correlation between the dynamic variations in facial gesture information and the amplitude and formant structure inherent in the natural audible call, newborns still performed successful intersensory matching. This indicates that newborns’ ability to make cross-species matches in Experiment 1 was based on their sensitivity to the temporally synchronous onsets and offsets of the matching faces and vocalizations and that it was not based on identity information nor on the dynamic correlation between the visible and audible call features. Together, the positive findings of cross-species intersensory matching in newborns and 4- to 6-month-old infants demonstrate that young infants are sensitive to a basic feature of their perceptual world, namely, stimulus energy onsets and offsets. This basic perceptual sensitivity bootstraps newborns’ entry into the world of multisensory objects and events and enables them to perceive them as coherent entities, regardless of their specific identity. This sensitivity is especially potent when the visual information is dynamic. When it is not, infants do not begin to bind the auditory and visual attributes of multisensory objects, such as color/shape and pitch, or color and taste until the second half of the first year of life. The pervasive and fundamental role that A–V temporal synchrony plays in infant perceptual response to multisensory attributes suggests that sensitivity to this
intersensory perceptual cue reflects the operation of a fundamental early perceptual mechanism. That is, as indicated earlier, even though sensitivity to A–V temporal synchrony is mediated by relatively basic and low-level processing mechanisms, it provides infants with a powerful initial perceptual tool for gradually discovering that multisensory objects are characterized by many other forms of intersensory invariance. For example, once infants start to bind the audible and visible attributes of talking faces, they are in a position to discover that faces and the vocalizations that accompany them could also be specified by common duration, tempo, and rhythm, as well as by higher-level amodal and invariant attributes such as affect, gender, and identity.
17.6 PERCEPTION OF MULTISENSORY TEMPORAL SEQUENCES IN INFANCY Multisensory objects often participate in complex actions that are sequentially organized over time. For example, when people speak, they simultaneously produce sequences of vocal sounds and correlated facial gestures. The syntactically prescribed order of the syllables and words imbues utterances with specific meanings. Unless infants master the ability to extract the sequential structure from such an event, they will not be able to acquire language. Because this ability is so fundamental to adaptive perceptual and cognitive functioning, we have investigated its developmental emergence. When we began these studies, there was little, if any, empirical evidence on infant perception of multisensory temporal sequences to guide our initial exploration of this issue. Prior theoretical views claimed that sequence learning is an innate ability (Greenfield 1991; Nelson 1986) but neither of these views specified what they meant by sequence learning abilities nor what infants should be capable of doing in this regard. Indeed, recent empirical research on infant pattern and sequence perception has contradicted the claim that this ability is innate and, if anything, has shown that sequence perception and learning is a very complex skill that consists of several component skills and that it takes several years to reach adult levels of proficiency (Gulya and Colombo 2004; Thomas and Nelson 2001). Although no studies have investigated sequence perception and learning at birth, studies have shown that different sequence perception abilities, including the ability to perceive and learn adjacent and distant statistical relations, simple sequential rules, and ordinal position information, emerge at different points in infancy. Thus, beginning as early as 2 months of age, infants can learn adjacent statistical relations that link a series of looming visual shapes (Kirkham et al. 2002; Marcovitch and Lewkowicz 2009), by 8 months, they can learn the statistical relations that link adjacent static object features (Fiser and Aslin 2002) as well as adjacent nonsense words in a stream of sounds (Saffran et al. 1996), and by 15 months, they begin to exhibit the ability to learn distant statistical relations (Gómez and Maye 2005). Moreover, although infants as young as 5 months of age can learn simple abstract temporal rules such as one specifying the order (e.g., AAB vs. ABB) of distinct elements consisting of abstract objects and accompanying speech sounds (Frank et al. 2009), only 7.5-month-old infants can learn such rules when they are instantiated by nonsense syllables (Marcus et al. 1999, 2007) and only 11-month-olds can learn simple rules instantiated by looming objects (Johnson et al. 2009). Finally, it is not until 9 months of age that infants can track the ordinal position of a particular syllable in a string of syllables (Gerken 2006). It is important to note that most of the studies of infant sequence learning have presented unisensory stimuli even though most of our daily perceptual experiences are multisensory in nature. As a result, we investigated whether the developmental pattern found thus far in the development of sequence perception and learning differs for multisensory sequences. 
To do so, in some studies we provided infants with an opportunity to learn a single audiovisual sequence consisting of distinct moving objects and their impact sounds, whereas in others we allowed infants to learn a set of different sequences in which each one was composed of different objects and impact sounds. Regardless of whether infants had to learn a single sequence or multiple ones, during the habituation phase they could see the objects appear one after another at the top of a computer monitor and then move down toward a ramp at the bottom of the stimulus display monitor. When the objects reached the ramp, they
made an impact sound, turned to the right, and moved off to the side and disappeared. This cycle was repeated for the duration of each habituation trial. After habituation, infants were given test trials during which the order of sequence elements was changed in some way and the question was whether they detected the change. In an initial study (Lewkowicz 2004), we asked whether infants can learn a sequence composed of three moving/impacting objects and, if so, what aspects of that sequence they encoded. Results indicated that 4-month-old infants detected serial order changes only when the changes were specified concurrently by audible and visible attributes during the learning as well as the test phase and only when the impact part of the event—a local event feature that was not informative about sequential order—was blocked from view. In contrast, 8-month-old infants detected order changes regardless of whether they were specified by unisensory or bisensory attributes and whether they could see the impact or not. In sum, younger infants required multisensory redundancy to detect the serial order changes whereas older infants did not. A follow-up study (Lewkowicz 2008) replicated the earlier findings, ruled out primacy effects, and extended them by showing that even 3-month-old infants can perceive and discriminate three-element dynamic audiovisual sequences and that they also rely on multisensory redundancy for successful learning and discrimination. In addition, this study showed that object motion plays an important role in that infants exhibited less robust responsiveness to audiovisual sequences consisting of looming rather than explicitly moving objects. Because the changes in our two initial studies involved changes in the order of a particular object/impact sound as well as its statistical relations vis-à-vis the other sequence elements, we investigated the separate role of each of these sequential attributes in our most recent work (Lewkowicz and Berent 2009). Here, we investigated directly whether 4-month-old infants could track the statistical relations among specific sequence elements (e.g., AB, BC), and/or whether they could also encode abstract ordinal position information (e.g., that B is the second element in a sequence such as ABCD). Thus, across three experiments, we habituated infants to sequences of four moving/sounding objects in which three of the objects and their sounds varied in their ordinal position but in which the position of one target object/sound remained invariant (e.g., ABCD, CBDA). Figure 17.4 shows an example of one of these sequences and how they moved. We then tested whether the infants detected a change in the target's position. We found that infants detected an ordinal position change only when it disrupted the statistical relations between adjacent elements, but not when the statistical relations were controlled. Together, these findings indicate that 4-month-old infants learn the order of sequence elements by tracking their statistical relations but not their invariant ordinal position. When these findings are combined with the previously reviewed findings on sequence
FIGURE 17.4 One of three different sequences presented during the habituation phase of the sequence learning experiment (actual objects presented are shown). Each object made a distinct impact sound when it came in contact with the black ramp. Across the three different sequences, the triangle was the target stimulus; thus, for one group of infants, the target remained in the second ordinal position during the habituation phase and then changed to the third ordinal position in the test trials.
learning in infancy, they show that distinct and increasingly complex temporal sequence learning abilities emerge during infancy. For example, they suggest that the ability to perceive and learn the invariant ordinal position of a sequence element emerges sometime after 4 months of age. When it emerges and what mediates its emergence are currently open questions, as are questions about the emergence of the other, more complex sequence perception and learning skills.
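The distinction drawn above between adjacent statistical relations and abstract ordinal position can be made concrete with a few lines of code. The following minimal Python sketch uses hypothetical letter sequences standing in for the object/sound pairings; it illustrates the logic of the design rather than the actual stimuli or analyses of Lewkowicz and Berent (2009). A test sequence can move the target out of its habituated ordinal slot either while creating adjacent pairs never seen during habituation (disrupting the statistics) or while reusing only habituated pairs (leaving the statistics controlled).

```python
from itertools import tee

def adjacent_pairs(seq):
    """Set of adjacent element pairs in a sequence, e.g., 'ABCD' -> {AB, BC, CD}."""
    a, b = tee(seq)
    next(b, None)
    return set(zip(a, b))

# Hypothetical habituation sequences: the target 'B' always occupies the
# second ordinal position while the other elements move around it.
habituation = ["ABCD", "CBDA", "DBAC"]
habituated_pairs = set().union(*(adjacent_pairs(s) for s in habituation))

def test_change(test, target="B", habituated_slot=1):
    new_pairs = adjacent_pairs(test) - habituated_pairs      # statistical disruption?
    ordinal_change = test.index(target) != habituated_slot   # target left its slot?
    return new_pairs, ordinal_change

print(test_change("ADBC"))  # ({('A', 'D')}, True): ordinal change AND a novel pair
print(test_change("ACBD"))  # (set(), True): ordinal change with statistics controlled
```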
17.7 SPECULATIONS ON NEURAL MECHANISMS UNDERLYING THE DEVELOPMENT OF MULTISENSORY PERCEPTION

It is now abundantly clear that some basic multisensory processing abilities are present early in human development, and that as infants grow and acquire perceptual experience, these abilities improve. As indicated earlier, this general developmental pattern is consistent with the two classic theoretical views because the core prediction that both views make is that multisensory functions improve with development. Unfortunately, both views were silent about the possible neural mechanisms underlying the developmental emergence of multisensory processing. For example, although Gibson (1969) proposed that infants are sensitive to perceptual structure and the amodal invariants that are inherent in the structured stimulus array from birth onward, her insistence that the information is already integrated in the external perceptual array can be interpreted to mean that the nervous system does not play a significant role in integration. Of course, this assumption does not square with the results from modern neurobiological studies, which clearly show that the brain plays a crucial role in this process. Consequently, a more complete theoretical framework for conceptualizing the development of multisensory processing is one that not only acknowledges that the external stimulus array is highly structured but also admits that the perception of that structure is intimately dependent on neural mechanisms that have evolved to detect that structure (Ghazanfar and Schroeder 2006; Stein and Stanford 2008). In other words, perception of multisensory coherence at any point in development is the joint product of the infant's ability to detect increasingly greater stimulus structure—because of the cumulative effects of sensory/perceptual experience and learning—and of the increasing elaboration of neural structures and their functional properties. The latter may not only permit the integration of multisensory inputs but sometimes may actually induce integral perception even when stimulation in the external sensory array is only unisensory (Romei et al. 2009). Like Gibson's ecological view of multisensory perceptual development, the developmental integration view also failed to specify the underlying neural mechanisms that mediate the long-term effects of experience with the multisensory world and, thus, is subject to similar limitations. What possible neural mechanisms might mediate multisensory processing in early development? Traditionally, it has been assumed that the neural mechanisms that mediate multisensory processing are hierarchically organized, with initial analysis being sensory-specific and only later analysis being multisensory (presumably once the information arrives in the classic cortical association areas). This hierarchical processing model has recently been challenged by findings showing that multisensory interactions in the primary cortical areas begin to occur as early as 40 to 50 ms after stimulation (Giard and Peronnet 1999; Molholm et al. 2002). Moreover, it has been suggested that multisensory interactions are not only mediated by feedback connections from higher-level cortical areas onto lower-level areas but also by feedforward and lateral connections from lower-level primary cortical areas (Foxe and Schroeder 2005).
As a result, there is a growing consensus that multisensory interactions occur all along the neuraxis, that multisensory integration mechanisms are widespread in the primate neocortex, and that this is what makes the perception of multisensory coherence possible (Ghazanfar and Schroeder 2006). This conclusion is supported by findings showing that traditionally unisensory areas actually contain neurons that respond to stimulation in other modalities. For example, responsiveness in the auditory cortex has been shown to be modulated by visual input in humans (Calvert et al. 1999), monkeys (Ghazanfar et al. 2005), ferrets (Bizley et al. 2007), and rats (Wallace et al. 2004).
If multisensory interactions begin to occur right after the sensory input stage and before sensory elaboration has occurred, and if such interactions continue to occur as the information ascends the neural pathways to the traditional association areas of the cortex, then this resolves a critical problem. From the standpoint of the adult brain, it solves the problem of having to wait until the higher-order cortical areas can extract the various types of relations inherent in multisensory input. This way, the observer can begin to perform a veridical scene analysis and arrive at a coherent multisensory experience shortly after input arrives at the sensory organs (Foxe and Schroeder 2005). From the standpoint of the immature infant brain, the adult findings raise some interesting possibilities. For example, because these early neural interactions are of a relatively low level, they are likely to occur very early in human development and can interact with any other low-level subcortical integration mechanisms. Whether this scenario is correct is currently unknown and awaits further investigation. As shown here, behavioral findings from human infants support these conjectures in that, starting at birth, human infants are capable of multisensory perception. Thus, the question is no longer whether such mechanisms operate but rather what their nature is and where in the brain they operate. Another interesting question is whether the heterochronous emergence of heterogeneous multisensory perceptual skills that has been found in behavioral infant studies (Lewkowicz 2002) is reflected in the operation of distinct neural mechanisms emerging at different times and in different regions of the brain. There is little doubt that the neural mechanisms underlying multisensory processing are likely to be quite rudimentary in early human development. The central nervous system as well as the different sensory systems are immature, and young infants are perceptually and cognitively inexperienced. This is the case despite the fact that the tactile, vestibular, chemical, and auditory modalities begin to function before birth (Gottlieb 1971) and despite the fact that this provides fetuses with some sensory experience and some opportunity for intersensory interaction (Turkewitz 1994). Consequently, newborn infants are relatively unprepared for the onslaught of new multisensory input that also, for the first time, includes visual information. In addition, newborns are greatly limited by the immature nature of their different sensory systems (Kellman and Arterberry 1998). That is, their visual limitations include poor spatial and temporal resolution and poor sensitivity to contrast, orientation, motion, depth, and color. Their auditory limitations, relative to adults, include higher absolute thresholds as well as higher frequency resolution and temporal resolution thresholds. Obviously, these basic sensory functions improve rapidly over the first months of life, but there is little doubt that they initially impose limitations on infant perception and probably account for some of the developmental changes observed in multisensory responsiveness. The question for future studies is: How do infants overcome these limitations? The work reviewed here suggests that the answer lies in the complex interactions between neural and behavioral levels of organization and in the daily experiences that infants have in their normal ecological setting.
Because developmental change is driven by such interactions (Gottlieb et al. 2006), the challenge for future studies is to explicate these interactions.
REFERENCES

Bahrick, L.E. 1983. Infants’ perception of substance and temporal synchrony in multimodal events. Infant Behavior & Development 6:429–51. Bahrick, L.E. 1994. The development of infants’ sensitivity to arbitrary intermodal relations. Ecological Psychology 6:111–23. Bahrick, L.E., and R. Lickliter. 2000. Intersensory redundancy guides attentional selectivity and perceptual learning in infancy. Developmental Psychology 36:190–201. Bahrick, L.E., R. Lickliter, and R. Flom. 2004. Intersensory redundancy guides the development of selective attention, perception, and cognition in infancy. Current Directions in Psychological Science 13:99–102. Birch, H.G., and A. Lefford. 1963. Intersensory development in children. Monographs of the Society for Research in Child Development 25. Birch, H.G., and A. Lefford. 1967. Visual differentiation, intersensory integration, and voluntary motor control. Monographs of the Society for Research in Child Development 32:1–87.
Bizley, J.K., F.R. Nodal, V.M. Bajo, I. Nelken, and A.J. King. 2007. Physiological and anatomical evidence for multisensory interactions in auditory cortex. Cerebral Cortex 17:2172–89. Brannon, E.M., S. Suanda, and K. Libertus. 2007. Temporal discrimination increases in precision over development and parallels the development of numerosity discrimination. Developmental Science 10:770–7. Bremner, A.J., N.P. Holmes, and C. Spence. 2008. Infants lost in (peripersonal) space? Trends in Cognitive Sciences 12:298–305. Brookes, H., A. Slater, P.C. Quinn et al. 2001. Three-month-old infants learn arbitrary auditory-visual pairings between voices and faces. Infant & Child Development 10:75–82. Bushara, K.O., J. Grafman, and M. Hallett. 2001. Neural correlates of auditory-visual stimulus onset asynchrony detection. Journal of Neuroscience 21:300–4. Calvert, G.A., M.J. Brammer, E.T. Bullmore et al. 1999. Response amplification in sensory-specific cortices during crossmodal binding. Neuroreport: For Rapid Communication of Neuroscience Research 10:2619–23. Calvert, G.A., C. Spence, and B. Stein (eds.). 2004. The Handbook of Multisensory Processes. Cambridge, MA: MIT Press. Canfield, R.L., and M.M. Haith. 1991. Young infants’ visual expectations for symmetric and asymmetric stimulus sequences. Developmental Psychology 27:198–208. Canfield, R.L., E.G. Smith, M.P. Brezsnyak, and K.L. Snow. 1997. Information processing through the first year of life: A longitudinal study using the visual expectation paradigm. Monographs of the Society for Research in Child Development 62:v–vi, 1–145. Clifton, R.K. 1974. Heart rate conditioning in the newborn infant. Journal of Experimental Child Psychology 18:9–21. Colombo, J., and W.A. Richman. 2002. Infant timekeeping: Attention and temporal estimation in 4-month-olds. Psychological Science 13:475–9. Donohue, R.L., and W.K. Berg. 1991. Infant heart-rate responses to temporally predictable and unpredictable events. Developmental Psychology 27:59–66. Fiser, J., and R.N. Aslin. 2002. Statistical learning of new visual feature combinations by infants. Proceedings of the National Academy of Sciences of the United States of America 99:15822–6. Foxe, J.J., and C.E. Schroeder. 2005. The case for feedforward multisensory convergence during early cortical processing. Neuroreport 16:419. Fraisse, P. 1982. The adaptation of the child to time. In W.J. Friedman (ed.), The developmental psychology of time, 113–40. New York: Academic Press. Frank, M.C., J.A. Slemmer, G.F. Marcus, and S.P. Johnson. 2009. Information from multiple modalities helps 5-month-olds learn abstract rules. Developmental Science 12:504–9. Fujisaki, W., S. Shimojo, M. Kashino, and S.Y. Nishida. 2004. Recalibration of audiovisual simultaneity. Nature Neuroscience 7:773–8. Gardner, J.M., D.J. Lewkowicz, S.A. Rose, and B.Z. Karmel. 1986. Effects of visual and auditory stimulation on subsequent visual preferences in neonates. International Journal of Behavioral Development 9:251–63. Gebhard, J.W., and G.H. Mowbray. 1959. On discriminating the rate of visual flicker and auditory flutter. American Journal of Psychology 72:521–9. Gerken, L. 2006. Decisions, decisions: Infant language learning when multiple generalizations are possible. Cognition 98:B67–74. Ghazanfar, A.A., and C.E. Schroeder. 2006. Is neocortex essentially multisensory? Trends in Cognitive Sciences 10:278–85. Epub 2006 May 18. Ghazanfar, A.A., J.X. Maier, K.L. Hoffman, and N.K. Logothetis. 2005. 
Multisensory integration of dynamic faces and voices in rhesus monkey auditory cortex. Journal of Neuroscience 25:5004–12. Giard, M.H., and F. Peronnet. 1999. Auditory–visual integration during multimodal object recognition in humans: A behavioral and electrophysiological study. Journal of Cognitive Neuroscience 11:473–90. Gibson, J.J. 1966. The senses considered as perceptual systems. Boston: Houghton-Mifflin. Gibson, E.J. 1969. Principles of perceptual learning and development. New York: Appleton. Gómez, R.L., and J. Maye. 2005. The developmental trajectory of nonadjacent dependency learning. Infancy 7:183–206. Gottlieb, G. 1971. Ontogenesis of sensory function in birds and mammals. In The biopsychology of development, ed. E. Tobach, L.R. Aronson, and E. Shaw, 67–128. New York: Academic Press. Gottlieb, G., D. Wahlsten, and R. Lickliter. 2006. The significance of biology for human development: A developmental psychobiological systems view. In Handbook of child psychology, ed. R. Lerner, 210–57. New York: Wiley.
Greenfield, P.M. 1991. Language, tools and brain: The ontogeny and phylogeny of hierarchically organized sequential behavior. Behavioral and Brain Sciences 14:531–95. Gulya, M., and M. Colombo. 2004. The ontogeny of serial-order behavior in humans (Homo sapiens): Representation of a list. Journal of Comparative Psychology 118:71–81. Handel, S., and L. Buffardi. 1969. Using several modalities to perceive one temporal pattern. Quarterly Journal of Experimental Psychology 21:256–66. Johnson, S.P., K.J. Fernandes, M.C. Frank et al. 2009. Abstract rule learning for visual sequences in 8- and 11-month-olds. Infancy 14:2–18. Kahana-Kalman, R., and A.S. Walker-Andrews. 2001. The role of person familiarity in young infants’ perception of emotional expressions. Child Development 72:352–69. Kamachi, M., H. Hill, K. Lander, and E. Vatikiotis-Bateson. 2003. Putting the face to the voice: Matching identity across modality. Current Biology 13:1709–14. Kellman, P.J., and M.E. Arterberry. 1998. The cradle of knowledge: Development of perception in infancy. Cambridge, MA: MIT Press. King, A.J., M.E. Hutchings, D.R. Moore, and C. Blakemore. 1988. Developmental plasticity in the visual and auditory representations in the mammalian superior colliculus. Nature 332:73–6. Kirkham, N.Z., J.A. Slemmer, and S.P. Johnson. 2002. Visual statistical learning in infancy: Evidence for a domain general learning mechanism. Cognition 83:B35–42. Knudsen, E.I., and M.S. Brainard. 1991. Visual instruction of the neural map of auditory space in the developing optic tectum. Science 253:85–7. Kuhl, P.K., and A.N. Meltzoff. 1982. The bimodal perception of speech in infancy. Science 218:1138–41. Kuhl, P.K., and A.N. Meltzoff. 1984. The intermodal representation of speech in infants. Infant Behavior & Development 7:361–81. Lashley, K.S. 1951. The problem of serial order in behavior. In Cerebral mechanisms in behavior: The Hixon symposium, ed. L.A. Jeffress, 123–47. New York: Wiley. Lewkowicz, D.J. 1986. Developmental changes in infants’ bisensory response to synchronous durations. Infant Behavior & Development 9:335–53. Lewkowicz, D.J. 1992a. Infants’ response to temporally based intersensory equivalence: The effect of synchronous sounds on visual preferences for moving stimuli. Infant Behavior & Development 15:297–324. Lewkowicz, D.J. 1992b. Infants’ responsiveness to the auditory and visual attributes of a sounding/moving stimulus. Perception & Psychophysics 52:519–28. Lewkowicz, D.J. 1994. Limitations on infants’ response to rate-based auditory-visual relations. Developmental Psychology 30:880–92. Lewkowicz, D.J. 1996. Perception of auditory-visual temporal synchrony in human infants. Journal of Experimental Psychology: Human Perception & Performance 22:1094–106. Lewkowicz, D.J. 2000a. The development of intersensory temporal perception: An epigenetic systems/limitations view. Psychological Bulletin 126:281–308. Lewkowicz, D.J. 2000b. Infants’ perception of the audible, visible and bimodal attributes of multimodal syllables. Child Development 71:1241–57. Lewkowicz, D.J. 2002. Heterogeneity and heterochrony in the development of intersensory perception. Cognitive Brain Research 14:41–63. Lewkowicz, D.J. 2003. Learning and discrimination of audiovisual events in human infants: The hierarchical relation between intersensory temporal synchrony and rhythmic pattern cues. Developmental Psychology 39:795–804. Lewkowicz, D.J. 2004. Perception of serial order in infants. Developmental Science 7:175–84. Lewkowicz, D.J. 2008. 
Perception of dynamic and static audiovisual sequences in 3- and 4-month-old infants. Child Development 79:1538–54. Lewkowicz, D.J. 2010. Infant perception of audio-visual speech synchrony. Developmental Psychology 46:66–77. Lewkowicz, D.J., and I. Berent. 2009. Sequence learning in 4-month-old infants: Do infants represent ordinal information? Child Development 80:1811–23. Lewkowicz, D., and K. Kraebel. 2004. The value of multisensory redundancy in the development of intersensory perception. The Handbook of Multisensory Processes: 655–78. Cambridge, MA: MIT Press. Lewkowicz, D.J., and A.A. Ghazanfar. 2006. The decline of cross-species intersensory perception in human infants. Proceedings of the National Academy of Sciences of the United States of America 103:6771–4. Lewkowicz, D.J., and G. Turkewitz. 1980. Cross-modal equivalence in early infancy: Auditory–visual intensity matching. Developmental Psychology 16:597–607. Lewkowicz, D.J., and G. Turkewitz. 1981. Intersensory interaction in newborns: Modification of visual preferences following exposure to sound. Child Development 52:827–32.
Lewkowicz, D.J., R. Sowinski, and S. Place. 2008. The decline of cross-species intersensory perception in human infants: Underlying mechanisms and its developmental persistence. Brain Research 1242:291–302. Lewkowicz, D.J., I. Leo, and F. Simion. 2010. Intersensory perception at birth: Newborns match non-human primate faces and voices. Infancy 15:46–60. Lickliter, R., and L.E. Bahrick. 2000. The development of infant intersensory perception: Advantages of a comparative convergent-operations approach. Psychological Bulletin 126:260–80. Lickliter, R., and H. Banker. 1994. Prenatal components of intersensory development in precocial birds. In Development of intersensory perception: Comparative perspectives, ed. D.J. Lewkowicz and R. Lickliter, 59–80. Norwood, NJ: Lawrence Erlbaum Associates, Inc. Lickliter, R., D.J. Lewkowicz, and R.F. Columbus. 1996. Intersensory experience and early perceptual development: The role of spatial contiguity in bobwhite quail chicks’ responsiveness to multimodal maternal cues. Developmental Psychobiology 29:403–16. Maier, N.R.F., and T.C. Schneirla. 1964. Principles of animal psychology. New York: Dover Publications. Marcovitch, S., and D.J. Lewkowicz. 2009. Sequence learning in infancy: The independent contributions of conditional probability and pair frequency information. Developmental Science 12:1020–5. Marcus, G.F., S. Vijayan, S. Rao, and P. Vishton. 1999. Rule learning by seven-month-old infants. Science 283:77–80. Marcus, G.F., K.J. Fernandes, and S.P. Johnson. 2007. Infant rule learning facilitated by speech. Psychological Science 18:387–91. Marks, L. 1978. The unity of the senses. New York: Academic Press. Martin, J.G. 1972. Rhythmic (hierarchical) versus serial structure in speech and other behavior. Psychological Review 79:487–509. McGurk, H., and J. MacDonald. 1976. Hearing lips and seeing voices. Nature 264:229–39. Molholm, S., W. Ritter, M.M. Murray et al. 2002. Multisensory auditory–visual interactions during early sensory processing in humans: A high-density electrical mapping study. Cognitive Brain Research 14:115–28. Munhall, K.G., and E. Vatikiotis-Bateson. 2004. Spatial and temporal constraints on audiovisual speech perception. In The handbook of multisensory processes, ed. G.A. Calvert, C. Spence, and B.E. Stein, 177–88. Cambridge, MA: MIT Press. Myers, A.K., B. Cotton, and H.A. Hilp. 1981. Matching the rate of concurrent tone bursts and light flashes as a function of flash surround luminance. Perception & Psychophysics 30(1):33–8. Navarra, J., A. Vatakis, M. Zampini et al. 2005. Exposure to asynchronous audiovisual speech extends the temporal window for audiovisual integration. Cognitive Brain Research 25:499–507. Nazzi, T., and F. Ramus. 2003. Perception and acquisition of linguistic rhythm by infants. Speech Communication 41:233–43. Neil, P.A., C. Chee-Ruiter, C. Scheier, D.J. Lewkowicz, and S. Shimojo. 2006. Development of multisensory spatial integration and perception in humans. Developmental Science 9:454–64. Nelson, K. 1986. Event knowledge: Structure and function in development. Hillsdale, NJ: Lawrence Erlbaum Associates. Nelson, K. 2007. Young minds in social worlds. Cambridge, MA: Harvard Univ. Press. Oppenheim, R.W. 1981. Ontogenetic adaptations and retrogressive processes in the development of the nervous system and behavior: A neuroembryological perspective. In Maturation and development: Biological and psychological perspectives, ed. K.J. Connolly and H.F.R. Prechtl, 73–109. Philadelphia, PA: Lippincott. Partan, S., and P. 
Marler. 1999. Communication goes multimodal. Science 283:1272–3. Patterson, M.L., and J.F. Werker. 2002. Infants’ ability to match dynamic phonetic and gender information in the face and voice. Journal of Experimental Child Psychology 81:93–115. Patterson, M.L., and J.F. Werker. 2003. Two-month-old infants match phonetic information in lips and voice. Developmental Science 6(2):191–6. Piaget, J. 1952. The origins of intelligence in children. New York: International Universities Press. Pons, F., D.J. Lewkowicz, S. Soto-Faraco, and N. Sebastián-Gallés. 2009. Narrowing of intersensory speech perception in infancy. Proceedings of the National Academy of Sciences of the United States of America 106:10598–602. Reardon, P., and E.W. Bushnell. 1988. Infants’ sensitivity to arbitrary pairings of color and taste. Infant Behavior and Development 11:245–50. Romei, V., M.M. Murray, C. Cappe, and G. Thut. 2009. Preperceptual and stimulus-selective enhancement of low-level human visual cortex excitability by sounds. Current Biology 19:1799–805. Rowe, C. 1999. Receiver psychology and the evolution of multicomponent signals. Animal Behaviour 58:921–31.
Saffran, J.R., R.N. Aslin, and E.L. Newport. 1996. Statistical learning by 8-month-old infants. Science 274:1926–8. Scheier, C., D.J. Lewkowicz, and S. Shimojo. 2003. Sound induces perceptual reorganization of an ambiguous motion display in human infants. Developmental Science 6:233–44. Sekuler, R., A.B. Sekuler, and R. Lau. 1997. Sound alters visual motion perception. Nature 385:308. Shams, L., Y. Kamitani, and S. Shimojo. 2000. What you see is what you hear. Nature 408(6814):788. Shipley, T. 1964. Auditory flutter-driving of visual flicker. Science 145:1328–30. Slater, A., E. Brown, and M. Badenoch. 1997. Intermodal perception at birth: Newborn infants’ memory for arbitrary auditory–visual pairings. Early Development & Parenting 6:99–104. Slater, A., P.C. Quinn, E. Brown, and R. Hayes. 1999. Intermodal perception at birth: Intersensory redundancy guides newborn infants’ learning of arbitrary auditory–visual pairings. Developmental Science 2:333–8. Slutsky, D.A., and G.H. Recanzone. 2001. Temporal and spatial dependency of the ventriloquism effect. Neuroreport 12:7–10. Stein, B.E., and M.A. Meredith. 1993. The merging of the senses. Cambridge, MA: MIT Press. Stein, B.E., and T.R. Stanford. 2008. Multisensory integration: Current issues from the perspective of the single neuron. Nature Reviews. Neuroscience 9:255–66. Stein, B.E., D. Burr, C. Constantinidis et al. 2010. Semantic confusion regarding the development of multisensory integration: A practical solution. European Journal of Neuroscience 31:1713–20. Thelen, E., and L.B. Smith. 1994. A dynamic systems approach to the development of cognition and action. Cambridge, MA: MIT Press. Thomas, K.M., and C.A. Nelson. 2001. Serial reaction time learning in preschool- and school-age children. Journal of Experimental Child Psychology 79:364–87. Turkewitz, G. 1994. Sources of order for intersensory functioning. In The development of intersensory perception: Comparative perspectives, ed. D.J. Lewkowicz and R. Lickliter, 3–17. Hillsdale, NJ: Lawrence Erlbaum Associates. Turkewitz, G., and P.A. Kenny. 1982. Limitations on input as a basis for neural organization and perceptual development: A preliminary theoretical statement. Developmental Psychobiology 15:357–68. Vroomen, J., M. Keetels, B. de Gelder, and P. Bertelson. 2004. Recalibration of temporal order perception by exposure to audio-visual asynchrony. Cognitive Brain Research 22:32–5. Walker-Andrews, A.S. 1986. Intermodal perception of expressive behaviors: Relation of eye and voice? Developmental Psychology 22:373–7. Walker-Andrews, A.S. 1997. Infants’ perception of expressive behaviors: Differentiation of multimodal information. Psychological Bulletin 121:437–56. Walker-Andrews, A.S., L.E. Bahrick, S.S. Raglioni, and I. Diaz. 1991. Infants’ bimodal perception of gender. Ecological Psychology 3:55–75. Wallace, M.T., and B.E. Stein. 1997. Development of multisensory neurons and multisensory integration in cat superior colliculus. Journal of Neuroscience 17:2429–44. Wallace, M.T., and B.E. Stein. 2001. Sensory and multisensory responses in the newborn monkey superior colliculus. Journal of Neuroscience 21:8886–94. Wallace, M.T., R. Ramachandran, and B.E. Stein. 2004. A revised view of sensory cortical parcellation. Proceedings of the National Academy of Sciences of the United States of America 101:2167–72. Wallace, M.T., B.E. Stein, and R. Ramachandran. 2006. Early experience determines how the senses will interact: A revised view of sensory cortical parcellation. 
Journal of Neurophysiology 101:2167–72. Welch, R.B., and D.H. Warren. 1980. Immediate perceptual response to intersensory discrepancy. Psychological Bulletin 88:638–67. Welch, R.B., L.D. Duttenhurt, and D.H. Warren. 1986. Contributions of audition and vision to temporal rate perception. Perception & Psychophysics 39:294–300. Werner, H. 1973. Comparative psychology of mental development. New York: International Universities Press. Yehia, H., P. Rubin, and E. Vatikiotis-Bateson. 1998. Quantitative association of vocal-tract and facial behavior. Speech Communication 26:23–43. Zangenehpour, S., A.A. Ghazanfar, D.J. Lewkowicz, and R.J. Zatorre. 2009. Heterochrony and cross-species intersensory matching by infant vervet monkeys. PLoS ONE 4:e4302.
18 Multisensory Integration Develops Late in Humans
David Burr and Monica Gori
CONTENTS
18.1 Development of Multimodal Perception in Infancy and Childhood
18.2 Neurophysiological Evidence for Development of Multimodal Integration
18.3 Development of Cue Integration in Spatial Navigation
18.4 Development of Audiovisual Cue Integration
18.5 Sensory Experience and Deprivation Influence Development of Multisensory Integration
18.6 Development of Visuo-Haptic Integration
18.7 Calibration by Cross-Modal Comparison?
18.8 Haptic Discrimination in Blind and Low-Vision Children: Disruption of Cross-Sensory Calibration?
18.9 Concluding Remarks: Evidence of Late Multisensory Development
Acknowledgment
References
18.1 DEVELOPMENT OF MULTIMODAL PERCEPTION IN INFANCY AND CHILDHOOD

From birth, we interact with the world through our senses, which provide complementary information about the environment. To perceive and interact with a coherent world, our brain has to merge information from the different senses as efficiently as possible. Because the same environmental property may be signaled by more than one sense, the brain must integrate redundant signals of a particular property (such as the size and shape of an object held in the hand), which can result in a more precise estimate than either individual estimate. Much behavioral, electrophysiological, and neuroimaging evidence has shown that signals from the different senses related to the same event, congruent in space and time, increase the accuracy and precision of its encoding well beyond what would be possible from independent estimates from individual senses. Several recent studies have suggested that human adults integrate redundant information in a statistically optimal fashion (e.g., Alais and Burr 2004; Ernst and Banks 2002; Trommershäuser et al. in press). An important question is whether this optimal multisensory integration is present at birth, or whether (and if so when) it develops during childhood. Early development of multisensory integration could be useful for the developing brain, but may also bring fresh challenges, given the dramatic changes that the human brain and body undergo during this period. The clear advantages of multisensory integration may come at a cost to the developing organism. In fact, as we will see later in this chapter, many multisensory functions appear only late in development, well after the maturation of individual senses. Sensory systems are not mature at birth, but become increasingly refined during development. The brain has to continuously update the mapping between sensory and motor correspondences to take these changes into account. This is a very protracted process, with cognitive changes and
neural reorganization lasting well into early adolescence (Paus 2005). A further complication is that different senses develop at different rates: first touch, followed by vestibular, chemical, and auditory (all beginning to function before birth), and finally vision (Gottlieb 1971). The differences in developmental rates could exacerbate the challenges of cross-modal integration and calibration, which must take into account growing limbs, changing eye length, interocular distances, and so on. Some sensory properties, like contrast sensitivity, visual acuity, binocular vision, color perception, and some kinds of visual motion perception, mature rapidly to reach near adult-like levels within 8 to 12 months of age (for a review, see Atkinson 2000). Similarly, young infants can explore, manipulate, and discriminate the form of objects haptically, analyzing and coding tactile and weight information, during a period when their hands are undergoing rapid changes (Streri 2003; Streri et al. 2000, 2004; Striano and Bushnell 2005). On the other hand, not all perceptual skills develop early. For example, auditory frequency discrimination (Olsho 1984; Olsho et al. 1988), temporal discrimination (Trehub et al. 1995), and basic speech abilities all improve during infancy (Jusczyk et al. 1998). Also, projective size and shape are not noticed or understood until at least 7 years of age, and evidence suggests that even visual acuity and contrast sensitivity continue to improve slightly up until 5 to 6 years of age (Brown et al. 1987). Other attributes, such as the use of binocular cues to control prehensile movements (Watt et al. 2003) and the development of complex form and motion perception (Del Viva et al. 2006; Ellemberg et al. 1999, 2004; Kovács et al. 1999; Lewis et al. 2004), continue to develop until 8 to 14 years of age. Object manipulation also continues to improve until 8 to 14 years (Rentschler et al. 2004), and tactile object recognition in blind and sighted children does not develop until 5 to 6 years (Morrongiello et al. 1994). Many other complex and experience-dependent capacities, such as facilitation of speech perception in noise (e.g., Elliott 1979; Johnson 2000), have been reported to be immature throughout childhood. All these studies suggest that there is a difference not only in the developmental rates of different sensory systems, but also in the development of different aspects within each sensory system, all potential obstacles for the development of cue integration. The development of multimodal perceptual abilities in human infants has been studied with various techniques, such as habituation and preferential looking. Many studies suggest that some multisensory processes, such as cross-modal facilitation, cross-modal transfer, and multisensory matching, are present to some degree at an early age (e.g., Streri 2003; Lewkowicz 2000, for review). Young infants can match signals between different sensory modalities (Dodd 1979; Lewkowicz and Turkewitz 1981) and detect equivalence in the amodal properties of objects across the senses (e.g., Patterson and Werker 2002; Rose 1981). For example, they can match faces with voices (Bahrick 2001) and visual and auditory motion signals (Lewkowicz 1992) on the basis of their synchrony. By 3 to 5 months of age, they can discriminate audiovisual changes in tempo and rhythm (Bahrick et al.
2002; Bahrick and Lickliter 2000); from 4 months of age, they can match visual and tactile form properties (Rose and Ruff 1987); and at about 6 months of age, they can make duration-based matches (Lewkowicz 1986). Young infants seem to be able to benefit from multimodal redundancy of information across senses (Bahrick and Lickliter 2000, 2004; Bahrick et al. 2002; Lewkowicz 1988a, 1996; Neil et al. 2006). There is also evidence for cross-modal facilitation, in which stimuli in one modality increase the responsiveness to stimuli in other modalities (Lewkowicz and Lickliter 1994; Lickliter et al. 1996; Morrongiello et al. 1998). However, not all forms of facilitation develop early. Infants do not exhibit multisensory facilitation of reflexive head and eye movements for spatial localization until about 8 months of age (Neil et al. 2006), and multisensory coactivation during a simple audiovisual detection task does not occur until 8 years of age in most children (Barutchu et al. 2009, 2010). Recent studies suggest that human infants can transfer information gleaned from one sense to another (e.g., Streri 2003; Streri et al. 2004). For example, 1-month-old infants can visually recognize an object they have previously explored orally (Gibson and Walker 1984; Meltzoff and Borton 1979) and 2-month-old infants can visually recognize an object they have previously felt (Rose 1981; Streri et al. 2008). However, many of these studies show an asymmetry in the transfer (Sann
and Streri 2007; Streri 2003; Streri et al. 2008) or a partial dominance of one modality over another (Lewkowicz 1988a, 1988b), supporting the idea that, even when multimodal skills are present, they are not necessarily fully mature. Recent results (Bremner et al. 2008a, 2008b) on the representation of peripersonal space support the presence of two distinct mechanisms in sensory integration with different developmental trends: the first, relying principally on visual information, is present during the first 6 months; the second, incorporating information about hand and body posture with vision, develops only after 6.5 months of age. Over the years, the majority of multisensory studies in infants and children have investigated the development of multisensory matching, transfer, and facilitation abilities, whereas few have investigated the development of multisensory integration. Those few that did investigate multisensory integration in school-age children point to unimodal dominance rather than integration abilities (Hatwell 1987; Klein 1966; McGurk and Power 1980; Misceo et al. 1999).
18.2 NEUROPHYSIOLOGICAL EVIDENCE FOR DEVELOPMENT OF MULTIMODAL INTEGRATION

There is now firm neurophysiological evidence for multimodal integration. Many studies have demonstrated that the midbrain structure superior colliculus is involved in integrating information between modalities and in initializing and controlling the localization and orientation of motor responses (Stein et al. 1993). This structure is highly sensitive to input from the association cortex (Stein 2005), and the inactivation of this input impairs the integration of multisensory signals (Jiang and Stein 2003). Maturation of multisensory responses depends strongly on environmental experience (Wallace and Stein 2007): after visual deprivation (Wallace et al. 2004), the responses of multisensory neurons are atypical, and fail to show multisensory integration. A typically developed superior colliculus is structured in layers. Neurons in the superficial layers are unisensory, whereas those in the deeper layers respond to the combination of visual, auditory, and tactile stimuli (Stein et al. 2009b). Neurons related to a specific sensory modality have their own spatial map that is spatially registered with the maps of the neurons involved in the processing of other modalities (Stein et al. 1993, 2009b). These multisensory neurons respond to spatiotemporally coincident multisensory stimuli with a multisensory enhancement (more impulses than evoked by the strongest stimulus; Meredith and Stein 1986). Multisensory enhancement has been observed in several different species of animals (in the superior colliculus of the cat, hamster, guinea pig, and monkey, as well as in the cortex of the cat and monkey; Meredith and Stein 1986; Stein et al. 2009a, 2009b; Wilkinson et al. 1996), and functional magnetic resonance imaging and behavioral studies support the existence of similar processes in humans (e.g., Macaluso and Driver 2004). Multimodal responses of collicular neurons are not present at birth but develop late in cats and monkeys (Stein et al. 1973; Wallace and Stein 1997, 2001). For example, in the cat superior colliculus, neurons are somatosensory at birth (Stein et al. 1973), whereas auditory and visual neurons appear only postnatally. Initially, these neurons respond well to either somatic or auditory or visual signals. Enhanced multisensory responses emerge many weeks later, and their development depends on both experience and input from association cortex (Wallace and Stein 2001). Behavioral data suggest that the visual modality is principally involved in the processing of the spatial domain and the auditory system in the temporal domain. Most neurophysiological studies have investigated spatial rather than temporal processing. However, development of temporal properties may be interesting, as the temporal patterns of stimulation can be perceived in the uterus before birth by the vestibular, tactile, and auditory senses. Indeed, neurophysiological studies suggest that somatosensory–audio multisensory neurons develop a few days after birth whereas multisensory neurons that also modulate visual information only appear a few weeks later (Stein et al. 1973). Thus, integration of temporal attributes of perception could develop before spatial attributes (such as location or orientation), which are not typically available prenatally.
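The enhancement criterion described above (a multisensory response exceeding that evoked by the strongest unisensory stimulus) is commonly quantified in this literature as a percentage gain over the best unisensory response. The following minimal Python sketch uses hypothetical firing rates; the function name and numbers are illustrative, not data from the studies cited:

```python
def enhancement_index(multisensory_rate, unisensory_rates):
    """Percentage by which the multisensory response exceeds the strongest
    unisensory response (positive values indicate enhancement)."""
    best = max(unisensory_rates)
    return 100.0 * (multisensory_rate - best) / best

# Hypothetical mean responses (impulses per trial) of one collicular neuron.
visual_alone, auditory_alone, audiovisual = 8.0, 5.0, 14.0
print(enhancement_index(audiovisual, [visual_alone, auditory_alone]))  # 75.0
```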
18.3 DEVELOPMENT OF CUE INTEGRATION IN SPATIAL NAVIGATION

When do human infants start integrating multisensory signals, and when does the integration become statistically optimal? Nardini et al. (2008) studied the reliance on multiple spatial cues for short-range navigation in children and adults. Navigation depends on both visual landmarks and self-generated cues, such as vestibular and proprioceptive signals generated from the movement of the organism in the environment. To measure and quantify the ability of adults and children to integrate this information, they first measured the precision for each modality and then observed the improvement in the bimodal condition. The subjects (adults and children aged 4 to 5 and 7 to 8 years) walked in a dark room with peripherally illuminated landmarks and collected a series of objects (1, 2, and 3 in Figure 18.1a). After a delay, they replaced the objects. Subjects were provided with two cues to navigation, visual landmarks (“moon,” “lightning bolt,” and “star” in Figure 18.1a) and self-motion. They recorded the distance between the participant’s responses and the correct location as well as root mean square errors for each condition, both for the two unimodal conditions—with the room in darkness (no landmarks; SM) and with visual landmarks present (LM) but subjects
FIGURE 18.1 (See color insert.) Use of multiple cues for navigation in adults and children. (a) Representation of the room in which subjects performed the task in the no-conflict condition. Starting from “start,” the subject picked up three numbered objects in sequence. Three visual landmarks (a “moon,” a “lightning bolt,” and a “star”) were also present in the room. (b) Representation of the room in which subjects performed the task in the conflict condition. Here, landmarks were rotated around the subject by 15°, from the white to the colored positions. (c) Mean standard deviation (SD) of participant responses for the three different conditions. (d) Curves report the means of functions that predict mean standard deviation (SD ±1 SE) from the integration model (in green) or the alternation model (in pink) for different age groups. (Reproduced from Nardini, M. et al., Curr. Biol., 18, 689–693, 2008. With permission.)
disoriented—and with both cues present (SM + LM). Figure 18.1c shows a clear developmental trend in the unimodal performance, with mean mislocalization thresholds decreasing with age, suggesting that navigation improves during development. More interestingly, whereas adults take advantage of multiple cue integration, the children do not. SM + LM thresholds were higher than LM thresholds for children in both age groups, whereas the adults showed lower thresholds in the two-cue condition (evidence of cross-sensory fusion). Nardini et al. (2008) also measured navigation in a conflict condition (Figure 18.1b), in which landmarks were rotated by 15° after the participants had collected the objects. They considered two models, one in which the cues were weighted by the inverse of variance and integrated (green line in Figure 18.1d), and one in which subjects alternate between the two cues (pink line in Figure 18.1d). Although the integration model predicted adult performance in the conflict condition, 4- to 5- and 7- to 8-year-olds followed the alternation model rather than the integration model. Although adults clearly integrate multiple cues optimally for navigation, young children do not, alternating between cues from trial to trial. These results suggest that the two individual spatial representations develop before they are integrated within a common unique reference frame. This study suggests that optimal multisensory integration of spatial cues for short-range navigation emerges late in development.
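The two candidate models can be compared quantitatively from the single-cue precisions alone. A minimal Python sketch is given below; the error values are invented for illustration (only loosely inspired by the magnitudes in Figure 18.1c) and the function names are not from the original study:

```python
import math

def integration_sd(sd_a, sd_b):
    """Optimal (inverse-variance weighted) fusion: the predicted SD is lower
    than either single-cue SD."""
    return (sd_a**-2 + sd_b**-2) ** -0.5

def alternation_sd(sd_a, sd_b, p_a=0.5, conflict=0.0):
    """Trial-by-trial switching: responses form a mixture of the two single-cue
    distributions (optionally separated by a cue conflict), so the predicted SD
    lies between the single-cue SDs and never beats the better cue."""
    p_b = 1.0 - p_a
    mean = p_b * conflict   # cue A centered on 0, cue B on `conflict`
    var = p_a * (sd_a**2 + mean**2) + p_b * (sd_b**2 + (conflict - mean)**2)
    return math.sqrt(var)

sd_self_motion, sd_landmarks = 60.0, 45.0   # hypothetical single-cue SDs (cm)
print(integration_sd(sd_self_motion, sd_landmarks))  # 36.0: better than either cue
print(alternation_sd(sd_self_motion, sd_landmarks))  # ~53.0: between the two cues
```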
18.4 DEVELOPMENT OF AUDIOVISUAL CUE INTEGRATION

Audiovisual integration is fundamental for many tasks, such as orientation toward novel stimuli and understanding speech in noisy environments. As the auditory system starts to develop before vision, commencing in utero, it is interesting to examine when the two senses are integrated. Neil et al. (2006) measured audiovisual facilitation of spatial localization in adults and 1- to 10-month-old infants, by comparing the response latency and accuracy of head and eye turns toward unimodal (visual or auditory) and bimodal stimuli. Subjects were required to orient toward a stimulus (a red vertical line or a sustained burst of white noise, or both) presented at one of five different locations. For all stimuli, orientation latencies decreased steadily with age, from about 900 ms at 0 to 2 months to 200 ms for adults. The response to the bimodal stimulus was faster than for either unimodal stimulus at all ages, but only for adults and for 8- to 10-month-old infants was the “race model” (the standard probability summation model of reaction times) consistently violated, implying neural integration. For young infants, the results were well explained by independent probability summation, without any evidence that the audiovisual signals were combined in any physiological way. Only after 8 to 10 months did the faster bimodal response suggest that behavioral summation had occurred. Although multisensory facilitation of reflexive audiovisual eye and head movements for spatial localization has been found to develop at about 10 months of age (Neil et al. 2006), recent findings (Barutchu et al. 2009, 2010) report a different developmental trend for multisensory facilitation of nonreflexive audiovisual motor responses. Barutchu et al. (2009) studied motor reaction times during an audiovisual detection task and found that multisensory facilitation is still immature by 10 to 11 years of age. In fact, only at around 7 years of age did the facilitation start to become consistent with the coactivation model (Barutchu et al. 2009). These authors suggest that the difference observed in these two trends can depend on the development of the process being facilitated by multisensory integration. Thus, the maturity of the processes being facilitated during reflexive eye and head movements precedes the maturity of the processes being facilitated during more complex detection motor tasks (Barutchu et al. 2009) or speech perception (Massaro 1987). Also, Tremblay et al. (2007) showed that different audiovisual illusions seem to develop at different rates. They investigated the development of audiovisual abilities in two different tasks: one involving a speech illusion and one a nonspeech illusion. They found that although audiovisual speech illusions varied as a function of age and did not develop until 10 years of age, nonspeech illusions were the same across ages, and already present at 5 years of age. Later in the chapter, we shall suggest a different interpretation of these results, one of “cross-modal calibration,” which we believe could stabilize at different ages for different tasks.
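The race-model benchmark invoked above can be sketched in a few lines of Python. The reaction-time distributions below are simulated with purely illustrative numbers (not infant data), and the helper names are assumptions: the sketch builds the independent probability-summation prediction and Miller's race-model bound from two unimodal samples, then flags times at which a bimodal distribution exceeds the bound, the signature usually taken to imply genuine neural integration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical unimodal and bimodal reaction-time samples (ms).
rt_visual = rng.normal(450, 80, 5000)
rt_auditory = rng.normal(480, 90, 5000)
rt_bimodal = rng.normal(400, 70, 5000)

def cdf(samples, t):
    """Empirical cumulative probability of responding by time t."""
    return np.mean(samples <= t)

times = np.arange(250, 750, 10)
# Independent race (probability summation): whichever unimodal process
# finishes first triggers the response.
race_prediction = [cdf(rt_visual, t) + cdf(rt_auditory, t)
                   - cdf(rt_visual, t) * cdf(rt_auditory, t) for t in times]
# Miller's bound: any race model predicts F_AV(t) <= F_V(t) + F_A(t).
miller_bound = [min(1.0, cdf(rt_visual, t) + cdf(rt_auditory, t)) for t in times]
violations = [cdf(rt_bimodal, t) > bound for t, bound in zip(times, miller_bound)]
print(any(violations))  # True: bimodal CDF exceeds the bound at short latencies
```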
18.5 SENSORY EXPERIENCE AND DEPRIVATION INFLUENCE DEVELOPMENT OF MULTISENSORY INTEGRATION

Animal studies have shown that deprivation of cross-modal cues compromises the development of normal multisensory responses. For example, Wallace et al. (2004) found that cats deprived of audiovisual and visuo–tactile experience showed no multisensory response enhancement in the superior colliculus. Similarly, patients with specific sensory deficits, such as congenital deafness or blindness later restored by surgery, are ideal models to investigate the effects of sensory experience on multisensory integration in humans. For example, Putzar et al. (2007) tested patients born with dense congenital binocular cataracts (removed at 2 or 3 months) on a nonverbal audiovisual task as well as on audiovisual speech perception. This group actually performed better than a control group on the nonverbal task, where they were required to make temporal judgments of visual stimuli presented together with auditory distractors, suggesting that visuo–auditory “binding” was weaker in patients who had been visually deprived for the first few months of life. Consistent with this, they performed worse than controls in the speech experiment, where fusion of spatial and temporal visuo–auditory perceptual aspects assisted the task. These results highlight the importance of adequate sensory input during early life for the development of multisensory interactions (see also Gori et al. 2010; Hötting and Röder 2009; Röder et al. 2004, 2007). Also, auditory deprivation can influence the perception of multisensory stimuli, notably speech perception, which involves the interaction of temporal and spatial visual and auditory signals. The clearest example of this is the McGurk effect (McGurk and MacDonald 1976): subjects listening to a spoken phoneme (e.g., /pa/) and watching a speaker pronounce another phoneme (such as /ka/) will report hearing an in-between phoneme, /ta/. This compelling illusion occurs both for adults and for young children (e.g., Bergeson and Pisoni 2003). Schorr et al. (2005) took advantage of this robust illusion to study bimodal fusion in children born deaf, with hearing restored by cochlear implants. They first replicated the illusion in a control group of children with normal hearing, of whom 57% showed bimodal fusion on at least 70% of trials, perceiving /ta/ when /pa/ was pronounced and /ka/ observed on video (Figure 18.2). Of those who did not show fusion, the majority showed a clear
FIGURE 18.2 McGurk effect in children with cochlear implants compared with age-matched controls. The phoneme /pa/ was played to subjects while they observed a video of lips pronouncing /ka/, and they reported the phoneme they perceived. Black bars show the percentage of each group reporting fusion (/ta/) on at least 70% of trials; light gray bars show auditory dominance (/pa/) and dark gray bars show visual dominance (/ka/). For controls, more than half showed bimodal fusion (McGurk effect), and of those that did not, most showed auditory dominance. For children with early cochlear implants (before 30 months of age), the majority likewise showed fusion, but those who did not showed visual dominance. For children with later implants, almost all showed visual dominance.
auditory dominance. Among the group who had received implants at an early age (before 30 months), a similar proportion (56%) perceived the fused phoneme, suggesting that bimodal fusion was occurring. However, the majority of those who did not perceive the fused phoneme perceived the visual /ka/ rather than the auditory /pa/ that the control children perceived. For late implants, however, only one child showed cross-modal fusion; all the others showed visual dominance. These results suggest that cross-modal fusion is not innate, but needs to be learned. The group of hearing-restored children who received the implant after 30 months of age showed no evidence of cross-modal fusion, with the visual phoneme dominating perception. Those with early implants demonstrate a remarkable plasticity in acquiring bimodal fusion, suggesting that there is a sensitive period for the development of bimodal integration of speech. It is interesting that in normal-hearing children, sound dominated the multimodal perception, whereas vision dominated in all the cochlea-implanted children, both early and late implants. It is possible that this dominance can be explained by reliability-based integration. Speech is a complex temporal task in audition and a spatiotemporal task in vision. Although performance has not yet been measured (to our knowledge), it is reasonable to suppose that in normal-hearing children, auditory perception is more precise, explaining the dominance. What about the cochlea-implanted children? Is their auditory precision worse than their visual precision, so that the visual dominance is the result of ideal fusion? Or is their auditory perception actually better than their visual perception at this task, so that the visual dominance is not the optimal solution? In this case, it may be that vision remains the most robust sense, even if not the most precise. This would be interesting to investigate, perhaps in a simplified situation, as has been done for visuo-haptic judgments (see the following section).
18.6 DEVELOPMENT OF VISUO-HAPTIC INTEGRATION

One of the earliest studies to investigate the capacity to integrate information between perceptual systems was that of Ernst and Banks (2002), who investigated the integration of visual and haptic estimates of size in human adults. Their results were consistent with a simple but powerful model that proposes that visual and haptic inputs are combined in an optimal fashion, maximizing the precision of the final estimate (see also the chapter by Marc Ernst). This maximum likelihood estimate (MLE) model combines sensory information by summing the independent estimates from each modality, after weighting each estimate by its reliability, which is in turn inversely proportional to the variance of the presumed underlying noise distribution:
\[
\hat{S}_{VH} = w_V \hat{S}_V + w_H \hat{S}_H \tag{18.1}
\]
where $\hat{S}_{VH}$ is the combined visuo-haptic estimate, and $\hat{S}_V$ and $\hat{S}_H$ are the independent visual and haptic estimates. The weights $w_V$ and $w_H$ sum to unity and are inversely proportional to the variance ($\sigma^2$) of the presumed underlying noise distribution:
\[
w_V = \frac{\sigma_V^{-2}}{\sigma_H^{-2} + \sigma_V^{-2}}, \qquad w_H = \frac{\sigma_H^{-2}}{\sigma_H^{-2} + \sigma_V^{-2}} \tag{18.2}
\]
The MLE prediction for the visuo-haptic threshold ($\sigma_{VH}$) is given by
\[
\sigma_{VH}^{-2} = \sigma_V^{-2} + \sigma_H^{-2} \tag{18.3}
\]
where $\sigma_V$ and $\sigma_H$ are the visual and haptic unimodal thresholds. The improvement is greatest (a factor of $\sqrt{2}$) when $\sigma_V = \sigma_H$.
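A minimal numerical sketch of Equations 18.1 through 18.3 in Python is given below; the sizes and thresholds are invented for illustration and the function names are assumptions, not values or code from the experiments described in this chapter:

```python
def mle_weights(sd_v, sd_h):
    """Inverse-variance weights of Equation 18.2 (they sum to 1)."""
    w_v = sd_v**-2 / (sd_v**-2 + sd_h**-2)
    return w_v, 1.0 - w_v

def mle_combine(s_v, s_h, sd_v, sd_h):
    """Combined visuo-haptic estimate (Equation 18.1) and the predicted
    visuo-haptic threshold (Equation 18.3)."""
    w_v, w_h = mle_weights(sd_v, sd_h)
    s_vh = w_v * s_v + w_h * s_h
    sd_vh = (sd_v**-2 + sd_h**-2) ** -0.5
    return s_vh, sd_vh

# Hypothetical conflict trial: vision reports a 58 mm block, touch a 52 mm
# block, and vision is twice as precise as touch.
print(mle_combine(58.0, 52.0, sd_v=1.5, sd_h=3.0))
# -> (56.8, ~1.34): the percept is pulled toward the more reliable (visual)
#    estimate, and the predicted threshold is better than either alone.
```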
This model has been spectacularly successful in predicting human multimodal integration for various tasks, including visuo-haptic size judgments (Ernst and Banks 2002), audiovisual position judgments (Alais and Burr 2004), and visual–tactile integration of sequences of events (Bresciani and Ernst 2007). Gori et al. (2008) adapted the technique to study the development of reliability-based cross-sensory integration of two aspects of form perception: size and orientation discrimination. The size discrimination task (top left icon of Figure 18.3) was a low-technology, child-friendly adaptation of Ernst and Banks’ technique (Ernst and Banks 2002), where visual and haptic information were placed in conflict with each other to investigate which dominates perception under various degrees of visual degradation. The stimuli were physical blocks of variable height, displayed in
FIGURE 18.3 (See color insert.) Development of cross-modal integration for size and orientation discrimination. Illustration of experimental setup for size (a) and orientation (d) discrimination. Sample psychometric functions for four children, with varying degrees of cross-modal conflict. (b and c) Size discriminations: SB age 10.2 (b); DV age 5.5 (c); (e and f) orientation discrimination: AR age 8.7 (e); GF age 5.7 (f). Lower color-coded arrows show MLE predictions, calculated from threshold measurements (Equation 18.1). Black-dashed horizontal lines show 50% performance point, intersecting with curves at their PSE (shown by short vertical bars). Upper color-coded arrows indicate size of haptic standard in size condition (b and c) and orientation of visual standard in orientation condition (e and f). Older children generally follow the adult pattern, whereas 5-year-olds were dominated by haptic information for size task, and visual information for orientation task. For size judgment, amount of conflict was 0 for red symbols, +3 mm (where plus means vision was larger) for blue symbols, and –3 mm for green symbols. For orientation, same colors refer to 0° and ±4°.
The blocks were displayed in front of an occluding screen for visual judgments, behind the screen for haptic judgments, or both in front and behind for bimodal judgments. All trials involved a two-alternative forced-choice task in which the subject judged whether a standard block seemed taller or shorter than a probe of variable height. For the single-modality trials, one stimulus was the standard, always 55 mm high, the other the probe, of variable height. The proportion of trials in which the probe was judged taller than the standard was computed for each probe height, yielding psychometric functions. The crucial condition was the dual-modality condition, in which visual and haptic sizes of the standard were in conflict, with the visual block 55 + Δ mm and the haptic block 55 – Δ mm (Δ = 0 or ±3 mm). The probe was composed of congruent visual and haptic stimuli of variable heights (48–62 mm). After validating the technique with adults, demonstrating that optimal cross-modal integration also occurred under these conditions, we measured haptic, visual, and bimodal visuo-haptic size discrimination in 5- to 10-year-old children. Figure 18.3 shows sample psychometric functions for the dual-modality measurements, fitted with cumulative Gaussian functions whose median estimates the point of subjective equality (PSE) between the probe and standard. The pattern of results for the 10-year-old (Figure 18.3b) was very much like that for the adult: negative values of Δ caused the curves to shift leftward, positive values caused them to shift rightward. That is to say, the curves followed the visual standard, suggesting that visual information was dominating the match, as the MLE model suggests it should, given that the visual thresholds were lower than the haptic thresholds. This is consistent with the MLE model (indicated by color-coded arrows below the abscissa): the visual judgment was more precise, and should therefore dominate. For the 5-year-olds (Figure 18.3c), however, the results were completely different: the psychometric functions shifted in the direction opposite to that of the 10-year-olds, following the bias of the haptic stimulus. The predictions (color-coded arrows under the abscissa) are similar for both the 5- and 10-year-olds, as for both groups of children visual thresholds were much lower than haptic thresholds, so the visual stimuli should dominate; but for the 5-year-olds the reverse holds, with the haptic standard dominating the match. These data show that for size judgments, touch dominates over vision. But is this universally true? We repeated the experiments with orientation discrimination, another basic spatial task that could, in principle, be computed by the neural hardware of the primary visual cortex (Hubel and Wiesel 1968). Subjects were required to discriminate which bar of a dual presentation (standard and probe) was rotated more counterclockwise. As with the size discriminations, we first measured thresholds in each separate modality, then visuo-haptically, with varying degrees of conflict (Δ = 0 or ±4°). Figure 18.3e and f show sample psychometric functions for the dual-modality measurements for an 8-year-old and a 5-year-old child. As with the size judgments, the pattern of results for the 8-year-old was very much like that for the adult, with the functions of the three different conflicts (Figure 18.3e) falling very much together, as predicted from the single-modality thresholds by the MLE model (arrows under the abscissa).
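As a hedged illustration of the MLE predictions used throughout this section, the short Python sketch below computes the reliability-based weights (Equation 18.2), the predicted bimodal threshold (Equation 18.3), and the predicted PSE for the conflict conditions (Equation 18.1). All threshold values are illustrative placeholders rather than the measured data.

```python
# Minimal sketch of the MLE cue-combination model (Equations 18.1-18.3).
# The unimodal thresholds below are illustrative placeholders, not measured values.

def mle_weights(sigma_v, sigma_h):
    """Reliability-based weights (Equation 18.2): proportional to 1/sigma^2, summing to unity."""
    rel_v, rel_h = sigma_v ** -2, sigma_h ** -2
    return rel_v / (rel_v + rel_h), rel_h / (rel_v + rel_h)

def mle_threshold(sigma_v, sigma_h):
    """Predicted bimodal threshold (Equation 18.3): sigma_VH^-2 = sigma_V^-2 + sigma_H^-2."""
    return (sigma_v ** -2 + sigma_h ** -2) ** -0.5

def predicted_pse(standard, delta, w_v, w_h):
    """Predicted PSE for a conflict trial (Equation 18.1), with the visual standard at
    standard + delta and the haptic standard at standard - delta."""
    return w_v * (standard + delta) + w_h * (standard - delta)

# Illustrative unimodal size thresholds (mm), with vision assumed more precise than touch.
sigma_v, sigma_h = 2.0, 4.0
w_v, w_h = mle_weights(sigma_v, sigma_h)
print(f"visual weight = {w_v:.2f}, haptic weight = {w_h:.2f}")
print(f"predicted bimodal threshold = {mle_threshold(sigma_v, sigma_h):.2f} mm")

# Predicted PSEs for the three conflict conditions of the size task (delta = 0, +/-3 mm).
for delta in (-3.0, 0.0, 3.0):
    pse = predicted_pse(55.0, delta, w_v, w_h)
    print(f"delta = {delta:+.0f} mm -> predicted PSE = {pse:.1f} mm")
```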
Again, however, the pattern of results for the 5-year-old was quite different (Figure 18.3f). Although the MLE model predicts similar curves for the three conflict conditions, the psychometric functions very closely followed the visual standards (indicated by the arrows above the graphs), the exact opposite pattern to that observed for size discrimination. Figure 18.4 reports PSEs for children of all ages for the three conflict conditions, plotted as a function of the MLE predictions from single-modality discrimination thresholds. If the MLE prediction held, the data should fall along the black-dotted equality line (as in the bottom graph, which reports the adults' results). For adults this was so, for both size and orientation. However, at 5 years of age, the story was quite different. For the size discriminations (Figure 18.4a), not only did the measured PSEs not follow the MLE predictions, they varied inversely with Δ (following the haptic standard), lining up almost orthogonally to the equality line. Similarly, the data for the 6-year-olds did not follow the prediction, but tended to be more scattered rather than ordered orthogonally to the prediction line. By 8 years of age, the data begin
FIGURE 18.4 (See color insert.) Summary data showing PSEs for all subjects for all conflict conditions, plotted against predictions, for size (a) and orientation (b) discriminations. Different colors refer to different subjects within each age group. Symbol shapes refer to level of cross-sensory conflict (Δ): squares, 3 mm or 4°; circles, –3 mm or –4°; upright triangles, 0; diamonds, 2 mm; inverted triangles, –2 mm. Closed symbols refer to no-blur condition for size judgments, and vertical orientation judgments; open symbols to modest blur (screen at 19 cm) or oblique orientations; cross in symbols to heavy blur (screen at 39 cm).
to follow the prediction, and by age 10 the data fall along it well, similar to the adult pattern of results. Figure 18.5a shows how thresholds vary with age for the various conditions. For both tasks, visual and haptic thresholds decreased steadily up to 10 years of age (orientation more so than size). The light-blue symbols show the thresholds predicted from the MLE model (Equation 18.3). For the adults, the predicted improvement was close to the best single-modality threshold, and indeed, the dual-modality thresholds were never worse than the best single-modality threshold. For the 5-year-old children, the results were quite different, with the dual-modality thresholds following the worst single-modality thresholds. For the size judgment, they followed the haptic thresholds, which were not only much higher than the MLE predictions, but twice the best single-modality (visual) thresholds. This result shows that integration was not only suboptimal, it was not even a close approximation such as "winner take all"; indeed, it reflects a "loser take all" strategy. This reinforces the PSE data in showing that these young children do not integrate cross-modally in a way that benefits perceptual discrimination. Figure 18.5b plots the development of theoretical (violet symbols) and observed (black symbols) visual and haptic weights. For both size and orientation judgments, the theoretical haptic weights (calculated from thresholds) were fairly constant over age, 0.2 to 0.3 for size and 0.3 to 0.4 for
FIGURE 18.5 (See color insert.) Development of thresholds and visuo-haptic weights. Average thresholds (geometric means) for haptic (red symbols), visual (green), and visuo-haptic (dark blue) size and orientation discrimination, together with average MLE predictions (light blue), as a function of age. Predictions were calculated individually for each subject and then averaged. Tick-labeled “blur” shows thresholds for visual stimuli blurred by a translucent screen 19 cm from blocks. Error bars are ±1 SEM. Haptic and visual weights for size and orientation discrimination, derived from thresholds via MLE model (violet circles) or from PSE values (black squares). Weights were calculated individually for each subject, and then averaged. After 8 to 10 years, the two estimates converged, suggesting that the system then integrates in a statistically optimal manner.
orientation. However, the haptic weights necessary to predict the 5-year-old PSE size data are 0.6 to 0.8, far, far greater than the prediction, implying that these young children give far more weight to touch for size judgments than is optimal. Similarly, the haptic weights necessary to predict the orientation judgments are around 0, far less than the prediction, suggesting that these children base orientation judgments almost entirely on visual information. In neither case does anything like optimal cue combination occur.
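Conversely, the "observed" weights plotted in Figure 18.5b can be obtained by inverting the same model. Assuming the linear combination of Equation 18.1, a conflict Δ shifts the predicted PSE by Δ(wV − wH), so the haptic weight implied by a measured PSE follows directly. The sketch below illustrates this inversion; the PSE values are invented for illustration and are not the published data.

```python
# Hedged sketch: recovering an empirical haptic weight from a measured PSE in a conflict
# condition, assuming Equation 18.1 with visual standard = S + delta and haptic standard
# = S - delta, which gives PSE = S + delta * (w_v - w_h) and hence w_h = (1 - shift/delta) / 2.

def haptic_weight_from_pse(pse, standard, delta):
    """Empirical haptic weight implied by a measured PSE (requires delta != 0)."""
    shift = pse - standard
    return 0.5 * (1.0 - shift / delta)

# Illustrative PSEs for the size task with delta = +3 mm (standard = 55 mm):
# a vision-dominated observer matches near 58 mm, a touch-dominated observer near 52 mm.
examples = {"vision-dominated": 57.5, "intermediate": 56.2, "touch-dominated": 52.5}
for label, pse in examples.items():
    w_h = haptic_weight_from_pse(pse, standard=55.0, delta=3.0)
    print(f"{label}: PSE = {pse} mm -> implied haptic weight = {w_h:.2f}")
```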
18.7 CALIBRATION BY CROSS-MODAL COMPARISON?

Our experiments showed that before 8 years of age, children do not integrate information between senses; rather, one sense dominates the other. Which sense dominates depends on the situation: for size judgments it is touch, for orientation judgments it is vision, so neither seems to act as a universal "gold standard." Given the overwhelming body of evidence for optimal integration in adults, the finding that children do not integrate in an optimal manner was unexpected, and suggests that multisensory interaction in childhood is fundamentally different from that in adults. How could it differ? Although most recent work on multisensory interactions has concentrated on sensory fusion, the efficient combination of information from all the senses, an equally important but somewhat neglected potential function is calibration. In his 300-year-old "Essay towards a new theory of vision," Bishop George Berkeley (1709) correctly observed that vision has no direct access to attributes such as distance, solidity, or "bigness." These can be acquired visually only after they have been associated with touch (proposition 45): in other words, "touch educates vision," perhaps better expressed as "touch calibrates vision." Calibration is probably necessary at all ages, but during the early years of life, when
FIGURE 18.6 Accuracy and precision. Accuracy is defined as closeness of a measurement to its true physical value (its veracity), whereas precision is degree of reproducibility or repeatability between measurements, usually measured as standard deviation of distribution. “Target analogy” shows high precision but poor accuracy (left), and good average accuracy but poor precision (right). The archer would correct his or her aim by calibrating sights of the bow. Similarly, perceptual systems can correct for a bias by cross-calibration between senses.
children are effectively "learning to see," calibration may be expected to be more important. It is during these years that limbs are growing rapidly and eye length and eye separation are increasing, all necessitating constant recalibration between sight and touch. Indeed, many studies suggest that the first 8 years of life correspond to the critical period of plasticity in humans for many properties, such as binocular vision (Banks et al. 1975) and acquiring accent-free language (Doupe and Kuhl 1999). So before 8 years of age, calibration may be more important than integration. The advantages of fusing sensory information are probably more than offset by those of keeping the evolving system calibrated, and using one system to calibrate another precludes the fusion of the two. Accepting Berkeley's idea that vision must be calibrated by touch might therefore explain why size discrimination thresholds are dominated by touch, even though touch is less precise than vision. But why are orientation thresholds dominated by vision? Perhaps Berkeley was not quite right, and touch does not always calibrate vision; rather, the more robust sense for a particular task is the calibrator. In the same way that the more precise sense has the highest weight for sensory fusion, perhaps the more accurate sense is used for calibration. The more accurate sense need not be the more precise, but it is probably the more robust. Accuracy is defined in absolute terms, as the distance from physical reality, whereas precision is a relative measure, related to the reliability or repeatability of the results (see Figure 18.6). It is therefore reasonable that for size, touch will be more accurate, as vision cannot code size directly, but only via a complex calculation combining retinal size and an estimate of distance. Orientation, on the other hand, is coded directly by primary visual cortex (Hubel and Wiesel 1968), and calculated from touch only indirectly via complex coordinate transforms.
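The accuracy/precision distinction can be made concrete with a small simulation, sketched below under assumed noise parameters: a biased but repeatable sense is precise yet inaccurate, whereas an unbiased but noisy sense is accurate yet imprecise.

```python
# Minimal simulation of the accuracy/precision distinction in Figure 18.6.
# Accuracy is indexed by the bias (mean error from the true value); precision by the
# standard deviation of repeated measurements. All parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
true_value = 55.0  # e.g., object size in mm

# "High precision, low accuracy": small spread, large systematic bias (needs recalibration).
biased_sense = true_value + 4.0 + rng.normal(0.0, 0.5, size=1000)
# "Low precision, high accuracy": large spread, no systematic bias.
noisy_sense = true_value + rng.normal(0.0, 4.0, size=1000)

for name, samples in [("high precision / low accuracy", biased_sense),
                      ("low precision / high accuracy", noisy_sense)]:
    bias = samples.mean() - true_value   # accuracy: distance from the physical value
    sd = samples.std(ddof=1)             # precision: repeatability of the estimates
    print(f"{name}: bias = {bias:+.2f} mm, sd = {sd:.2f} mm")
```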
18.8 HAPTIC DISCRIMINATION IN BLIND AND LOW-VISION CHILDREN: DISRUPTION OF CROSS-SENSORY CALIBRATION?

If the idea of calibration is correct, then early deficits in one sense should affect the function of other senses that rely on it for calibration. Specifically, haptic impairment should lead to poor visual discrimination of size and visual impairment to poor haptic discrimination of orientation. We have tested and verified the latter of these predictions (Gori et al. 2010). In 17 congenitally visually impaired children (aged 5–19 years), we measured haptic discrimination thresholds for both orientation and size, and found that orientation, but not size, thresholds were impaired. Figure 18.7 plots size against orientation thresholds, both normalized by those of age-matched normally sighted children.
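A minimal sketch of this normalization is given below, assuming (as for the thresholds in Figure 18.5) that control thresholds are summarized by their geometric mean; all numbers are invented for illustration. Values above 1 indicate worse-than-control performance, values below 1 better-than-control performance.

```python
# Hedged sketch of threshold normalization by age-matched controls (cf. Figure 18.7).
# The control summary statistic (geometric mean) and all values are assumptions.
import numpy as np

def normalized_threshold(child_threshold, control_thresholds):
    """Ratio of a child's threshold to the geometric mean of age-matched control thresholds."""
    control_geomean = float(np.exp(np.mean(np.log(control_thresholds))))
    return child_threshold / control_geomean

# Example: one visually impaired child versus three age-matched sighted controls.
orientation_ratio = normalized_threshold(8.0, [3.5, 4.0, 4.5])   # haptic orientation task
size_ratio = normalized_threshold(3.0, [3.6, 4.2, 3.9])          # haptic size task
print(f"normalized orientation threshold = {orientation_ratio:.2f}")  # > 1: impaired
print(f"normalized size threshold = {size_ratio:.2f}")                # < 1: better than controls
```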
FIGURE 18.7 Thresholds for orientation discrimination, normalized by age-matched controls, plotted against normalized size thresholds, for 17 unsighted or low-vision children aged between 5 and 18 years. Most points lie in lower-right quadrant, implying better size and poorer orientation discrimination. Arrows refer to group averages, 2.2 ± 0.3 for orientation and 0.8 ± 0.06 for size. Star in lower-left quadrant is the acquired low-vision child. (Reprinted from Gori, M. et al., Curr. Biol., 20, 223–5, 2010. With permission.)
Orientation discrimination thresholds were all worse than those of the age-matched controls (>1), on average twice as high, whereas size discrimination thresholds were generally better than the controls (<1).

(VRTL – VRnoT) does not entail this confound, with the mere effect of visual and tactile stimulation subtracting out in the interaction. Our results consistently showed that nonpredictive, task-irrelevant tactile stimuli on the same side as the visual target can boost activity in occipital visual cortex contralateral to the target side (e.g., Macaluso et al. 2000b). Figure 25.3a shows an example of this cross-modal, stimulus-driven, and spatially specific effect in the visual cortex. In this experiment, the visual target was delivered equiprobably in the left or right visual hemifield near to the subject's face. Task-irrelevant nonpredictive touch consisted of air puffs presented equiprobably on the left or right side of the forehead, in close spatial correspondence with the position of the visual stimuli on each side. The task of the subject was to discriminate the "up/down" elevation of the visual target (two LEDs were mounted on each side). The test for hemifield-specific effects of cross-modal attention (i.e., the interaction between the position of the visual stimulus and the spatial congruence of the bimodal stimulation; for more details on this topic, see also Zimmer and Macaluso 2007) revealed increased activation in the left occipital cortex when both vision and touch were on the right side of space, and activation in the right occipital cortex for spatially congruent (same-side) vision and touch on the left side (see Figure 25.3a). Accordingly, task-irrelevant touch can affect processing in the visual cortex in a spatially specific and fully stimulus-driven manner. This is consistent with the hypothesis that spatial information about one modality (e.g., touch) can be transmitted to anatomically distant areas that process stimuli in a different modality (e.g., occipital visual cortex), and that this can occur irrespective of strategic, endogenous task requirements (see also McDonald and Ward 2000; Kennett et al. 2001; Kayser et al. 2005; Teder-Salejarvi et al. 2005; related findings about stimulus-driven cross-talk between areas processing different modalities are also discussed later in this book). The finding of spatially specific cross-modal influences of touch in visual areas is also remarkable because the visual cortex registers the stimulus position in a retino-centered frame of reference, whereas the position of touch is initially registered in a body-centered frame of reference. Thus, the question arises as to whether these side-specific effects of multisensory spatial congruence truly reflect the alignment of visual and tactile stimuli in external space or rather merely reflect an overall hemispheric bias. Indeed, a congruent VRTR stimulus entails a double stimulation of the left hemisphere, whereas on incongruent VRTL trials the two stimuli will initially activate opposite hemispheres (see also Kinsbourne 1970, on hemispheric biases in spatial attention). We dissociated the influence of hemisphere versus external location by manipulating the direction of gaze with respect to the hand position (Macaluso et al. 2002a). Tactile stimuli were always delivered to the right hand, which was positioned centrally. When subjects fixated on the left side, the right visual field stimulus was spatially aligned with touch, and both right touch and right vision projected to the left hemisphere.
However, when gaze was shifted to the right side, the left visual field stimulus was now spatially aligned with right touch, with vision and touch projecting initially to opposite hemispheres. The fMRI results showed that common location in external space, rather than common hemisphere, determined cross-modal influences in the occipital cortex. Hence, right-hand touch can boost the right visual field when the right hand is in the right visual field, but will boost the left visual field if a posture change puts the right hand in the left visual field (see also Kennett et al. 2001, for a related ERP study).
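As a hedged sketch of the statistical logic behind these spatially specific effects, the code below computes the side-by-congruence interaction contrast on four (invented) condition estimates for a left-hemisphere visual region; the main effects of visual and tactile stimulation cancel out in this contrast, which is why it avoids the confound mentioned above.

```python
# Hedged sketch of the 2 x 2 interaction contrast used to index hemifield-specific
# cross-modal effects: (visual target side) x (visuo-tactile spatial congruence).
# Condition labels follow the text (V = vision, T = touch, L/R = left/right); the beta
# values are illustrative placeholders, not real parameter estimates.

def side_by_congruence_interaction(vl_tl, vr_tr, vl_tr, vr_tl):
    """For a left-hemisphere visual region: ("right minus left visual target" on congruent
    trials) minus ("right minus left visual target" on incongruent trials)."""
    congruent_effect = vr_tr - vl_tl
    incongruent_effect = vr_tl - vl_tr
    return congruent_effect - incongruent_effect

# Example: left occipital cortex responding most when a right visual target is paired
# with right-side touch (spatially congruent trial).
betas = {"VL_TL": 0.5, "VR_TR": 3.0, "VL_TR": 1.0, "VR_TL": 1.5}
interaction = side_by_congruence_interaction(betas["VL_TL"], betas["VR_TR"],
                                             betas["VL_TR"], betas["VR_TL"])
print(f"side x congruence interaction = {interaction:.2f}")  # > 0: congruence boosts the response
```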
FIGURE 25.3 Stimulus-driven cross-modal spatial attention and interactions with endogenous control. (a) Stimulus-driven cross-modal influences in visual cortex. In this event-related fMRI study (unpublished data), subjects performed a visual discrimination task ("up/down" judgment) with visual stimuli presented in left or right hemifield near the forehead. Task-irrelevant touch was presented equiprobably on left or right side of the forehead, yielding spatially congruent trials (vision and touch on same side; e.g., both stimuli on right side, cf. top-central panel) and incongruent trials (vision and touch on opposite sides; e.g., vision on the right and touch on the left). Imaging data tested for interaction between position of visual target (left/right) and spatial congruence of bimodal stimulation (congruent/incongruent: e.g., testing for greater activation for right than left visual targets, in spatially congruent vs. incongruent trials). This revealed activity enhancement in occipital visual areas when a contralateral visual target was coupled with a spatially congruent task-irrelevant touch. For example, left occipital cortex showed greater activation comparing "right minus left visual targets," when touch was congruent vs. incongruent (see signal plot on left side: compare "bar 2 minus 1" vs. "bar 4 minus 3"); effectively yielding maximal activation of left occipital cortex when a right visual target was combined with right touch on same side (see bar 2, in same plot). (b) Stimulus-driven cross-modal influences and endogenous visuospatial attention. (From Zimmer, U. and Macaluso, E., Eur. J. Neurosci., 26, 1681–1691, 2007.) Also in this study, we indexed side-specific cross-modal influences by testing for the interaction between position of visual stimuli and spatial congruence of visuo-tactile input (see also Figure 25.3a; note that, for simplicity, panel b shows only "right-congruent" condition), but now with both vision and touch fully task-irrelevant. We assessed these cross-modal spatial effects under two conditions of endogenous visuospatial attentional load. In "High load" condition, subjects were asked to detect subtle changes of orientation of a grating patch presented above fixation. In "Low load" condition, they detected changes of luminance at fixation. fMRI results showed that activity in occipital cortex increased for spatially congruent visuo-tactile stimuli in contralateral hemifield, and that—critically—this occurred irrespective of load of visuospatial endogenous task. Accordingly, analogous effects of spatial congruence were found in "Low load" condition (bar 1 minus 2) and in "High load" condition (bar 3 minus 4, in each signal plot). V/T, vision/touch; L/R, left/right; Cong/Incong, congruent (VT on the same side)/incongruent (VT on opposite sides).
The finding that cross-modal influences in sensory-specific occipital cortex can take posture into account suggests that intermediate brain structures representing the current posture are also involved. Postural signals have been found to affect activity in many different regions of the brain, including fronto-parietal areas that also participate in attention control and multisensory processing (Andersen et al. 1997; Ben Hamed and Duhamel 2002; Boussaoud et al. 1998; Bremmer et al. 1999; Kalaska et al. 1997; Fasold et al. 2008). Hence, we can hypothesize that the fronto-parietal cortex may also take part in stimulus-driven multisensory attention control. In the visual modality, stimulus-driven control has been associated primarily with activation of the vFP network, including the TPJ and the IFG. These areas activate when subjects are cued to attend to one hemifield but the visual target appears on the opposite side (invalid trials), thus triggering a stimulus/target-driven shift of visuospatial attention (plus other task-related resetting processes; see below). We employed a variation of this paradigm to study stimulus-driven shifts of attention in vision and in touch (Macaluso et al. 2002c). A central informative cue instructed the subject to attend to one side. On 80% of the trials the target appeared on the attended side (valid trials), whereas in the remaining 20% of the trials the target appeared on the opposite side (invalid trials). Critically, the target could be either visual (LED near to the left/right hands, on each side) or tactile (air puff on the left/right hands). The modality of the target stimulus was randomized and unpredictable, thus subjects could not strategically prepare to perform target discrimination in one or the other modality. The dorsal FP network activated irrespective of cue validity, consistent with the role of this network in voluntary shifts of attention irrespective of modality (see also Wu et al. 2007). The direct comparison of invalid versus valid trials revealed activation of the vFP (TPJ and IFG), both for invalid visual targets and for invalid tactile targets. This demonstrates that both visual and tactile target stimuli at the unattended location can trigger stimulus-driven reorienting of spatial attention and activation of the vFP network (see also Mayer et al. 2006; Downar et al. 2000). Nonetheless, extensive investigation of spatial cueing paradigms in the visual modality indicates that the activation of the vFP network does not reflect pure stimulus-driven control. As a matter of fact, invalid trials involve not only stimulus-driven shifts of attention from the cued location to the new target location, but they also entail breaches of expectation (Nobre et al. 1999), updating of task-related settings (Corbetta and Shulman 2002), and processing of low-frequency stimuli (Vossel et al. 2006). Several different strategies have been used to tease apart the contribution of these factors (e.g., Kincade et al. 2005; Indovina and Macaluso 2007). Overall, the results of these studies lead to the current view that task-related (e.g., the task-relevance of the reorienting stimulus, i.e., the target that requires judgment and response) and stimulus-driven factors jointly contribute to the activation of the vFP system (see Corbetta et al. 2008 for review).
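For illustration, the following sketch generates a randomized trial list for a cueing paradigm of this kind, with an informative cue that is valid on 80% of trials and a target modality (visual or tactile) drawn unpredictably on each trial; the parameter values and structure are assumptions, not the actual experimental scripts.

```python
# Hedged sketch of a trial list for a cross-modal spatial cueing paradigm:
# 80% valid / 20% invalid cues, with target modality randomized on every trial.
import random

def build_trial_list(n_trials=200, p_valid=0.8, seed=1):
    rng = random.Random(seed)
    trials = []
    for _ in range(n_trials):
        cued_side = rng.choice(["left", "right"])
        valid = rng.random() < p_valid
        target_side = cued_side if valid else ("left" if cued_side == "right" else "right")
        target_modality = rng.choice(["visual", "tactile"])  # unpredictable on each trial
        trials.append({"cued_side": cued_side, "target_side": target_side,
                       "valid": valid, "modality": target_modality})
    return trials

trials = build_trial_list()
n_valid = sum(t["valid"] for t in trials)
print(f"{n_valid}/{len(trials)} valid trials")  # close to 80% by construction
```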
Additional evidence for the role of task relevance in the activation of the vFP in the visual modality comes from a recent fMRI study, where we combined endogenous predictive cues and exogenous nonpredictive visual cues on the same trial (Natale et al. 2009). Each trial began with a central, predictive endogenous cue indicating the most likely (left/right) location of the upcoming target. The endogenous cue was followed by a task-irrelevant, nonpredictive exogenous cue (brightening and thickening of a box in the left or right hemifield) that was quickly followed by the (left or right) visual target. This allowed us to factorially cross the validity of endogenous and exogenous cues within the same trial. We reasoned that if pure stimulus-driven attentional control can influence activity in vFP, exogenous cues that anticipate the position of an "endogenous-invalid" task-relevant target (e.g., endogenous cue left, exogenous cue right, target right) should affect reorienting-related activation of vFP. Behaviorally, we found that both endogenous and exogenous cues affected response times. Subjects were faster to discriminate "endogenous-invalid" targets when the exogenous cue anticipated the position of the target (exogenous valid trials, as in the stimulus sequence above). However, the fMRI data did not reveal any significant effect of the exogenous cues in the vFP, which activated equivalently in all conditions containing task-relevant targets on the opposite side of the endogenously cued hemifield (i.e., all endogenous-invalid trials). These findings are in agreement with the hypothesis that fully task-irrelevant visual stimuli do not affect activity in vFP
(even when the behavioral data demonstrate an influence of these task-irrelevant cues on target discrimination; see also Kincade et al. 2005). However, a different picture emerged when we used task-irrelevant auditory rather than visual cues (Santangelo et al. 2009). The experimental paradigm was analogous to the pure visual study described above, with a predictive endogenous cue followed by a nonpredictive exogenous cue (now auditory) and by the visual target, within each trial. The visual targets were presented in the left/right hemifields near to the subject's face and the task-irrelevant auditory stimuli were delivered at corresponding external locations. The overall pattern of reaction times was similar to that in the visual study: both valid endogenous and valid exogenous cues speeded up responses, confirming cross-modal influences of the task-irrelevant auditory cues on the processing of the visual targets (McDonald et al. 2000). The fMRI data revealed the expected activation of vFP for "endogenously invalid" visual targets, demonstrating once again the role of these regions during reorienting toward task-relevant targets (e.g., Corbetta et al. 2000). But critically, now the side of the task-irrelevant auditory stimuli was found to modulate activity in the vFP. Activation of the right TPJ for endogenous-invalid trials diminished when the auditory cue was on the same side as the upcoming invalid target (e.g., endogenous cue left, exogenous auditory cue right, visual target right). Accordingly, task-irrelevant sounds that anticipate the position of the invalid visual target reduce reorienting-related activation in TPJ, demonstrating a "pure" stimulus-driven cross-modal spatial effect in the ventral attention control system (but see also Downar et al. 2001; Mayer et al. 2009). To summarize, multisensory studies of stimulus-driven attention showed that: (1) task-irrelevant stimuli in one modality modulate activity in sensory-specific areas concerned with a different modality, and they can do so in a spatially specific manner (e.g., boosting of activity in contralateral occipital cortex for touch and vision on the same side); (2) spatially specific cross-modal influences in sensory-specific areas take posture into account, suggesting indirect influences via higher-order areas; (3) control regions in vFP operate supramodally, activating during stimulus-driven spatial reorienting toward visual or tactile targets; (4) task-irrelevant auditory stimuli can modulate activity in vFP, revealing a "special status" of multisensory stimulus-driven control compared with unisensory visuospatial attention (cf. Natale et al. 2009). These findings call for an extension of site-source models of attention control, which should take into account the "special status" of multisensory stimuli. In particular, models of multisensory attention control should include pathways allowing nonvisual stimuli to reach the visual cortex and to influence activity in the ventral attention network irrespective of task-relevance. Figure 25.1b shows some of the hypothetical pathways that may mediate these effects. "Pathway 1" entails direct feedforward influences from auditory/somatosensory cortex into the vFP attention system. The presence of multisensory neurons in the temporo-parietal cortex and inferior premotor cortex (Bruce et al. 1981; Barraclough et al. 2005; Hyvarinen 1981; Dong et al. 1994; Graziano et al.
1997), plus activation of these regions for vision, audition, and touch in humans (Macaluso and Driver 2001; Bremmer et al. 2001; Beauchamp et al. 2004; Downar et al. 2000), is consistent with convergent multisensory projections into the vFP. A possible explanation for the effect of task-irrelevant auditory cues in TPJ (see Santangelo et al. 2009) is that feedforward pathways from the auditory cortex, unlike the pathway from occipital cortex, might not be under "task-related inhibitory influences" (see Figure 25.1a). The hypothesis of inhibitory influences on the visual, occipital-to-TPJ pathway was initially put forward by Corbetta and Shulman (2002) as a possible explanation for why task-irrelevant visual stimuli do not activate TPJ (see also Natale et al. 2009). More recently, the same authors suggested that these inhibitory effects may arise from the middle frontal gyrus and/or via subcortical structures (locus coeruleus; for details on this topic, see Corbetta et al. 2008). Our finding of a modulatory effect by task-irrelevant audition in TPJ (Santangelo et al. 2009) suggests that these inhibitory effects may not apply in situations involving task-irrelevant stimuli in a modality other than vision. "Pathway 2" involves indirect influences of multisensory signals on the ventral FP network, via dorsal FP regions. Task-related modulations of the pathway between occipital cortex and TPJ are
thought to implicate the dFP network (Corbetta et al. 2008; see also the previous paragraph). Because multisensory stimuli can affect processing in the dorsal FP network (via feedforward convergence), these may in turn modify any influence that the dorsal network exerts on the ventral network (see also He et al. 2007, for an example of how changes/lesions of one attention network can affect functioning of the other network). This could include the abolition of any inhibitory influence on (auditory) task-irrelevant stimuli. The involvement of dorsal FP areas may also be consistent with the finding that cross-modal effects in unisensory areas take posture into account. Postural signals modulate activity of neurons in many dFP regions (e.g., Andersen et al. 1997; Ben Hamed et al. 2002; Boussaoud et al. 1998; Bremmer et al. 1999; Kalaska et al. 1997). An indirect route via dFP could therefore combine sensory signals and postural information about eyes/head/body, yielding cross-modal influences according to position in external space (cf. Stein and Stanford 2008; but note that postural signals are available in multisensory regions of the vFP network, Graziano et al. 1997, and in the SC, Grossberg et al. 1997; see also Pouget et al. 2002; Deneve and Pouget 2004, for computational models on this issue). "Pathway 3" involves direct anatomical projections between sensory-specific areas that process stimuli in different modalities. These have now been reported in many animal studies (e.g., Falchier et al. 2002; Rockland and Ojima 2003; Cappe and Barone 2005) and could mediate automatic influences of one modality (e.g., touch) on activity in sensory-specific areas of a different modality (e.g., occipital visual cortex; see also Giard and Peronnet 1999; Kayser et al. 2005; Eckert et al. 2008). These connections between sensory-specific areas may provide fast, albeit spatially coarse, indications about the presence of a multisensory object or event in the external environment. In addition, a direct effect of audition or touch in occipital cortex could change the functional connectivity between occipital cortex and TPJ (see Indovina and Macaluso 2004), also determining stimulus-driven cross-modal influences in vFP. Finally, additional pathways are likely to involve subcortical structures ("pathways 4" in Figure 25.1b). Many different subcortical regions contain multisensory neurons and can influence cortical processing (e.g., superior colliculus, Meredith and Stein 1983; thalamus, Cappe et al. 2009; basal ganglia, Nagy et al. 2006). In addition, subcortical structures are important for spatial orienting (e.g., the intermediate and deep layers of the SC are involved in the generation of overt saccadic responses; see also Frens and Van Opstal 1998, for a study on overt orienting to bimodal stimuli) and have been linked to selection processes in spatial attention (Shipp 2004). The critical role of the SC for combining spatial information across sensory modalities has also been demonstrated in two recent behavioral studies (Maravita et al. 2008; Leo et al. 2008). These showed that superior behavioral performance for spatially aligned, same-side versus opposite-side audiovisual trials disappears when the visual stimuli are invisible to the SC (purple/blue stimuli).
25.5 POSSIBLE RELATIONSHIP BETWEEN SPATIAL ATTENTION AND MULTISENSORY INTEGRATION

Regardless of the specific pathways involved (see preceding section), the finding that spatial information can be shared between multiple sensory-specific and multisensory areas, even in conditions of stimulus-driven automatic attention, suggests a possible relationship between attention control and the integration of space across sensory modalities. The central idea here is that attention may "broadcast" information about the currently relevant location between anatomically distant brain areas, thus providing a mechanism that coordinates spatial representations in different sensory modalities and implying some relationship between attention and multisensory integration. The functional relationship between attention and multisensory integration is much debated and not yet understood (e.g., Talsma et al. 2007; McDonald et al. 2001; Alsius et al. 2005; Saito et al. 2005; Macaluso and Driver 2005; Driver and Spence 1998; Bertelson et al. 2000; Kayser et al. 2005). This is attributable—at least to some extent—to the difficulty of defining univocal indexes of multisensory integration. Different authors have proposed and utilized a variety of measures to
highlight interactions between stimuli in different senses. These include phenomenological measures such as the perception of multisensory illusions (e.g., as in the "McGurk" illusion, McGurk and MacDonald 1976; see also Soto-Faraco and Alsius 2009; or the "sound-bounce" illusion, Bushara et al. 2003), behavioral criteria based on violations of the Miller inequality (Miller 1982; see Tajadura-Jiménez et al. 2009, for an example), or physiological measures related to nonlinear effects in single-cell spiking activity (Meredith and Stein 1986b), EEG (Giard and Peronnet 1999), or fMRI (Calvert et al. 2001) signals. At present, there is still no consensus, as most of these measures have drawbacks and no single index appears suitable for all possible experimental situations (for an extensive treatment, see Beauchamp 2005; Laurienti et al. 2005; Holmes 2009). In the case of cross-modal spatial cueing effects in stimulus-driven attention, the issue is further complicated by the fact that stimulus-driven effects are driven by changes in stimulus configuration (same vs. different position), which is also considered a critical determinant of multisensory integration (Meredith and Stein 1986b). Therefore, it is difficult to tease these two processes apart experimentally. In our initial study (Macaluso et al. 2000b), we showed boosting of activity in occipital cortex contralateral to the position of spatially congruent bimodal visuo-tactile stimuli that were presented simultaneously and for a relatively long duration (300 ms). McDonald et al. (2001) argued that these cross-modal influences may relate to multisensory interactions rather than spatial attention, as there was no evidence that task-irrelevant touch captured attention on the side of the visual target. However, this point is difficult to address because it is impossible to obtain behavioral evidence that exogenous cues—which by definition do not require any response—trigger shifts of spatial attention. A related argument was put forward suggesting that a minimum condition for disentangling attention from integration is to introduce a gap between the offset of the cue and the onset of the target (McDonald et al. 2001). This should eliminate multisensory integration (the trial would never include simultaneous bimodal stimulation), while leaving spatial attentional effects intact (i.e., faster and more accurate behavioral responses for same-side vs. opposite-side trials). However, we have previously argued that criteria based on stimulus timing may be misleading because of differential response latencies and discharge properties of neurons in different regions of the brain (Macaluso et al. 2001). Thus, physically nonoverlapping stimuli (e.g., an auditory cue that precedes a visual target) may produce coactivation of a bimodal neuron that has a shorter response latency for audition than for vision (see Meredith et al. 1987; for related findings using ERPs in humans, see Meylan and Murray 2007). As an extension of the idea that the temporal sequence of events may be used to disentangle the roles of attention and multisensory integration in stimulus-driven cross-modal cueing paradigms (McDonald et al. 2001), one may consider the timing of neuronal activation rather than the timing of the external stimuli. This can be addressed in the context of site-source models of attention (cf. Figure 25.1). Along these lines, Spence et al.
(2004) suggested that if control regions activate before any modulation in sensory areas, this would argue for a key role of attention in cross-modal integration, whereas if attentional control engages after cross-modal effects in sensory-specific areas, this would favor the view that multisensory integration takes place irrespective of attention. In the latter case, cross-modal cueing effects could be regarded as arising as a "consequence" of the integration process (see also Busse et al. 2005). Using ERPs and dipole source localization in a stimulus-driven audiovisual cueing paradigm, McDonald and colleagues (2003) found that associative regions in the posterior temporal cortex activate before any cross-modal spatial effect in the visual cortex. In this study, there was a 17- to 217-ms gap between cue offset and target onset, and the analysis of the behavioral data showed increased perceptual sensitivity (d′) for valid compared with invalid trials. Accordingly, the authors suggested that the observed sequence of activation (including cross-modal influences of audition on visual ERPs) could be related to involuntary shifts of spatial attention. However, this study did not assess brain activity associated specifically with the exogenous cues, thus again not providing any direct evidence for cue-related shifts of attention. Using a different approach to investigate the dynamics of cross-modal influences in sensory areas, a recent fMRI study of functional connectivity showed that during processing of simultaneous audiovisual
streams, temporal areas causally influence activity in visual and auditory cortices, rather than the other way round (Noesselt et al. 2007). Thus, cross-modal boosting of activity in sensory-specific areas seems to arise because of back-projections from multisensory regions, emphasizing the causal role of high-order associative areas and consistent with some coupling between attention control and the sharing of spatial information across sensory modalities (which, depending on the definition, can be viewed as an index of multisensory integration). More straightforward approaches can be undertaken to investigate the relationship between endogenous attention and multisensory integration. Even though a specific definition of multisensory integration is still pending (see above), one may ask whether endogenous attention affects the way signals in different modalities interact with each other. For example, Talsma and Woldorff (2005) indexed multisensory integration using a supra-additive criterion on ERP amplitudes (AV > A + V), and tested whether this was different for stimuli at the endogenously attended versus unattended side (note that both vision and audition were task-relevant/attended in this experiment). Supra-additive responses for AV stimuli were found at frontal and centro-medial scalp sites. Critically, this effect was larger for stimuli at the attended than the unattended side, demonstrating some interplay between spatial endogenous attention and multisensory integration (see also the study of Talsma et al. 2007, who manipulated the relevant modality rather than the relevant location). In a similar vein, we have recently investigated the effect of selective visuospatial endogenous attention on the processing of audiovisual speech stimuli (Fairhall and Macaluso 2009). Subjects were presented visually with two "speaking mouths" simultaneously in the left and right visual fields. A central auditory stream (speaking voice) was congruent with one of the two visual stimuli (the mouth reading the same passage of a tale) and incongruent with the other one (the mouth reading a different passage). In different blocks, subjects were asked to attend either to the congruent or to the incongruent visual stimulus. In this way, we were able to keep the absolute level of multisensory information present in the environment constant, testing specifically for the effect of selective spatial attention to congruent or incongruent multisensory stimuli. The results showed that endogenous visuospatial attention can influence the processing of audiovisual stimuli, with greater activation for "attend to congruent" than "attend to incongruent" conditions. This interplay between attention and multisensory processing was found to affect brain activity at multiple stages, including high-level regions in the superior temporal sulcus, the superior colliculus subcortically, as well as sensory-specific occipital visual cortex (V1 and V2). Endogenous attention has been found not only to boost multisensory processing, but also in some cases to reduce responses for attended versus unattended multisensory stimuli. For example, van Atteveldt and colleagues (2007) presented subjects with letter–sound pairs that were either congruent or incongruent. Under conditions of passive listening, activity increased in association cortex for congruent compared with incongruent presentations. However, this effect disappeared as soon as subjects were asked to perform an active "same/different" judgment with the letters and sounds.
The authors suggested that voluntary top-down attention can overrule bottom-up multisensory interactions (see also Mozolic et al. 2008, on the effect of active attention to one modality during multisensory stimulation). In another study on audiovisual speech, Miller and D’Esposito (2005) dissociated patterns of activation related to physical stimulus attributes (synchronous vs. asynchronous stimuli) and perception (“fused” vs. “unfused” percept). This showed that active perception leads to increases in activity in the auditory cortex and the superior temporal sulcus for fused audiovisual stimuli, whereas in the SC activity decreased for synchronous vs. asynchronous stimuli, irrespective of perception. These results indicate that constraints of multisensory integration may change as a function of endogenous factors (fused/unfused percept), for example, with synchronous audiovisual stimuli reducing rather than increasing activity in the SC (cf. Miller and D’Esposito 2005 and Meredith et al. 1987). Another approach to investigate the relationship between endogenous attention and multisensory integration is to manipulate the attentional load of a primary task and to assess how this influences multisensory processing. The underlying idea is that if a single/common pool of neural resources
mediates both processes, increasing the amount of resources spent on a primary attentional task should lead to some changes in the processing of the multisensory stimuli. On the contrary, if multisensory integration does not depend on endogenous attention, changes in the attentional task should not have any influence on multisensory processing. We used this approach to investigate the possible role of endogenous visuospatial attention in the integration of visuo-tactile stimuli (Zimmer and Macaluso 2007). We indexed multisensory integration by comparing same-side versus opposite-side visual–tactile stimuli and assessing activity enhancement in contralateral occipital cortex for the same-side condition (cf. Figure 25.3a). These visual and tactile stimuli were fully task-irrelevant and did not require any response. Concurrently, we asked subjects to perform a primary endogenous visuospatial attention task. This entailed either attending to central fixation (low load) or sustaining visuospatial covert attention to a location above fixation to detect subtle orientation changes in a grating patch (high load; see Figure 25.3b). The results showed cross-modal enhancements in the contralateral visual cortex for spatially congruent trials, irrespective of the level of endogenous load (see signal plots in Figure 25.3b). These findings suggest that the processing of visuo-tactile spatial congruence in visual cortex can be uncoupled from endogenous visuospatial attention control (see also Mathiak et al. 2005, for a magnetoencephalography study reporting related findings in auditory cortex). In summary, direct investigation of the possible relationship between attention control and multisensory integration revealed that voluntary attention to multisensory stimuli or changing the task relevance of the unisensory components of a multisensory stimulus (attend to one modality, to both, or to neither) can affect multisensory interactions. This indicates that—to some extent—attention control and multisensory integration make use of a shared pool of processing resources. However, when both components of a multisensory stimulus are fully task-irrelevant, changes in cognitive load in a separate task do not affect the integration of the multisensory input (at least for the load manipulations reported by Zimmer and Macaluso 2007, and Mathiak et al. 2005). Taken together, these findings suggest that multisensory interactions can occur at multiple levels of processing, and that different constraints apply depending on the relative weighting of stimulus-driven and endogenous attentional requirements. This multifaceted scenario can be addressed in the context of models of spatial attention control that include multiple routes for the interaction of signals in different modalities (see Figure 25.1b). It can be hypothesized that some of these pathways (or network nodes) are under the modulatory influence of endogenous and/or stimulus-driven attention. For instance, cross-modal interactions that involve dorsal FP areas are likely to be subject to endogenous and task-related attentional factors (e.g., see Macaluso et al. 2002b). Conversely, stimulus-driven factors may influence multisensory interactions that take place within or via the ventral FP system (e.g., Santangelo et al. 2009). Direct connections between sensory-specific areas should be—at least in principle—fast, automatic, and preattentive (Kayser et al. 2005), although attentional influences may then superimpose on these (e.g., see Talsma et al. 2007).
Some interplay between spatial attention and multisensory processing can also take place in subcortical areas, as demonstrated by attentional modulation there (Fairhall et al. 2008; see also Wallace and Stein 1994; Wilkinson et al. 1996, for the role of cortical input on multisensory processing in the SC).
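As a concrete, hedged illustration of one of the indices discussed in this section, the sketch below computes a supra-additivity index (AV − (A + V)) on invented per-subject response amplitudes; positive values would satisfy the supra-additive criterion used, for example, on ERP amplitudes by Talsma and Woldorff (2005).

```python
# Hedged sketch of the supra-additive criterion (AV > A + V) on response amplitudes.
# The amplitudes below are illustrative placeholders, not data from the cited studies.
import numpy as np

def superadditivity_index(av, a, v):
    """Positive values indicate a bimodal response exceeding the sum of the unimodal ones."""
    return av - (a + v)

# Example per-subject response amplitudes (arbitrary units) for attended-side stimuli.
a = np.array([1.1, 0.9, 1.3, 1.0])
v = np.array([1.6, 1.4, 1.5, 1.7])
av = np.array([3.2, 2.6, 3.1, 3.0])

idx = superadditivity_index(av, a, v)
print("per-subject AV - (A + V):", np.round(idx, 2))
print("mean index:", np.round(idx.mean(), 2))  # > 0 satisfies the supra-additive criterion
```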
25.6 CONCLUSIONS

Functional imaging studies of multisensory spatial attention have revealed a complex interplay between effects associated with the external stimulus configuration (e.g., spatially congruent vs. incongruent multisensory input) and endogenous task requirements. Here, I propose that these can be addressed in the context of "site-source" models of attention that include control regions in dorsal and ventral fronto-parietal associative cortex, connected via feedforward and feedback projections with sensory-specific areas (plus subcortical regions). This architecture permits the sharing of spatial information across multiple brain regions that represent space (unisensory, multisensory, plus motor representations). Spatial attention and the selection of the currently relevant location result from the dynamic
interplay between the nodes of this network, with both stimulus-driven and endogenous factors influencing the relative contribution of each node and pathway. I propose that the coordination of activity within this complex network underlies the integration of space across modalities, producing a sensory–motor system that allows us to perceive and act within a unified representation of external space. In this framework, future studies may seek to better specify the dynamics of this network. A key issue concerns possible causal links between activation of some parts of the network and attention/integration effects in other parts of the network. This relationship is indeed a main feature of the "site-source" distinction emphasized in this model. This can be addressed in several ways. Transcranial magnetic stimulation (TMS) can be used to transiently knock out one node of the network during multisensory attention tasks, revealing the precise timing of activation of each node of the network. Using this approach, Chambers and colleagues (2004a) identified two critical windows for the activation of inferior parietal cortex during visuospatial reorienting, and demonstrated the involvement of the same region (the angular gyrus) for stimulus-driven visuo-tactile spatial interactions (Chambers et al. 2007; but see also Chambers et al. 2004b, for modality-specific effects). TMS was also used to demonstrate the central role of posterior parietal cortex for spatial remapping between vision and touch (Bolognini and Maravita 2007) and to infer direct influences of auditory input on human visual cortex (Romei et al. 2007). Most recently, TMS has been combined with fMRI, which makes it possible to investigate the causal influence of one area (e.g., frontal or parietal regions) on activity in other areas (e.g., sensory-specific visual areas; see Ruff et al. 2006; and Bestmann et al. 2008, for review). These studies may be extended to multisensory attention paradigms, looking for the coupling between fronto-parietal attention control regions and sensory areas as a function of the type of input (unisensory or multisensory, spatially congruent or incongruent). Task-related changes in functional coupling between brain areas can also be assessed using analyses of effective connectivity (e.g., dynamic causal modeling; Stephan et al. 2007). These have been successfully applied to both fMRI and ERP data in multisensory experiments, showing causal influences of associative areas in parietal and temporal cortex on sensory processing in the visual cortex (Moran et al. 2008; Noesselt et al. 2007; Kreifelts et al. 2007). Future studies may combine attentional manipulations (e.g., the direction of endogenous attention) and multisensory stimuli (e.g., spatially congruent vs. incongruent multisensory input), providing additional information on the causal role of top-down and bottom-up influences for the formation of an integrated system that represents space across sensory modalities.
REFERENCES

Alsius, A., J. Navarra, R. Campbell, and S. Soto-Faraco. 2005. Audiovisual integration of speech falters under high attention demands. Curr Biol 15: 839–843.
Andersen, R. A., L. H. Snyder, D. C. Bradley, and J. Xing. 1997. Multimodal representation of space in the posterior parietal cortex and its use in planning movements. Annu Rev Neurosci 20: 303–330.
Arrington, C. M., T. H. Carr, A. R. Mayer, and S. M. Rao. 2000. Neural mechanisms of visual attention: Object-based selection of a region in space. J Cogn Neurosci 2: 106–117.
Barraclough, N. E., D. Xiao, C. I. Baker, M. W. Oram, and D. I. Perrett. 2005. Integration of visual and auditory information by superior temporal sulcus neurons responsive to the sight of actions. J Cogn Neurosci 17: 377–391.
Beauchamp, M. S. 2005. Statistical criteria in FMRI studies of multisensory integration. Neuroinformatics 3: 93–113.
Beauchamp, M. S., B. D. Argall, J. Bodurka, J. H. Duyn, and A. Martin. 2004. Unraveling multisensory integration: Patchy organization within human STS multisensory cortex. Nat Neurosci 7: 1190–1192.
Ben Hamed, S., J. R. Duhamel, F. Bremmer, and W. Graf. 2001. Representation of the visual field in the lateral intraparietal area of macaque monkeys: A quantitative receptive field analysis. Exp Brain Res 140: 127–144.
Ben Hamed, S., and J. R. Duhamel. 2002. Ocular fixation and visual activity in the monkey lateral intraparietal area. Exp Brain Res 142: 512–528.
Bertelson, P., J. Vroomen, B. de Gelder, and J. Driver. 2000. The ventriloquist effect does not depend on the direction of deliberate visual attention. Percept Psychophys 62: 321–332.
Bestmann, S., C. C. Ruff, F. Blankenburg, N. Weiskopf, J. Driver, and J. C. Rothwell. 2008. Mapping causal interregional influences with concurrent TMS-fMRI. Exp Brain Res 191: 383–402.
Bolognini, N., and A. Maravita. 2007. Proprioceptive alignment of visual and somatosensory maps in the posterior parietal cortex. Curr Biol 17: 1890–1895.
Boussaoud, D., C. Jouffrais, and F. Bremmer. 1998. Eye position effects on the neuronal activity of dorsal premotor cortex in the macaque monkey. J Neurophysiol 80: 1132–1150.
Bremmer, F., W. Graf, S. Ben Hamed, and J. R. Duhamel. 1999. Eye position encoding in the macaque ventral intraparietal area (VIP). Neuroreport 10: 873–878.
Bremmer, F., A. Schlack, N. J. Shah, O. Zafiris, M. Kubischik, K. Hoffmann et al. 2001. Polymodal motion processing in posterior parietal and premotor cortex: A human fMRI study strongly implies equivalencies between humans and monkeys. Neuron 29: 287–296.
Bressler, S. L., W. Tang, C. M. Sylvester, G. L. Shulman, and M. Corbetta. 2008. Top-down control of human visual cortex by frontal and parietal cortex in anticipatory visual spatial attention. J Neurosci 28: 10056–10061.
Bruce, C., R. Desimone, and C. G. Gross. 1981. Visual properties of neurons in a polysensory area in superior temporal sulcus of the macaque. J Neurophysiol 46: 369–384.
Bushara, K. O., T. Hanakawa, I. Immisch, K. Toma, K. Kansaku, and M. Hallett. 2003. Neural correlates of cross-modal binding. Nat Neurosci 6: 190–195.
Busse, L., K. C. Roberts, R. E. Crist, D. H. Weissman, and M. G. Woldorff. 2005. The spread of attention across modalities and space in a multisensory object. Proc Natl Acad Sci USA 102: 18751–18756.
Calvert, G. A., P. C. Hansen, S. D. Iversen, and M. J. Brammer. 2001. Detection of audio-visual integration sites in humans by application of electrophysiological criteria to the BOLD effect. Neuroimage 14: 427–438.
Cappe, C., and P. Barone. 2005. Heteromodal connections supporting multisensory integration at low levels of cortical processing in the monkey. Eur J Neurosci 22: 2886–2902.
Cappe, C., A. Morel, P. Barone, and E. M. Rouiller. 2009. The thalamocortical projection systems in primate: An anatomical support for multisensory and sensorimotor interplay. Cereb Cortex 19: 2025–2037.
Chambers, C. D., J. M. Payne, and J. B. Mattingley. 2007. Parietal disruption impairs reflexive spatial attention within and between sensory modalities. Neuropsychologia 45: 1715–1724.
Chambers, C. D., J. M. Payne, M. G. Stokes, and J. B. Mattingley. 2004a. Fast and slow parietal pathways mediate spatial attention. Nat Neurosci 7: 217–218.
Chambers, C. D., M. G. Stokes, and J. B. Mattingley. 2004b. Modality-specific control of strategic spatial attention in parietal cortex. Neuron 44: 925–930.
Ciaramitaro, V. M., G. T. Buracas, and G. M. Boynton. 2007. Spatial and cross-modal attention alter responses to unattended sensory information in early visual and auditory human cortex. J Neurophysiol 98: 2399–2413.
Corbetta, M., J. M. Kincade, J. M. Ollinger, M. P. McAvoy, and G. L. Shulman. 2000. Voluntary orienting is dissociated from target detection in human posterior parietal cortex. Nat Neurosci 3: 292–297.
Corbetta, M., G. Patel, and G. L. Shulman. 2008. The reorienting system of the human brain: From environment to theory of mind. Neuron 58: 306–324.
Corbetta, M., and G. L. Shulman. 2002. Control of goal-directed and stimulus-driven attention in the brain. Nat Rev Neurosci 3: 215–229.
Corbetta, M., A. P. Tansy, C. M. Stanley, S. V. Astafiev, A. Z. Snyder, and G. L. Shulman. 2005. A functional MRI study of preparatory signals for spatial location and objects. Neuropsychologia 43: 2041–2056.
Deneve, S., and A. Pouget. 2004. Bayesian multisensory integration and cross-modal spatial links. J Physiol Paris 98: 249–258.
Desimone, R., and J. Duncan. 1995. Neural mechanisms of selective visual attention. Annu Rev Neurosci 18: 193–222.
Dong, W. K., E. H. Chudler, K. Sugiyama, V. J. Roberts, and T. Hayashi. 1994. Somatosensory, multisensory, and task-related neurons in cortical area 7b (PF) of unanesthetized monkeys. J Neurophysiol 72: 542–564.
Downar, J., A. P. Crawley, D. J. Mikulis, and K. D. Davis. 2000. A multimodal cortical network for the detection of changes in the sensory environment. Nat Neurosci 3: 277–283.
Downar, J., A. P. Crawley, D. J. Mikulis, and K. D. Davis. 2001. The effect of task relevance on the cortical response to changes in visual and auditory stimuli: An event-related fMRI study. Neuroimage 14: 1256–1267.
Driver, J., and C. Spence. 1998. Attention and the crossmodal construction of space. Trends Cogn Sci 2: 254–262.
Duhamel, J. R., C. L. Colby, and M. E. Goldberg. 1998. Ventral intraparietal area of the macaque: Congruent visual and somatic response properties. J Neurophysiol 79: 126–136. Eckert, M. A., N. V. Kamdar, C. E. Chang, C. F. Beckmann, M. D. Greicius, and V. Menon. 2008. A crossmodal system linking primary auditory and visual cortices: Evidence from intrinsic fMRI connectivity analysis. Hum Brain Mapp 29: 848–857. Eimer, M. 1999. Can attention be directed to opposite locations in different modalities? An ERP study. Clin Neurophysiol 110: 1252–1259. Eimer, M., and J. Driver. 2000. An event-related brain potential study of cross-modal links in spatial attention between vision and touch. Psychophysiology 37: 697–705. Eimer, M., and J. Driver. 2001. Crossmodal links in endogenous and exogenous spatial attention: Evidence from event-related brain potential studies. Neurosci Biobehav Rev 25: 497–511. Eimer, M., and J. van Velzen. 2002. Crossmodal links in spatial attention are mediated by supramodal control processes: Evidence from event-related potentials. Psychophysiology 39: 437–449. Eimer, M., J. van Velzen, and J. Driver. 2002. Cross-modal interactions between audition, touch, and vision in endogenous spatial attention: ERP evidence on preparatory states and sensory modulations. J Cogn Neurosci 14: 254–271. Fairhall, S. L., and E. Macaluso. 2009. Spatial attention can modulate audiovisual integration at multiple cortical and subcortical sites. Eur J Neurosci 29: 1247–1257. Falchier, A., S. Clavagnier, P. Barone, and H. Kennedy. 2002. Anatomical evidence of multimodal integration in primate striate cortex. J Neurosci 22: 5749–5759. Farah, M. J., A. B. Wong, M. A. Monheit, and L. A. Morrow. 1989. Parietal lobe mechanisms of spatial attention: Modality-specific or supramodal? Neuropsychologia 27: 461–470. Fasold, O., J. Heinau, M. U. Trenner, A. Villringer, and R. Wenzel. 2008. Proprioceptive head posture-related processing in human polysensory cortical areas. Neuroimage 40: 1232–1242. Frens, M. A., and A. J. Van Opstal. 1998. Visual–auditory interactions modulate saccade-related activity in monkey superior colliculus. Brain Res Bull 46: 211–224. Giard, M. H., and F. Peronnet. 1999. Auditory–visual integration during multimodal object recognition in humans: A behavioral and electrophysiological study. J Cogn Neurosci 11: 473–490. Graziano, M. S., and C. G. Gross. 1993. A bimodal map of space: Somatosensory receptive fields in the macaque putamen with corresponding visual receptive fields. Exp Brain Res 97: 96–109. Graziano, M. S., and C. G. Gross. 1995. The representation of extrapersonal space: A possible role for bimodal, visuo-tactile neurons. In The Cognitive Neurosciences, ed. M. S. Gazzaniga, 1021–1034. Cambridge, MA: MIT Press. Graziano, M. S., X. T. Hu, and C. G. Gross. 1997. Visuospatial properties of ventral premotor cortex. J Neurophysiol 77: 2268–2292. Green, J. J., and J. J. McDonald. 2008. Electrical neuroimaging reveals timing of attentional control activity in human brain. PLoS Biol 6: 81. Green, J. J., W. A. Teder-Salejarvi, and J. J. McDonald. 2005. Control mechanisms mediating shifts of attention in auditory and visual space: A spatio-temporal ERP analysis. Exp Brain Res 166: 358–369. Gross, C. G., and M. S. Graziano. 1995. Multiple representations of space in the brain. The Neuroscientist 1: 43–50. Grossberg, S., K. Roberts, M. Aguilar, and D. Bullock. 1997. A neural model of multimodal adaptive saccadic eye movement control by superior colliculus. J Neurosci 17: 9706–9725. 
Hagler Jr., D. J., and M. I. Sereno. 2006. Spatial maps in frontal and prefrontal cortex. Neuroimage 29: 567–577. He, B. J., A. Z. Snyder, J. L. Vincent, A. Epstein, G. L. Shulman, and M. Corbetta. 2007. Breakdown of functional connectivity in frontoparietal networks underlies behavioral deficits in spatial neglect. Neuron 53: 905–918. Heinze, H. J., G. R. Mangun, W. Burchert, H. Hinrichs, M. Scholz, T. F. Munte et al. 1994. Combined spatial and temporal imaging of brain activity during visual selective attention in humans. Nature 372: 543–546. Holmes, N. P. 2009. The principle of inverse effectiveness in multisensory integration: Some statistical considerations. Brain Topogr 21: 168–176. Hopfinger, J. B., M. H. Buonocore, and G. R. Mangun. 2000. The neural mechanisms of top-down attentional control. Nat Neurosci 3: 284–291. Hotting, K., F. Rosler, and B. Roder. 2003. Crossmodal and intermodal attention modulate event-related brain potentials to tactile and auditory stimuli. Exp Brain Res 148: 26–37. Hyvarinen, J. 1981. Regional distribution of functions in parietal association area 7 of the monkey. Brain Res 206: 287–303.
Indovina, I., and E. Macaluso, E. 2004. Occipital–parietal interactions during shifts of exogenous visuospatial attention: Trial-dependent changes of effective connectivity. Magn Reson Imaging 22: 1477–1486. Indovina, I., and E. Macaluso. 2007. Dissociation of stimulus relevance and saliency factors during shifts of visuospatial attention. Cereb Cortex 17: 1701–1711. Kalaska, J. F., S. H. Scott, P. Cisek, and L. E. Sergio. 1997. Cortical control of reaching movements. Curr Opin Neurobiol 7: 849–859. Kastner, S., M. A. Pinsk, P. De Weerd, R. Desimone, and L. G. Ungerleider. 1999. Increased activity in human visual cortex during directed attention in the absence of visual stimulation. Neuron 22: 751–761. Kastner, S., and L. G. Ungerleider. 2001. The neural basis of biased competition in human visual cortex. Neuropsychologia 39: 1263–1276. Kayser, C., C. I. Petkov, M. Augath, and N. K. Logothetis. 2005. Integration of touch and sound in auditory cortex. Neuron 48: 373–384. Kelley, T. A., J. T. Serences, B. Giesbrecht, and S. Yantis. 2008. Cortical mechanisms for shifting and holding visuospatial attention. Cereb Cortex 18: 114–325. Kennett, S., M. Eimer, C. Spence, and J. Driver. 2001. Tactile–visual links in exogenous spatial attention under different postures: Convergent evidence from psychophysics and ERPs. J Cogn Neurosci 13: 462–478. Kida, T., K. Inui, T. Wasaka, K. Akatsuka, E. Tanaka, and R. Kakigi. 2007. Time-varying cortical activations related to visual–tactile cross-modal links in spatial selective attention. J Neurophysiol 97: 3585–3596. Kincade, J. M., R. A. Abrams, S. V. Astafiev, G. L. Shulman, and M. Corbetta. 2005. An event-related functional magnetic resonance imaging study of voluntary and stimulus-driven orienting of attention. J Neurosci 25: 4593–4604. Kinsbourne, M. 1970. The cerebral basis of lateral asymmetries in attention. Acta Psychol (Amst) 33: 193–201. Kreifelts, B., T. Ethofer, W. Grodd, M. Erb, and D. Wildgruber. 2007. Audiovisual integration of emotional signals in voice and face: An event-related fMRI study. Neuroimage 37: 1445–1456. Laurienti, P. J., J. H. Burdette, M. T. Wallace, Y. F. Yen, A. S. Field, and B. E. Stein. 2002. Deactivation of sensory-specific cortex by cross-modal stimuli. J Cogn Neurosci 14: 420–429. Laurienti, P. J., T. J. Perrault, T. R. Stanford, M. T. Wallace, and B. E. Stein. 2005. On the use of superadditivity as a metric for characterizing multisensory integration in functional neuroimaging studies. Exp Brain Res 166: 289–297. Leo, F., C. Bertini, G. di Pellegrino, and E. Ladavas. 2008. Multisensory integration for orienting responses in humans requires the activation of the superior colliculus. Exp Brain Res 186: 67–77. Lewis, J. W., M. S. Beauchamp, and E. A. DeYoe. 2000. A comparison of visual and auditory motion processing in human cerebral cortex. Cereb Cortex 10: 873–888. Luck, S. J., L. Chelazzi, S. A. Hillyard, and R. Desimone. 1997. Neural mechanisms of spatial selective attention in areas V1, V2, and V4 of macaque visual cortex. J Neurophysiol 77: 24–42. Macaluso, E., and J. Driver. 2001. Spatial attention and crossmodal interactions between vision and touch. Neuropsychologia 39: 1304–1316. Macaluso, E., and J. Driver. 2005. Multisensory spatial interactions: A window onto functional integration in the human brain. Trends Neurosci 28: 264–271. Macaluso, E., J. Driver, and C. D. Frith. 2003a. Multimodal spatial representations engaged in human parietal cortex during both saccadic and manual spatial orienting. 
Curr Biol 13: 990–999. Macaluso, E., M. Eimer, C. D. Frith, and J. Driver. 2003b. Preparatory states in crossmodal spatial attention: Spatial specificity and possible control mechanisms. Exp Brain Res 149: 62–74. Macaluso, E., C. Frith, and J. Driver. 2000a. Selective spatial attention in vision and touch: Unimodal and multimodal mechanisms revealed by PET. J Neurophysiol 83: 3062–3075. Macaluso, E., C. D. Frith, and J. Driver. 2005. Multisensory stimulation with or without saccades: fMRI evidence for crossmodal effects on sensory-specific cortices that reflect multisensory location-congruence rather than task-relevance. Neuroimage 26: 414–425. Macaluso, E., C. D. Frith, and J. Driver. 2001. Multisensory integration and crossmodal attention effects in the human brain. Science [Technical response] 292: 1791. Macaluso, E., C. D. Frith, and J. Driver. 2002a. Crossmodal spatial influences of touch on extrastriate visual areas take current gaze direction into account. Neuron 34: 647–658. Macaluso, E., C. D. Frith, and J. Driver. 2002b. Directing attention to locations and to sensory modalities: Multiple levels of selective processing revealed with PET. Cereb Cortex 12: 357–368. Macaluso, E., C. D. Frith, and J. Driver. 2002c. Supramodal effects of covert spatial orienting triggered by visual or tactile events. J Cogn Neurosci 14: 389–401.
Macaluso, E., C. D. Frith, and J. Driver. 2000b. Modulation of human visual cortex by crossmodal spatial attention. Science 289: 1206–1208. Maravita, A., N. Bolognini, E. Bricolo, C. A. Marzi, and S. Savazzi. 2008. Is audiovisual integration subserved by the superior colliculus in humans? Neuroreport 19: 271–275. Martinez, A., L. Anllo-Vento, M. I. Sereno, L. R. Frank, R. B. Buxton, D. J. Dubowitz et al. 1999. Involvement of striate and extrastriate visual cortical areas in spatial attention. Nat Neurosci 2: 364–369. Massaro, D. W. 1999. Speechreading: Illusion or window into pattern recognition. Trends Cogn Sci 3: 310–317. Mathiak, K., I. Hertrich, M. Zvyagintsev, W. Lutzenberger, and H. Ackermann. 2005. Selective influences of cross-modal spatial-cues on preattentive auditory processing: A whole-head magnetoencephalography study. Neuroimage 28: 627–634. Mayer, A. R., A. R. Franco, and D. L. Harrington. 2009. Neuronal modulation of auditory attention by informative and uninformative spatial cues. Hum Brain Mapp 30: 1652–1666. Mayer, A. R., D. Harrington, J. C. Adair, and R. Lee. 2006. The neural networks underlying endogenous auditory covert orienting and reorienting. Neuroimage 30: 938–949. McDonald, J. J., W. A. Teder-Salejarvi, F. Di Russo, and S. A. Hillyard. 2003. Neural substrates of perceptual enhancement by cross-modal spatial attention. J Cogn Neurosci 15: 10–19. McDonald, J. J., W. A. Teder-Salejarvi, and S. A. Hillyard. 2000. Involuntary orienting to sound improves visual perception. Nature 407: 906–908. McDonald, J. J., W. A. Teder-Salejarvi, and L. M. Ward. 2001. Multisensory integration and crossmodal attention effects in the human brain. Science 292: 1791. McDonald, J. J., and L. M. Ward. 2000. Involuntary listening aids seeing: Evidence from human electrophysiology. Psychol Sci 11: 167–171. McGurk, H., and J. MacDonald. 1976. Hearing lips and seeing voices. Nature 264: 746–748. Meredith, M. A., J. W. Nemitz, and B. E. Stein. 1987. Determinants of multisensory integration in superior colliculus neurons: I. Temporal factors. J Neurosci 7: 3215–3229. Meredith, M. A., and B. E. Stein. 1996. Spatial determinants of multisensory integration in cat superior colliculus neurons. J Neurophysiol 75: 1843–1857. Meredith, M. A., and B. E. Stein. 1986a. Visual, auditory, and somatosensory convergence on cells in superior colliculus results in multisensory integration. J Neurophysiol 56: 640–662. Meredith, M. A., and B. E. Stein. 1986b. Spatial factors determine the activity of multisensory neurons in cat superior colliculus. Brain Res 365: 350–354. Meredith, M. A., and B. E. Stein. 1983. Interactions among converging sensory inputs in the superior colliculus. Science 221: 389–391. Meyer, M., S. Baumann, S. Marchina, and L. Jancke. 2007. Hemodynamic responses in human multisensory and auditory association cortex to purely visual stimulation. BMC Neurosci 8: 14. Meylan, R. V., and M. M. Murray. 2007. Auditory–visual multisensory interactions attenuate subsequent visual responses in humans. Neuroimage 35: 244–254. Miller, J. 1982. Discrete versus continuous stagemodels of human information processing: In search of partial output. Exp Psychol Hum Percept Perform 8: 273–296. Miller, L. M., and M. D’Esposito. 2005. Perceptual fusion and stimulus coincidence in the cross-modal integration of speech. J Neurosci 25: 5884–5893. Moore, T. 2006. The neurobiology of visual attention: Finding sources. Curr Opin Neurobiol 16: 159–165. Moran, R. J., S. Molholm, R. B. Reilly, and J. J. Foxe. 2008. 
Changes in effective connectivity of human superior parietal lobule under multisensory and unisensory stimulation. Eur J Neurosci 27: 2303–2312. Mozolic, J. L., D. Joyner, C. E. Hugenschmidt, A. M. Peiffer, R. A. Kraft, J. A. Maldjian et al. 2008. Crossmodal deactivations during modality-specific selective attention. BMC Neurol 8: 35. Nagy, A., G. Eordegh, Z. Paroczy, Z. Markus, and G. Benedek. 2006. Multisensory integration in the basal ganglia. Eur J Neurosci 24: 917–924. Natale, E., C. A. Marzi, and E. Macaluso. 2009. FMRI correlates of visuo-spatial reorienting investigated with an attention shifting double-cue paradigm. Hum Brain Mapp 30: 2367–2381. Nobre, A. C., J. T. Coull, C. D. Frith, and M. M. Mesulam. 1999. Orbitofrontal cortex is activated during breaches of expectation in tasks of visual attention. Nat Neurosci 2: 11–12. Noesselt, T., J. W. Rieger, M. A. Schoenfeld, M. Kanowski, H. Hinrichs, H. J. Heinze et al. 2007. Audiovisual temporal correspondence modulates human multisensory superior temporal sulcus plus primary sensory cortices. J Neurosci 27: 11431–11441. Pessoa, L., S. Kastner, and L. G. Ungerleider. 2003. Neuroimaging studies of attention: From modulation of sensory processing to top-down control. J Neurosci 23: 3990–3998.
Posner, M. I. 1980. Orienting of attention. Q J Exp Psychol 32: 3–25. Posner, M. I., J. A. Walker, F. J. Friedrich, and R. D. Rafal. 1984. Effects of parietal injury on covert orienting of attention. J Neurosci 4: 1863–1874. Pouget, A., S. Deneve, and J. R. Duhamel. 2002. A computational perspective on the neural basis of multisensory spatial representations. Nat Rev Neurosci 3: 741–747. Rockland, K. S., and H. Ojima. 2003. Multisensory convergence in calcarine visual areas in macaque monkey. Int J Psychophysiol 50: 19–26. Romei, V., M. M. Murray, L. B. Merabet, and G. Thut. 2007. Occipital transcranial magnetic stimulation has opposing effects on visual and auditory stimulus detection: Implications for multisensory interactions. J Neurosci 27: 11465–11472. Ruff, C. C., F. Blankenburg, O. Bjoertomt, S. Bestmann, E. Freeman, J. D. Haynes et al. 2006. Concurrent TMS-fMRI and psychophysics reveal frontal influences on human retinotopic visual cortex. Curr Biol 16: 1479–1488. Saito, D. N., K. Yoshimura, T. Kochiyama, T. Okada, M. Honda, and N. Sadato. 2005. Cross-modal binding and activated attentional networks during audio-visual speech integration: A functional MRI study. Cereb Cortex 15: 1750–1760. Santangelo, V., M. O. Belardinelli, C. Spence, and E. Macaluso. 2009. Interactions between voluntary and stimulusdriven spatial attention mechanisms across sensory modalities. J Cogn Neurosci 21: 2384–2397. Saygin, A. P., and M. I. Sereno. 2008. Retinotopy and attention in human occipital, temporal, parietal, and frontal cortex. Cereb Cortex 18: 2158–2168. Sereno, M. I., and R. S. Huang. 2006. A human parietal face area contains aligned head-centered visual and tactile maps. Nat Neurosci 9: 1337–1343. Sereno, M. I., S. Pitzalis, and A. Martinez. 2001. Mapping of contralateral space in retinotopic coordinates by a parietal cortical area in humans. Science 294: 1350–1354. Shipp, S. 2004. The brain circuitry of attention. Trends Cogn Sci 8: 223–230. Shomstein, S., and S. Yantis. 2006. Parietal cortex mediates voluntary control of spatial and nonspatial auditory attention. J Neurosci 26: 435–439. Sommer, M. A., and R. H. Wurtz. 2000. Composition and topographic organization of signals sent from the frontal eye field to the superior colliculus. J Neurophysiol 83: 1979–2001. Soto-Faraco, S., and A. Alsius. 2009. Deconstructing the McGurk–MacDonald illusion. Exp Psychol Hum Percept Perform 35: 580–587. Spence, C., and Driver, J. 1996. Audiovisual links in endogenous covert spatial attention. J Exp Psychol Hum Percept Perform 22: 1005–1030. Spence, C., J. J. McDonald, and J. Driver. 2004. Exogenous spatial-cuing studies of human cross-modal attention and multisensory integration. In: Crossmodal space and crossmodal attention, ed. C. Spence and J. Driver, 277–320. Oxford: Oxford Univ. Press. Spence, C., M. E. Nicholls, N. Gillespie, and J. Driver. 1998. Cross-modal links in exogenous covert spatial orienting between touch, audition, and vision. Percept Psychophys 60: 544–557. Stein, B. E., and M. A. Meredith. 1993. The merging of the senses. Cambridge, MA: MIT Press. Stein, B. E., and T. R. Stanford. 2008. Multisensory integration: Current issues from the perspective of the single neuron. Nat Rev Neurosci 9: 255–266. Stephan, K. E., L. M. Harrison, S. J. Kiebel, O. David, W. D. Penny, and K. J. Friston. 2007. Dynamic causal models of neural system dynamics: Current state and future extensions. J Biosci 32: 129–144. Talsma, D., T. J. Doty, and M. G. Woldorff. 2007. 
Selective attention and audiovisual integration: Is attending to both modalities a prerequisite for early integration? Cereb Cortex 17: 679–690. Talsma, D., and M.G. Woldorff. 2005. Selective attention and multisensory integration: Multiple phases of effects on the evoked brain activity. J Cogn Neurosci 17: 1098–1114. Tajadura-Jiménez, A., N. Kitagawa, A. Väljamäe, M. Zampini, M. M. Murray, and C. Spence. 2009. Auditory– somatosensory multisensory interactions are spatially modulated by stimulated body surface and acoustic spectra. Neuropsychologia 47: 195–203. Teder-Salejarvi, W. A., F. Di Russo, J. J. McDonald, and S. A. Hillyard. 2005. Effects of spatial congruity on audio-visual multimodal integration. J Cogn Neurosci 17: 1396–1409. Teder-Salejarvi, W. A., T. F. Munte, F. Sperlich, and S. A. Hillyard. 1999. Intra-modal and cross-modal spatial attention to auditory and visual stimuli. An event-related brain potential study. Cogn Brain Res 8: 327–343. Tootell, R. B., M. S. Silverman, E. Switkes, and R. L. De Valois. 1982. Deoxyglucose analysis of retinotopic organization in primate striate cortex. Science 218: 902–904. Treisman, A. M., and G. Gelade. 1980. A feature-integration theory of attention. Cogn Psychol. 12: 97–136.
Trenner, M. U., H. R. Heekeren, M. Bauer, K. Rossner, R. Wenzel, A. Villringer et al. 2008. What happens in between? Human oscillatory brain activity related to crossmodal spatial cueing. PLoS ONE 3: 1467. van Atteveldt, N. M., E. Formisano, R. Goebel, and L. Blomert. 2007. Top-down task effects overrule automatic multisensory responses to letter–sound pairs in auditory association cortex. Neuroimage 36: 1345–1360. Vandenberghe, R., D. R. Gitelman, T. B. Parrish, and M. M. Mesulam. 2001. Functional specificity of superior parietal mediation of spatial shifting. Neuroimage 14: 661–673. Vossel, S., C. M. Thiel, and G. R. Fink. 2006. Cue validity modulates the neural correlates of covert endogenous orienting of attention in parietal and frontal cortex. Neuroimage 32: 1257–1264. Wallace, M. T., J. G. McHaffie, and B. E. Stein. 1997. Visual response properties and visuotopic representation in the newborn monkey superior colliculus. J Neurophysiol 78: 2732–2741. Wallace, M. T., and B. E. Stein, 1994. Cross-modal synthesis in the midbrain depends on input from cortex. J Neurophysiol 71: 429–432. Wilkinson, L. K., M. A. Meredith, and B. E. Stein. 1996. The role of anterior ectosylvian cortex in crossmodality orientation and approach behavior. Exp Brain Res 112: 1–10. Wu, C. T., D. H. Weissman, K. C. Roberts, and M. G. Woldorff. 2007. The neural circuitry underlying the executive control of auditory spatial attention. Brain Res 1134: 187–198. Yantis, S., J. Schwarzbach, J. T. Serences, R. L. Carlson, M. A. Steinmetz, J. J. Pekar et al. 2002. Transient neural activity in human parietal cortex during spatial attention shifts. Nat Neurosci 5: 995–1002. Zimmer, U., and E. Macaluso. 2007. Processing of multisensory spatial congruency can be dissociated from working memory and visuo-spatial attention. Eur J Neurosci 26: 1681–1691.
26 Cross-Modal Spatial Cueing of Attention Influences Visual Perception
John J. McDonald, Jessica J. Green, Viola S. Störmer, and Steven A. Hillyard
CONTENTS
26.1 Spatial Attention: Modality-Specific or Supramodal?
26.2 Involuntary Cross-Modal Spatial Attention Enhances Perceptual Sensitivity
26.3 Involuntary Cross-Modal Spatial Attention Modulates Time-Order Perception
26.4 Beyond Temporal Order: The Simultaneity Judgment Task
26.5 Involuntary Cross-Modal Spatial Attention Alters Appearance
26.6 Possible Mechanisms of Cross-Modal Cue Effects
26.7 Conclusions and Future Directions
References
26.1 SPATIAL ATTENTION: MODALITY-SPECIFIC OR SUPRAMODAL?
It has long been known that “looking out of the corner of one’s eye” can influence the processing of objects in the visual field. One of the first experimental demonstrations of this effect came from Hermann von Helmholtz, who, at the end of the nineteenth century, demonstrated that he could identify letters in a small region of a briefly illuminated display if he directed his attention covertly (i.e., without moving his eyes) toward that region in advance (Helmholtz 1866). Psychologists began to study this effect systematically in the 1970s using the spatial-cueing paradigm (Eriksen and Hoffman 1972; Posner 1978). Across a variety of speeded response tasks, orienting attention to a particular location in space was found to facilitate responses to visual targets that appeared at the cued location. Benefits in speeded visual performance were observed when attention was oriented voluntarily (endogenously, in a goal-driven manner) in response to a spatially predictive symbolic visual cue or involuntarily (exogenously, in a stimulus-driven manner) in response to a spatially nonpredictive peripheral visual cue such as a flash of light. For many years, the covert orienting of attention in visual space was seen as a special case, because initial attempts to find similar spatial cueing effects in the auditory modality did not succeed (e.g., Posner 1978). Likewise, in several early cross-modal cueing studies, voluntary and involuntary shifts of attention in response to visual cues were found to have no effect on the detection of subsequent auditory targets (for review, see Spence and McDonald 2004). Consequently, during the 1970s and 1980s (and to a lesser extent 1990s), the prevailing view was that location-based attentional selection was a modality-specific and predominantly visual process. Early neurophysiological and neuropsychological studies painted a different picture about the modality specificity of spatial attention. On the neurophysiological front, Hillyard and colleagues (1984) showed that sustaining attention at a predesignated location to the left or right of fixation modulates the event-related potentials (ERPs) elicited by stimuli in both task-relevant and task-irrelevant
modalities. Visual stimuli presented at the attended location elicited an enlarged negative ERP component over the anterior scalp 170 ms after stimulus onset, both when visual stimuli were relevant and when they were irrelevant. Similarly, auditory stimuli presented at the attended location elicited an enlarged negativity over the anterior scalp beginning 140 ms after stimulus onset, both when auditory stimuli were relevant and when they were irrelevant. Follow-up studies confirmed that spatial attention influences ERP components elicited by stimuli in an irrelevant modality when attention is sustained at a prespecified location over several minutes (Teder-Sälejärvi et al. 1999) or is cued on a trial-by-trial basis (Eimer and Schröger 1998). The results from these ERP studies indicate that spatial attention is not an entirely modality-specific process. On the neuropsychological front, Farah and colleagues (1989) showed that unilateral damage to the parietal lobe impairs reaction time (RT) performance in a spatial cueing task involving spatially nonpredictive auditory cues. Prior visual-cueing studies had shown that patients with damage to the right parietal lobe were substantially slower to detect visual targets appearing in the left visual field following a peripheral visual cue to the right visual field (invalid trials) than when attention was cued to the left (valid trials) or was cued to neither side (neutral trials) (Posner et al. 1982, 1984). This location-specific RT deficit was attributed to an impairment in the disengagement of attention, mainly because the patients appeared to have no difficulty in shifting attention to the contralesional field following a valid cue or neutral cue. In Farah et al.’s study, similar impairments in detecting contralesional visual targets were observed following either invalid auditory or visual cues presented to the ipsilesional side. On the basis of these results, Farah and colleagues concluded that sounds and lights automatically engage the same supramodal spatial attention mechanism. Given the neurophysiological and neuropsychological evidence in favor of a supramodal (or at least partially shared) spatial attention mechanism, why did several early behavioral studies appear to support the modality-specific view of spatial attention? These initial difficulties in showing spatial attention effects outside of the visual modality may be attributed largely to methodological factors, because some of the experimental designs that had been used successfully to study visual spatial attention were not ideal for studying auditory spatial attention. In particular, because sounds can be rapidly detected based on spectrotemporal features that are independent of a sound’s spatial location, simple detection measures that had shown spatial specificity in visual cueing tasks did not always work well for studying spatial attention within audition (e.g., Posner 1978). As researchers began to realize that auditory spatial attention effects might be contingent on the degree to which sound location is processed (Rhodes 1987), new spatial discrimination tasks were developed to ensure the use of spatial representations (McDonald and Ward 1999; Spence and Driver 1994). With these new tasks, researchers were able to document spatial cueing effects using all the various combinations of visual, auditory, and tactile cue and target stimuli. 
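The basic dependent measure in such spatial-cueing studies is the RT validity effect, that is, the difference between mean RT on invalidly cued and validly cued trials for a given cue–target modality pairing. The short Python sketch below illustrates this computation on hypothetical trial data; the field names and RT values are invented for illustration and are not taken from any of the studies cited here.

# Minimal sketch (hypothetical data): summarize a cross-modal spatial-cueing
# data set as the RT "validity effect" (invalid minus valid) for each
# cue-target modality pairing.
from statistics import mean

trials = [
    # (cue_modality, target_modality, cue_validity, reaction_time_ms)
    ("auditory", "visual", "valid", 312), ("auditory", "visual", "invalid", 347),
    ("visual", "auditory", "valid", 355), ("visual", "auditory", "invalid", 362),
    ("tactile", "visual", "valid", 298), ("tactile", "visual", "invalid", 330),
]

def validity_effect(trials, cue_mod, target_mod):
    """Mean RT on invalid-cue trials minus mean RT on valid-cue trials (ms).
    Positive values indicate faster responding at the cued location."""
    valid = [rt for c, t, v, rt in trials if (c, t, v) == (cue_mod, target_mod, "valid")]
    invalid = [rt for c, t, v, rt in trials if (c, t, v) == (cue_mod, target_mod, "invalid")]
    return mean(invalid) - mean(valid)

for cue_mod, target_mod in [("auditory", "visual"), ("visual", "auditory"), ("tactile", "visual")]:
    effect = validity_effect(trials, cue_mod, target_mod)
    print(f"{cue_mod} cue -> {target_mod} target: validity effect = {effect:.1f} ms")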
As reviewed elsewhere (e.g., Driver and Spence 2004), voluntary spatial cueing studies had begun to reveal a consistent picture by the mid 1990s: voluntarily orienting attention to a location facilitated the processing of subsequent targets regardless of the cue and target modalities. The picture that emerged from involuntary spatial cueing studies remained less clear because some of the spatial discrimination tasks that were developed failed to reveal cross-modal cueing effects (for detailed reviews of methodological issues, see Spence and McDonald 2004; Wright and Ward 2008). For example, using an elevation-discrimination task, Spence and Driver found an asymmetry in the involuntary spatial cueing effects between visual and auditory stimuli (Spence and Driver 1997). In their studies, spatially nonpredictive auditory cues facilitated responses to visual targets, but spatially nonpredictive visual cues failed to influence responses to auditory targets. For some time the absence of a visual–auditory cue effect weighed heavily on models of involuntary spatial attention. In particular, it was taken as evidence against a single supramodal attention system that mediated involuntary deployments of attention in multisensory space. However, researchers began to suspect that Spence and Driver’s (1997) missing audiovisual cue effect stemmed from the large spatial separation between cue and target, which existed even on validly (ipsilaterally) cued trials, and the different levels of precision with which auditory and visual stimuli can be localized.
Specifically, it was hypothesized that visual cues triggered shifts of attention that were focused too narrowly around the cued location to affect processing of a distant auditory target (Ward et al. 2000). Data from a recent study confirmed this narrow-focus explanation for the last remaining “missing link” in cross-modal spatial attention (Prime et al. 2008). Visual cues were found to facilitate responses to auditory targets that were presented at the cued location but not auditory targets that were presented 14° above or below the cued location (see also McDonald et al. 2001). The bulk of the evidence to date indicates that orienting attention involuntarily or voluntarily to a specific location in space can facilitate responding to subsequent targets, regardless of the modality of the cue and target stimuli. In principle, such cross-modal cue effects might reflect the consequences of a supramodal attention-control system that alters the perceptual representations of objects in different modalities (Farah et al. 1989). However, the majority of behavioral studies to date have examined the effects of spatial cues on RT performance, which is at best a very indirect measure of perceptual experience (Luce 1986; Watt 1991). Indeed, measures of response speed are inherently ambiguous in that RTs reflect the cumulative output of multiple stages of processing, including low-level sensory and intermediate perceptual stages, as well as later stages involved in making decisions and executing actions. In theory, spatial cueing could influence processing at any one of these stages. There is some evidence that the appearance of a spatial cue can alter an observer’s willingness to respond and reduce the uncertainty of his or her decisions without affecting perception (Shiu and Pashler 1994; Sperling and Dosher 1986). Other evidence suggests that whereas voluntary shifts of attention can affect perceptual processing, involuntary shifts of attention may not (Prinzmetal et al. 2005). In this chapter, we review studies that have extended the RT-based chronometric investigation of cross-modal spatial attention by utilizing psychophysical measures that better isolate perceptual-level processes. In addition, neurophysiological and neuroimaging methods have been combined with these psychophysical approaches to identify changes in neural activity that might underlie the cross-modal consequences of spatial attention on perception. These methods have also examined neural activity within the cue–target interval that might reflect supramodal (or modality-specific) control of spatial attention and subsequent anticipatory biasing of activity within sensory regions of the cortex.
26.2 INVOLUNTARY CROSS-MODAL SPATIAL ATTENTION ENHANCES PERCEPTUAL SENSITIVITY
The issue of whether attention affects perceptual or post-perceptual processing of external stimuli has been vigorously debated since the earliest dichotic listening experiments revealed that selective listening influenced auditory performance (Broadbent 1958; Cherry 1953; Deutsch and Deutsch 1963; Treisman and Geffen 1967). In the context of visual–spatial cueing experiments, the debate has focused on two general classes of mechanisms by which attention might influence visual performance (see Carrasco 2006; Lu and Dosher 1998; Luck et al. 1994, 1996; Smith and Ratcliff 2009; Prinzmetal et al. 2005). On one hand, attention might lead to a higher signal-to-noise ratio for stimuli at attended locations by enhancing their perceptual representations. On the other hand, attention might reduce the decision-level or response-level uncertainty without affecting perceptual processing. For example, spatial cueing might bias decisions about which location contains relevant stimulus information (the presumed signal) in favor of the cued location, thereby promoting a strategy to exclude stimulus information arising from uncued locations (the presumed noise; e.g., Shaw 1982, 1984; Shiu and Pashler 1994; Sperling and Dosher 1986). Such noise-reduction explanations account for the usual cueing effects (e.g., RT costs and benefits) without making assumptions about limited perceptual capacity. Several methods have been developed to discourage decision-level mechanisms so that any observable cue effect can be ascribed more convincingly to attentional selection at perceptual stages
of processing. One such method was used to investigate whether orienting attention involuntarily to a sudden sound influences perceptual-level processing of subsequent visual targets (McDonald et al. 2000). The design was adapted from earlier visual-cueing studies that eliminated location uncertainty by presenting a mask at a single location and requiring observers to indicate whether they saw a target at the masked location (Luck et al. 1994, 1996; see also Smith 2000). The mask serves a dual purpose in this paradigm: to ensure that the location of the target (if present) is known with complete certainty and to backwardly mask the target so as to limit the accrual and persistence of stimulus information at the relevant location. Under such conditions, it is possible to use methods of signal detection theory to obtain a measure of an observer’s perceptual sensitivity (d′)—the ability to discern a sensory event from background noise—that is independent of the observer’s decision strategy (which, in signal detection theory, is characterized by the response criterion, β; see Green and Swets 1966). Consistent with a perceptual-level explanation, McDonald and colleagues (2000) found that perceptual sensitivity was higher when the visual target appeared at the location of the auditory cue than when it appeared on the opposite side of fixation (Figure 26.1a and b). This effect was ascribed to an involuntary shift of attention to the cued location because the sound provided no information about the location of the impending target. Also, because there was no uncertainty about the target location, the effect could not be attributed to a reduction in location uncertainty. Consequently, the results provided strong evidence that shifting attention involuntarily to the location of a sound actually improves the perceptual quality of a subsequent visual event appearing at that location (see also Dufour 1999). An analogous effect on perceptual sensitivity has been reported in the converse audiovisual combination, when spatially nonpredictive visual cues were used to orient attention involuntarily before the onset of an 800-Hz target embedded in a white-noise mask (Soto-Faraco et al. 2002). Together, these results support the view that sounds and lights engage a common supramodal spatial attention system, which then modulates perceptual processing of relevant stimuli at the cued location (Farah et al. 1989). To investigate the neural processes by which orienting spatial attention to a sudden sound influences processing of a subsequent visual stimulus, McDonald and colleagues (2003) recorded ERPs in the signal-detection paradigm outlined above. ERPs to visual stimuli appearing at validly and invalidly cued locations began to diverge from one another at about 100 ms after stimulus onset, with the earliest phase of this difference being distributed over the midline central scalp (Figure 26.1c and d). After about 30–40 ms, this ERP difference between validly and invalidly cued visual stimuli shifted to midline parietal and lateral occipital scalp regions. A dipole source analysis indicated that the initial phase of this difference was generated in or near the multisensory region of the superior temporal sulcus (STS), whereas the later phase was generated in or near the fusiform gyrus of the occipital lobe (Figure 26.1e). 
This pattern of results suggests that enhanced visual perception produced by the cross-modal orienting of spatial attention may depend on feedback connections from the multisensory STS to the ventral stream of visual cortical areas. Similar cross-modal cue effects were observed when participants made speeded responses to the visual targets, but the earliest effect was delayed by 100 ms (McDonald and Ward 2000). This is in line with behavioral data suggesting that attentional selection might take place earlier when target detection accuracy (or fine perceptual discrimination; see subsequent sections) is emphasized than when speed of responding is emphasized (Prinzmetal et al. 2005).
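For readers unfamiliar with these signal-detection measures, the following Python sketch shows how perceptual sensitivity (d′) and a bias term can be computed from hit and false-alarm rates. The rates used here are hypothetical and are not the values reported by McDonald et al. (2000); bias is expressed as the criterion c, from which the likelihood-ratio criterion β mentioned above can be recovered as β = exp(c · d′).

# Minimal sketch of the signal-detection computation: d' and criterion from
# hit and false-alarm rates. The example rates for validly vs. invalidly cued
# targets are hypothetical, chosen only for illustration.
from math import exp
from statistics import NormalDist

def sdt_measures(hit_rate, fa_rate):
    z = NormalDist().inv_cdf                      # inverse cumulative normal (z-transform)
    d_prime = z(hit_rate) - z(fa_rate)            # perceptual sensitivity
    criterion = -0.5 * (z(hit_rate) + z(fa_rate)) # response criterion c
    beta = exp(criterion * d_prime)               # likelihood-ratio criterion
    return d_prime, criterion, beta

for label, hits, fas in [("validly cued", 0.82, 0.20), ("invalidly cued", 0.74, 0.20)]:
    d, c, b = sdt_measures(hits, fas)
    print(f"{label}: d' = {d:.2f}, c = {c:.2f}, beta = {b:.2f}")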
26.3 INVOLUNTARY CROSS-MODAL SPATIAL ATTENTION MODULATES TIME-ORDER PERCEPTION
The findings reviewed in the previous section provide compelling evidence that cross-modal attention influences the perceptual quality of visual stimuli. In the context of a spatial cueing experiment, perceptual enhancement at an early stage of processing could facilitate decision and response
FIGURE 26.1 Results from McDonald et al.’s (2000, 2003) signal detection experiments. (a) Schematic illustration of stimulus events on a valid-cue trial. Small light displays were fixed to bottoms of two loudspeaker cones, one situated to the left and right of a central fixation point. Each trial began with a spatially nonpredictive auditory cue from the left or right speaker (first panel), followed by a faint visual target on some trials (second panel) and a salient visual mask (third panel). Participants were required to indicate whether they saw the visual target. (b) Perceptual sensitivity data averaged across participants. (c) Grand-average event-related potentials (ERPs) to left visual field stimuli following valid and invalid auditory cues. The ERPs were recorded from lateral occipital electrodes PO7 and PO8. Negative voltages are plotted upward, by convention. Shaded box highlights interval of P1 and N1 components, in which cue effects emerged. (d) Scalp topographies of enhanced negative voltages to validly cued visual targets. (e) Projections of best-fitting dipolar sources onto sections of an individual participant’s MRI. Dipoles were located in superior temporal lobe (STS), fusiform gyrus (FG), and perisylvian cortex near post-central gyrus (PostC). PostC dipoles accounted for relatively late (200–300 ms) activity over more anterior scalp regions.
processing at later stages, thereby leading to faster responses for validly cued objects than for invalidly cued objects. Theoretically, however, changes in the timing of perceptual processing could also contribute to the cue effects on RT performance: an observer might become consciously aware of a target earlier in time when it appears at a cued location than when it appears at an uncued location. In fact, the idea that attention influences the timing of our perceptions is an old and controversial one. More than 100 years ago, Titchener (1908) asserted that when confronted with multiple objects, an observer becomes consciously aware of an attended object before other unattended objects. Titchener called the hypothesized temporal advantage for attended objects the law of prior entry.
Observations from laboratory experiments in the nineteenth and early twentieth centuries were interpreted along the lines of attention-induced prior entry. In one classical paradigm known as the complication experiment, observers were required to indicate the position of a moving pointer at the moment a sound was presented (e.g., Stevens 1904; Wundt 1874; for a review, see Boring 1929). When listening in anticipation for the auditory stimulus, observers typically indicated that the sound appeared when the pointer was at an earlier point along its trajectory than was actually the case. For example, observers might report that a sound appeared when a pointer was at position 4 even though the sound actually appeared when the pointer was at position 5. Early on, it was believed that paying attention to the auditory modality facilitated sound perception and led to a relative delay of visual perception, so that the pointer’s perceived position lagged behind its actual position. However, this explanation fell out of favor when later results indicated that a specific judgment strategy, rather than attention-induced prior entry, might be responsible for the mislocalization error (e.g., Cairney 1975). In more recent years, attention-induced prior entry has been tested experimentally in visual temporal-order judgment (TOJ) tasks that require observers to indicate which of two rapidly presented visual stimuli appeared first. When the attended and unattended stimuli appear simultaneously, observers typically report that the attended stimulus appeared to onset before the unattended stimulus (Stelmach and Herdman 1991; Shore et al. 2001). Moreover, in line with the supramodal view of spatial attention, such changes in temporal perception have been found when shifts in spatial attention were triggered by spatially nonpredictive auditory and tactile cues as well as visual cues (Shimojo et al. 1997). Despite the intriguing behavioral results from TOJ experiments, the controversy over attention-induced prior entry has continued. The main problem harks back to the debate over the complication experiments: an observer’s judgment strategy might contribute to the tendency to report the cued target as appearing first (Pashler 1998; Schneider and Bavelier 2003; Shore et al. 2001). Thus, in a standard TOJ task, observers might perceive two targets to appear simultaneously but still report seeing the target on the cued side first because of a decision rule that favors the cued target (e.g., when in doubt, select the cued target). Simple response biases (e.g., stimulus–response compatibility effects) can be avoided quite easily by altering the task (McDonald et al. 2005; Shore et al. 2001), but it is difficult to completely avoid the potential for response bias. As noted previously, ERP recordings can be used to distinguish between changes in high-level decision and response processes and changes in perceptual processing that could underlie entry to conscious awareness. An immediate challenge to this line of research is to specify the ways in which the perceived timing of external events might be associated with activity in the brain. Philosopher Daniel Dennett expounded two alternatives (Dennett 1991). On one hand, the perceived timing of external events may be derived from the timing of neural activities in relevant brain circuits. For example, the perceived temporal order of external events might be based on the timing of early cortical evoked potentials.
On the other hand, the brain might not represent the timing of perceptual events with time itself. In Dennett’s terminology, the represented time (e.g., A before B) is not necessarily related to the time of the representing (e.g., representing of A does not necessarily precede representing of B). Consequently, the perceived temporal order of external events might be based on nontemporal aspects of neural activities in relevant brain circuits. McDonald et al. (2005) investigated the effect of cross-modal spatial attention on visual time-order perception using ERPs to track the timing of cortical activity in a TOJ experiment. A spatially nonpredictive auditory cue was presented to the left or right side of fixation just before the occurrence of a pair of simultaneous or nearly simultaneous visual targets (Figure 26.2a). One of the visual targets was presented at the cued location, whereas the other was presented at the homologous location in the opposite visual hemifield. Consistent with previous behavioral studies, the auditory spatial cue had a considerable effect on visual TOJs (Figure 26.2b). Participants judged the cued target as appearing first on 79% of all simultaneous-target trials. To nullify this cross-modal cueing effect, the uncued target had to be presented nearly 70 ms before the cued target.
FIGURE 26.2 Results from McDonald et al.’s (2005) temporal-order-judgment experiment. (a) Schematic illustration of events on a simultaneous-target trial (top) and nonsimultaneous target trials (bottom). Participants indicated whether a red or a green target appeared first. SOA between cue and first target event was 100–300 ms, and SOA between nonsimultaneous targets was 35 or 70 ms. T1 and T2 denote times at which visual targets could occur. (b) Mean percentage of trials on which participants reported seeing the target on cued side first, as a function of cued-side onset advantage (CSOA; i.e., lead time). Negative CSOAs indicate that uncued-side target was presented first; positive CSOAs indicate that cued-side target was presented first. (c) Grand-average ERPs to simultaneous visual targets, averaged over 79% of trials on which participants indicated that cued-side target appeared first. ERPs were recorded at contralateral and ipsilateral occipital electrodes (PO7/PO8). Statistically significant differences between contralateral and ipsilateral waveforms are denoted in gray on time axis. (d) Scalp topographies of ERP waveforms in time range of P1 (90–120 ms). Left and right sides of the map show ipsilateral and contralateral electrodes, respectively. (e) Projections of best-fitting dipolar sources onto sections of an average MRI. Dipoles were located in superior temporal lobe (STS) and fusiform gyrus (FG). FG dipoles accounted for cue-induced P1 amplitude modulation, whereas STS dipoles accounted for a long-latency (200–250 ms) negative deflection.
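One common way to quantify a shift of this kind is to fit a psychometric function to the proportion of “cued side first” reports and take the CSOA at which the curve crosses 50% as the point of subjective simultaneity. The Python sketch below illustrates this approach with SciPy; the response proportions are hypothetical values loosely shaped like the curve in Figure 26.2b, not the published data, and the cumulative-Gaussian form is one conventional choice rather than necessarily the analysis used by the authors.

# Sketch (hypothetical data): fit a cumulative Gaussian to "cued side first"
# proportions as a function of cued-side onset advantage (CSOA) and read off
# the point of subjective simultaneity (PSS) as the 50% crossover.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

csoa_ms = np.array([-70, -35, 0, 35, 70])                 # negative: uncued target led
p_cued_first = np.array([0.48, 0.62, 0.79, 0.90, 0.96])   # hypothetical proportions

def psychometric(csoa, pss, slope_sd):
    # Cumulative Gaussian centered on the PSS; slope_sd controls its steepness.
    return norm.cdf(csoa, loc=pss, scale=slope_sd)

(pss, slope_sd), _ = curve_fit(psychometric, csoa_ms, p_cued_first, p0=(0.0, 40.0))
print(f"Estimated PSS: {pss:.1f} ms "
      f"(uncued target must lead by roughly {-pss:.0f} ms for subjective equality)")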
To elucidate the neural basis of this prior-entry effect, McDonald and colleagues (2005) examined the ERPs elicited by simultaneously presented visual targets following the auditory cue. The analytical approach taken was premised on the lateralized organization of the visual system and the pattern of ERP effects that have been observed under conditions of bilateral visual stimulation. Several previous studies on visual attention showed that directing attention to one side of a bilateral visual display results in a lateralized asymmetry of the early ERP components measured over the occipital scalp, with an increased positivity at electrode sites contralateral to the attended location beginning in the time range of the occipital P1 component (80–140 ms; Heinze et al. 1990, 1994; Luck et al. 1990; see also Fukuda and Vogel 2009). McDonald et al. (2005) hypothesized that if attention speeds neural transmission at early stages of the visual system, the early ERP components elicited by simultaneous visual targets would show an analogous lateral asymmetry in time, such that the P1 measured contralateral to the attended (cued) visual target would occur earlier than the P1 measured contralateral to the unattended (uncued) visual target. Such a finding would
be consistent with Stelmach and Herdman’s (1991) explanation of attention-induced prior entry as well as with the view that the time course of perceptual experience is tied to the timing of the early evoked activity in the visual cortex (Dennett 1991). Such a latency shift was not observed, however, even though the auditory cue had a considerable effect on the judgments of temporal order of the visual targets. Instead, cross-modal cueing led to an amplitude increase (with no change in latency) of the ERP positivity in the ventral visual cortex contralateral to the side of the auditory cue, starting in the latency range of the P1 component (90–120 ms) (Figure 26.2c–e). This finding suggests that the effect of spatial attention on the perception of temporal order occurs because an increase in the gain of the cued sensory input causes a perceptual threshold to be reached at an earlier time, not because the attended input was transmitted more rapidly than the unattended input at the earliest stages of processing. The pattern of ERP results obtained by McDonald and colleagues is likely an important clue for understanding the neural basis of visual prior entry due to involuntary deployments of spatial attention to sudden sounds. Although changes in ERP amplitude appear to underlie visual perceptual prior entry when attention is captured by lateralized auditory cues, changes in ERP timing might contribute to perceptual prior entry in other situations. This issue was addressed in a recent study of multisensory prior entry, in which participants voluntarily attended to either visual or tactile stimuli and judged whether the stimulus on the left or right appeared first, regardless of stimulus modality (Vibell et al. 2007). The ERP analysis centered on putatively visual ERP peaks over the posterior scalp (although ERPs to the tactile stimuli were not subtracted out and thus may have contaminated the ERP waveforms; cf. Talsma and Woldorff 2005). Interestingly, the P1 peaked at an average of 4 ms earlier when participants were attending to the visual modality than when they were attending to the tactile modality, suggesting that modality-based attentional selection may have a small effect on the timing of early evoked activity in the visual system. These latency results are not entirely clear, however, because the small-but-significant attention effect may have been caused by a single participant with an implausibly large latency difference (17 ms) and may have been influenced by overlap with the tactile ERP. Unfortunately, the authors did not report whether attention had a similar effect on the latency of the tactile ERPs, which may have helped to corroborate the small attention effect on P1 latency. Notwithstanding these potential problems in the ERP analysis, it is tempting to speculate that voluntary modality-based attentional selection influences the timing of early visual activity, whereas involuntary location-based attentional selection influences the gain of early visual activity. The question would still remain, however, of how very small changes in ERP latency (4 ms or less) could underlie much larger perceptual effects of tens of milliseconds.
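The gain account described above can be illustrated with a toy simulation: if attention multiplies the amplitude of a sensory response without changing its onset latency, a fixed perceptual threshold is nonetheless crossed earlier for the cued input. The sketch below is only a schematic illustration of that logic, not a model used in any of the studies discussed here; the response shape, gain factor, and threshold value are arbitrary.

# Toy simulation: a gain increase with unchanged onset latency still produces
# an earlier threshold crossing for the cued input. All parameters are
# hypothetical.
import numpy as np

t = np.arange(0, 300, 1.0)                  # time after target onset (ms)
base = np.clip((t - 60) / 140.0, 0, 1)      # ramping sensory response, onset at 60 ms
cued, uncued = 1.3 * base, 1.0 * base       # same latency, higher gain when cued
threshold = 0.5

def crossing_time(response, threshold, t):
    """Return the first time point at which the response exceeds the threshold."""
    return t[np.argmax(response >= threshold)]

print("cued input crosses threshold at", crossing_time(cued, threshold, t), "ms")
print("uncued input crosses threshold at", crossing_time(uncued, threshold, t), "ms")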
26.4 BEYOND TEMPORAL ORDER: THE SIMULTANEITY JUDGMENT TASK
Recently, Santangelo and Spence (2008) offered an alternative explanation for the finding of McDonald and colleagues (2005) that nonpredictive auditory spatial cues affect visual time-order perception. Specifically, the authors suggested that the behavioral results in McDonald et al.’s TOJ task were not due to changes in perception but rather to decision-level factors. They acknowledged that simple response biases (e.g., a left cue primes a “left” response) would not have contributed to the behavioral results because participants indicated the color, not the location, of the target that appeared first. However, Santangelo and Spence raised the concern that some form of “secondary” response bias might have contributed to the TOJ effects (Schneider and Bavelier 2003; Stelmach and Herdman 1991); this argument would also apply to the findings of Vibell et al.’s (2007) cross-modal TOJ study. For example, participants might have decided to select the stimulus at the cued location when uncertain as to which stimulus appeared first. In an attempt to circumvent such secondary response biases, Santangelo and Spence used a simultaneity judgment (SJ) task, in which participants had to judge whether two stimuli were presented simultaneously or successively (Carver and Brown 1997; Santangelo and Spence 2008; Schneider and Bavelier 2003). They reported that
the uncued target had to appear 15–17 ms before the cued target in order for participants to have the subjective impression that the two stimuli appeared simultaneously. This difference is referred to as a shift in the point of subjective simultaneity (PSS), and it is typically attributed to the covert orienting of attention (but see Schneider and Bavelier 2003, for an alternative sensory-based account). The estimated shift in PSS was much smaller than the one reported in McDonald et al.’s earlier TOJ task (17.4 vs. 68.5 ms), but the conclusions derived from the two findings were the same: Involuntary capture of spatial attention by a sudden sound influences the perceived timing of visual events. Santangelo and Spence went on to argue that the shift in PSS reported by McDonald et al. might have been due to secondary response biases and, as a result, the shift in PSS observed in their study provided “the first unequivocal empirical evidence in support of the effect of cross-modal attentional capture on the latencies of perceptual processing” (p. 163). Although the SJ task has its virtues, there are two main arguments against Santangelo and Spence’s conclusions. First, the authors did not take into account the neurophysiological findings of McDonald and colleagues’ ERP study. Most importantly, the effect of auditory spatial cuing on early ERP activity arising from sensory-specific regions of the ventral visual cortex cannot be explained in terms of response bias. Thus, although it may be difficult to rule out all higher-order response biases in a TOJ task, the ERP findings provide compelling evidence that cross-modal spatial attention modulates early visual-sensory processing. Moreover, although the SJ task may be less susceptible to some decision-level factors, it may be impossible to rule out all decision-level factors entirely as contributors to the PSS effect.* Thus, it is not inconceivable that Santangelo and Spence’s behavioral findings may have reflected post-perceptual rather than perceptual effects. Second, it should be noted that Santangelo and Spence’s results provided little, if any, empirical support for the conclusion that cross-modal spatial attention influences the timing of visual perceptual processing. The problem is that their estimated PSS did not accurately represent their empirical data. Their PSS measure was derived from the proportion of “simultaneous” responses, which varied as a function of the stimulus onset asynchrony (SOA) between the target on the cued side and the target on the uncued side. As shown in their Figure 2a, the proportion of “simultaneous” responses peaked when the cued and uncued targets appeared simultaneously (0 ms SOA) and decreased as the SOA between targets increased. The distribution of responses was fit to a Gaussian function using maximum likelihood estimation, and the mean of the fitted Gaussian function—not the observed data—was used as an estimate of the PSS. Critically, this procedure led to a mismatch between the mean of the fitted curve (or more aptly, the mean of the individual-subject fitted curves) and the mean of the observed data. Specifically, whereas the mean of the fitted curves fell slightly to the left of the 0-ms SOA (uncued target presented first), the mean of the observed data actually fell slightly to the right of the 0-ms SOA (cued target presented first) because of a positive skew of the distribution.† Does auditory cueing influence the subjective impression of simultaneity in the context of a SJ task? 
Unfortunately, the results from Santangelo and Spence’s study provide no clear answer to this question. The reported leftward shift in PSS suggests that the auditory cue had a small facilitatory effect on the perceived timing of the ipsilateral target. However, the rightward skew of the observed distribution (and the consequent rightward shift in the mean) suggests that the auditory cue may actually have delayed perception of the ipsilateral target. Finally, the mode of the observed distribution suggests that the auditory cue had no effect on subjective reports of simultaneity. These inconclusive results suggest that the SJ task may lack adequate sensitivity to detect shifts in perceived time order induced by cross-modal cueing.

* Whereas Santangelo and Spence (2008) made the strong claim that performance in SJ tasks should be completely independent of all response biases, Schneider and Bavelier (2003) argued only that performance in SJ tasks should be less susceptible to such decision-level effects than performance in TOJ tasks.
† The mismatch between the estimated PSS and the mean of the observed data in Santangelo and Spence’s (2008) SJ task might have been due to violations of the assumptions of the fitting procedure. Specifically, the maximum likelihood procedure assumes that the data are distributed normally, whereas the observed data were clearly skewed. Santangelo and Spence did perform one goodness-of-fit test to help determine whether the data differed significantly from the fitted Gaussians, but this test was insufficient to pick up the positive skew (note that other researchers have employed multiple goodness-of-fit tests before computing the PSS; e.g., Stone et al. 2001). Alternatively, the mismatch between the estimated PSS and the mean of the observed data might have arisen because data from the simultaneous-target trials were discarded prior to the curve-fitting procedure. This arbitrary step shifted the mode of the distribution 13 ms to the left (uncued target presented 13 ms before the cued target), which happened to be very close to the reported shift in PSS.
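To make the curve-fitting issue concrete, the following sketch fits a Gaussian to hypothetical simultaneity-judgment data and compares the mean of the fitted curve with the mean of the observed response distribution. It is not Santangelo and Spence's analysis code: the SOAs and response proportions are invented, and ordinary least-squares curve fitting stands in for their maximum likelihood procedure. The point is simply that, when the observed distribution is skewed, the PSS read off a fitted symmetric curve need not coincide with the mean (or the mode) of the raw data.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical SJ data: proportion of "simultaneous" responses at each SOA.
# Negative SOA = uncued target first; positive SOA = cued target first.
soa = np.array([-90.0, -60.0, -30.0, 0.0, 30.0, 60.0, 90.0])
p_simultaneous = np.array([0.10, 0.30, 0.65, 0.85, 0.70, 0.45, 0.20])  # skewed, invented values

def gaussian(x, amplitude, mean, sd):
    """Unnormalized Gaussian used as the curve fitted to the SJ data."""
    return amplitude * np.exp(-0.5 * ((x - mean) / sd) ** 2)

# Least-squares fit (a stand-in for the maximum likelihood fit described in the text).
params, _ = curve_fit(gaussian, soa, p_simultaneous, p0=[1.0, 0.0, 50.0])
fitted_pss = params[1]  # mean of the fitted Gaussian, taken as the PSS estimate

# Mean of the observed distribution, treating the response proportions as weights.
observed_mean = np.sum(soa * p_simultaneous) / np.sum(p_simultaneous)

print(f"PSS from fitted Gaussian:      {fitted_pss:+.1f} ms")
print(f"Mean of observed distribution: {observed_mean:+.1f} ms")
```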
26.5 INVOLUNTARY CROSS-MODAL SPATIAL ATTENTION ALTERS APPEARANCE

The findings of the signal-detection and TOJ studies outlined in previous sections support the view that involuntary cross-modal spatial attention alters the perception of subsequent visual stimuli as well as the gain of neural responses in extrastriate visual cortex 100–150 ms after stimulus onset. These results largely mirrored the effects of visual spatial cues on visual perceptual sensitivity (e.g., Luck et al. 1994; Smith 2000) and temporal perception (e.g., Stelmach and Herdman 1991; Shore et al. 2001). However, none of these studies directly addressed the question of whether attention alters the subjective appearance of objects that reach our senses. Does attention make white objects appear whiter and dark objects appear darker? Does it make the ticking of a clock sound louder? Psychologists have pondered questions like these for well over a century (e.g., Fechner 1882; Helmholtz 1866; James 1890).

Recently, Carrasco and colleagues (2004) introduced a psychophysical paradigm to address the question, “Does attention alter appearance?” The paradigm is similar to the TOJ paradigm except that, rather than varying the SOA between two visual targets and asking participants to judge which one was first (or last), the relative physical contrast of two targets is varied and participants are asked to judge which one is higher (or lower) in perceived contrast. In the original variant of the task, a small black dot was used to summon attention to the left or right just before the appearance of two Gabor patches at both left and right locations. When the physical contrasts of the two targets were similar or identical, observers tended to report the orientation of the target on the cued side. Based on these results, Carrasco and colleagues (2004) concluded that attention alters the subjective impression of contrast. In subsequent studies, visual cueing was found to alter the subjective impressions of several other stimulus features, including color saturation, spatial frequency, and motion coherence (for a review, see Carrasco 2006). Carrasco and colleagues performed several control experiments to help rule out alternative explanations for their psychophysical findings (Prinzmetal et al. 2008; Schneider and Komlos 2008). The results of these controls argued against low-level sensory factors (Ling and Carrasco 2007) as well as higher-level decision or response biases (Carrasco et al. 2004; Fuller et al. 2008). However, as we have discussed in previous sections, it is difficult to rule out all alternative explanations on the basis of the behavioral data alone. Moreover, results from different paradigms have led to different conclusions about whether attention alters appearance: whereas the results from Carrasco’s paradigm have indicated that attention does alter appearance, the results from an equality-judgment paradigm introduced by Schneider and Komlos (2008) have suggested that attention may alter decision processes rather than contrast appearance.

Störmer et al. (2009) recently investigated whether cross-modal spatial attention alters visual appearance. The visual cue was replaced by a spatially nonpredictive auditory cue delivered in stereo so that it appeared to emanate from a peripheral location of a visual display (25° from fixation). After a 150-ms SOA, two Gabors were presented, one at the cued location and one on the opposite side of fixation (Figure 26.3a).
The use of an auditory cue eliminated some potential sensory interactions between visual cue and target that might boost the contrast of the cued target even in the absence of attention (e.g., the contrast of a visual cue could add to the contrast of the cued-location target, thereby making it higher in contrast than the uncued-location target). As in Carrasco et al.’s (2004) high-contrast experiment, the contrast of one (standard) Gabor was set at 22%, whereas the
FIGURE 26.3 Results from Störmer et al.’s (2009) contrast-appearance experiment. (a) Stimulus sequence and grand-average ERPs to equal-contrast Gabor, recorded at occipital electrodes (PO7/PO8) contralateral and ipsilateral to cued side. On a short-SOA trial (depicted), a peripheral auditory cue was presented 150 ms before a bilateral pair of Gabors that varied in contrast (see text for details). Isolated target ERPs revealed an enlarged positivity contralateral to cued target. Statistically significant differences between contralateral and ipsilateral waveforms are denoted in gray on time axis. (b) Mean probability of reporting contrast of test patch to be higher than that of standard patch, as a function of test-patch contrast. Probabilities for cued-test and cued-standard trials are shown separately. (c) Scalp topographies of equal-contrast-Gabor ERPs in time interval of P1 (120–140 ms). Left and right sides of the map show electrodes ipsilateral and contralateral electrodes, respectively. (d) Localization of distributed cortical current sources underlying contralateral-minus-ipsilateral ERP positivity in 120–140 ms interval, projected onto cortical surface. View of the ventral surface, with occipital lobes at the top. Source activity was estimated using LAURA algorithm and is shown in contralateral hemisphere (right side of brain) only. (e) Correlations between individual participants’ tendencies to report the cued-side target to be higher in contrast and magnitude of enlarged ERP positivities recorded at occipital and parieto-occipital electrodes (PO7/PO8, PO3/PO4) in 120–140 ms interval.
contrast of the other (test) Gabor varied between 6% and 79%. ERPs were recorded on the trials (1/3 of the total) where the two Gabors were equal in contrast. Participants were required to indicate whether the higher-contrast Gabor patch was oriented horizontally or vertically. The psychophysical findings in this auditory cueing paradigm were consistent with those reported by Carrasco and colleagues (2004). When the test and standard Gabors had the same physical contrast, observers reported the orientation of the cued-location Gabor significantly more often than the uncued-location Gabor (55% vs. 45%) (Figure 26.3b). The point of subjective equality (PSE)—the
test contrast at which observers judged the test patch to be higher in contrast on half of the trials— averaged 20% when the test patch was cued and 25% when the standard patch was cued (in comparison with the 22% standard contrast; Figure 26.3a). These results indicate that spatially nonpredictive auditory cues as well as visual cues can influence subjective (visual) contrast judgments. To investigate whether the auditory cue altered visual appearance as opposed to a decision or response processes, Störmer and colleagues (2009) examined the ERPs elicited by the equal-contrast Gabors as a function of cue location. The authors reasoned that changes in subjective appearance would likely be linked to modulations of early ERP activity in visual cortex associated with perceptual processing rather than decision- or response-level processing (see also Schneider and Komlos 2008). Moreover, any such effect on early ERP activity should correlate with the observers’ tendencies to report the cued target as being higher in contrast. This is exactly what was found. Starting at approximately 90 ms after presentation of the equal-contrast targets, the waveform recorded contralaterally to the cued side became more positive than the waveform recorded ipsilaterally to the cued side (Figure 26.3a). This contralateral positivity was observed on those trials when observers judged the cued-location target to be higher in contrast but not when observers judged the uncuedlocation target to be higher in contrast. The tendency to report the cued-location target as being higher in contrast correlated with the contralateral ERP positivity, most strongly in the time interval of the P1 component (120–140 ms), which is generated at early stages of visual cortical processing. Topographical mapping and distributed source modeling indicated that the increased contralateral positivity in the P1 interval reflected modulations of neural activity in or near the fusiform gyrus of the occipital lobe (Figure 26.3c and d). These ERP findings converge with the behavioral evidence that cross-modal spatial attention affects visual appearance through modulations at an early sensory level rather than by affecting a late decision process.
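To illustrate how a PSE of this kind is typically extracted, the sketch below fits a cumulative Gaussian (over log contrast) to hypothetical contrast-comparison data and reads off the test contrast at which the test patch is judged higher in contrast on half of the trials. The response proportions and the choice of psychometric function are assumptions made for illustration; this is not Störmer et al.'s data or analysis code.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

test_contrast = np.array([6.0, 13.0, 22.0, 37.0, 78.0])         # % contrast of the test patch
p_higher_test_cued = np.array([0.08, 0.28, 0.55, 0.85, 0.99])   # invented: test patch cued
p_higher_std_cued = np.array([0.03, 0.15, 0.45, 0.78, 0.98])    # invented: standard patch cued

def psychometric(log_c, mu, sigma):
    """Cumulative Gaussian over log contrast."""
    return norm.cdf(log_c, loc=mu, scale=sigma)

def estimate_pse(contrast, p_test_higher):
    params, _ = curve_fit(psychometric, np.log(contrast), p_test_higher, p0=[np.log(22.0), 0.5])
    return np.exp(params[0])  # PSE = contrast at which the test patch is chosen on 50% of trials

print(f"PSE, test patch cued:     {estimate_pse(test_contrast, p_higher_test_cued):.1f}%")
print(f"PSE, standard patch cued: {estimate_pse(test_contrast, p_higher_std_cued):.1f}%")
# A PSE below the 22% standard when the test patch is cued, and above it when the standard
# patch is cued, would mirror the pattern of results described above.
```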
26.6 POSSIBLE MECHANISMS OF CROSS-MODAL CUE EFFECTS The previous sections have focused on the perceptual consequences of cross-modal spatial cueing. To sum up, salient-but-irrelevant sounds were found to enhance visual perceptual sensitivity, accelerate the timing of visual perceptions, and alter the appearance of visual stimuli. Each of these perceptual effects was accompanied by modulation of the early cortical response elicited by the visual stimulus within ventral-stream regions. Such findings are consistent with the hypothesis that auditory and visual stimuli engage a common neural network involved in the control and covert deployment of attention in space (Farah et al. 1989). Converging lines of evidence have pointed to the involvement of several key brain structures in the control and deployment of spatial attention in visual tasks. These brain regions include the superior colliculus, pulvinar nucleus of the thalamus, intraparietal sulcus, and dorsal premotor cortex (for additional details, see Corbetta and Shulman 2002; LaBerge 1995; Posner and Raichle 1994). Importantly, multisensory neurons have been found in each of these areas, which suggests that the neural network responsible for the covert deployment of attention in visual space may well control the deployment of attention in multisensory space (see Macaluso, this volume; Ward et al. 1998; Wright and Ward 2008). At present, however, there is no consensus as to whether a supramodal attention system is responsible for the cross-modal spatial cue effects outlined in the previous sections. Two different controversies have emerged. The first concerns whether a single, supramodal system controls the deployment of attention in multisensory space or whether separate, modality-specific systems direct attention to stimuli of their respective modalities. The latter view can account for cross-modal cueing effects by assuming that the activation of one system triggers coactivation of others. According to this separate-but-linked proposal, a shift of attention to an auditory location would lead to a separate shift of attention to the corresponding location of the visual field. Both the supramodal and separate-but-linked hypotheses can account for cross-modal cueing effects, making it difficult to distinguish between the two views in the absence of more direct measures of the neural activity that underlies attention control.
The second major controversy over the possible mechanisms of cross-modal cue effects is specific to studies utilizing salient-but-irrelevant stimuli to capture attention involuntarily. In these studies, the behavioral and neurophysiological effects of cueing are typically maximal when the cue appears 100–300 ms before the target. Although it is customary to attribute these facilitatory effects to the covert orienting of attention, they might alternatively result from sensory interactions between cue and target (Tassinari et al. 1994). The cross-modal-cueing paradigm eliminates unimodal sensory interactions, such as those taking place at the level of the retina, but the possibility of cross-modal sensory interaction remains because of the existence of multisensory neurons at many levels of the sensory pathways that respond to stimuli in different modalities (Driver and Noesselt 2008; Foxe and Schroeder 2005; Meredith and Stein 1996; Schroeder and Foxe 2005). In fact, the majority of multisensory neurons do not simply respond to stimuli in different modalities, but rather appear to integrate the input signals from different modalities so that their responses to multimodal stimulation differ quantitatively from the simple summation of their unimodal responses (for reviews, see Stein and Meredith 1993; Stein et al. 2009; other chapters in this volume). Such multisensory interactions are typically largest when stimuli from different modalities occur at about the same time, but they are possible over a period of several hundreds of milliseconds (Meredith et al. 1987). In light of these considerations, the cross-modal cueing effects described in previous sections could in principle have been due to the involuntary covert orienting of spatial attention or to the integration of cue and target into a single multisensory event (McDonald et al. 2001; Spence and McDonald 2004; Spence et al. 2004). Although it is often difficult to determine which of these mechanisms are responsible for crossmodal cueing effects, several factors can help to tip the scales in favor of one explanation or the other. One factor is the temporal relationship between the cue and target stimuli. A simple rule of thumb is that increasing the temporal overlap between the cue and target will make multisensory integration more likely and pre-target attentional biasing less likely (McDonald et al. 2001). Thus, it is relatively straightforward to attribute cross-modal cue effects to multisensory integration when cue and target are presented concurrently or to spatial attention when cue and target are separated by a long temporal gap. The likely cause of cross-modal cueing effects is not so clear, however, when there is a short gap between cue and target that is within the temporal window where integration is possible. In such situations, other considerations may help to disambiguate the causes of the cross-modal cueing effects. For example, multisensory integration is largely an automatic and invariant process, whereas stimulus-driven attention effects are dependent on an observer’s goals and intentions (i.e., attentional set; e.g., Folk et al. 1992). Thus, if cross-modal spatial cue effects were found to be contingent upon an observer’s current attentional set, they would be more likely to have been caused by pre-target attentional biasing. To our knowledge, there has been little discussion of the dependency of involuntary cross-modal spatial cueing effects on attentional set and other task-related factors (e.g., Ward et al. 2000). 
A second consideration that could help distinguish between alternative mechanisms of cross-modal cueing effects concerns the temporal sequence of control operations (Spence et al. 2004). According to the most prominent multisensory integration account, signals arising from stimuli in different modalities converge onto multimodal brain regions and are integrated therein. The resulting integrated signal is then fed back to the unimodal brain regions to influence processing of subsequent stimuli in modality-specific regions of cortex (Calvert et al. 2000; Macaluso et al. 2000). Critically, such an influence on modality-specific processing would occur only after feedforward convergence and integration of the unimodal signals takes place (Figure 26.4a). This contrasts with the supramodal-attention account, according to which the cue’s influence on modality-specific processing may be initiated before the target in another modality has been presented (i.e., before integration is possible). In the context of a peripheral cueing task, a cue in one modality (e.g., audition) would initiate a sequence of attentional control operations (such as disengage, move, reengage; see Posner and Raichle 1994) that would lead to anticipatory biasing of activity in another modality (e.g., vision) before the appearance of the target (Figure 26.4b). In other words, whereas
FIGURE 26.4 Hypothetical neural mechanisms for involuntary cross-modal spatial cueing effects. (a) Integration-based account. Nearly simultaneous auditory and visual stimuli first activate unimodal auditory and visual cortical regions and then converge upon a multisensory region (AV). Audiovisual interaction within multisensory region feeds back to boost activity in visual cortex. (b) Attention-based account. An auditory cue elicits a shift of spatial attention in a multisensory representation, which leads to pre-target biasing of activity in visual cortex and ultimately boosts target-related activity in visual cortex.
multisensory integration occurs only after stimulation in two (or more) modalities, the consequences of spatial attention are theoretically observable after stimulation in the cue modality alone. Thus, a careful examination of neural activity in the cue–target interval would help to ascertain whether pre-target attentional control is responsible for the cross-modal cueing effects on perception. This is a challenging task in the case of involuntary cross-modal cue effects, because the time interval between the cue and target is typically very short. In the future, however, researchers might successfully adapt the electrophysiological methods used to track the voluntary control of spatial attention (e.g., Doesburg et al. 2009; Eimer et al. 2002; Green and McDonald 2008; McDonald and Green 2008; Worden et al. 2000) to look for signs of attentional control in involuntary cross-modal cueing paradigms such as the ones described in this chapter.
26.7 CONCLUSIONS AND FUTURE DIRECTIONS To date, most of the research on spatial attention has considered how attending to a particular region of space influences the processing of objects within isolated sensory modalities. However, a growing number of studies have demonstrated that orienting attention to the location of a stimulus in one modality can influence the perception of subsequent stimuli in different modalities. As outlined here, recent cross-modal spatial cueing studies have shown that the occurrence of a nonpredictive auditory cue affects the way we see subsequent visual objects in several ways: (1) by improving the perceptual sensitivity for detection of masked visual stimuli appearing at the cued location, (2) by producing earlier perceptual awareness of visual stimuli appearing at the cued location, and (3) by altering the subjective appearance of visual stimuli appearing at the cued location. Each of these cross-modally induced changes in perceptual experience is accompanied by short-latency changes in the neural processing of targets within occipitotemporal cortex in the vicinity of the fusiform gyrus, which is generally considered to represent modality-specific cortex belonging to the ventral stream of visual processing. There is still much to be learned about these cross-modally induced changes in perception. One outstanding question is why spatial cueing appears to alter visual perception in tasks that focus on differences in temporal order or contrast (Carrasco et al. 2004; McDonald et al. 2005; Störmer et al. 2009) but not in tasks that focus on similarities (i.e., “same or not” judgments; Santangelo and Spence 2008; Schneider and Komlos 2008). Future studies could address this question by recording physiological measures (such as ERPs) in the two types of tasks. If an ERP component previously shown to correlate with perception were found to be elicited equally well under the two types of task instructions, it might be concluded that the same-or-not judgment lacks sensitivity to reveal perceptual effects. Another outstanding question is whether the cross-modal cueing effects reviewed in this chapter are caused by the covert orienting of attention or by passive intersensory interactions. Some insight may come from recent ERP studies of the “double flash” illusion produced by the interaction of a single flash with two pulsed sounds (Mishra et al. 2007, 2010). In these studies, an enhanced early ventral stream response at 100–130 ms was observed in association with the perceived extra flash. Importantly, this neural correlate of the illusory flash was sensitive to manipulations of spatial selective attention, suggesting that the illusion is not the result of automatic multisensory integration. Along these lines, it is tempting to conclude that the highly similar enhancement of early ventral-stream activity found in audiovisual cueing studies (McDonald et al. 2005; Störmer et al. 2009) also results from the covert deployment of attention rather than the automatic integration of cue and target stimuli. Future studies could address this issue by looking for electrophysiological signs of attentional control and anticipatory modulation of visual cortical activity before the onset of the target stimulus. A further challenge for future research will be to extend these studies to different combinations of sensory modalities to determine whether cross-modal cueing of spatial attention has analogous effects on the perception of auditory and somatosensory stimuli. 
Such findings would be consistent with the hypothesis that stimuli from the various spatial senses can all engage the same neural system that mediates the covert deployment of attention in multisensory space (Farah et al. 1989).
REFERENCES

Boring, E. G. 1929. A history of experimental psychology. New York: Appleton-Century.
Broadbent, D. E. 1958. Perception and communication. London: Pergamon Press.
Cairney, P. T. 1975. The complication experiment uncomplicated. Perception 4: 255–265.
Calvert, G. A., R. Campbell, and M. J. Brammer. 2000. Evidence from functional magnetic resonance imaging of crossmodal binding in the human heteromodal cortex. Current Biology 10: 649–657.
Carrasco, M. 2006. Covert attention increases contrast sensitivity: Psychophysical, neurophysiological, and neuroimaging studies. In Progress in Brain Research, Volume 154, Part 1: Visual Perception. Part I. Fundamentals of Vision: Low and Mid-level Processes in Perception, ed. S. Martinez-Conde, S. L. Macknik, L. M. Martinez, J. M. Alonso, and P. U. Tse, 33–70. Amsterdam: Elsevier. Carrasco, M., S. Ling, and S. Read. 2004. Attention alters appearance. Nature Neuroscience 7: 308–313. Carver, R. A., and V. Brown. 1997. Effects of amount of attention allocated to the location of visual stimulus pairs on perception of simultaneity. Perception & Psychophysics 59: 534–542. Cherry, C. E. 1953. Some experiments on the recognition of speech with one and two ears. Journal of the Acoustical Society of America 25: 975–979. Corbetta, M., and G. L. Shulman. 2002. Control of goal-directed and stimulus-driven attention in the brain. Nature Reviews Neuroscience 3: 201–215. Dennett, D. 1991. Consciousness explained. Boston: Little, Brown & Co. Deutsch, J. A., and D. Deutsch. 1963. Attention: Some theoretical considerations. Psychological Review 70: 80–90. Doesburg, S. M., J. J. Green, J. J. McDonald, and L. M. Ward. 2009. From local inhibition to long-range integration: A functional dissociation of alpha-band synchronization across cortical scales in visuospatial attention. Brain Research 1303: 97–110. Driver, J., and T. Noesselt. 2008. Multisensory interplay reveals crossmodal influences on ‘sensory-specific’ brain regions, neural responses, and judgments. Neuron 57: 11–23. Driver, J., and C. Spence. 2004. Crossmodal spatial attention: Evidence from human performance. In Crossmodal space and crossmodal attention, ed. C. Spence and J. Driver, 179–220. Oxford: Oxford Univ. Press. Dufour, A. 1999. Importance of attentional mechanisms in audiovisual links. Experimental Brain Research 126: 215–222. Eimer, M., J. van Velzen, and J. Driver. 2002. Cross-modal interactions between audition, touch, and vision in endogenous spatial attention: ERP evidence on preparatory states and sensory modulations. Journal of Cognitive Neuroscience 14: 254–271. Eimer, M., and E. Schröger. 1998. ERP effects of intermodal attention and cross-modal links in spatial attention. Psychophysiology 35: 313–327. Eriksen, C. W., and J. E. Hoffman. 1972. Temporal and spatial characteristics of selective encoding from visual displays. Perception & Psychophysics 12: 201–204. Farah, M. J., A. B. Wong, M. A. Monheit, and L. A. Morrow. 1989. Parietal lobe mechanisms of spatial attention—modality-specific or supramodal. Neuropsychologia 27: 461–470. Fechner, G. T. 1882. Revision der Hauptpunkte der Psychophysik. Leipzig: Breitkopf & Härtel. Folk, C. L., R. W. Remington, and J. C. Johnston. 1992. Involuntary covert orienting is contingent on attentional control settings. Journal of Experimental Psychology: Human Perception and Performance 18: 1030–1044. Foxe, J. J., and C. E. Schroeder. 2005. The case for feedforward multisensory convergence during early cortical processing. Neuroreport 16: 419–423. Fukuda, K., and E. K. Vogel, 2009. Human variation in overriding attentional capture. Journal of Neuroscience 29: 8726–8733. Fuller, S., R. Z. Rodriguez, and M. Carrasco. 2008. Apparent contrast differs across the vertical meridian: Visual and attentional factors. Journal of Vision 8: 1–16. Green, D. M., and J. A. Swets. 1966. Signal detection theory and psychophysics. New York: Wiley. Green, J. J., and J. J. McDonald. 2008. 
Electrical neuroimaging reveals timing of attentional control activity in human brain. PLoS Biology 6: e81. Heinze, H. J., G. R. Mangun, and S. A. Hillyard. 1990. Visual event-related potentials index perceptual accuracy during spatial attention to bilateral stimuli. In Psychophysiological Brain Research, ed. C. Brunia et al., 196–202. Tilburg, The Netherlands: Tilburg Univ. Press. Heinze, H. J., G. R. Mangun, W. Burchert et al. 1994. Combined spatial and temporal imaging of brain activity during visual selective attention in humans. Nature 372: 543–546. Helmholtz, H. V. 1866. Treatise on physiological optics, 3rd ed., Vols. 2 & 3. Rochester: Optical Society of America. Hillyard, S. A., G. V. Simpson, D. L. Woods, S. Vanvoorhis, and T. F. Münte. 1984. Event-related brain potentials and selective attention to different modalities. In Cortical Integration, ed. F. Reinoso-Suarez and C. Ajmone-Marsan, 395–414. New York: Raven Press. James, W. 1890. The principles of psychology. New York: Henry Holt. LaBerge, D. 1995. Attentional processing: The brain’s art of mindfulness. Cambridge, MA: Harvard Univ. Press.
Ling, S., and M. Carrasco. 2007. Transient covert attention does alter appearance: A reply to Schneider 2006. Perception & Psychophysics 69: 1051–1058. Lu, Z. L., and B. A. Dosher. 1998. External noise distinguishes attention mechanisms. Vision Research 38: 1183–1198. Luce, P. A. 1986. A computational analysis of uniqueness points in auditory word recognition. Perception & Psychophysics 39: 155–158. Luck, S. J., H. J. Heinze, G. R. Mangun, and S. A. Hillyard. 1990. Visual event-related potentials index focussed attention within bilateral stimulus arrays: II. Functional dissociation of P1 and N1 components. Electroencephalography and Clinical Neurophysiology 75: 528–542. Luck, S. J., S. A. Hillyard, M. Mouloua, and H. L. Hawkins. 1996. Mechanisms of visual–spatial attention: Resource allocation or uncertainty reduction? Journal of Experimental Psychology: Human Perception and Performance 22: 725–737. Luck, S. J., S. A. Hillyard, M. Mouloua, M. G. Woldorff, V. P. Clark, and H. L. Hawkins. 1994. Effects of spatial cuing on luminance detectability: Psychophysical and electrophysiological evidence for early selection. Journal of Experimental Psychology: Human Perception and Performance 20: 887–904. Macaluso, E., C. D. Frith, and J. Driver. 2000. Modulation of human visual cortex by crossmodal spatial attention. Science 289: 1206–1208. McDonald, J. J., and J. J. Green. 2008. Isolating event-related potential components associated with voluntary control of visuo-spatial attention. Brain Research 1227: 96–109. McDonald, J. J., W. A. Teder-Sälejärvi, F. Di Russo, and S. A. Hillyard. 2003. Neural substrates of perceptual enhancement by cross-modal spatial attention. Journal of Cognitive Neuroscience 15: 10–19. McDonald, J. J., W. A. Teder-Sälejärvi, F. Di Russo, and S. A. Hillyard. 2005. Neural basis of auditory-induced shifts in visual time-order perception. Nature Neuroscience 8: 1197–1202. McDonald, J. J., W. A. Teder-Sälejärvi, D. Heraldez, and S. A. Hillyard. 2001. Electrophysiological evidence for the “missing link” in crossmodal attention. Canadian Journal of Experimental Psychology 55: 141–149. McDonald, J. J., W. A. Teder-Sälejärvi, and S. A. Hillyard. 2000. Involuntary orienting to sound improves visual perception. Nature 407: 906–908. McDonald, J. J., and L. M. Ward. 1999. Spatial relevance determines facilitatory and inhibitory effects of auditory covert spatial orienting. Journal of Experimental Psychology: Human Perception and Performance 25: 1234–1252. McDonald, J. J., and L. M. Ward. 2000. Involuntary listening aids seeing: Evidence from human electrophysiology. Psychological Science 11: 167–171. Meredith, M. A., J. W. Nemitz, and B. E. Stein. 1987. Determinants of multisensory integration in superior colliculus neurons: 1. Temporal factors. Journal of Neuroscience 7: 3215–3229. Meredith, M. A., and B. E. Stein. 1996. Spatial determinants of multisensory integration in cat superior colliculus neurons. Journal of Neurophysiology 75: 1843–1857. Mishra, J., A. Martinez, T. J. Sejnowski, and S. A. Hillyard. 2007. Early cross-modal interactions in auditory and visual cortex underlie a sound-induced visual illusion. Journal of Neuroscience 27: 4120–4131. Mishra, J., A. Martinez, and S. A. Hillyard. 2010. Effect of attention on early cortical processes associated with the sound-induced extra flash illusion. Journal of Cognitive Neuroscience 22: 1714–1729. Pashler, H. E. 1998. The psychology of attention. Cambridge, MA: MIT Press. Posner, M. I. 1978. Chronometric explorations of mind. 
Hillsdale, NJ: Lawrence Erlbaum. Posner, M. I., Y. Cohen, and R. D. Rafal. 1982. Neural systems control of spatial orienting. Philosophical Transactions of the Royal Society of London Series B-Biological Sciences 298: 187–198. Posner, M. I., and M. E. Raichle 1994. Images of mind. New York: W. H. Freeman. Posner, M. I., J. A. Walker, F. J. Friedrich, and R. D. Rafal. 1984. Effects of parietal injury on covert orienting of attention. Journal of Neuroscience 4: 1863–1874. Prime, D. J., J. J. McDonald, J. Green, and L. M. Ward. 2008. When cross-modal spatial attention fails. Canadian Journal of Experimental Psychology 62: 192–197. Prinzmetal, W., V. Long, and J. Leonhardt. 2008. Involuntary attention and brightness contrast. Perception & Psychophysics 70: 1139–1150. Prinzmetal, W., C. McCool, and S. Park. 2005. Attention: Reaction time and accuracy reveal different mechanisms. Journal of Experimental Psychology: General 134: 73–92. Rhodes, G. 1987. Auditory attention and the representation of spatial information. Perception & Psychophysics 42: 1–14. Santangelo, V., and C. Spence. 2008. Crossmodal attentional capture in an unspeeded simultaneity judgement task. Visual Cognition 16: 155–165.
Schneider, K. A., and D. Bavelier. 2003. Components of visual prior entry. Cognitive Psychology 47: 333–366. Schneider, K. A., and M. Komlos. 2008. Attention biases decisions but does not alter appearance. Journal of Vision 8: 1–10. Schroeder, C. E., and J. Foxe. 2005. Multisensory contributions to low-level, ‘unisensory” processing. Current Opinion in Neurobiology 15: 454–458. Shaw, M. L. 1982. Attending to multiple sources of information: 1.The integration of information in decisionmaking. Cognitive Psychology 14: 353–409. Shaw, M. L. 1984. Division of attention among spatial locations: A fundamental difference between detection of letters and detection of luminance increments. In Attention and Performance X, ed. H. Bouma and D. G. Bouwhui, 109–121. Hillsdale, NJ: Erlbaum. Shimojo, S., S. Miyauchi, and O. Hikosaka. 1997. Visual motion sensation yielded by non-visually driven attention. Vision Research 37: 1575–1580. Shiu, L. P., and H. Pashler. 1994. Negligible effect of spatial precueing on identification of single digits. Journal of Experimental Psychology: Human Perception and Performance 20: 1037–1054. Shore, D. I., C. Spence, and R. M. Klein. 2001. Visual prior entry. Psychological Science 12: 205–212. Smith, P. L. 2000. Attention and luminance detection: Effects of cues, masks, and pedestals. Journal of Experimental Psychology: Human Perception and Performance 26: 1401–1420. Smith, P. L., and R. Ratcliff. 2009. An integrated theory of attention and decision making in visual signal detection. Psychological Review 116: 283–317. Soto-Faraco, S., J. McDonald, and A. Kingstone. 2002. Gaze direction: Effects on attentional orienting and crossmodal target responses. Poster presented at the annual meeting of the Cognitive Neuroscience Society, San Francisco, CA. Spence, C. J., and J. Driver. 1994. Covert spatial orienting in audition—exogenous and endogenous mechanisms. Journal of Experimental Psychology: Human Perception and Performance 20: 555–574. Spence, C., and J. Driver. 1997. Audiovisual links in exogenous covert spatial orienting. Perception & Psychophysics 59: 1–22. Spence, C., and J. J. McDonald. 2004. The crossmodal consequences of the exogenous spatial orienting of attention. In The handbook of multisensory processing, ed. G. A. Calvert, C. Spence, and B. E. Stein, 3–25. Cambridge, MA: MIT Press. Spence, C., J. J. McDonald, and J. Driver. 2004. Exogenous spatial cuing studies of human crossmodal attention and multisensory integration. In Crossmodal space and crossmodal attention, ed. C. Spence and J. Driver, 277–320. Oxford: Oxford Univ. Press. Sperling, G., and B. A. Dosher. 1986. Strategy and optimization in human information processing. In Handbook of Perception and Human Performance, ed. K. R. Boff, L. Kaufman, and J. P. Thomas, 1–65. New York: Wiley. Stein, B. E., and M. A. Meredith 1993. The merging of the senses. Cambridge, MA: MIT Press. Stein, B. E., T. R. Stanford, R. Ramachandran, T. J. Perrault, and B. A. Rowland. 2009. Challenges in quantifying multisensory integration: Alternative criteria, models, and inverse effectiveness. Experimental Brain Research 198: 113–126. Stelmach, L. B., and C. M. Herdman. 1991. Directed attention and perception of temporal-order. Journal of Experimental Psychology: Human Perception and Performance 17: 539–550. Stevens, H. C. 1904. A simple complication pendulum for qualitative work. American Journal of Psychology 15: 581. Stone, J. V., N. M. Hunkin, J. Porrill et al. 2001. When is now? Perception of simultaneity. 
Proceedings of the Royal Society of London Series B: Biological Sciences 268: 31–38. Störmer, V. S., J. J. McDonald, and S. A. Hillyard. 2009. Cross-modal cueing of attention alters appearance and early cortical processing of visual stimuli. PNAS 106: 22456–22461. Talsma, D., and M. G. Woldorff. 2005. Selective attention and multisensory integration: Multiple phases of effects on the evoked brain activity. Journal of Cognitive Neuroscience 17: 1098–1114. Tassinari, G., S. Aglioti, L. Chelazzi, A. Peru, and G. Berlucchi. 1994. Do peripheral non-informative cues induce early facilitation of target detection. Vision Research 34: 179–189. Teder-Sälejärvi, W. A., T. F. Münte, F. J. Sperlich, and S. A. Hillyard. 1999. Intra-modal and cross-modal spatial attention to auditory and visual stimuli. An event-related brain potential study. Cognitive Brain Research 8: 327–343. Titchener, E. N. 1908. Lectures on the elementary psychology of feeling and attention. New York: The MacMillan Company.
Treisman, A., and G. Geffen. 1967. Selective attention: Perception or response? Quarterly Journal of Experimental Psychology 19: 1–18. Vibell, J., C. Klinge, M. Zampini, C. Spence, and A. C. Nobre. 2007. Temporal order is coded temporally in the brain: Early event-related potential latency shifts underlying prior entry in a cross-modal temporal order judgment task. Journal of Cognitive Neuroscience 19: 109–120. Ward, L. M., J. J. McDonald, and N. Golestani. 1998. Cross-modal control of attention shifts. In Visual attention, ed. R. D. Wright, 232–268. New York: Oxford Univ. Press. Ward, L. M., J. J. McDonald, and D. Lin. 2000. On asymmetries in cross-modal spatial attention orienting. Perception & Psychophysics 62: 1258–1264. Watt, R. J. 1991. Understanding vision. San Diego, CA: Academic Press. Worden, M. S., J. J. Foxe, N. Wang, and G. V. Simpson. 2000. Anticipatory biasing of visuospatial attention indexed by retinotopically specific alpha-band electroencephalography increases over occipital cortex. Journal of Neuroscience 20 (RC63): 1–6. Wright, R. D., and L. M. Ward. 2008. Orienting of attention. New York: Oxford Univ. Press. Wundt, W. 1874. Grundzüge der physiologischen Psychologie [Foundations of physiological psychology]. Leipzig, Germany: Wilhelm Engelmann.
27

The Colavita Visual Dominance Effect

Charles Spence, Cesare Parise, and Yi-Chuan Chen

CONTENTS
27.1 Introduction
27.2 Basic Findings on Colavita Visual Dominance Effect
  27.2.1 Stimulus Intensity
  27.2.2 Stimulus Modality
  27.2.3 Stimulus Type
  27.2.4 Stimulus Position
  27.2.5 Bimodal Stimulus Probability
  27.2.6 Response Demands
  27.2.7 Attention
  27.2.8 Arousal
  27.2.9 Practice Effects
27.3 Interim Summary
27.4 Prior Entry and Colavita Visual Dominance Effect
27.5 Explaining the Colavita Visual Dominance Effect
  27.5.1 Accessory Stimulus Effects and Colavita Effect
  27.5.2 Perceptual and Decisional Contributions to Colavita Visual Dominance Effect
  27.5.3 Stimulus, (Perception), and Response?
27.6 Biased (or Integrated) Competition and Colavita Visual Dominance Effect
  27.6.1 Putative Neural Underpinnings of Modality-Based Biased Competition
  27.6.2 Clinical Extinction and Colavita Visual Dominance Effect
27.7 Conclusions and Questions for Future Research
  27.7.1 Modeling the Colavita Visual Dominance Effect
  27.7.2 Multisensory Facilitation versus Interference
References
27.1 INTRODUCTION

Visually dominant behavior has been observed in many different species, including birds, cows, dogs, and humans (e.g., Partan and Marler 1999; Posner et al. 1976; Uetake and Kudo 1994; Wilcoxin et al. 1971). This has led researchers to suggest that visual stimuli may constitute “prepotent” stimuli for certain classes of behavioral responses (see Colavita 1974; Foree and LoLordo 1973; LoLordo 1979; Meltzer and Masaki 1973; Shapiro et al. 1980). One particularly impressive example of vision’s dominance over audition (and more recently, touch) has come from research on the Colavita visual dominance effect (Colavita 1974). In the basic experimental paradigm, participants have to make speeded responses to a random series of auditory (or tactile), visual, and audiovisual (or visuotactile) targets, all presented at a clearly suprathreshold level. Participants are instructed to make one response whenever an auditory (or tactile) target is presented, another response whenever a visual target is presented, and to make both responses whenever the auditory (or tactile) and visual targets
are presented at the same time (i.e., on the bimodal target trials). Typically, the unimodal targets are presented more frequently than the bimodal targets (the ratio of 40% auditory—or tactile—targets, 40% visual targets, and 20% bimodal targets has often been used; e.g., Koppen and Spence 2007a, 2007b, 2007c). The striking result to have emerged from a number of studies on the Colavita effect is that although participants have no problem in responding rapidly and accurately to the unimodal targets, they often fail to respond to the auditory (or tactile) targets on the bimodal target trials (see Figure 27.1a and b). It is almost as if the simultaneous presentation of the visual target leads to the “extinction” of the participants’ perception of, and/or response to, the nonvisual target on a proportion of the bimodal trials (see Egeth and Sager 1977; Hartcher-O’Brien et al. 2008; Koppen et al. 2009; Koppen and Spence 2007c). Although the majority of research on the Colavita effect has focused on the pattern of errors made by participants in the bimodal target trials, it is worth noting that visual dominance can also show up in reaction time (RT) data. For example, Egeth and Sager (1977) reported that although participants responded more rapidly to unimodal auditory targets than to unimodal visual targets, this pattern of results was reversed on the bimodal target trials—that is, participants responded
FIGURE 27.1 Results of experiments conducted by Elcock and Spence (2009) highlighting a significant Colavita visual dominance effect over both audition (a) and touch (b). Values reported in the graphs refer to the percentage of bimodal target trials in which participants correctly made both responses, or else made either a visual-only or auditory- (tactile-) only response. The order in which the two experiments were performed was counterbalanced across participants. Nine participants (age, 18–22 years) completed 300 experimental trials (40% auditory; 40% visual, and 20% bimodal; plus 30 unimodal practice trials) in each experiment. In audiovisual experiment (a), auditory stimulus consisted of a 4000-Hz pure tone (presented at 63dB), visual stimulus consisted of illumination of loudspeaker cone by an LED (64.3 cd/m2). In the visuotactile experiment (b), the stimulus was presented to a finger on the participant’s left hand, and the visual target now consisted of illumination of the same finger. Thus, auditory, visual, and tactile stimuli were presented from exactly the same spatial location. Participants were given 2500 ms from the onset of the target in which to respond, and intertrial interval was set at 650 ms. The Colavita effect was significant in both cases, that is, participants in audiovisual experiment made 45% more visual-only than auditory-only responses, whereas participants in visuotactile experiment made 41% more visual-only than tactile-only responses. (c and d) Results from Elcock and Spence’s Experiment 3, in which they investigated the effects of caffeine (c) versus a placebo pill (d) on the audio visual Colavita visual dominance effect. The results show that participants made significantly more visual-only than auditory-only responses in both conditions (24% and 29% more, respectively), although there was no significant difference between the magnitude of Colavita visual dominance effect reported in two cases.
more rapidly to the visual targets than to the auditory targets. Note that Egeth and Sager made sure that their participants always responded to both the auditory and visual targets on the bimodal trials by presenting each target until the participant had made the relevant behavioral response.* A similar pattern of results in the RT data has also been reported in a number of other studies (e.g., Colavita 1974, 1982; Colavita and Weisberg 1979; Cooper 1998; Koppen and Spence 2007a; Sinnett et al. 2007; Zahn et al. 1994). In this article, we will focus mainly (although not exclusively) on the Colavita effect present in the error data (in line with the majority of published research on this phenomenon). We start by summarizing the basic findings to have emerged from studies of the Colavita visual dominance effect conducted over the past 35 years or so. By now, many different factors have been investigated in order to determine whether they influence the Colavita effect: Here, they are grouped into stimulusrelated factors (such as stimulus intensity, stimulus modality, stimulus type, stimulus position, and bimodal stimulus probability) and task/participant-related factors (such as attention, arousal, task/ response demands, and practice). A range of potential explanations for the Colavita effect are evaluated, and all are shown to be lacking. A new account of the Colavita visual dominance effect is therefore proposed, one that is based on the “biased competition” model put forward by Desimone and Duncan (1995; see also Duncan 1996; Peers et al. 2005). Although this model was initially developed in order to provide an explanation for the intramodal competition taking place between multiple visual object representations in both normal participants and clinical patients (suffering from extinction), here we propose that it can be extended to provide a helpful framework in which to understand what may be going on the Colavita visual dominance effect. In particular, we argue that a form of cross-modal biased competition can help to explain why participants respond to the visual stimulus while sometimes failing to respond to the nonvisual stimulus on the bimodal target trials in the Colavita paradigm. More generally, it is our hope that explaining the Colavita visual dominance effect may provide an important step toward understanding the mechanisms underlying multisensory interactions. First, though, we review the various factors that have been hypothesized to influence the Colavita visual dominance effect.
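To clarify how the error data are typically scored, the sketch below computes the Colavita visual dominance effect from response counts on the bimodal target trials, as the difference between the percentages of visual-only and auditory-only responses. The counts are invented for illustration; the 40/40/20 trial mix follows the design described above.

```python
from collections import Counter

# Hypothetical responses on the 60 bimodal trials of a 300-trial session
# (40% auditory, 40% visual, 20% bimodal targets).
bimodal_responses = Counter({"both": 35, "visual_only": 18, "auditory_only": 5, "neither": 2})
n_bimodal = sum(bimodal_responses.values())

pct = {resp: 100.0 * count / n_bimodal for resp, count in bimodal_responses.items()}
colavita_effect = pct["visual_only"] - pct["auditory_only"]

print(f"Visual-only errors:   {pct['visual_only']:.1f}%")
print(f"Auditory-only errors: {pct['auditory_only']:.1f}%")
print(f"Colavita effect:      {colavita_effect:.1f} percentage points")
```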
27.2 BASIC FINDINGS ON COLAVITA VISUAL DOMINANCE EFFECT 27.2.1 Stimulus Intensity The Colavita visual dominance effect occurs regardless of whether the auditory and visual stimuli are presented at the same (subjectively matched) intensity (e.g., Colavita 1974; Koppen et al. 2009; Zahn et al. 1994) or the auditory stimulus is presented at an intensity that is rated subjectively as being twice that of the visual stimulus (see Colavita 1974, Experiment 2). Hartcher-O’Brien et al. (2008; Experiment 4) have also shown that vision dominates over touch under conditions in which the intensity of the tactile stimulus is matched to that of the visual stimulus (presented at the 75% detection threshold). Taken together, these results suggest that the dominance of vision over both audition and touch in the Colavita paradigm cannot simply be attributed to any systematic differences in the relative intensity of the stimuli that have been presented to participants in previous studies (but see also Gregg and Brogden 1952; O’Connor and Hermelin 1963; Smith 1933).
* That is, the visual target was only turned off once the participants made a visual response, and the auditory target was only turned off when the participants made an auditory response. This contrasts with Colavita’s (1974) studies, in which a participant’s first response turned off all the stimuli, and with other more recent studies in which the targets were only presented briefly (i.e., for 50 ms; e.g., Koppen and Spence 2007a, 2007b, 2007c, 2007d).

27.2.2 Stimulus Modality

Although the majority of the research on the Colavita visual dominance effect has investigated the dominance of vision over audition, researchers have recently shown that vision also dominates over touch in normal participants (Hartcher-O’Brien et al. 2008, 2010; Hecht and Reiner 2009; see also Gallace et al. 2007). Costantini et al. (2007) have even reported that vision dominates over touch in extinction patients (regardless of whether the two stimuli were presented from the same position, or from different sides; see also Bender 1952). Interestingly, however, no clear pattern of sensory dominance has, as yet, been observed when participants respond to simultaneously presented auditory and tactile stimuli (see Hecht and Reiner 2009; Occelli et al. 2010; but see Bonneh et al. 2008, for a case study of an autistic child who exhibited auditory dominance over both touch and vision). Intriguingly, Hecht and Reiner (2009) have recently reported that vision no longer dominates when targets are presented in all three modalities (i.e., audition, vision, and touch) at the same time. In their study, the participants were given a separate button with which to respond to the targets in each modality, and had to press one, two, or three response keys depending on the combination of target modalities that happened to be presented on each trial. Whereas vision dominated over both audition and touch in the bimodal target trials, no clear pattern of dominance was shown on the trimodal target trials (see also Shapiro et al. 1984, Experiment 3). As yet, there is no obvious explanation for this result.
27.2.3 Stimulus Type The Colavita visual dominance effect has been reported for both onset and offset targets (Colavita and Weisberg 1979; see also Osborn et al. 1963). The effect occurs both with simple stimuli (i.e., tones, flashes of light, and brief taps on the skin) and also with more complex stimuli, including pictures of objects and realistic object sounds, and with auditory and visual speech stimuli (see Koppen et al. 2008; Sinnett et al. 2007, 2008). The Colavita effect not only occurs when the target stimuli are presented in isolation (i.e., in an otherwise dark and silent room), but also when they are embedded within a rapidly presented stream of auditory and visual distractors (Sinnett et al. 2007). Interestingly, however, the magnitude of the Colavita visual dominance effect does not seem to be affected by whether or not the auditory and visual targets on the bimodal trials are semantically congruent (see Koppen et al. 2008).
27.2.4 Stimulus Position Researchers have also considered what effect, if any, varying either the absolute and/or relative location from which the stimuli are presented might have on performance in the Colavita task. The Colavita visual dominance effect occurs both when the auditory stimuli are presented over headphones and when they are presented from an external loudspeaker placed in front of the participant (Colavita 1974, 1982). Researchers have demonstrated that it does not much matter whether the participants look in the direction of the visual or auditory stimulus or else fixate on some other intermediate location (see Colavita et al. 1976). Vision’s dominance over both audition and touch has also been shown to occur regardless of whether the stimuli are presented from the same spatial location or from different positions (one on either side of fixation), although the Colavita effect is somewhat larger in the former case (see Hartcher-O’Brien et al. 2008, 2010; Koppen and Spence 2007c). Taken together, these results therefore show that varying either the absolute position (e.g., presenting the stimuli from the center vs. in the periphery) or relative position (i.e., presenting the various stimuli from the same or different positions) from which the target stimuli are presented has, at most, a relatively modest impact on the magnitude of the Colavita visual dominance effect (see also Johnson and Shapiro 1989).
27.2.5 Bimodal Stimulus Probability As already noted, studies on the Colavita visual dominance effect usually present far fewer bimodal targets than unimodal targets. Nevertheless, researchers have shown that a robust Colavita visual
dominance effect can still be obtained if the probability of each type of target is equalized (i.e., when 33.3% auditory, 33.3% visual, and 33.3% bimodal targets are presented; see Koppen and Spence 2007a). Koppen and Spence (2007d) investigated the effect of varying the probability of bimodal target trials on the Colavita visual dominance effect (while keeping the relative proportion of unimodal auditory and visual target trials matched).* They found that although a significant Colavita effect was demonstrated whenever the bimodal targets were presented on 60% or less of the trials, vision no longer dominated when the bimodal targets were presented on 90% of the trials (see also Egeth and Sager 1974; Manly et al. 1999; Quinlan 2000). This result suggests that the Colavita effect is not caused by stimulus-related (i.e., sensory) factors, since these should not have been affected by any change in the probability of occurrence of bimodal targets (cf. Odgaard et al. 2003, 2004, on this point). Instead, the fact that the Colavita effect disappears if the bimodal targets are presented too frequently (i.e., on too high a proportion of the trials) would appear to suggest that response-related factors (linked to the probability of participants making bimodal target responses) are likely to play an important role in helping to explain the Colavita effect (see also Gorea and Sagi 2000).
27.2.6 Response Demands The majority of studies on the Colavita visual dominance effect have been conducted under conditions in which participants were given a separate response key with which to respond to the targets presented in each sensory modality. Normally, participants are instructed to respond to the (relatively infrequent) bimodal targets by pressing both response keys. Similar results have, however, now also been obtained under conditions in which the participants are given a separate response key with which to respond to the bimodal targets (Koppen and Spence 2007a; Sinnett et al. 2007). This result rules out the possibility that the Colavita effect is simply caused by participants having to make two responses at more or less the same time. Surprisingly, Colavita (1974; Experiment 4) showed that participants still made a majority of visual responses after having been explicitly instructed to respond to the bimodal targets by pressing the auditory response key instead. Koppen et al. (2008) have also reported that the Colavita effect occurs when participants are instructed to press one button whenever they either see or hear a dog, another button whenever they see or hear a cat, and to make both responses whenever a cat and a dog are presented at the same time. Under such conditions, the visual presentation of the picture of one of these animals resulted in participants failing to respond to the sound of the other animal (be it the woofing of the dog or the meowing of the cat) on 10% more of the trials than they failed to respond to the identity of the visually presented animal. Taken together, these results therefore confirm the fact that the Colavita visual dominance effect occurs under a variety of different task demands/response requirements (i.e., it occurs no matter whether participants respond to the sensory modality or semantic identity of the target stimuli).
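To make the dependent measure concrete, the sketch below shows one way in which responses on the bimodal trials of a two-key Colavita task might be scored; the trial records and the resulting percentages are invented purely for illustration and are not taken from any of the studies cited above.

```python
# Minimal sketch (not any study's actual analysis code) of how bimodal-trial
# responses in a two-key Colavita task are typically scored. Each record
# lists which keys the participant pressed on one (hypothetical) bimodal trial.
from collections import Counter

bimodal_trials = [
    {"visual"}, {"visual", "auditory"}, {"visual"}, {"auditory"},
    {"visual", "auditory"}, {"visual"}, {"visual", "auditory"}, set(),
]

counts = Counter(
    "both" if t == {"visual", "auditory"} else
    "vision_only" if t == {"visual"} else
    "auditory_only" if t == {"auditory"} else "miss"
    for t in bimodal_trials
)

n = len(bimodal_trials)
vision_only = 100 * counts["vision_only"] / n
auditory_only = 100 * counts["auditory_only"] / n

# The Colavita visual dominance effect is usually expressed as the excess of
# vision-only over auditory-only errors on the bimodal trials.
print(f"vision-only errors:   {vision_only:.1f}%")
print(f"auditory-only errors: {auditory_only:.1f}%")
print(f"Colavita effect:      {vision_only - auditory_only:.1f} percentage points")
```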
27.2.7 Attention Originally, researchers thought that the Colavita visual dominance effect might simply reflect a predisposition by participants to direct their attention preferentially toward the visual modality (Colavita 1974; Posner et al. 1976). Posner et al.’s idea was that people endogenously (or voluntarily) directed their attention toward the visual modality in order to make up for the fact that visual stimuli are generally less alerting than stimuli presented in the other modalities (but see Spence et al. 2001b, footnote 5). Contrary to this suggestion, however, a number of more recent studies have actually * Note that researchers have also manipulated the relative probability of unimodal auditory and visual targets (see Egeth and Sager 1977; Quinlan 2000; Sinnett et al. 2007). However, since such probability manipulations have typically been introduced in the context of trying to shift the focus of a participant’s attention between the auditory and visual modalities, they will be discussed later (see Section 27.2.7).
shown that although the manipulation of a person’s endogenous attention can certainly modulate the extent to which vision dominates over audition, it cannot in and of itself be used to reverse the Colavita effect. That is, even when a participant’s attention is directed toward the auditory modality (i.e., by verbally instructing them to attend to audition or by presenting unimodal auditory targets much more frequently than unimodal visual targets), people still exhibit either visually dominant behavior or else their behavior shows no clear pattern of dominance (see Koppen and Spence 2007a, 2007d; Sinnett et al. 2007). These results therefore demonstrate that any predisposition that participants might have to direct their attention voluntarily (or endogenously) toward the visual modality cannot explain why vision always seems to dominate in the Colavita visual dominance effect.

De Reuck and Spence (2009) recently investigated whether varying the modality of a secondary task would have any effect on the magnitude of the Colavita visual dominance effect. To this end, a video game (“Food boy” by T3Software) and a concurrent auditory speech stream (consisting of pairs of auditory words delivered via a central loudspeaker) were presented in the background while participants performed the two-response version of the Colavita task (i.e., pressing one key in response to auditory targets, another key in response to visual targets, and both response keys on the bimodal target trials; the auditory target in this study consisted of a 4000-Hz pure tone presented from a loudspeaker cone placed in front of the computer screen, whereas the visual target consisted of the illumination of a red light-emitting diode (LED), also mounted in front of the computer screen). In the condition involving the secondary visual task, the participants performed the Colavita task with their right hand while playing the video game with their left hand (note that the auditory distracting speech streams were presented in the background, although they were irrelevant in this condition and so could be ignored). The participants played the video game using a computer mouse to control a character moving across the bottom of the computer screen. The participants had to “swallow” as much of the food dropping from the top of the screen as possible, while avoiding any bombs that happened to fall. In the part of the study involving an auditory secondary task, the video game was run in the demonstration mode to provide equivalent background visual stimulation to the participants, who now had to respond by pressing a button with their left hand whenever they heard an animal name in the auditory stream. The results showed that the modality of the secondary task (auditory or visual) did not modulate the magnitude of the Colavita visual dominance effect significantly; that is, the participants failed to respond to a similar number of the auditory stimuli regardless of whether they were performing a secondary task that primarily involved participants having to attend to the auditory or visual modality. De Reuck and Spence’s (2009) results therefore suggest that the Colavita visual dominance effect may be insensitive to manipulations of participants’ attention toward either the auditory or visual modality that are achieved by varying the requirements of a simultaneously performed secondary task (see Spence and Soto-Faraco 2009).
Finally, Koppen and Spence (2007a) have shown that exogenously directing a participant’s attention toward either the auditory or visual modality via the presentation of a task-irrelevant nonpredictive auditory or visual cue 200 ms before the onset of the target (see Rodway 2005; Spence et al. 2001a; Turatto et al. 2002) has only a marginal effect on the magnitude of vision’s dominance over audition (see also Golob et al. 2001). Taken together, the results reported in this section therefore highlight the fact that although attentional manipulations (be they exogenous or endogenous) can sometimes be used to modulate, or even to eliminate, the Colavita visual dominance effect, they cannot be used to reverse it.
27.2.8 Arousal Early animal research suggested that many examples of visual dominance could be reversed under conditions in which an animal was placed in a highly aroused state (i.e., when, for example, fearful of the imminent presentation of an electric shock; see Foree and LoLordo 1973; LoLordo and
Furrow 1976; Randich et al. 1978). It has been reported that although visual stimuli tend to control appetitive behaviors, auditory stimuli tend to control avoidance behaviors in many species. Shapiro et al. (1984) extended the idea that changes in the level of an organism’s arousal might change the pattern of sensory dominance in the Colavita task to human participants (see also Johnson and Shapiro 1989; Shapiro and Johnson 1987). They demonstrated what looked like auditory dominance (i.e., participants making more auditory-only than visual-only responses in the Colavita task) under conditions in which their participants were aversively motivated (by the occurrence of electric shock, or to a lesser extent by the threat of electric shock, or tactile stimulation, presented after the participants’ response on a random 20% of the trials). It should, however, be noted that no independent measure of the change in a participant’s level of arousal (i.e., such as a change in their galvanic skin response) was provided in this study. What is more, Shapiro et al.’s (1984) participants were explicitly told to respond to the stimulus that they perceived first on the bimodal target trials, that is, the participants effectively had to perform a temporal order judgment (TOJ) task. What this means in practice is that their results (and those from the study of Shapiro and Johnson (1987) and Johnson and Shapiro (1989), in which similar instructions were given) may actually reflect the effects of arousal on “prior entry” (see Spence 2010; Van Damme et al. 2009b), rather than, as the authors argued, the effects of arousal on the Colavita visual dominance effect. Indeed, the latest research has demonstrated that increased arousal can lead to the prior entry of certain classes of stimuli over others (when assessed by means of a participant’s responses on a TOJ task; Van Damme et al. 2009b). In Van Damme et al.’s study, auditory and tactile stimuli delivered from close to one of the participant’s hands were prioritized when an arousing picture showing physical threat to a person’s bodily tissues was briefly flashed beforehand from the same (rather than opposite) location. Meanwhile, Van Damme et al. (2009a) have shown that, when participants are instructed to respond to both of the stimuli in the bimodal trials, rather than just to the stimulus that the participant happens to have perceived first, the effects of arousal on the Colavita visual dominance effect are far less clear-cut (we return later to the question of what role, if any, prior entry plays in the Colavita visual dominance effect). Elcock and Spence (2009) recently investigated the consequences for the Colavita effect of pharmacologically modulating the participants’ level of arousal by administering caffeine. Caffeine is known to increase arousal and hence, given Shapiro et al.’s (1984) research, ingesting caffeine might be expected to modulate the magnitude of the Colavita visual dominance effect (Smith et al. 1992).* To this end, 15 healthy participants were tested in a within-participants, double-blind study, in which a 200-mg caffeine tablet (equivalent to drinking about two cups of coffee) was taken 40 min before one session of the Colavita task and a visually identical placebo pill was taken before the other session (note that the participants were instructed to refrain from consuming any caffeine in the morning before taking part in the study). 
The Colavita visual dominance effect was unaffected by whether the participants had ingested the caffeine tablet or the placebo (see Figure 27.1c and d). Taken together, the results reported in this section would therefore appear to suggest that, contrary to Shapiro et al.’s early claim, the magnitude of the Colavita visual dominance effect is not affected by changes in a participant’s level of arousal.
27.2.9 Practice Effects The largest Colavita visual dominance effects have been reported in studies in which only a small number of bimodal target trials were presented. In fact, by far the largest effects on record were reported by Frank B. Colavita himself in his early research (see Koppen and Spence 2007a, Table 1, * Caffeine is a stimulant that accelerates physiological activity, and results in the release of adrenaline and the increased production of the neurotransmitter dopamine. Caffeine also interferes with the operation of another neurotransmitter: adenosine (Smith 2002; Zwyghuizen-Doorenbos et al. 1990).
FIGURE 27.2 Graph highlighting the results of Koppen and Spence’s (2007b) study of the Colavita effect in which auditory and visual targets on bimodal target trials could be presented at any one of 10 SOAs. Errors (% of bimodal trials) are plotted against SOA (ms), from 600 ms audition-leading to 600 ms vision-leading, separately for auditory-only and visual-only responses. Although a significant visual dominance effect was observed at a majority of asynchronies around objective simultaneity, a significant auditory dominance effect was only observed at the largest auditory-leading asynchrony. Shaded gray band in the center of the graph represents the temporal window of audiovisual integration. Shaded areas containing the ear and the eye schematically highlight SOAs at which auditory and visual dominance, respectively, were observed. Note though (see text on this point) that differences between the proportion of auditory-only and visual-only responses only reached statistical significance at certain SOAs (that said, the trend in the data is clear). The error bars represent standard errors of means.
for a review). In these studies, each participant was only ever presented with a maximum of five or six bimodal targets (see Colavita 1974, 1982; Colavita et al. 1976; Colavita and Weisberg 1979). Contrast this with the smaller Colavita effects that have been reported in more recent research, where as many as 120 bimodal targets were presented to each participant (e.g., Hartcher-O’Brien et al. 2008; Koppen et al. 2008; Koppen and Spence 2007a, 2007c). This observation leads on to the suggestion that the Colavita visual dominance effect may be more pronounced early on in the experimental session (see also Kristofferson 1965).* That said, significant Colavita visual dominance effects have nevertheless still been observed in numerous studies where participants’ performance has been averaged over many hundreds of trials. Here, it may also be worth considering whether any reduction in the Colavita effect resulting from increasing the probability of (and/or practice with responding to) bimodal stimuli may also be related to the phenomenon of response coupling (see Ulrich and Miller 2008). That is, the more often two independent target stimuli happen to be presented at exactly the same time, the more likely it is that the participant will start to couple (i.e., program) their responses to the two stimuli together. In the only study (as far as we are aware) to have provided evidence relevant to the question of the consequence of practice on the Colavita visual dominance effect, the vigilance performance of a group of participants was assessed over a 3-h period (Osborn et al. 1963). The participants in this study had to monitor a light and sound source continuously for the occasional (once every 2½ min) brief (i.e., lasting only 41 ms) offset of either or both of the stimuli. The participants were instructed to press one button whenever the light was extinguished and another button whenever the sound was interrupted. The results showed that although participants failed to respond to more of the auditory than visual targets during the first 30-min session (thus showing a typical Colavita visual dominance effect), this pattern of results reversed in the final four 30-min sessions (i.e., participants made * Note that if practice were found to reduce the magnitude of the Colavita visual dominance effect, then this might provide an explanation for why increasing the probability of occurrence of bimodal target trials up to 90% in Koppen and Spence’s (2007d) study has been shown to eliminate the Colavita effect (see Section 27.2.5). Alternatively, however, increasing the prevalence (or number) of bimodal targets might also lead to the increased coupling of a participants’ responses on the bimodal trials (see main text for further details; Ulrich and Miller 2008).
more auditory-only than visual-only responses on the bimodal target trials; see Osborn et al. 1963; Figure 27.2). It is, however, unclear whether these results necessarily reflect the effects of practice on the Colavita visual dominance effect, or whether instead they may simply highlight the effects of fatigue or boredom after the participants had spent several hours on the task (given that auditory events are more likely to be responded to than visual events should the participants temporarily look away or else close their eyes).
27.3 INTERIM SUMMARY To summarize, the latest research has confirmed the fact that the Colavita visual dominance effect is a robust empirical phenomenon. The basic Colavita effect—defined here in terms of participants failing to respond to the nonvisual stimulus more often than they fail to respond to the visual stimulus on the bimodal audiovisual or visuotactile target trials—has now been replicated in many different studies, and by a number of different research groups (although it is worth noting that the magnitude of the effect has fluctuated markedly from one study to the next). That said, the Colavita effect appears to be robust to a variety of different experimental manipulations (e.g., of stimulus intensity, stimulus type, stimulus position, response demands, attention, arousal, etc.). Interestingly, though, while many experimental manipulations have been shown to modulate the size of the Colavita visual dominance effect, and a few studies have even been able to eliminate it entirely, only two of the studies discussed thus far have provided suggestive evidence regarding a reversal of the Colavita effect in humans (i.e., evidence that is consistent with, although not necessarily providing strong support for, auditory dominance; see Osborn et al. 1963; Shapiro et al. 1984). Having reviewed the majority of the published research on the Colavita visual dominance effect, and having ruled out accounts of the effect in terms of people having a predisposition to attend endogenously to the visual modality (see Posner et al. 1976), differences in stimulus intensity (Colavita 1974), and/or difficulties associated with participants having to make two responses at the same time on the bimodal target trials (Koppen and Spence 2007a), how should the effect be explained? Well, researchers have recently been investigating whether the Colavita effect can be accounted for, at least in part, by the prior entry of the visual stimulus to participants’ awareness (see Spence 2010; Spence et al. 2001; Titchener 1908). It is to this research that we now turn.
27.4 PRIOR ENTRY AND COLAVITA VISUAL DOMINANCE EFFECT Koppen and Spence (2007b) investigated whether the Colavita effect might result from the prior entry of the visual stimulus into participants’ awareness on some proportion of the bimodal target trials. That is, even though the auditory and visual stimuli were presented simultaneously in the majority of published studies of the Colavita effect, research elsewhere has shown that a visual stimulus may be perceived first under such conditions (see Rutschmann and Link 1964). In order to evaluate the prior entry account of the Colavita visual dominance effect, Koppen and Spence assessed participants’ perception of the temporal order of pairs of auditory and visual stimuli that had been used in another part of the study to demonstrate the typical Colavita visual dominance effect.* Psychophysical analysis of participants’ TOJ performance showed that when the auditory and visual stimuli were presented simultaneously, participants actually judged the auditory stimulus to have been presented slightly, although not significantly, ahead of the visual stimulus (i.e., contrary to what would have been predicted according to the prior entry account; but see Exner 1875 and Hirsh and Sherrick 1961, for similar results; see also Jaśkowski 1996, 1999; Jaśkowski et al. 1990). * Note the importance of using the same stimuli within the same pool of participants, given the large individual differences in the perception of audiovisual simultaneity that have been reported previously (Smith 1933; Spence 2010; Stone et al. 2001).
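For readers unfamiliar with how such TOJ data are analyzed, the sketch below illustrates the standard approach of fitting a cumulative Gaussian to the proportion of “vision first” judgments as a function of SOA in order to estimate the point of subjective simultaneity (PSS) and a just noticeable difference (JND); the SOAs and response proportions are hypothetical and are not Koppen and Spence’s data.

```python
# Illustrative sketch (hypothetical data) of estimating the PSS from TOJ
# responses by fitting a cumulative Gaussian to p("vision first") at each SOA.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# Negative SOAs: audition leads; positive SOAs: vision leads (ms).
soa = np.array([-200, -100, -50, -25, 0, 25, 50, 100, 200], dtype=float)
p_vision_first = np.array([0.05, 0.15, 0.30, 0.42, 0.55, 0.68, 0.80, 0.92, 0.98])

def cum_gauss(x, pss, sigma):
    """Probability of a 'vision first' response as a function of SOA."""
    return norm.cdf(x, loc=pss, scale=sigma)

(pss, sigma), _ = curve_fit(cum_gauss, soa, p_vision_first, p0=(0.0, 50.0))

# A common JND definition: the SOA shift from the PSS to the 75% point.
jnd = norm.ppf(0.75) * sigma

print(f"PSS = {pss:.1f} ms (SOA at which the two orders are reported equally often)")
print(f"JND = {jnd:.1f} ms")
```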
It is, however, important to note that there is a potential concern here regarding the interpretation of Koppen and Spence’s (2007b) findings. Remember that the Colavita visual dominance effect is eliminated when bimodal audiovisual targets are presented too frequently (e.g., see Section 27.2.5). Crucially, Koppen and Spence looked for any evidence of the prior entry of visual stimuli into awareness in their TOJ study under conditions in which a pair of auditory and visual stimuli were presented on each and every trial. The possibility therefore remains that visual stimuli may only be perceived before simultaneously presented auditory stimuli under those conditions in which the occurrence of bimodal stimuli is relatively rare (cf. Miller et al. 2009). Thus, in retrospect, Koppen and Spence’s results cannot be taken as providing unequivocal evidence against the possibility that visual stimuli have prior entry into participants’ awareness on the bimodal trials in the Colavita paradigm. Ideally, future research will need to look for any evidence of visual prior entry under conditions in which the bimodal targets (in the TOJ task) are actually presented as infrequently as when the Colavita effect is demonstrated behaviorally (i.e., when the bimodal targets requiring a detection/discrimination response are presented on only 20% or so of the trials).

Given these concerns over the design (and hence interpretation) of Koppen and Spence’s (2007b) TOJ study, it is interesting to note that Lucey and Spence (2009) were recently able to eliminate the Colavita visual dominance effect by delaying the onset of the visual stimulus by a fixed 50 ms with respect to the auditory stimuli on the bimodal target trials. Lucey and Spence used a between-participants experimental design in which one group of participants completed the Colavita task with synchronous auditory and visual targets on the bimodal trials (as in the majority of previous studies), whereas for the other group of participants, the onset of the visual target was always delayed by 50 ms with respect to that of the auditory target. The apparatus and materials were identical to those used by Elcock and Spence (2009; described earlier), although the participants in Lucey and Spence’s study performed the three-button version of the audiovisual Colavita task (i.e., in which participants had separate response keys for auditory, visual, and bimodal targets). The results revealed that although participants made significantly more vision-only than auditory-only responses in the synchronous bimodal condition (10.3% vs. 2.4%, respectively), no significant Colavita visual dominance effect was reported when the onset of the visual target was delayed (4.6% vs. 2.9%, respectively; n.s.). These results therefore demonstrate that the Colavita visual dominance effect can be eliminated by presenting the auditory stimulus slightly ahead of the visual stimulus.

The critical question here, following on from Lucey and Spence’s results, is whether auditory dominance would have been elicited had the auditory stimulus led the visual stimulus by a greater interval. Koppen and Spence (2007b) have provided an answer to this question. In their study of the Colavita effect, the auditory and visual stimuli on the bimodal target trials were presented at one of 10 stimulus onset asynchronies (SOAs; from auditory leading by 600 ms through to vision leading by 600 ms).
Koppen and Spence found that the auditory lead needed in order to eliminate the Colavita visual dominance effect on the bimodal target trials was correlated with the SOA at which participants reliably started to perceive the auditory stimulus as having been presented before the visual stimulus (defined as the SOA at which participants make 75% audition first responses; see Koppen and Spence 2007b; Figure 27.3). This result therefore suggests that the prior entry of the visual stimulus to awareness plays some role in its dominance over audition in the Colavita effect. That said, however, Koppen and Spence also found that auditory targets had to be presented 600 ms before visual targets in order for participants to make significantly more auditory-only than visual only responses on the bimodal target trials (although a similar nonsignificant trend toward auditory dominance was also reported at an auditory lead of 300 ms; see Figure 27.2). It is rather unclear, however, what exactly caused the auditorily dominant behavior observed at the 600 ms SOA in Koppen and Spence’s (2007b) study. This (physical) asynchrony between the auditory and visual stimuli is far greater than any shift in the perceived timing of visual relative to auditory stimuli that might reasonably be expected due to the prior entry of the visual stimulus to awareness when the targets were actually presented simultaneously (see Spence 2010). In fact, this
FIGURE 27.3 (a) Schematic illustration of the results of Sinnett et al.’s (2008; Experiment 2) speeded target detection study. The figure shows how the presentation of an accessory sound facilitates visual RTs (RTV(A)), whereas the presentation of an accessory visual stimulus delays auditory RTs (RTA(V)). Note that unimodal auditory (RTA) and visual (RTV) response latencies were serendipitously matched in this study (V, visual target; A, auditory stimulus). (b) Schematic diagrams showing how the asymmetrical cross-modal accessory stimulus effects reported by Sinnett et al. might lead to more (and more rapid) vision-only than auditory-only responses on bimodal trials. Conceptually simple models outlined in panels (b) and (c) account for Sinnett et al.’s asymmetrical RT effect either in terms of changes in the criterion for responding to auditory and visual targets on bimodal as opposed to unimodal trials (panel b), or in terms of asymmetrical cross-modal changes in the rate of information accrual (panel c). We plot the putative rate of information accrual (R) as a function of the stimuli presented. However, the results of Koppen et al.’s (2009) recent signal detection study of the Colavita effect have now provided evidence that is inconsistent with both of these simple accounts (see Figure 27.4). Hence, in panel (d), a mixture model is proposed in which the presentation of an accessory stimulus in one modality leads both to a change in criterion for responding to targets in the other modality (in line with the results of Koppen et al.’s study) and also to an asymmetrical effect on the rate of information accrual in the other modality (see Koppen et al. 2007a; Miller 1986).
SOA is also longer than the mean RT of participants’ responses to the unimodal auditory (440 ms) targets. Given that the mean RT for auditory only responses on the bimodal target trials was only 470 ms (i.e., 30 ms longer, on average, than the correct responses on the bimodal trials; see Koppen and Spence 2007b, Figure 1 and Table 1), one can also rule out the possibility that this failure to report the visual stimulus occurred on trials in which the participants made auditory responses that were particularly slow. Therefore, given that the visual target on the bimodal trials (in the 600 ms SOA vision-lagging condition) was likely being extinguished by an already-responded-to auditory target, one might think that this form of auditory dominance reflects some sort of refractory period effect (i.e., resulting from the execution of the participants’ response to the first target; see Pashler 1994; Spence 2008), rather than the Colavita effect proper. In summary, although Koppen and Spence’s (2007b) results certainly do provide an example of auditory dominance, the mechanism behind this effect is most probably different from the one causing the visual dominance effect that has been reported in the majority of studies (of the Colavita effect), where the auditory and visual stimuli were presented simultaneously (see also Miyake et al. 1986). Thus, although recent research has shown that delaying the presentation of the visual stimulus can be used to eliminate the Colavita visual dominance effect (see Koppen and Spence 2007b; Lucey and Spence 2009), and although the SOA at which participants reliably start to perceive the auditory target as having been presented first correlates with the SOA at which the Colavita visual dominance effect no longer occurs (Koppen and Spence 2007b), we do not, as yet, have any convincing evidence that auditory dominance can be observed in the Colavita paradigm by presenting the auditory stimulus slightly before the visual stimulus on the bimodal target trials (i.e., at SOAs where the visual target is presented before the participants have initiated/executed their response to the already-presented auditory target). That is, to date, no simple relationship has been demonstrated between the SOA on the audiovisual target trials in the Colavita paradigm and modality dominance. Hence, we need to look elsewhere for an explanation of vision’s advantage in the Colavita visual dominance effect. Recent progress in understanding what may be going on here has come from studies looking at the effect of accessory stimuli presented in one modality on participants’ speeded responding to targets presented in another modality (Sinnett et al. 2008), and from studies looking at the sensitivity and criterion of participants’ responses in the Colavita task (Koppen et al. 2009).
27.5 EXPLAINING THE COLAVITA VISUAL DOMINANCE EFFECT 27.5.1 Accessory Stimulus Effects and Colavita Effect One of the most interesting recent developments in the study of the Colavita effect comes from an experiment reported by Sinnett et al. (2008; Experiment 2). The participants in this study had to make speeded target detection responses to either auditory or visual targets. An auditory stimulus was presented on 40% of the trials, a visual stimulus was presented on a further 40% of the trials, and both stimuli were presented simultaneously on the remaining 20% of trials (i.e., just as in a typical study of the Colavita effect; note, however, that this task can also be thought of as a kind of go/ no-go task; see Egeth and Sager 1977; Miller 1986; Quinlan 2000). The participants responded significantly more rapidly to the visual targets when they were accompanied by an accessory auditory stimulus than when they were presented by themselves (see Figure 27.3a). By contrast, participants’ responses to the auditory targets were actually slowed by the simultaneous presentation of an accessory visual stimulus (cf. Egeth and Sager 1977). How might the fact that the presentation of an auditory accessory stimulus speeds participants’ visual detection/discrimination responses, whereas the presentation of a visual stimulus slows their responses to auditory stimuli be used to help explain the Colavita visual dominance effect? Well, let us imagine that participants set one criterion for initiating their responses to the relatively common unimodal visual targets and another criterion for initiating their responses to the equally common
unimodal auditory targets. Note that the argument here is phrased in terms of changes in the criterion for responding set by participants, rather than in terms of changes in the perceptual threshold, given the evidence cited below that behavioral responses can sometimes be elicited under conditions in which participants remain unaware (i.e., they have no conscious access to the inducing stimulus). According to Sinnett et al.’s (2008) results, the criterion for initiating a speeded response to the visual targets should be reached sooner on the relatively infrequent bimodal trials than on the unimodal visual trials, whereas it should be reached more slowly (on the bimodal than on the unimodal trials) for auditory targets. There are at least two conceptually simple means by which such a pattern of behavioral results could be achieved. First, the participants could lower their criterion for responding to the visual targets on the bimodal trials while simultaneously raising their criterion for responding to the auditory target (see Figure 27.3b). Alternatively, however, the criterion for initiating a response might not change but the presentation of the accessory stimulus in one modality might instead have a crossmodal effect on the rate of information accrual (R) within the other modality (see Figure 27.3c). The fact that the process of information accrual (like any other internal process) is likely to be a noisy one might then help to explain why the Colavita effect is only observed on a proportion of the bimodal target trials. Evidence that is seemingly consistent with both of these simple accounts can be found in the literature. In particular, evidence consistent with the claim that bimodal (as compared to unimodal) stimulation can result in a change in the rate of information accrual comes from an older go/no-go study reported by Miller (1986). Unimodal auditory and unimodal visual target stimuli were presented randomly in this experiment together with trials in which both stimuli were presented at one of a range of different SOAs (0–167 ms). The participants had to make a simple speeded detection response whenever a target was presented (regardless of whether it was unimodal or bimodal). Catch trials, in which no stimulus was presented (and no response was required), were also included. Analysis of the results provided tentative evidence that visual stimuli needed less time to reach the criterion for initiating a behavioral response (measured from the putative onset of response-related activity) compared to the auditory stimuli on the redundant bimodal target trials—this despite the fact that the initiation of response-related activation after the presentation of an auditory stimulus started earlier in time than following the presentation of a visual stimulus (see Miller 1986, pp. 340– 341). Taken together, these results therefore suggest that stimulus-related information accrues more slowly for auditory targets in the presence (vs. absence) of concurrent visual stimuli than vice versa, just as highlighted in Figure 27.3c. Similarly, Romei et al.’s (2009) recent results showing that looming auditory signals enhance visual excitability in a preperceptual manner can also be seen as being consistent with the information accrual account. 
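The following toy simulation (our illustration, not a model taken from any of the studies cited) shows how the information-accrual account sketched in Figure 27.3c could, in principle, produce the observed asymmetry: if the auditory rate of accrual is assumed to be slowed by the concurrent visual stimulus more than the visual rate is affected, a noisy visual accumulator will tend to reach its response criterion first on bimodal trials. All rates, criteria, and noise levels below are arbitrary.

```python
# Toy race-to-criterion simulation of the cross-modal information-accrual
# account (Figure 27.3c). Two noisy linear accumulators, one per target
# modality, race toward a common response criterion on each bimodal trial.
import numpy as np

rng = np.random.default_rng(0)
n_trials, criterion, noise_sd = 2_000, 100.0, 1.5

# Hypothetical mean accrual rates (arbitrary units per time step) on bimodal
# trials: auditory accrual is assumed to be slowed by the accessory visual
# stimulus more than visual accrual is affected by the accessory sound.
rate_visual, rate_auditory = 0.40, 0.30

def first_crossing(rate, n_steps=1_000):
    """Time step at which a noisy linear accumulator first reaches the criterion."""
    evidence = np.cumsum(rate + noise_sd * rng.standard_normal(n_steps))
    crossed = np.nonzero(evidence >= criterion)[0]
    return crossed[0] if crossed.size else np.inf

visual_first = sum(
    first_crossing(rate_visual) < first_crossing(rate_auditory)
    for _ in range(n_trials)
)
print(f"visual accumulator reaches criterion first on "
      f"{100 * visual_first / n_trials:.1f}% of simulated bimodal trials")
```

Because the accrual process is noisy, the visual accumulator wins only on a proportion of the simulated trials, which is one way of capturing the observation that the Colavita effect occurs on some, but not all, bimodal trials.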
However, results arguing for the inclusion of some component of criterion shifting into one’s model of the Colavita visual dominance effect (although note that the results are inconsistent with the simple criterion-shifting model put forward in Figure 27.3b) comes from a more recent study reported by Koppen et al. (2009).
27.5.2 Perceptual and Decisional Contributions to Colavita Visual Dominance Effect Koppen et al. (2009) recently explicitly assessed the contributions of perceptual (i.e., threshold) and decisional (i.e., criterion-related) factors to the Colavita visual dominance effect in a study in which the intensities of the auditory and visual stimuli were initially adjusted until participants were only able to detect them on 75% of the trials. Next, a version of the Colavita task was conducted using these near-threshold stimuli (i.e., rather than the clearly suprathreshold stimuli that have been utilized in the majority of previous studies). A unimodal visual target was presented on 25% of the trials, a unimodal auditory target on 25% of trials, a bimodal audiovisual target on 25% of trials (and no target was presented on the remaining 25% of trials). The task of reporting which target modalities (if any) had been presented in each trial was unspeeded and the participants were instructed to refrain from responding on those trials in which no target was presented.
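The analysis reported next rests on the standard equal-variance signal detection computations of sensitivity (d′) and criterion (c), sketched below for reference; the hit and false-alarm counts are invented for illustration and are not the published data.

```python
# Minimal sketch of the standard equal-variance signal detection computations
# of the kind used in studies such as Koppen et al. (2009).
from scipy.stats import norm

def d_prime_and_criterion(hits, misses, false_alarms, correct_rejections):
    """Return (d', c) under the equal-variance Gaussian model."""
    hit_rate = hits / (hits + misses)
    fa_rate = false_alarms / (false_alarms + correct_rejections)
    z_hit, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
    return z_hit - z_fa, -0.5 * (z_hit + z_fa)

# Hypothetical auditory detection counts (hits, misses, false alarms, correct
# rejections) when the auditory target was presented alone versus together
# with a visual target; these numbers are not Koppen et al.'s data.
for label, counts in {
    "auditory, unimodal": (78, 22, 12, 88),
    "auditory, bimodal": (66, 34, 12, 88),
}.items():
    d, c = d_prime_and_criterion(*counts)
    print(f"{label:>20s}   d' = {d:4.2f}   c = {c:5.2f}")
```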
Analysis of Koppen et al.’s (2009) results using signal detection theory (see Green and Swets 1966) revealed that although the presentation of an auditory target had no effect on visual sensitivity, the presentation of a visual target resulted in a significant drop in participants’ auditory sensitivity (see Figure 27.4a; see also Golob et al. 2001; Gregg and Brogden 1952; Marks et al. 2003; Odgaard et al. 2003; Stein et al. 1996; Thompson et al. 1958). These results therefore show that the presentation of a visual stimulus can lead to a small, but significant, lowering of sensitivity to a simultaneously presented auditory stimulus, at least when the participants’ task involves trying to detect which target modalities (if any) have been presented.* Koppen et al.’s results suggest that only a relatively small component of the Colavita visual dominance effect may be attributable to the asymmetrical cross-modal effect on auditory sensitivity (i.e., on the auditory perceptual threshold) that results from the simultaneous presentation of a visual stimulus. That is, the magnitude of the sensitivity drop hardly seems large enough to account for the behavioral effects observed in the normal speeded version of the Colavita task.

The more important result to have emerged from Koppen et al.’s (2009) study in terms of the argument being developed here was the significant drop in participants’ criterion for responding on the bimodal (as compared to the unimodal) target trials. Importantly, this drop was significantly larger for visual than for auditory targets (see Figure 27.4b). The fact that the criterion dropped for both auditory and visual targets is inconsistent with the simple criterion-shifting account of the asymmetrical cross-modal effects highlighted by Sinnett et al. (2008) that was put forward in Figure 27.3b. In fact, when the various results now available are taken together, the most plausible model of the Colavita visual dominance effect would appear to be one in which an asymmetrical lowering of the criterion for responding to auditory and visual targets (Koppen et al. 2009) is paired with an asymmetrical cross-modal effect on the rate of information accrual (Miller 1986; see Figure 27.3d). However, although the account outlined in Figure 27.3d may help to explain why it is that a participant will typically respond to the visual stimulus first on the bimodal target trials (despite the fact that the auditory and visual stimuli are actually presented simultaneously), it does not explain why participants do not quickly recognize the error of their ways (after making a vision-only response, say), and then quickly initiate an additional auditory response.† The participants certainly had sufficient time in which to make a response before the next trial started in many of the studies where the Colavita effect has been reported. For example, in Koppen and Spence’s (2007a, 2007b, 2007c) studies, the intertarget interval was in the region of 1500–1800 ms, whereas mean vision-only response latencies fell in the 500–700 ms range.
27.5.3 Stimulus, (Perception), and Response? We believe that in order to answer the question of why participants fail to make any response to the auditory (or tactile) targets on some proportion of the bimodal target trials in the Colavita paradigm, one has to break with the intuitively appealing notion that there is a causal link between (conscious) perception and action. Instead, it needs to be realized that our responses do not always rely on our first becoming aware of the stimuli that have elicited those responses. In fact, according to Neumann (1990), the only causal link that exists is the one between a stimulus and its associated response. Neumann has argued that conscious perception should not always be conceptualized as a * Note here that a very different result (i.e., the enhancement of perceived auditory intensity by a simultaneously-presented visual stimulus) has been reported in other studies in which the participants simply had to detect the presence of an auditory target (see Odgaard et al. 2004). This discrepancy highlights the fact that the precise nature of a participant’s task constitutes a critical determinant of the way in which the stimuli presented in different modalities interact to influence human information processing (cf. Gondan and Fisher 2009; Sinnett et al. 2008; Wang et al. 2008, on this point). † Note here that we are talking about the traditional two-response version of the Colavita task. Remember that in the threeresponse version, the participant’s first response terminates the trial, and hence there is no opportunity to make a second response.
FIGURE 27.4 Summary of mean sensitivity (d′) values (panel a) and criterion (c) values (panel b) for unimodal auditory, unimodal visual, bimodal auditory, and bimodal visual targets in Koppen et al.’s (2009) signal detection study of the Colavita visual dominance effect. Error bars indicate the standard errors of means. The results show that although the simultaneous presentation of auditory and visual stimuli resulted in a reduction of auditory sensitivity (when compared to performance in unimodal auditory target trials), no such effect was reported for visual targets. The results also highlight the fact that the presentation of a bimodal audiovisual target resulted in a significant reduction in the criterion (c) for responding, and that this effect was significantly larger for visual targets than for auditory targets. (Redrawn from Koppen, C. et al., Exp. Brain Res., 196, 353–360, 2009. With permission.)
necessary stage in the chain of human information processing. Rather, he suggests that conscious perception can, on occasion, be bypassed altogether. Support for Neumann’s view that stimuli can elicit responses in the absence of awareness comes from research showing, for example, that participants can execute rapid and accurate discrimination responses to masked target stimuli that they are subjectively unaware of (e.g., Taylor and McCloskey 1996). The phenomenon of blindsight is also pertinent here (e.g., see Cowey and Stoerig 1991). Furthermore, researchers have shown that people sometimes lose their memory for the second of two stimuli as a result of their having executed a response to the first stimulus (Crowder 1968; Müsseler and Hommel 1997a, 1997b; see also Bridgeman 1990; Ricci and Chatterjee 2004; Rizzolatti and Berti 1990). On the basis of such results, then, our suggestion is that a participant’s awareness (of the target stimuli) in the speeded version of the Colavita paradigm may actually be modulated by the responses that they happen to make (select or initiate) on some proportion of the trials, rather than necessarily always being driven by their conscious perception of the stimuli themselves (see also Hefferline and Perera 1963). To summarize, when participants try to respond rapidly in the Colavita visual dominance task, they may sometimes end up initiating their response before becoming aware of the stimulus (or stimuli) that have elicited that response. Their awareness of which stimuli have, in fact, been presented is then constrained by the response(s) that they actually happen to make. In other words, if (as a participant) I realize that I have made (or am about to make) a vision-only response, it would seem unsurprising that I only then become aware of the visual target, even if an auditory target had also been presented at the same time (although it perhaps reached the threshold for initiating a response
more slowly than the visual stimulus; see above). Here, one might even consider the possibility that participants simply stop processing (or stop responding to) the target stimulus (or stimuli) after they have selected/triggered a response (to the visual target; i.e., perhaps target processing reflects a kind of self-terminating processing). Sinnett et al.’s (2008) research is crucial here in showing that, as a result of the asymmetrical cross-modal effects of auditory and visual stimuli on each other, the first response that a participant makes on a bimodal target trial is likely to be to a visual (rather than an auditory) stimulus.

If this hypothesis regarding people’s failure to respond to some proportion of the auditory (or tactile) stimuli on the bimodal trials in the Colavita paradigm were to be correct, one would expect the fastest visual responses to occur on those bimodal trials in which participants make a visual-only response. Koppen and Spence’s (2007a; Experiment 3) results show just such a pattern in their three-response study of the Colavita effect (i.e., where participants made one response to auditory targets, one to visual targets, and a third to the bimodal targets; note, however, that the participants did not have the opportunity to respond to the visual and auditory stimuli sequentially in this study). In Koppen and Spence’s study, the visual-only responses on the bimodal target trials were actually significantly faster, on average (mean RT = 563 ms), than the visual-only responses on unimodal visual trials (mean RT = 582 ms; see Figure 27.5). This result therefore demonstrates that even though participants failed to respond to the auditory target, its presence nevertheless still facilitated their behavioral performance. Finally, the vision-only responses (on the bimodal trials) were also found, on average, to be significantly faster than the participants’ correct bimodal responses on the bimodal target trials (mean = 641 ms). Interestingly, however, participants’ auditory-only responses on the bimodal target trials in Koppen and Spence’s (2007a) study were significantly slower, on average, than on the unimodal auditory target trials (mean RTs of 577 and 539 ms, respectively). This is the opposite pattern of results to that seen for the visual target detection data (i.e., a bimodal slowing of responding for auditory targets paired with a bimodal speeding of responding to the visual targets). This result provides additional evidence for the existence of an asymmetrical cross-modal effect on the rate of information accrual. Indeed, taken together, these results mirror those reported by Sinnett et al. (2008) in their speeded target detection task, but note here that the data come from a version of the Colavita task instead. Thus, it really does seem as though the more frequent occurrence of vision-only as compared to auditory-only responses on the bimodal audiovisual target trials in the Colavita visual dominance paradigm is tightly linked to the speed with which a participant initiates his/her response. When participants respond rapidly, they are much more likely to make an erroneous visual-only response than to make an erroneous auditory-only response.*
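One way to examine this claim is to condition the bimodal-trial error rates on response speed, for example by splitting trials at the median first-response latency; the sketch below illustrates that kind of analysis on simulated (hypothetical) data in which faster responses were deliberately made more likely to be vision-only errors.

```python
# Sketch of a conditional analysis: compare the incidence of vision-only
# errors in the fast and slow halves of the bimodal-trial RT distribution.
# The data are simulated purely for illustration.
import numpy as np

rng = np.random.default_rng(1)
n = 400
rt = rng.normal(600, 80, n)  # hypothetical first-response latencies (ms)

# Built-in tendency for faster responses to end up as vision-only errors.
p_vision_only = np.clip(0.35 - 0.002 * (rt - rt.mean()), 0.0, 1.0)
response = np.where(rng.random(n) < p_vision_only, "vision_only", "both")

fast = rt < np.median(rt)
for label, mask in (("fast half", fast), ("slow half", ~fast)):
    rate = np.mean(response[mask] == "vision_only")
    print(f"{label}: vision-only errors on {100 * rate:.1f}% of bimodal trials")
```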
FIGURE 27.5 Schematic timeline showing the mean latency of participants’ responses (both correct and incorrect responses) in Koppen and Spence’s (2007a) three-button version of the Colavita effect. Significant differences between particular conditions of interest (p < .05) are highlighted with an asterisk. (See text for details.)
27.6 BIASED (OR INTEGRATED) COMPETITION AND COLAVITA VISUAL DOMINANCE EFFECT How can the asymmetric cross-modal effects of simultaneously presented auditory and visual targets on each other (that were highlighted in the previous section) be explained? We believe that a fruitful approach may well come from considering them in the light of the biased (or integrated) competition hypothesis (see Desimone and Duncan 1995; Duncan 1996). According to Desimone and Duncan, brain systems (both sensory and motor) are fundamentally competitive in nature. What is more, within each system, a gain in the activation of one object/event representation always occurs at a cost to others. That is, the neural representation of different objects/events is normally mutually inhibitory. An important aspect of Desimone and Duncan’s biased competition model relates to the claim that the dominant neural representation suppresses the neural activity associated with the representation of the weaker stimulus (see Duncan 1996). In light of the discussion in the preceding section (see Section 27.5.2), one might think of biased competition as affecting the rate of information accrual, changing the criterion for responding, and/or changing perceptual sensitivity (but see Gorea and Sagi 2000, 2002). An extreme form of this probabilistic winner-takes-all principle might therefore help to explain why it is that the presentation of a visual stimulus can sometimes have such a profound effect on people’s awareness of the stimuli coded by a different brain area (i.e., modality; see also Hahnloser et al. 1999). Modality-based biased competition can perhaps also provide a mechanistic explanation for the findings of a number of other studies of multisensory information processing. For example, over the years, many researchers have argued that people’s attention is preferentially directed toward the visual modality when pairs of auditory and visual stimuli are presented simultaneously (e.g., see Falkenstein et al. 1991; Golob et al. 2001; Hohnsbein and Falkenstein 1991; Hohnsbein et al. 1991; Oray et al. 2002). As Driver and Vuilleumier (2001, p. 75) describe the biased (or integrated) competition hypothesis: “ . . . multiple concurrent stimuli always compete to drive neurons and dominate the networks (and ultimately to dominate awareness and behavior).” They continue: “various phenomena of ‘attention’ are cast as emergent properties of whichever stimuli happen to win the competition.” In other words, particularly salient stimuli will have a competitive advantage and may thus tend to “attract attention” on purely bottom-up grounds. Visual stimuli might then, for whatever reason (see below), constitute a particularly salient class of stimuli. Such stimulus-driven competition between the neural activation elicited by the auditory (or tactile) and visual targets on bimodal target trials might also help to explain why the attentional manipulations that have been utilized previously have proved so ineffective in terms of reversing the Colavita visual dominance effect (see Koppen and Spence 2007d; Sinnett et al. 2007). That is, although the biasing of a participant’s attention toward one sensory modality (in particular, the nonvisual modality) before stimulus onset may be sufficient to override the competitive advantage resulting from any stimulus-driven biased competition (see McDonald et al. 2005; Spence 2010; Vibell et al. 2007), it cannot reverse it.
27.6.1 Putative Neural Underpinnings of Modality-Based Biased Competition Of course, accounting for the Colavita visual dominance effect in terms of biased competition does not itself explain why it is the visual stimulus that wins the competition more frequently than the nonvisual stimulus. Although a satisfactory neurally inspired answer to this question will need * One final point to note here concerns the fact that when participants made an erroneous response on the bimodal target trials, the erroneous auditory-only responses were somewhat slower than the erroneous vision-only responses, although this difference failed to reach statistical significance.
to await future research, it is worth noting here that recent research has highlighted the importance of feedback activity from higher-order to early sensory areas in certain aspects of visual awareness (e.g., Lamme 2001; Lamme et al. 2000; Pascual-Leone and Walsh 2001; but see also Macknik 2009; Macknik and Martinez-Conde 2007, in press). It is also pertinent to note that far more of the brain is given over to the processing of visual stimuli than to the processing of stimuli from the other sensory modalities. For example, Sereno et al. (1995) suggest that nearly half of the cortex is involved in the processing of visual information. Meanwhile, Felleman and van Essen (1991) point out that in the macaque fewer than half as many brain areas are involved in the processing of tactile information as in the processing of visual information. In fact, in their authoritative literature review, they estimate that 55% of neocortex (by volume) is visual, as compared to 12% somatosensory, 8% motor, 3% auditory, and 0.5% gustatory. Given such statistics, it would seem probable that the visual system might have a better chance of setting up such feedback activity following the presentation of a visual stimulus than would the auditory or tactile systems following the simultaneous presentation of either an auditory or tactile stimulus. Note that this account suggests that visual dominance is natural, at least for humans, in that it may have a hardwired physiological basis (this idea was originally captured by Colavita et al.’s (1976) suggestion that visual stimuli might be “prepotent”). It is interesting to note in this context that the amount of cortex given over to auditory and tactile information processing is far more evenly matched than is the case for the competition between audition and vision, hence perhaps explaining the lack of a clear pattern of dominance when stimuli are presented in these two modalities at the same time (see Hecht and Reiner 2009; Occelli et al. 2010).

It is also important to note here that progress in terms of explaining the Colavita effect at a neural level might also come from a more fine-grained study of the temporal dynamics of multisensory integration in various brain regions. In humans, the first wave of activity in primary auditory cortex in response to the presentation of suprathreshold stimuli is usually seen at a latency of about 10–15 ms (e.g., Liegeois-Chauvel et al. 1994; Howard et al. 2000; Godey et al. 2001; Brugge et al. 2003). Activity in primary visual cortex starts about 40–50 ms after stimulus presentation (e.g., Foxe et al. 2008; see also Schroeder et al. 1998), whereas for primary somatosensory cortex the figure is about 8–12 ms (e.g., Inui et al. 2004; see also Schroeder et al. 2001). Meanwhile, Schroeder and Foxe (2002, 2004) have documented the asymmetrical time course of the interactions taking place between auditory and visual cortex. Their research has shown that the visual modulation of activity in auditory cortex occurs several tens of milliseconds after the feedforward sweep of activation associated with the processing of auditory stimuli, under conditions where auditory and visual stimuli happen to be presented simultaneously from a location within peripersonal space (i.e., within arm’s reach; see Rizzolatti et al. 1997). This delay is caused by the fact that much of the visual input to auditory cortex is routed through superior temporal polysensory areas (e.g., Foxe and Schroeder 2002; see also Ghazanfar et al.
2005; Kayser et al. 2008; Smiley et al. 2007), and possibly also through prefrontal cortex. It therefore seems plausible to suggest that such delayed visual (inhibitory) input to auditory cortex might play some role in disrupting the setting up of the feedback activity from higher (auditory) areas.*

* Note here also the fact that visual influences on primary and secondary auditory cortex are greatest when the visual stimulus leads the auditory stimulus by 20–80 ms (see Kayser et al. 2008), the same magnitude of visual leads that have also been shown to give rise to the largest Colavita effect (see Figure 2; Koppen and Spence 2007b).

That said, Falchier et al. (2010) recently reported evidence suggesting the existence of a more direct routing of information from visual to auditory cortex (i.e., from V2 to caudal auditory cortex), hence potentially complicating the story somewhat. By contrast, audition’s influence on visual information processing occurs more rapidly, and involves direct projections from early auditory cortical areas to early visual areas. That is, direct projections have now been documented from the primary auditory cortex A1 to the primary visual cortex V1 (e.g., see Wang et al. 2008; note, however, that these direct connections tend to target
peripheral, rather than central, locations in the visual field; that said, other projections may well be more foveally targeted). Interestingly, however, until very recently no direct connections had as yet been observed in the opposite direction (see Falchier et al. 2010). These direct projections from auditory to visual cortex may help to account for the increased visual cortical excitability seen when an auditory stimulus is presented together with a visual stimulus (e.g., Martuzzi et al. 2007; Noesselt et al. 2007; Rockland and Ojima 2003; Romei et al. 2007, 2009; see also Besle et al. 2009; Clavagnier et al. 2004; Falchier et al. 2003). Indeed, Bolognini et al. (2010) have recently shown that transcranial magnetic stimulation (TMS)-elicited phosphenes (presented near threshold) are more visible when a white noise burst is presented approximately 40 ms before the TMS pulse (see also Romei et al. 2009).

It is also interesting to note here that when auditory and tactile stimuli are presented simultaneously from a distance of less than 1 m (i.e., in peripersonal space), the response in multisensory convergence regions of auditory association cortex is both rapid and approximately simultaneous for these two input modalities (see Schroeder and Foxe 2002, p. 193; see also Foxe et al. 2000, 2002; Murray et al. 2005; Schroeder et al. 2001). Such neurophysiological timing properties may then also help to explain why no clear Colavita dominance effect has as yet been reported between these two modalities (see also Sperdin et al. 2009).*

* It would be interesting here to determine whether the feedforward projections between primary auditory and tactile cortices are any more symmetrical than those between auditory and visual cortices (see Cappe and Barone 2005; Cappe et al. 2009; Hackett et al. 2007; Schroeder et al. 2001; Smiley et al. 2007, on this topic), since this could provide a neural explanation for why no Colavita effect has, as yet, been reported between the auditory and tactile modalities (Hecht and Reiner 2009; Occelli et al. 2010). That said, it should also be borne in mind that the nature of auditory-somatosensory interactions has recently been shown to differ quite dramatically as a function of the body surface stimulated (e.g., different audio–tactile interactions have been observed for stimuli presented close to the hands in frontal space vs. close to the back of the neck in rear space; see Fu et al. 2003; Tajadura-Jiménez et al. 2009; cf. Critchley 1953, p. 19). The same may, of course, also turn out to be true for the auditory–tactile Colavita effect.

That said, any neurally inspired account of the Colavita effect will likely also have to incorporate the recent discovery of feedforward multisensory interactions to early cortical areas taking place in the thalamus (i.e., via the thalamocortical loop; Cappe et al. 2009). Although any attempt to link human behavior to single-cell neurophysiological data in either awake or anesthetized primates is clearly speculative at this stage, we are nevertheless convinced that this kind of interdisciplinary approach will be needed if we are to develop a fuller understanding of the Colavita effect in the coming years. It may also prove fruitful, when trying to explain why it is that participants fail to make an auditory (or tactile) response once they have made a visual one, to consider the neuroscience research on the duration (and decay) of sensory memory in the different modalities (e.g., Lu et al. 1992; Harris et al. 2002; Uusitalo et al. 1996; Zylberberg et al. 2009). Here, it would be particularly interesting to know whether there are any systematic modality-specific differences in the decay rate of visual, auditory, and tactile sensory memory.

27.6.2 Clinical Extinction and Colavita Visual Dominance Effect

It will most likely also be revealing in future research to explore the relationship between the Colavita visual dominance effect and the clinical phenomenon of extinction that is sometimes seen in patients following lateralized (typically right parietal) brain damage (e.g., Baylis et al. 1993; Bender 1952; Brozzoli et al. 2006; Driver and Vuilleumier 2001; Farnè et al. 2007; Rapp and Hendel 2003; Ricci and Chatterjee 2004). The two phenomena share a number of similarities: Both are sensitive to the relative spatial position from which the stimuli are presented (Costantini et al. 2007; Hartcher-O’Brien et al. 2008, 2010; Koppen and Spence 2007c); both are influenced by the relative timing of the two stimuli (Baylis et al. 2002; Costantini et al. 2007; Koppen and Spence 2007b; Lucey and Spence 2009; Rorden et al. 1997); both affect perceptual sensitivity as well as being influenced by response-related factors (Koppen et al. 2009; Ricci and Chatterjee 2004;
Sinnett et al. 2008; see also Gorea and Sagi 2002). The proportion of experimental trials on which each phenomenon occurs in the laboratory has also been shown to vary greatly between studies.

In terms of the biased (or integrated) competition hypothesis (Desimone and Duncan 1995; Duncan 1996), extinction (in patients) is thought to reflect biased competition against stimuli from one side (Driver and Vuilleumier 2001; Rapp and Hendel 2003), whereas here we have argued that the Colavita effect reflects biased competition that favors the processing of visual stimuli. Although extinction has typically been characterized as a spatial phenomenon (i.e., it is the contralesional stimulus that normally extinguishes a simultaneously presented ipsilesional stimulus), it is worth noting that nonspatial extinction effects have also been reported (Costantini et al. 2007; Humphreys et al. 1995; see also Battelli et al. 2007).

Future neuroimaging research will hopefully help to determine the extent to which the neural substrates underlying the Colavita visual dominance effect in healthy individuals and the phenomenon of extinction in clinical patients are similar (Sarri et al. 2006). Intriguing data here come from a neuroimaging study of a single patient with visual–tactile extinction reported by Sarri et al. In this patient, awareness of touch on the bimodal visuotactile trials was associated with increased activity in right parietal and frontal regions. Sarri et al. argued that the cross-modal extinction of the tactile stimulus in this patient resulted from increased competition arising from the functional coupling of visual and somatosensory cortex with multisensory parietal cortex. The literature on unimodal and cross-modal extinction suggests that the normal process of biased competition can be interrupted by the kinds of parietal damage that lead to neglect and/or extinction. It would therefore be fascinating to see whether one could elicit the same kinds of biases in neural competition (usually seen in extinction patients) in normal participants, simply by administering TMS over posterior parietal areas (see Driver and Vuilleumier 2001; Duncan 1996; Sarri et al. 2006). Furthermore, following on from the single-cell neurophysiological work conducted by Schroeder and his colleagues (e.g., see Schroeder and Foxe 2002, 2004; Schroeder et al. 2004), it might also be interesting to target superior temporal polysensory areas, and/or the prefrontal cortex, in order to try to disrupt the modality-based biased competition seen in the Colavita effect (i.e., rather than the spatial or temporal competition that is more typically reported in extinction patients; see Battelli et al. 2007).

There are two principal outcomes that could emerge from such a study, and both seem plausible: (1) TMS over one or more such cortical sites might serve to magnify the Colavita visual dominance effect observed in normal participants, based on the consequences of pathological damage to these areas observed in extinction patients; (2) TMS over these cortical sites might also reduce the magnitude of the Colavita effect, by interfering with the normal processes of biased competition, and/or by interfering with the late-arriving cross-modal feedback activity from visual to auditory cortex (see Section 27.6.1). It would, of course, also be very interesting in future research to investigate whether extinction patients exhibit a larger Colavita effect than normal participants in the traditional version of the Colavita task (cf. Costantini et al. 2007).
27.7 CONCLUSIONS AND QUESTIONS FOR FUTURE RESEARCH

Research conducted over the past 35 years or so has shown the Colavita visual dominance effect to be a robust empirical phenomenon. However, traditional explanations of the effect simply cannot account for the range of experimental data that is currently available. In this article, we argue that the Colavita visual dominance effect may be accounted for in terms of Desimone and Duncan’s (1995; see also Duncan 1996) model of biased (or integrated) competition. According to the explanation outlined here, the Colavita visual dominance effect can be understood in terms of the cross-modal competition between the neural representations of simultaneously presented visual and auditory (or tactile) stimuli.

Cognitive neuroscience studies would certainly help to further our understanding of the mechanisms underlying the Colavita effect. It would be particularly interesting, for example, to compare the pattern of brain activation on those trials in which participants fail to respond correctly to the nonvisual stimulus to the activation seen on those trials in which they respond appropriately
(cf. Fink et al. 2000; Golob et al. 2001; Sarri et al. 2006; Schubert et al. 2006). Event-related potential studies could also help to determine just how early (or late, see Falkenstein et al. 1991; Quinlan 2000; Zahn et al. 1994) the processing of ignored and reported auditory (or tactile) stimuli differs (see Hohnsbein et al. 1991).
27.7.1 Modeling the Colavita Visual Dominance Effect

There is also a considerable amount of interesting work to be done in terms of modeling the Colavita visual dominance effect. Cooper (1998) made a start on this more than a decade ago. He developed a computational model that was capable of simulating the pattern of participants’ RTs in the Colavita task. Cooper’s model consisted of separate modality-specific input channels feeding into a single “object representation network” (whose function involved activating specific response schemas—presumably equivalent to a target stimulus reaching the criterion for responding, as discussed earlier) in which the speed of each channel was dependent on the strength (i.e., weight) of the channel itself. By assuming that the visual channel was stronger than the auditory channel, the model was able to successfully account for the fact that although responses to auditory stimuli are faster than responses to visual stimuli in unimodal trials, the reverse pattern is typically found on bimodal target trials. The challenge for researchers in this area will be to try to develop models that are also capable of accounting for participants’ failure to respond to the nonvisual stimulus (i.e., the effect that has constituted the focus of the research discussed in this article; cf. Peers et al. 2005); such models might presumably include the assignment of different weights to visual and auditory cues, biases to preferentially respond to either visual or auditory stimuli, different gain/loss functions associated with responding, or failing to respond, to auditory and visual target stimuli, etc. It will be especially interesting here to examine whether the recent models of Bayesian multisensory integration (see Ernst 2005) that have proved so successful in accounting for many aspects of cross-modal perception, sensory dominance, and multisensory information processing can also be used to account for the Colavita visual dominance effect.
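To make the logic of such weighted-channel accounts concrete, the following toy simulation (our own illustrative sketch, not Cooper’s 1998 implementation; the parameter values and the simple winner-takes-access rule are invented assumptions) casts modality-based biased competition in a few lines of Python. Each channel has a unimodal response latency and a competition weight; on bimodal trials the channel with the larger weighted salience gains access to response selection first, and the losing response is omitted with a probability that grows with the winner’s relative weight. With a larger weight on the visual channel, the simulation yields faster unimodal auditory than visual responses alongside frequent failures to report the auditory component on bimodal trials (a Colavita-like error pattern); it makes no attempt to reproduce Cooper’s reaction time fits.

```python
import random

# Hypothetical parameters (invented for illustration): a unimodal latency and
# a competition weight for each channel.
PARAMS = {
    "visual":   {"latency_ms": 180.0, "sd_ms": 25.0, "weight": 1.6},
    "auditory": {"latency_ms": 150.0, "sd_ms": 25.0, "weight": 1.0},
}

def channel_finish_time(modality, rng):
    """Time at which a channel's activation reaches the response criterion."""
    p = PARAMS[modality]
    return max(60.0, rng.gauss(p["latency_ms"], p["sd_ms"]))

def bimodal_trial(rng, suppression=0.9):
    """One bimodal trial under biased competition.

    The channel with the larger weighted salience wins access to response
    selection; the losing response is omitted with a probability that grows
    with the winner's relative weight.
    """
    t = {m: channel_finish_time(m, rng) for m in PARAMS}
    salience = {m: PARAMS[m]["weight"] / t[m] for m in PARAMS}
    winner = max(salience, key=salience.get)
    loser = "auditory" if winner == "visual" else "visual"
    p_omit = suppression * PARAMS[winner]["weight"] / (
        PARAMS[winner]["weight"] + PARAMS[loser]["weight"])
    loser_responds = rng.random() > p_omit
    return winner, loser, loser_responds

def simulate(n=10_000, seed=1):
    rng = random.Random(seed)
    # Mean unimodal latencies: the auditory channel comes out faster than the visual one.
    uni = {m: sum(channel_finish_time(m, rng) for _ in range(n)) / n for m in PARAMS}
    omissions = {"visual": 0, "auditory": 0}
    for _ in range(n):
        _, loser, loser_responds = bimodal_trial(rng)
        if not loser_responds:
            omissions[loser] += 1
    print(f"mean unimodal RT: auditory {uni['auditory']:.0f} ms, "
          f"visual {uni['visual']:.0f} ms")
    print(f"bimodal omissions out of {n}: auditory missed {omissions['auditory']}, "
          f"visual missed {omissions['visual']}")

if __name__ == "__main__":
    simulate()
```

Extending such a sketch with explicit gain/loss functions for responding, or with reliability-based (Bayesian) channel weights, would be one way of exploring the modeling questions raised above.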
27.7.2 Multisensory Facilitation versus Interference

Finally, in closing, it is perhaps worth pausing to consider the Colavita effect in the context of so many other recent studies that have demonstrated the benefits of multisensory over unisensory stimulus presentation (e.g., in terms of speeding up simple speeded detection responses; Nickerson 1973; Sinnett et al. 2008, Experiment 1; see also Calvert et al. 2004). To some, the existence of the Colavita effect constitutes a puzzling example of a situation in which multisensory stimulation appears to impair (rather than to facilitate) human performance. It is interesting to note here, though, that whether one observes benefits or costs after multisensory (as compared to unisensory) stimulation seems to depend largely on the specific requirements of the task faced by participants. For example, Sinnett et al. (2008; Experiment 2) reported the facilitation of simple speeded detection latencies on bimodal audiovisual trials (i.e., they observed a violation of the race model; Miller 1982, 1991) when their participants had to make the same simple speeded detection responses to auditory, visual, and audiovisual targets. By contrast, they observed an inhibitory effect when their participants had to respond to the targets in each modality by pressing a separate response key (i.e., the typical Colavita paradigm). However, this latter result is not really so surprising if one stops to consider the fact that in the Colavita task participants can really be thought of as performing two tasks at once: that is, in the traditional two-response version of the Colavita task, the participants perform both a speeded auditory target detection task as well as a speeded visual target detection task. Although on the majority of (unimodal) trials the participants only have to perform one task, on a minority of (bimodal) trials they have to perform both tasks at the same time (and it is on these
trials that the Colavita effect occurs when the nonvisual stimulus is seemingly ignored).*

* One slight complication here, though, relates to the fact that people typically start to couple multiple responses to different stimuli into response couplets under the appropriate experimental conditions (see Ulrich and Miller 2008). Thus, one could argue about whether participants’ responses on the bimodal target trials actually count as a third single (rather than dual) task, but one that, in the two-response version of the Colavita task, involves a two-finger, rather than a one-finger, response. When considered in this light, the interference with performance seen in the Colavita task does not seem quite so surprising.

By contrast, in the redundant target effect paradigm (see earlier), both stimuli are relevant to the same task (i.e., to making a simple speeded target detection response). Researchers have known for more than half a century that people find it difficult to perform two tasks at the same time, regardless of whether the target stimuli relevant to performing those tasks are presented in the same or in different sensory modalities (e.g., Pashler 1994; Spence 2008). One can therefore think of the Colavita paradigm in terms of a form of dual-task interference (resulting from modality-based biased competition at the response-selection level)—interference that appears to be intimately linked to the making of speeded responses to the target stimuli (however, see Koppen et al. 2009). More generally, it is important to stress that although multisensory integration may, under the appropriate conditions, give rise to improved perception/performance, the benefits may necessarily come at the cost of some loss of access to the component unimodal signals (cf. Soto-Faraco and Alsius 2007, 2009). In closing, it is perhaps worth highlighting the fact that the task-dependent nature of the consequences of multisensory integration that shows up in studies related to the Colavita effect has now also been demonstrated in a number of different behavioral paradigms, in both humans (see Cappe et al. in press; Gondan and Fischer 2009; Sinnett et al. 2008; Spence et al. 2003) and monkeys (see Besle et al. 2009; Wang et al. 2008).
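As an aside for readers unfamiliar with the race-model test invoked above (Miller 1982): under a race between independent channels, P(RT ≤ t | AV) ≤ P(RT ≤ t | A) + P(RT ≤ t | V) must hold at every latency t, so bimodal response probabilities that exceed this bound are taken as evidence for coactivation. The short sketch below computes the test from simulated reaction times; the samples are hypothetical and are not data from Sinnett et al. (2008) or any other study cited here.

```python
import random

def ecdf(sample, t):
    """Empirical probability of having responded by time t."""
    return sum(rt <= t for rt in sample) / len(sample)

def race_model_violations(rt_a, rt_v, rt_av, times):
    """Latencies at which P(RT <= t | AV) exceeds the Miller (1982) bound
    P(RT <= t | A) + P(RT <= t | V), capped at 1."""
    violations = []
    for t in times:
        bound = min(1.0, ecdf(rt_a, t) + ecdf(rt_v, t))
        p_av = ecdf(rt_av, t)
        if p_av > bound:
            violations.append((t, p_av, bound))
    return violations

if __name__ == "__main__":
    rng = random.Random(0)
    # Hypothetical reaction time samples (ms).
    rt_a = [rng.gauss(320, 40) for _ in range(500)]   # auditory targets
    rt_v = [rng.gauss(350, 40) for _ in range(500)]   # visual targets
    rt_av = [rng.gauss(280, 35) for _ in range(500)]  # redundant (coactivated) targets
    hits = race_model_violations(rt_a, rt_v, rt_av, range(200, 401, 10))
    for t, p_av, bound in hits[:5]:
        print(f"t = {t} ms: P(AV) = {p_av:.2f} exceeds race-model bound {bound:.2f}")
```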
REFERENCES

Battelli, L., A. Pascual-Leone, and P. Cavanagh. 2007. The ‘when’ pathway of the right parietal lobe. Trends in Cognitive Sciences 11: 204–210. Baylis, G. C., J. Driver, and R. D. Rafal. 1993. Visual extinction and stimulus repetition. Journal of Cognitive Neuroscience 5: 453–466. Baylis, G. C., S. L. Simon, L. L. Baylis, and C. Rorden. 2002. Visual extinction with double simultaneous stimulation: What is simultaneous? Neuropsychologia 40: 1027–1034. Bender, M. B. 1952. Disorders in perception. Springfield, IL: Charles Thomas. Besle, J., O. Bertrand, and M. H. Giard. 2009. Electrophysiological (EEG, sEEG, MEG) evidence for multiple audiovisual interactions in the human auditory cortex. Hearing Research 258(1–2): 143–151. Bolognini, N., I. Senna, A. Maravita, A. Pascual-Leone, and L. B. Merabet. 2010. Auditory enhancement of visual phosphene perception: The effect of temporal and spatial factors and of stimulus intensity. Neuroscience Letters 477: 109–114. Bonneh, Y. S., M. K. Belmonte, F. Pei, P. E. Iversen, T. Kenet, N. Akshoomoff, Y. Adini, H. J. Simon, C. I. Moore, J. F. Houde, and M. M. Merzenich. 2008. Cross-modal extinction in a boy with severely autistic behavior and high verbal intelligence. Cognitive Neuropsychology 25: 635–652. Bridgeman, B. 1990. The physiological basis of the act of perceiving. In Relationships between perception and action: Current approaches, ed. O. Neumann and W. Prinz, 21–42. Berlin: Springer. Brozzoli, C., M. L. Demattè, F. Pavani, F. Frassinetti, and A. Farnè. 2006. Neglect and extinction: Within and between sensory modalities. Restorative Neurology and Neuroscience 24: 217–232. Brugge, J. F., I. O. Volkov, P. C. Garell, R. A. Reale, and M. A. Howard 3rd. 2003. Functional connections between auditory cortex on Heschl’s gyrus and on the lateral superior temporal gyrus in humans. Journal of Neurophysiology 90: 3750–3763. Calvert, G. A., C. Spence, and B. E. Stein (eds.). 2004. The handbook of multisensory processes. Cambridge, MA: MIT Press. Cappe, C., and P. Barone. 2005. Heteromodal connections supporting multisensory integration at low levels of cortical processing in the monkey. European Journal of Neuroscience 22: 2886–2902.
Cappe, C., A. Morel, P. Barone, and E. M. Rouiller. 2009. The thalamocortical projection systems in primates: An anatomical support for multisensory and sensorimotor interplay. Cerebral Cortex 19: 2025–2037. Clavagnier, S., A. Falchier, and H. Kennedy. 2004. Long-distance feedback projections to area V1: Implications for multisensory integration, spatial awareness, and visual consciousness. Cognitive, Affective, and Behavioural Neuroscience 4: 117–126. Colavita, F. B. 1974. Human sensory dominance. Perception and Psychophysics 16: 409–412. Colavita, F. B. 1982. Visual dominance and attention in space. Bulletin of the Psychonomic Society 19: 261–262. Colavita, F. B., R. Tomko, and D. Weisberg. 1976. Visual prepotency and eye orientation. Bulletin of the Psychonomic Society 8: 25–26. Colavita, F. B., and D. Weisberg. 1979. A further investigation of visual dominance. Perception and Psychophysics 25: 345–347. Cooper, R. 1998. Visual dominance and the control of action. In Proceedings of the 20th Annual Conference of the Cognitive Science Society, ed. M. A. Gernsbacher and S. J. Derry, 250–255. Mahwah, NJ: Lawrence Erlbaum Associates. Costantini, M., D. Bueti, M. Pazzaglia, and S. M. Aglioti. 2007. Temporal dynamics of visuo-tactile extinction within and between hemispaces. Neuropsychology 21: 242–250. Cowey, A., and P. Stoerig. 1991. The neurobiology of blindsight. Trends in the Neurosciences 14: 140–145. Critchley, M. 1953. Tactile thought, with special reference to the blind. Brain 76: 19–35. Crowder, R. G. 1968. Repetition effects in immediate memory when there are no repeated elements in the stimuli. Journal of Experimental Psychology 78: 605–609. De Reuck, T., and C. Spence. 2009. Attention and visual dominance. Unpublished manuscript. Desimone, R., and J. Duncan. 1995. Neural mechanisms of selective visual attention. Annual Review of Neuroscience 18: 193–222. Driver, J., and P. Vuilleumier. 2001. Perceptual awareness and its loss in unilateral neglect and extinction. Cognition 79: 39–88. Duncan, J. 1996. Cooperating brain systems in selective perception and action. In Attention and performance XVI: Information integration in perception and communication, ed. T. Inui and J. L. McClelland, 549– 578. Cambridge, MA: MIT Press. Egeth, H. E., and L. C. Sager. 1977. On the locus of visual dominance. Perception and Psychophysics 22: 77–86. Elcock, S., and C. Spence. 2009. Caffeine and the Colavita visual dominance effect. Unpublished manuscript. Ernst, M. 2005. A Bayesian view on multimodal cue integration. In Perception of the human body from the inside out, ed. G. Knoblich, I. Thornton, M. Grosejan, and M. Shiffrar, 105–131. New York: Oxford Univ. Press. Exner, S. 1875. Experimentelle Untersuchung der einfachsten psychischen Processe (Experimental study of the most simple psychological processes). Archiv für die gesammte Physiologie des menschens und der Thiere (Pflüger’s Archive) 11: 403–432. Falchier, A., S. Clavagnier, P. Barone, and H. Kennedy. 2003. Anatomical evidence of multimodal integration in primate striate cortex. Journal of Neuroscience 22: 5749–5759. Falchier, A., C. E. Schroeder, T. A. Hackett, P. Lakatos, S. Nascimento-Silva, I. Ulbert, G. Karmos, and J. F. Smiley. 2010. Projection from visual areas V2 and prostriata to caudal auditory cortex in the monkey. Cerebral Cortex 20: 1529–1538. Falkenstein, M., J. Hohnsbein, J. Hoormann, and L. Blanke. 1991. Effects of crossmodal divided attention on late ERP components: II. Error processing in choice reaction tasks. 
Electroencephalography and Clinical Neurophysiology 78: 447–455. Farnè, A., C. Brozzoli, E. Làdavas, and T. Ro. 2007. Investigating multisensory spatial cognition through the phenomenon of extinction. In Attention and performance XXII: Sensorimotor foundations of higher cognition, ed. P. Haggard, Y. Rossetti, and M. Kawato, 183–206. Oxford: Oxford Univ. Press. Felleman, D. J., and D. C. Van Essen. 1991. Distributed hierarchical processing in primate cerebral cortex. Cerebral Cortex 1: 1–47. Fink, G. R., J. Driver, C. Rorden, T. Baldeweg, and R. J. Dolan. 2000. Neural consequences of competing stimuli in both visual hemifields: A physiological basis for visual extinction. Annals of Neurology 47: 440–446. Foree, D. D., and V. M. J. LoLordo. 1973. Attention in the pigeon: Differential effects of food-getting versus shock-avoidance procedures. Journal of Comparative and Physiological Psychology 85: 551–558. Foxe, J. J., I. A. Morocz, M. M. Murray, B. A. Higgins, D. C. Javitt, and C. E. Schroeder. 2000. Multisensory auditory–somatosensory interactions in early cortical processing revealed by high-density electrical mapping. Cognitive Brain Research 10: 77–83.
Foxe, J. J., E. C. Strugstad, P. Sehatpour, S. Molholm, W. Pasieka, C. E., Schroeder, and M. E. McCourt. 2008. Parvocellular and magnocellular contributions to the initial generators of the visual evoked potential: High-density electrical mapping of the “C1” component. Brain Topography 21: 11–21. Foxe, J. J., G. R. Wylie, A. Martinez, C. E. Schroeder, D. C. Javitt, D. Guilfoyle, W. Ritter, and M. M. Murray. 2002. Auditory–somatosensory multisensory processing in auditory association cortex: An fMRI study. Journal of Neurophysiology 88: 540–543. Fu, K.-M. G., T. A. Johnston, A. S. Shah, L. Arnold, J. Smiley, T. A. Hackett, P. E. Garraghty, and C. E. Schroeder. 2003. Auditory cortical neurons respond to somatosensory stimulation. Journal of Neuroscience 23: 7510–7515. Gallace, A., H. Z. Tan, and C. Spence. 2007. Multisensory numerosity judgments for visual and tactile stimuli. Perception and Psychophysics 69: 487–501. Ghazanfar, A. A., J. X. Maier, K. L. Hoffman, and N. K. Logothetis. 2005. Multisensory integration of dynamic faces and voices in Rhesus monkey auditory cortex. Journal of Neuroscience 25: 5004–5012. Godey, B., D. Schwartz, J. B. de Graaf, P. Chauvel, and C. Liegeois-Chauvel. 2001. Neuromagnetic source localization of auditory evoked fields and intracerebral evoked potentials: A comparison of data in the same patients. Clinical Neurophysiology 112: 1850–1859. Golob, E. J., G. G. Miranda, J. K. Johnson, and A. Starr. 2001. Sensory cortical interactions in aging, mild cognitive impairment, and Alzheimer’s disease. Neurobiology of Aging 22: 755–763. Gondan, M., and V. Fischer. 2009. Serial, parallel, and coactive processing of double stimuli presented with onset asynchrony. Perception 38(Suppl.): 16. Gorea, A., and D. Sagi. 2000. Failure to handle more than one internal representation in visual detection tasks. Proceedings of the National Academy of Sciences of the United States of America 97: 12380–12384. Gorea, A., and D. Sagi, D. 2002. Natural extinction: A criterion shift phenomenon. Visual Cognition 9: 913–936. Green, D. M., and J. A. Swets. 1966. Signal detection theory and psychophysics. New York: Wiley. Gregg, L. W., and W. J. Brogden. 1952. The effect of simultaneous visual stimulation on absolute auditory sensitivity. Journal of Experimental Psychology 43: 179–186. Hackett, T. A., L. De La Mothe, I. Ulbert, G. Karmos, J. Smiley, and C. E. Schroeder. 2007. Multisensory convergence in auditory cortex: II. Thalamocortical connections of the caudal superior temporal plane. Journal of Comparative Neurology 502: 924–952. Hahnloser, R., R. J. Douglas, M. Mahowald, and K. Hepp. 1999. Feedback interactions between neuronal pointers and maps for attentional processing. Nature Neuroscience 2: 746–752. Harris, J. A., C. Miniussi, I. M. Harris, and M. E. Diamond. 2002. Transient storage of a tactile memory trace in primary somatosensory cortex. Journal of Neuroscience 22: 8720–8725. Hartcher-O’Brien, J., A. Gallace, B. Krings, C. Koppen, and C. Spence. 2008. When vision ‘extinguishes’ touch in neurologically-normal people: Extending the Colavita visual dominance effect. Experimental Brain Research 186: 643–658. Hartcher-O’Brien, J., C. Levitan, and C. Spence. 2010. Out-of-touch: Does vision dominate over touch when it occurs off the body? Brain Research 1362: 48–55. Hecht, D., and M. Reiner. 2009. Sensory dominance in combinations of audio, visual and haptic stimuli. Experimental Brain Research 193: 307–314. Hefferline, R. F., and T. B. Perera. 1963. 
Proprioceptive discrimination of a covert operant without its observation by the subject. Science 139: 834–835. Hirsh, I. J., and C. E. Sherrick Jr. 1961. Perceived order in different sense modalities. Journal of Experimental Psychology 62: 423–432. Hohnsbein, J., and M. Falkenstein. 1991. Visual dominance: Asymmetries in the involuntary processing of visual and auditory distractors. In Channels in the visual nervous system: Neurophysiology, psychophysics and models, ed. B. Blum, 301–313. London: Freund Publishing House. Hohnsbein, J., M. Falkenstein, and J. Hoormann. 1991. Visual dominance is reflected in reaction times and event-related potentials (ERPs). In Channels in the visual nervous system: Neurophysiology, psychophysics and models, ed. B. Blum, 315–333. London: Freund Publishing House. Howard, M. A., I. O. Volkov, R. Mirsky, P. C. Garell, M. D. Noh, M. Granner, H. Damasio, M. Steinschneider, R. A. Reale, J. E. Hind, and J. F. Brugge. 2000. Auditory cortex on the human posterior superior temporal gyrus. Journal of Comparative Neurology 416: 79–92. Humphreys, G. W., C. Romani, A. Olson, M. J. Riddoch, and J. Duncan. 1995. Nonspatial extinction following lesions of the parietal lobe in man. Nature 372: 357–359. Inui, K., X. Wang, Y. Tamura, Y. Kaneoke, and R. Kakigi. 2004. Serial processing in the human somatosensory system. Cerebral Cortex 14: 851–857.
Jaśkowski, P. 1996. Simple reaction time and perception of temporal order: Dissociations and hypotheses. Perceptual and Motor Skills 82: 707–730. Jaśkowski, P. 1999. Reaction time and temporal-order judgment as measures of perceptual latency: The problem of dissociations. In Cognitive contributions to the perception of spatial and temporal events, ed. G. Aschersleben, T. Bachmann, and J. Műsseler, 265–282. North-Holland: Elsevier Science. Jaśkowski, P., F. Jaroszyk, and D. Hojan-Jesierska. 1990. Temporal-order judgments and reaction time for stimuli of different modalities. Psychological Research 52: 35–38. Johnson, T. L., and K. L. Shapiro. 1989. Attention to auditory and peripheral visual stimuli: Effects of arousal and predictability. Acta Psychologica 72: 233–245. Kayser, C., C. I. Petkov, and N. K. Logothetis. 2008. Visual modulation of neurons in auditory cortex. Cerebral Cortex 18: 1560–1574. Koppen, C., A. Alsius, and C. Spence. 2008. Semantic congruency and the Colavita visual dominance effect. Experimental Brain Research 184: 533–546. Koppen, C., C. Levitan, and C. Spence. 2009. A signal detection study of the Colavita effect. Experimental Brain Research 196: 353–360. Koppen, C., and C. Spence. 2007a. Seeing the light: Exploring the Colavita visual dominance effect. Experimental Brain Research 180: 737–754. Koppen, C., and C. Spence. 2007b. Audiovisual asynchrony modulates the Colavita visual dominance effect. Brain Research 1186: 224–232. Koppen, C., and C. Spence. 2007c. Spatial coincidence modulates the Colavita visual dominance effect. Neuroscience Letters 417: 107–111. Koppen, C., and C. Spence. 2007d. Assessing the role of stimulus probability on the Colavita visual dominance effect. Neuroscience Letters 418: 266–271. Kristofferson, A. B. 1965. Attention in time discrimination and reaction time. NASA Contractors Report 194. Washington, D.C.: Office of Technical Services, U.S. Department of Commerce. Lamme, V. A. F. 2001. Blindsight: The role of feedforward and feedback corticocortical connections. Acta Psychologica 107: 209–228. Lamme, V. A. F., H. Supèr, R. Landman, P. R. Roelfsema, and H. Spekreijse. 2000. The role of primary visual cortex (V1) in visual awareness. Vision Research 40: 1507–1521. Liegeois-Chauvel, C., A. Musolino, J. M. Badier, P. Marquis, and P. Chauvel. 1994. Evoked potentials recorded from the auditory cortex in man: Evaluation and topography of the middle latency components. Electroencephalography and Clinical Neuroscience 92: 204–214. LoLordo, V. M. 1979. Selective associations. In Mechanisms of learning and motivation: A memorial to Jerzy Konorsky, ed. A. Dickinson and R. A. Boakes, 367–399. Hillsdale, NJ: Erlbaum. LoLordo, V. M., and D. R. Furrow. 1976. Control by the auditory or the visual element of a compound discriminative stimulus: Effects of feedback. Journal of the Experimental Analysis of Behavior 25: 251–256. Lu, Z.-L., S. J. Williamson, and L. Kaufman. 1992. Behavioral lifetime of human auditory sensory memory predicted by physiological measures. Science 258: 1669–1670. Lucey, T., and C. Spence. 2009. Visual dominance. Unpublished manuscript. Macaluso, E., and J. Driver. 2005. Multisensory spatial interactions: A window onto functional integration in the human brain. Trends in Neurosciences 28: 264–271. Macknik, S. L. 2009. The role of feedback in visual attention and awareness. Perception 38(Suppl.): 162. Macknik, S., and S. Martinez-Conde. 2007. The role of feedback in visual masking and visual processing. 
Advances in Cognitive Psychology 3: 125–152. Macknik, S., and S. Martinez-Conde. In press. The role of feedback in visual attention and awareness. In The new cognitive neurosciences, ed. M. S. A. Gazzaniga, 1163–1177. Cambridge, MA: MIT Press. Manly, T., I. H. Robertson, M. Galloway, and K. Hawkins. 1999. The absent mind: Further investigations of sustained attention to response. Neuropsychologia 37: 661–670. Marks, L. E., E. Ben-Artzi, and S. Lakatos. 2003. Cross-modal interactions in auditory and visual discrimination. International Journal of Psychophysiology 50: 125–145. Martuzzi, R., M. M. Murray, C. M. Michel, J. P. Thiran, P. P. Maeder, S. Clarke, and R. A. Meuli. 2007. Multisensory interactions within human primary cortices revealed by BOLD dynamics. Cerebral Cortex 17: 1672–1679. McDonald, J. J., W. A. Teder-Sälejärvi, F. Di Russo, and S. A. Hillyard. 2005. Neural basis of auditory-induced shifts in visual time-order perception. Nature Neuroscience 8: 1197–1202. Meltzer, D., and M. A. Masaki. 1973. Measures of stimulus control and stimulus dominance. Bulletin of the Psychonomic Society 1: 28–30.
Miller, J. O. 1982. Divided attention: Evidence for coactivation with redundant signals. Cognitive Psychology 14: 247–279. Miller, J. O. 1986. Time course of coactivation in bimodal divided attention. Perception and Psychophysics 40: 331–343. Miller, J. O. 1991. Channel interaction and the redundant targets effect in bimodal divided attention. Journal of Experimental Psychology: Human Perception and Performance 17: 160–169. Miller, J., R. Ulrich, and B. Rolke. 2009. On the optimality of serial and parallel processing in the psychological refractory period paradigm: Effects of the distribution of stimulus onset asynchronies. Cognitive Psychology 58: 273–310. Miyake, S., S. Taniguchi, and K. Tsuji. 1986. Effects of light stimulus upon simple reaction time and EP latency to the click presented with different SOA. Japanese Psychological Research 28: 1–10. Murray, M. M., S. Molholm, C. M. Michel, D. J. Heslenfeld, W. Ritter, D. C. Javitt, C. E. Schroeder, C. E., and J. J. Foxe. 2005. Grabbing your ear: Auditory–somatosensory multisensory interactions in early sensory cortices are not constrained by stimulus alignment. Cerebral Cortex 15: 963–974. Müsseler, J., and B. Hommel. 1997a. Blindness to response-compatible stimuli. Journal of Experimental Psychology: Human Perception and Performance 23: 861–872. Müsseler, J., and B. Hommel. 1997b. Detecting and identifying response-compatible stimuli. Psychonomic Bulletin and Review 4: 125–129. Neumann, O. 1990. Direct parameter specification and the concept of perception. Psychological Research 52: 207–215. Nickerson, R. 1973. Intersensory facilitation of reaction time: Energy summation or preparation enhancement? Psychological Review 80: 489–509. Noesselt, T., J. W. Rieger, M. A. Schoenfeld, M. Kanowski, H. Hinrichs, H.-J. Heinze, and J. Driver. 2007. Audiovisual temporal correspondence modulates human multisensory temporal sulcus plus primary sensory cortices. Journal of Neuroscience 27: 11431–11441. Occelli, V., J. Hartcher O’Brien, C. Spence, and M. Zampini. 2010. Assessing the audiotactile Colavita effect in near and rear space. Experimental Brain Research 203: 517–532. O'Connor, N., and B. Hermelin. 1963. Sensory dominance in autistic children and subnormal controls. Perceptual and Motor Skills 16: 920. Odgaard, E. C., Y. Arieh, and L. E. Marks. 2003. Cross-modal enhancement of perceived brightness: Sensory interaction versus response bias. Perception and Psychophysics 65: 123–132. Odgaard, E. C., Y. Arieh, and L. E. Marks. 2004. Brighter noise: Sensory enhancement of perceived loudness by concurrent visual stimulation. Cognitive, Affective, and Behavioral Neuroscience 4: 127–132. Oray, S., Z. L. Lu, and M. E. Dawson. 2002. Modification of sudden onset auditory ERP by involuntary attention to visual stimuli. International Journal of Psychophysiology 43: 213–224. Osborn, W. C., R. W. Sheldon, and R. A. Baker. 1963. Vigilance performance under conditions of redundant and nonredundant signal presentation. Journal of Applied Psychology 47: 130–134. Partan, S., and P. Marler. 1999. Communication goes multimodal. Science 283: 1272–1273. Pascual-Leone, A., and V. Walsh. 2001. Fast backprojections from the motion to the primary visual area necessary for visual awareness. Science 292: 510–512. Pashler, H. 1994. Dual-task interference in simple tasks: Data and theory. Psychological Bulletin 116: 220–244. Peers, P. V., C. J. H. Ludwig, C. Rorden, R. Cusack, C. Bonfiglioli, C. Bundesen, J. Driver, N. Antoun, and J. Duncan. 2005. 
Attentional functions of parietal and frontal cortex. Cerebral Cortex 15: 1469–1484. Posner, M. I., M. J. Nissen, and R. M. Klein. 1976. Visual dominance: An information-processing account of its origins and significance. Psychological Review 83: 157–171. Quinlan, P. 2000. The “late” locus of visual dominance. Abstracts of the Psychonomic Society 5: 64. Randich, A., R. M. Klein, and V. M. LoLordo. 1978. Visual dominance in the pigeon. Journal of the Experimental Analysis of Behavior 30: 129–137. Rapp, B., and S. K. Hendel. 2003. Principles of cross-modal competition: Evidence from deficits of attention. Psychonomic Bulletin and Review 10: 210–219. Ricci, R., and A. Chatterjee. 2004. Sensory and response contributions to visual awareness in extinction. Experimental Brain Research 157: 85–93. Rizzolatti, G., and A. Berti. 1990. Neglect as a neural representation deficit. Revue Neurologique (Paris) 146: 626–634. Rizzolatti, G., L. Fadiga, L. Fogassi, and V. Gallese. 1997. The space around us. Science 277: 190–191. Rockland, K. S., and H. Ojima. 2003. Multisensory convergence in calcarine visual areas in macaque monkey. International Journal of Psychophysiology 50: 19–26.
Rodway, P. 2005. The modality shift effect and the effectiveness of warning signals in different modalities. Acta Psychologica 120: 199–226. Romei, V., M. M. Murray, C. Cappe, and G. Thut. 2009. Preperceptual and stimulus-selective enhancement of low-level human visual cortex excitability by sounds. Current Biology 19: 1799–1805. Romei, V., M. M. Murray, L. B. Merabet, and G. Thut. 2007. Occipital transcranial magnetic stimulation has opposing effects on visual and auditory stimulus detection: Implications for multisensory interactions. Journal of Neuroscience 27: 11465–11472. Rorden, C., J. B. Mattingley, H.-O. Karnath, and J. Driver. 1997. Visual extinction and prior entry: Impaired perception of temporal order with intact motion perception after unilateral parietal damage. Neuropsychologia 35: 421–433. Rutschmann, J., and R. Link. 1964. Perception of temporal order of stimuli differing in sense mode and simple reaction time. Perceptual and Motor Skills 18: 345–352. Sarri, M., F. Blankenburg, and J. Driver. 2006. Neural correlates of crossmodal visual–tactile extinction and of tactile awareness revealed by fMRI in a right-hemisphere stroke patient. Neuropsychologia 44: 2398–2410. Schroeder, C. E., and J. J. Foxe. 2002. The timing and laminar profile of converging inputs to multisensory areas of the macaque neocortex. Brain Research: Cognitive Brain Research 14: 187–198. Schroeder, C. E., and J. J. Foxe. 2004. Multisensory convergence in early cortical processing. In The handbook of multisensory processes, ed. G. A. Calvert, C. Spence, and B. E. Stein, 295–309. Cambridge, MA: MIT Press. Schroeder, C. E., R. W. Lindsley, C. Specht, A. Marcovici, J. F. Smiley, and D. C. Javitt. 2001. Somatosensory input to auditory association cortex in the macaque monkey. Journal of Neurophysiology 85: 1322–1327. Schroeder, C. E., A. D. Mehta, and S. J. Givre. 1998. A spatiotemporal profile of visual system activation revealed by current source density analysis in the awake macaque. Cerebral Cortex 8: 575–592. Schroeder, C. E., S. Molholm, P. Lakatos, W. Ritter, and J. J. Foxe. 2004. Human simian correspondence in the early cortical processing of multisensory cues. Cognitive Processing 5: 140–151. Schubert, R., F. Blankenberg, S. Lemm, A. Villringer, and G. Curio. 2006. Now you feel it, now you don’t: ERP correlates of somatosensory awareness. Psychophysiology 43: 31–40. Sereno, M. I., A. M. Dale, J. B. Reppas, K. K. Kwong, J. W. Belliveau, T. J. Brady, B. R. Rosen, and R. B. H. Tootell. 1995. Borders of multiple visual areas in humans revealed by functional magnetic resonance imaging. Science 268: 889–893. Shapiro, K. L., B. Egerman, and R. M. Klein. 1984. Effects of arousal on human visual dominance. Perception and Psychophysics 35: 547–552. Shapiro, K. L., W. J. Jacobs, and V. M. LoLordo. 1980. Stimulus–reinforcer interactions in Pavlovian conditioning of pigeons: Implications for selective associations. Animal Learning and Behavior 8: 586–594. Shapiro, K. L., and T. L. Johnson. 1987. Effects of arousal on attention to central and peripheral visual stimuli. Acta Psychologica 66: 157–172. Sinnett, S., S. Soto-Faraco, and C. Spence. 2008. The co-occurrence of multisensory competition and facilitation. Acta Psychologica 128: 153–161. Sinnett, S., C. Spence, and S. Soto-Faraco. 2007. Visual dominance and attention: The Colavita effect revisited. Perception and Psychophysics 69: 673–686. Smiley, J., T. A. Hackett, I. Ulbert, G. Karmos, P. Lakatos, D. C.
Javitt, and C. E. Schroeder. 2007. Multisensory convergence in auditory cortex: I. Cortical connections of the caudal superior temporal plane in Macaque monkey. Journal of Comparative Neurology 502: 894–923. Smith, A. 2002. Effects of caffeine on human behavior. Food Chemistry and Toxicology 40: 1243–1255. Smith, A. P., A. M. Kendrick, and A. L. Maben. 1992. Effects of breakfast and caffeine on performance and mood in the late morning and after lunch. Neuropsychobiology 26: 198–204. Smith, W. F. 1933. The relative quickness of visual and auditory perception. Journal of Experimental Psychology 16: 239–257. Soto-Faraco, S., and A. Alsius. 2007. Conscious access to the unisensory components of a cross-modal illusion. Neuroreport 18: 347–350. Soto-Faraco, S., and A. Alsius. 2009. Deconstructing the McGurk–MacDonald illusion. Journal of Experimental Psychology: Human Perception and Performance 35: 580–587. Spence, C. 2008. Cognitive neuroscience: Searching for the bottleneck in the brain. Current Biology 18: R965–R968. Spence, C. 2010. Prior entry: Attention and temporal perception. In Attention and time, ed. A. C. Nobre and J. T. Coull, 89–104. Oxford: Oxford Univ. Press. Spence, C., R. Baddeley, M. Zampini, R. James, and D. I. Shore. 2003. Crossmodal temporal order judgments: When two locations are better than one. Perception and Psychophysics 65: 318–328.
Spence, C., M. E. R. Nicholls, and J. Driver. 2001a. The cost of expecting events in the wrong sensory modality. Perception and Psychophysics 63: 330–336. Spence, C., D. I. Shore, and R. M. Klein. 2001b. Multisensory prior entry. Journal of Experimental Psychology: General 130: 799–832. Spence, C., and S. Soto-Faraco. 2009. Auditory perception: Interactions with vision. In Auditory perception, ed. C. Plack, 271–296. Oxford: Oxford Univ. Press. Sperdin, H. F., C. Cappe, J. J. Foxe, and M. M. Murray. 2009. Early, low-level auditory–somatosensory multisensory interactions impact reaction time speed. Frontiers in Integrative Neuroscience 3(2): 1–10. Stein, B. E., N. London, L. K. Wilkinson, and D. P. Price. 1996. Enhancement of perceived visual intensity by auditory stimuli: A psychophysical analysis. Journal of Cognitive Neuroscience 8: 497–506. Stone, J. V., N. M. Hunkin, J. Porrill, R. Wood, V. Keeler, M. Beanland, M. Port, and N. R. Porter. 2001. When is now? Perception of simultaneity. Proceedings of the Royal Society (B) 268: 31–38. Tajadura-Jiménez, A., N. Kitagawa, A. Väljamäe, M. Zampini, M. M. Murray, and C. Spence. 2009. Auditory– somatosensory multisensory interactions are spatially modulated by stimulated body surface and acoustic spectra. Neuropsychologia 47: 195–203. Taylor, J. L., and D. I. McCloskey. 1996. Selection of motor responses on the basis of unperceived stimuli. Experimental Brain Research 110: 62–66. Thompson, R. F., J. F. Voss, and W. J. Brogden. 1958. Effect of brightness of simultaneous visual stimulation on absolute auditory sensitivity. Journal of Experimental Psychology 55: 45–50. Titchener, E. B. 1908. Lectures on the elementary psychology of feeling and attention. New York: Macmillan. Turatto, M., F. Benso, G. Galfano, L. Gamberini, and C. Umilta. 2002. Non-spatial attentional shifts between audition and vision. Journal of Experimental Psychology: Human Perception and Performance 28: 628–639. Uetake, K., and Y. Kudo. 1994. Visual dominance over hearing in feed acquisition procedure of cattle. Applied Animal Behaviour Science 42: 1–9. Ulrich, R., and J. Miller. 2008. Response grouping in the psychological refractory period (PRP) paradigm: Models and contamination effects. Cognitive Psychology 57: 75–121. Uusitalo, M. A., S. J. Williamson, and M. T. Seppä. 1996. Dynamical organisation of the human visual system revealed by lifetimes of activation traces. Neuroscience Letters 213: 149–152. Van Damme, S., G. Crombez, and C. Spence. 2009a. Is the visual dominance effect modulated by the threat value of visual and auditory stimuli? Experimental Brain Research 193: 197–204. Van Damme, S., A. Gallace, C. Spence, and G. L. Moseley. 2009b. Does the sight of physical threat induce a tactile processing bias? Modality-specific attentional facilitation induced by viewing threatening pictures. Brain Research 1253: 100–106. Vibell, J., C. Klinge, M. Zampini, C. Spence, and A. C. Nobre. 2007. Temporal order is coded temporally in the brain: Early ERP latency shifts underlying prior entry in a crossmodal temporal order judgment task. Journal of Cognitive Neuroscience 19: 109–120. Wang, Y., S. Celebrini, Y. Trotter, and P. Barone. 2008. Visuo-auditory interactions in the primary visual cortex of the behaving monkey: Electrophysiological evidence. BMC Neuroscience 9: 79. Wilcoxin, H. C., W. B. Dragoin, and P. A. Kral. 1971. Illness-induced aversions in rat and quail: Relative salience of visual and gustatory cues. Science 171: 826–828. Zahn, T. P., D. Pickar, and R. J. Haier. 
1994. Effects of clozapine, fluphenazine, and placebo on reaction time measures of attention and sensory dominance in schizophrenia. Schizophrenia Research 13: 133–144. Zwyghuizen-Doorenbos, A., T. A. Roehrs, L. Lipschutz, V. Timms, and T. Roth. 1990. Effects of caffeine on alertness. Psychopharmacology 100: 36–39. Zylberberg, A., S. Dehaene, G. B. Mindlin, and M. Sigman. 2009. Neurophysiological bases of exponential sensory decay and top-down memory retrieval: A model. Frontiers in Computational Neuroscience 3(4): 1–16.
28 The Body in a Multisensory World

Tobias Heed and Brigitte Röder
CONTENTS
28.1 Introduction
28.2 Construction of Body Schema from Multisensory Information
    28.2.1 Representing Which Parts Make Up the Own Body
    28.2.2 Multisensory Integration for Limb and Body Ownership
    28.2.3 Extending the Body: Tool Use
    28.2.4 Rapid Plasticity of Body Shape
    28.2.5 Movement and Posture Information in the Brain
    28.2.6 The Body Schema: A Distributed versus Holistic Representation
    28.2.7 Interim Summary
28.3 The Body as a Modulator for Multisensory Processing
    28.3.1 Recalibration of Sensory Signals and Optimal Integration
    28.3.2 Body Schema and Peripersonal Space
    28.3.3 Peripersonal Space around Different Parts of the Body
    28.3.4 Across-Limb Effects in Spatial Remapping of Touch
    28.3.5 Is the External Reference Frame a Visual One?
    28.3.6 Investigating the Body Schema and Reference Frames with Electrophysiology
    28.3.7 Summary
28.4 Conclusion
References
28.1 INTRODUCTION

It is our body through which we interact with the environment. We have a very clear sense of who we are: we know where our body ends, and what body parts we own. Beyond that, we usually are (or can easily become) aware of where each of our body parts is currently located, and most of our movements seem effortless, whether performed under conscious control or not. When we think about ourselves, we normally perceive our body as a stable entity. For example, when we go to bed, we do not expect that our body will be different when we wake up the next morning. Quite contrary to such introspective assessment, the brain has been found to be surprisingly flexible in updating its representation of the body. As an illustration, consider what happens when an arm or leg becomes numb after you have sat or slept in an unsuitable position for too long. Touching the numb foot feels very strange, as if you were touching someone else’s foot. When you lift a numb hand with the other hand, it feels far too heavy. Somehow, it feels as if the limb does not belong to your own body.

Neuroscientists have long been fascinated with how the brain represents the body. It is usually assumed that there are several different types of body representations, but there is no consensus about what these representations are, or how many there may be (de Vignemont 2010; Gallagher 1986; Berlucchi and Aglioti 2010; see also Dijkerman and de Haan 2007 and commentaries thereof).
The most common distinction is that between a body schema and a body image. The body schema is usually defined as a continuously updated sensorimotor map of the body that is important in the context of action, informing the brain about what parts belong to the body, and where those parts are currently located (de Vignemont 2010). In contrast, the term body image is usually used to refer to perceptual, emotional, or conceptual knowledge about the body. However, other taxonomies have been proposed (see Berlucchi and Aglioti 2010; de Vignemont 2010), and the use of the terms body schema and body image has been inconsistent. This chapter will not present an exhaustive debate about these definitions, and we refer the interested reader to the articles cited above for detailed discussion; in this article, we will use the term body schema with the sensorimotor definition introduced above, referring to both aspects of what parts make up the body, and where they are located.

The focus of this chapter will be on the importance of multisensory processing for representing the body, as well as on the role of body representations for multisensory processing. On one hand, one can investigate how the body schema is constructed and represented in the brain, and Section 28.2 will illustrate that the body schema emerges from the interaction of multiple sensory modalities. For this very reason, one can, on the other hand, ask how multisensory interactions between the senses are influenced by the fact that the brain commands a body. Section 28.3, therefore, will present research on how the body schema is important in multisensory interactions, especially for spatial processing.
28.2 CONSTRUCTION OF BODY SCHEMA FROM MULTISENSORY INFORMATION

28.2.1 Representing Which Parts Make Up the Own Body

There is some evidence suggesting that an inventory of the normally existing body parts is genetically predetermined. Just like amputees, people born without arms and/or legs can have vivid sensations of the missing limbs, including the feeling of using them for gestural movements during conversation and for finger-aided counting (Ramachandran 1993; Ramachandran and Hirstein 1998; Brugger et al. 2000; Saadah and Melzack 1994; see also Lacroix et al. 1992). This phenomenon has therefore been termed phantom limbs. Whereas the existence of a phantom limb in amputees could be explained by the persistence of experience-induced representations of this limb after the amputation, such an explanation does not hold for congenital phantom limbs. In one person with congenital phantom limbs, transcranial magnetic stimulation (TMS) over primary motor, premotor, parietal, and primary sensory cortex evoked sensations and movements of the congenital phantom limbs (Brugger et al. 2000). This suggests that the information about which parts make up the own body is distributed across different areas of the brain.

There are not many reports of congenital phantoms in the literature, and so the phenomenon may be rare. However, the experience of phantom limbs after the loss of a limb, for example, due to amputation, is very common. It has been reported (Simmel 1962) that the probability of perceiving phantom limbs gradually increases with the age of limb loss from very young (2 of 10 children with amputations below the age of 2) to the age of 9 years and older (all of 60 cases), suggesting that developmental factors within this age interval may be crucial for the construction of the body schema (and, in turn, for the occurrence of phantom limbs).

The term “phantom limb” refers to limbs that would normally be present in a healthy person. In contrast, a striking impairment after brain damage, for example, to the basal ganglia (Halligan et al. 1993), the thalamus (Bakheit and Roundhill 2005), or the frontal lobe (McGonigle et al. 2002), is the report of one or more supernumerary limbs in addition to the normal limbs. The occurrence of a supernumerary limb is usually associated with the paralysis of the corresponding real limb, which is also attributable to the brain lesion. The supernumerary limb is vividly felt, and patients confabulate to rationalize why the additional limb is present (e.g., it was attached by the clinical staff during
sleep), and why it is not visible (e.g., it was lost 20 years ago) (Halligan et al. 1993; Sellal et al. 1996; Bakheit and Roundhill 2005). It has therefore been suggested that the subjective presence of a supernumerary limb may result from cognitive conflicts between different pieces of sensory information (e.g., visual vs. proprioceptive) or fluctuations in the awareness about the paralysis, which in turn may be resolved by assuming the existence of two (or more) limbs rather than one (Halligan et al. 1993; Ramachandran and Hirstein 1998).

Whereas a patient with a phantom or a supernumerary limb perceives more limbs than he actually owns, some brain lesions result in the opposite phenomenon of patients denying the ownership of an existing limb. This impairment, termed somatoparaphrenia, has been reported to occur after temporo-parietal (Halligan et al. 1995) or thalamic-temporo-parietal damage (Daprati et al. 2000)—notably all involving the parietal lobe, which is thought to mediate multisensory integration for motor planning. Somatoparaphrenia is usually observed in conjunction with hemineglect and limb paralysis (Cutting 1978; Halligan et al. 1995; Daprati et al. 2000) and has been suggested to reflect a disorder of body awareness due to the abnormal sensorimotor feedback for the (paralyzed) limb after brain damage (Daprati et al. 2000).

Lesions can also affect the representation of the body and self as a whole, rather than just affecting single body parts. These experiences have been categorized into three distinct phenomena (Blanke and Metzinger 2009). During out-of-body experiences, a person feels as if she were located outside of her real body, looking at herself, often from above. In contrast, during an autoscopic illusion, the person localizes herself in her real body, but sees an illusory body in extrapersonal space (e.g., in front of herself). Finally, during heautoscopy, a person sees a second body and feels located in both, either at the same time or in sometimes rapid alternation. In patients, such illusions have been suggested to be related to damage to the temporo-parietal junction (TPJ) (Blanke et al. 2004), and an out-of-body experience was elicited by stimulation of an electrode implanted over the TPJ for presurgical assessment (Blanke et al. 2002). Interestingly, whole body illusions can coincide with erroneous visual perceptions about body parts, for example, an impression of limb shortening or illusory flexion of an arm. It has therefore been suggested that whole body illusions are directly related to the body schema, resulting from a failure to integrate multisensory (e.g., vestibular and visual) information about the body and its parts, similar to the proposed causes of supernumerary limbs (Blanke et al. 2004).

In sum, many brain regions are involved in representing the configuration of the body; some aspects of these representations seem to be innate, and are probably refined during early development.
28.2.2 Multisensory Integration for Limb and Body Ownership Although the previous section suggests that some aspects of the body schema may be hardwired, the example of the sleeping foot with which we started this chapter suggests that the body schema is a more flexible representation. Such fast changes of the body’s representation have been demonstrated with an ingenious experimental approach: to mislead the brain as to the status of ownership of a new object and to provoke its inclusion into the body schema. This trick can be achieved by using rubber hands: a rubber hand is placed in front of a participant in such a way that it could belong to her own body, and it is then stroked in parallel with the participant’s real, hidden hand. Most participants report that they feel the stroking at the location of the rubber hand, and that they feel as if the rubber hand were their own (Botvinick and Cohen 1998). One of the main determinants for this illusion to arise is the synchrony of the visual and tactile stimulation. In other words, the touches felt at the own hand and those seen to be delivered to the rubber hand must match. It might in fact be possible to trick the brain into integrating other than handlike objects into its body schema using this synchronous stroking technique: when the experimenter stroked not a rubber hand but a shoe placed on the table (Ramachandran and Hirstein 1998) or even the table surface (Armel and
Ramachandran 2003), participants reported that they “felt” the touch delivered to their real hand to originate from the shoe and the table. Similarly, early event-related potentials (ERPs) in response to tactile stimuli were enhanced after synchronous stimulation of a rubber hand as well as of a nonhand object (Press et al. 2008). Even more surprisingly, participants in Armel and Ramachandran’s study displayed signs of distress and an increased skin conductance response when the shoe was hit with a hammer, or a band-aid was ripped off the table surface. Similar results, that is, signs of distress, were also observed when the needle of a syringe was stabbed into the rubber hand, and these behavioral responses were associated with brain activity in anxiety-related brain areas (Ehrsson et al. 2007). Thus, the mere synchrony of visual events at an object with the tactile sensations felt at the hand seem to have led to some form of integration of the objects (the rubber hand, the shoe, or the table surface) into the body schema, resulting in physiological and emotional responses usually reserved for the real body. It is important to understand that participants in the rubber hand illusion (RHI) do not feel additional limbs; rather, they feel a displacement of their own limb, which is reflected behaviorally by reaching errors after the illusion has manifested itself (Botvinick and Cohen 1998; Holmes et al. 2006; but see Kammers et al. 2009a, 2009c, and discussion in de Vignemont 2010), and by an adjustment of grip aperture when finger posture has been manipulated during the RHI (Kammers et al. 2009b). Thus, a new object (the rubber hand) is integrated into the body schema, but is interpreted as an already existing part (the own but hidden arm). The subjective feeling of ownership of a rubber hand has also been investigated using functional magnetic resonance imaging (fMRI). Activity emerged in the ventral premotor cortex and (although, statistically, with only a tendency for significance) in the superior parietal lobule (SPL) (Ehrsson et al. 2004). In the monkey, both of these areas respond to peripersonal stimuli around the hand and head. Activity related to multisensory integration—synchrony of tactile and visual events, as well as the alignment of visual and proprioceptive information about arm posture—was observed in the SPL, presumably in the human homologue of an area in the monkey concerned with arm reaching [the medial intraparietal (MIP) area]. Before the onset of the illusion, that is, during its buildup, activity was seen in the intraparietal sulcus (IPS), in the dorsal premotor cortex (PMd), and in the supplementary motor area (SMA), which are all thought to be part of an arm-reaching circuit in both monkeys and humans. Because the rubber arm is interpreted as one’s own arm, the illusion may be based on a recalibration of perceived limb position, mediated parietally, according to the visual information about the rubber arm (Ehrsson et al. 2004; Kammers et al. 2009c). As such, the integration of current multisensory information about the alleged position of the hand must be integrated with long-term knowledge about body structure (i.e., the fact that there is a hand to be located) (de Vignemont 2010; Tsakiris 2010). Yet, as noted earlier, an integration of a non-body-like object also seems possible in some cases. 
Besides the illusory integration of a shoe or the table surface due to synchronous stimulation, an association of objects with the body has been reported in a clinical case of a brain-lesioned patient who denied ownership of her arm and hand; when she wore the wedding ring on that hand, she did not recognize it as her own. When it was taken off the neglected hand, the patient immediately recognized the ring as her own (Aglioti et al. 1996). Such findings might therefore indicate an involvement of higher cognitive processes in the construction of the body schema. It was mentioned in the previous section that brain damage can lead to misinterpretations of single limbs (say, an arm or a leg), but also of the whole body. Similarly, the rubber hand paradigm has been modified to study also the processes involved in the perception of the body as a whole and of the feeling of self. Participants viewed a video image of themselves filmed from the back (Ehrsson 2007) or a virtual reality character at some distance in front of them (Lenggenhager et al. 2007). They could see the back of the figure in front of them being stroked in synchrony with feeling their own back being stroked. This manipulation resulted in the feeling of the self being located outside the own body and of looking at oneself (Ehrsson 2007). Furthermore, when participants were displaced from their viewing position and asked to walk to the location at which they felt “themselves” during the illusion, they placed themselves in between the real and the virtual body’s
locations (Lenggenhager et al. 2007). Although both rubber hand and whole body illusions use the same kind of multisensory manipulation, the two phenomena have been proposed to tap into different aspects of body processing (Blanke and Metzinger 2009): whereas the rubber hand illusion leads to the attribution of an object into the body schema, the whole body illusion manipulates the location of a global “self” (Blanke and Metzinger 2009; Metzinger 2009), and accordingly the firstperson perspective (Ehrsson 2007). This distinction notwithstanding, both illusions convincingly demonstrate how the representation of the body in the brain is determined by the integration of multisensory information. To sum up, our brain uses the synchrony of multisensory (visual and tactile) stimulation to determine body posture. Presumably, because touch is necessarily located on the body, such synchronous visuo-tactile stimulation can lead to illusions about external objects to belong to our body, and even to mislocalization of the location of the whole body. However, the illusion is not of a new body part having been added, but rather of a non-body object taking the place of an already existing body part (or, in the case of the whole body illusion, the video image indicating our body’s location).
28.2.3 Extending the Body: Tool Use
At first sight, the flexibility of the body schema demonstrated with the rubber hand illusion and the whole body illusion may seem detrimental rather than useful. However, one very common situation in which such integration is clearly useful is the use of tools. Humans, and to some extent also monkeys, use tools to complement and extend the abilities and capacity of their own body parts to act upon their environment. In this situation, visual events at the tip of the tool (or, more generally, at the part of the tool used to manipulate the environment) coincide with tactile information received at the hand—a constellation that is very similar to the synchronous stroking of a non-body object and a person's hand. Indeed, some neurons in the intraparietal part of area PE (PEip) of monkeys respond to tactile stimuli to the hand, as well as to visual stimuli around the tactile location (see also Section 28.3.2). When the monkey was trained to use a tool to retrieve otherwise unreachable food, the visual receptive fields (RFs), which encompassed only the hand when no tool was used, now encompassed both the hand and the tool (Iriki et al. 1996). In a similar manner, when the monkey learned to observe its hand on a monitor rather than seeing it directly, the visual RFs now encompassed the hand on the monitor (Obayashi et al. 2000). These studies have received some methodological criticism (Holmes and Spence 2004), but their results are often interpreted as some form of integration of the tool into the monkey's body schema. Neurons with such RF characteristics might therefore be involved in mediating the rapid body schema modulations illustrated by the rubber hand illusion in humans. Although these monkey findings are an important step toward understanding tool use and its relation to the body schema, it is important to note that the mechanisms discovered in the IPS cannot explain all phenomena involved either in tool use or in ownership illusions. For example, it has been pointed out that a tool does not usually feel like one's own body part, even when it is frequently used—a fork, for example (Botvinick 2004). Such true ownership feelings may rather be restricted to body part–shaped objects such as a prosthesis or a rubber hand, provided that they are located in an anatomically plausible position (Graziano and Gandhi 2000; Pavani et al. 2000). For the majority of tools, one might rather feel that the sensation of a touch is projected to the action-related part of the tool (usually the tip), just as one may feel the touch of a pen to occur between the paper and the pen tip, and not at the fingers holding the pen (see also Yamamoto and Kitazawa 2001b; Yamamoto et al. 2005). Accordingly, rather than the tool being integrated into the body schema, it may be that tool use results in a directing of attention toward the part of space that is relevant for the currently performed action. Supporting such interpretations, it has recently been shown that visual attention was enhanced at the movement endpoint of the tool as well as at the movement endpoint of the hand when a reach was planned with a tool. Attention was not enhanced, however, in between those locations along the tool (Collins et al. 2008). Similarly, cross-modal (visual–tactile) interactions have been shown to be enhanced at the tool tip and at the hand, but not
in locations along the tool (Holmes et al. 2004; Yue et al. 2009). Finally, in a recent study, participants were asked to make tactile discrimination judgments about stimuli presented to the tip of a tool. Visual distractors were presented in parallel to the tactile stimuli. fMRI activity in response to the visual distractors near the end of the tool was enhanced in the occipital cortex, compared to locations further away from the tool (Holmes et al. 2008). These findings were also interpreted to indicate an increase of attention at the tool tip, due to the use of the tool. Experimental results such as these challenge the idea of an extension of the body schema. Other results, in contrast, do corroborate the hypothesis of an extension of the body schema due to tool use. For example, tool use resulted in a change of the perceived distance between two touches to the arm, which was interpreted to indicate an elongated representation of the arm (Cardinali et al. 2009b). It has recently been pointed out that the rubber hand illusion seems to consist of several dissociable aspects (Longo et al. 2008), revealed by the factor-analytic analysis of questionnaire related to the experience of the rubber hand illusion. More specific distinctions may need to be made about the different processes (and, as a consequence, the different effects found in experiments) involved in the construction of the body schema, and different experimental paradigms may tap into only a subset of these processes. In sum, multisensory signals are not only important for determining what parts we perceive to be made of. Multisensory mechanisms are also important in mediating the ability to use tools. It is currently under debate if tools extend the body schema by integrating the tool as a body part, or if other multisensory processes, for example, a deployment of attention to the space manipulated by the tool, are at the core of our ability to use tools.
28.2.4 Rapid Plasticity of Body Shape
The rubber hand illusion demonstrates that what the brain interprets as the own body can be rapidly adjusted to the information that is received from the senses. Rapid changes of the body schema are, however, not restricted to the inventory of body parts considered to belong to the body, or their current posture. They also extend to the body's shape. We already mentioned that the representation of the arm may be elongated after tool use (Cardinali et al. 2009b). An experience most of us have had is the feeling of an increased size of an anesthetized body part, for example, the lip during a dentist's appointment (see also Türker et al. 2005; Paqueron et al. 2003). Somewhat more spectacularly, when participants hold the tip of their nose with thumb and index finger while the biceps muscle is vibrated to induce the illusion of the arm moving away from the body, many report that they perceive their nose to elongate to a length of up to 30 cm (sometimes referred to as the Pinocchio illusion; Lackner 1988). A related illusion can be evoked when an experimenter leads the finger of a participant to irregularly tap the nose of a second person (seated next to the participant), while the experimenter synchronously taps the participant's own nose (Ramachandran and Hirstein 1997; see also discussion in Ramachandran and Hirstein 1998). Both illusions are induced by presenting the brain with mismatching information about touch and proprioception. They demonstrate that, despite the fact that our life experience would seem to preclude sudden elongations of the nose (or any other body part, for that matter), the body schema is readily adapted when sensory information from different modalities (here, tactile and proprioceptive) calls for an integration of initially mismatching content. The rubber hand illusion has also been used to investigate effects of perceived body part size. Participants judged the size of a coin to be bigger when the illusion was elicited with a rubber hand bigger than their own, and to be smaller when the rubber hand was smaller (Bruno and Bertamini 2010). The rubber hand illusion thus influenced tactile object perception. This influence was systematic: as the real object held by the participants was always the same size, their finger posture was identical in all conditions. With the illusion of a small hand, this posture would indicate a relatively small distance between the small fingers. In contrast, with the illusion of a big hand, the same posture would indicate a larger distance between the large fingers.
Similarly, visually perceived hand size has also been shown to affect grip size, although more so when the visual image of the hand (a projection of an online video recording of the hand) was bigger than normal (Marino et al. 2010). The rubber hand illusion has also been used to create the impression of having an elongated arm by having participants wear a shirt with an elongated sleeve from which the rubber hand protruded (Schaefer et al. 2007). By recording magnetoencephalographic (MEG) responses to tactile stimuli to the illusion hand, this study also demonstrated an involvement of primary somatosensory cortex in the illusion. These experiments demonstrate that perception of the body can be rapidly adjusted by the brain, and that these perceptual changes in body shape affect object perception as well as hand actions.
28.2.5 Movement and Posture Information in the Brain
The rubber hand illusion shows how intimately body part ownership and body posture are related: in this illusion, an object is felt to belong to the own body, but at the same time, the real arm is felt to be located at the position of the rubber arm. In the same way, posture is, of course, intimately related to movement, as every movement leads to a change in posture. However, different brain areas seem to be responsible for perceiving movement and posture. The perception of limb movement seems to depend on the primary sensory and motor cortex as well as on the premotor and supplementary motor cortex (reviewed by Naito 2004). This is true also for the illusory movement of phantom limbs, which is felt as real movement (Bestmann et al. 2006; Lotze et al. 2001; Roux et al. 2003; Brugger et al. 2000). In particular, the primary motor cortex may play a crucial role in movement perception. One can create an illusion of movement by vibration of the muscles responsible for the movement of a body part, for example, the arm or hand. When a movement illusion is created for one hand, this illusion transfers to the other hand if the palms of the two hands touch. For both hands, fMRI activity increased in primary motor cortex, suggesting a primary role of this motor-related structure also for the sensation of movement (Naito et al. 2002). In contrast, the current body posture seems to be represented quite differently from limb movement perception. Proprioceptive information arrives in the cortex via the somatosensory cortex. Accordingly, neuronal responses in secondary somatosensory cortex (SII) to tactile stimuli to a monkey's hand were shown to be modulated by the monkey's arm posture (Fitzgerald et al. 2004). In humans, the proprioceptive drift associated with the rubber hand illusion—that is, the change of the subjective position of the own hand toward the location of the rubber hand—was correlated with activity in SII, as measured with PET (Tsakiris et al. 2007). SII was also implicated in body schema functions by a study in which participants determined the laterality of an arm seen on a screen by imagining turning their own arm until it matched the seen one, as compared to when they determined the onscreen arm's laterality by imagining its movement toward the appropriate location on a body that was also presented on the screen (Corradi-Dell'Acqua et al. 2009). SII was thus active specifically during the imagination of one's own posture when making a postural judgment. However, many other findings implicate hierarchically higher, more posterior parietal areas in the maintenance of a posture representation. When participants were asked to reach with their hand to another body part, activity increased in the SPL after a posture change as compared to when participants repeated a movement they had just executed before. This posture change effect was observed both when the reaching hand changed its posture and when participants reached with one hand to the other and the target hand rather than the reaching hand changed its posture (Pellijeff et al. 2006). Although the authors interpreted their results as reflecting postural updating, they may instead be attributable to reach planning. However, a patient with an SPL lesion displayed symptoms that corroborate the view that the SPL is involved in the maintenance of a continuous postural model of the body (Harris and Wolpert 1998).
This patient complained that her arm and leg felt like they drifted and then faded, unless she could see them. This subjective feeling was accompanied by an inability to retain grip force as well as a loss of tactile perception of a vibratory
stimulus after it was displayed for several seconds. Because the patient’s deficit was not a general disability to detect tactile stimulation or perform hand actions, these results seem to imply that it is the maintenance of the current postural state of the body that was lost over time unless new visual, tactile, or proprioceptive information forced an update of the model. The importance of the SPL for posture control is also evident from a patient who, after SPL damage, lost her ability to correctly interact with objects requiring whole body coordination, such as sitting on a chair (Kase et al. 1977). Still further evidence for an involvement of the SPL in posture representation comes from experiments in healthy participants. When people are asked to judge the laterality of a hand presented in a picture, these judgments are influenced by the current hand posture adopted by the participant: the more unnatural it would be to align the own hand with the displayed hand, the longer participants take to respond (Parsons 1987; Ionta et al. 2007). A hand posture change during the hand laterality task led to an activation in the SPL in fMRI (de Lange et al. 2006). Hand crossing also led to a change in intraparietal activation during passive tactile stimulation (Lloyd et al. 2003). Finally, recall that fMRI activity during the buildup of the rubber hand illusion, thought to involve postural recalibration due to the visual information about the rubber arm, was also observed in the SPL. These findings are consistent with data from neurophysiological recordings in monkeys showing that neurons in area 5 (Sakata et al. 1973) in the superior parietal lobe as well as neurons in area PEc (located just at the upper border of the IPS and extending into the sulcus to border MIP; Breveglieri et al. 2008) respond to complex body postures, partly involving several limbs. Neurons in these areas respond to tactile, proprioceptive, and visual input (Breveglieri et al. 2008; Graziano et al. 2000). Furthermore, some area 5 neurons fire most when the felt and the seen position of the arm correspond rather than when they do not (Graziano 1999; Graziano et al. 2000). These neurons respond not only to vision of the own arm, but also to vision of a fake arm, if it is positioned in an anatomically plausible way such that they look as if they might belong to the animal’s own body, reminiscent of the rubber hand illusion in humans. Importantly, some neurons fire most when the visual information of the fake arm matches the arm posture of the monkey’s real, hidden arm, but reduce their firing rate when vision and proprioception do not match. To summarize, body movement and body posture are represented by different brain regions. Movement perception relies on the motor structures of the frontal lobe. Probably, the most important brain region for the representation of body posture, in contrast, is the SPL. This region is known to integrate signals from different sensory modalities, and damage to it results in dysfunctions of posture perception and actions requiring postural adaptations. However, other brain regions are involved in posture processing as well.
28.2.6 The Body Schema: A Distributed versus Holistic Representation The evidence reviewed so far has shown that what has been subsumed under the term body schema is not represented as one single, unitary entity in the brain—even if, from a psychological standpoint, it would seem to constitute an easily graspable and logically coherent concept. However, as has often proved to be the case in psychology and in the neurosciences, what researchers have hypothesized to be functional entities for the brain’s organization is not necessarily the way nature has indeed evolved the brain. The organization of the parietal and frontal areas seems to be modular, and they appear to be specialized for certain body parts (Rizzolatti et al. 1998; Grefkes and Fink 2005; Andersen and Cui 2009), for example, for hand grasping, arm reaching, and eye movements. Similarly, at least in parts of the premotor cortex, RFs for the different sensory modalities are body part–centered (e.g., around the hand; see also Section 28.3.2), suggesting that, possibly, other body part–specific areas may feature coordinate frames anchored to those body parts (Holmes and Spence 2004). As a consequence, the holistic body schema that we subjectively experience has been proposed to emerge from the interaction of multiple space-, body-, and action-related brain areas (Holmes and Spence 2004).
28.2.7 Interim Summary
The first part of this chapter has highlighted how important the integration of multisensory information is for body processing. We showed that a representation of our body parts is probably innate, and that lesions to different brain structures such as the parietal and frontal lobes as well as subcortical structures can lead to malfunctions of this representation. Patients can perceive lost limbs as still present, report supernumerary limbs in addition to their normal ones, and deny the ownership of a limb. We went on to show how the integration of multisensory (usually visual and tactile) information is used in an online modification or "construction" of the body schema. In the rubber hand illusion, synchronous multisensory information leads to the integration of an external object into the body schema in the sense that the location of the real limb is felt to be at the external object. Multisensory information can also lead to adjustments of perceived body shape, as in the Pinocchio illusion. Information about body parts—their movement and their posture—is represented in a widespread network in the brain. Whereas limb movement perception seems to rely on motor structures, multisensory parietal areas are especially important for the maintenance of a postural representation. Finally, we noted that the current concept of the body schema in the brain is that of an interaction between many body part–specific representations.
28.3 THE BODY AS A MODULATOR FOR MULTISENSORY PROCESSING The first part of this chapter has focused on the multisensory nature of the body schema with its two aspects of what parts make up the body, and where those parts are located in space and in relation to one another. These studies form the basis for an exploration of the specific characteristics of body processing and its relevance for perception, action, and the connection of these two processes. The remainder of this article, therefore, will adopt the opposite view than the first part: it will assume the existence of a body schema and explore its influence on multisensory processing. One of the challenges for multisensory processing is that information from the different senses is received by sensors that are arranged very differently from modality to modality. In vision, light originating from neighboring spatial locations falls on neighboring rods and cones on the retina. When the eyes move, light from the same spatial origin falls on different sensors on the retina. Visual information is therefore initially eye-centered. Touch is perceived through sensors all over the skin. Because the body parts constantly move in relation to each other, a touch to the same part of the skin can correspond to very different locations in external, visual space. Similar challenges arise for the spatial processing in audition, but we will focus here on vision and touch.
28.3.1 Recalibration of Sensory Signals and Optimal Integration
In some cases, the knowledge about body posture and movement is used to interpret sensory information. For example, Lackner and Shenker (1985) attached a light or a sound source to each hand of their participants, who sat in a completely dark room. They then vibrated the biceps muscles of the two arms; recall that muscle vibration induces the illusion of limb movement. In this experimental setup, participants perceived an outward movement of the two arms. Both the lights and the sounds were perceived as moving with the apparent locations of the hands, although the sensory information on the retina and in the cochlea remained identical throughout these manipulations. Such experimental findings have led to the proposal that the brain frequently recalibrates the different senses to ensure that the actions carried out with the limbs are in register with the external world (Lackner and DiZio 2000). The brain seems to use different sensory input to do this, depending on the experimental situation. In the rubber hand illusion, visual input about arm position apparently overrules proprioceptive information about the real position of the arm. In other situations, such as in the arm vibration illusion, proprioception can overrule vision.
Although winner-take-all schemes for such dominance of one sense over another have been proposed (e.g., Ramachandran and Hirstein 1998), there is ample evidence that inconsistencies in the information from the different senses do not simply lead to an overruling of one by the other. Rather, the brain seems to combine the different senses to come up with a statistically optimal estimate of the true environmental situation, allowing for statistically optimal movements (Körding and Wolpert 2004; Trommershäuser et al. 2003) as well as perceptual decisions (Ernst and Banks 2002; Alais and Burr 2004). Because in many cases one of our senses outperforms the others in a specific sensory ability—for example, spatial acuity is superior in vision (Alais and Burr 2004), and temporal acuity is best in audition (Shams et al. 2002; Hötting and Röder 2004)—many experimental results have been interpreted in favor of an "overrule" hypothesis. Nevertheless, it has been demonstrated, for example, in spatial tasks, that the weight the brain assigns to the information received through a sensory channel is directly related to its spatial acuity, and that audition (Alais and Burr 2004) and touch (Ernst and Banks 2002) will overrule vision when visual acuity is sufficiently degraded. Such integration is probably involved also in body processing and in such phenomena as the rubber hand and Pinocchio illusions. In sum, the body schema influences how multisensory information is interpreted by the brain. The weight that a piece of multisensory information is given varies with its reliability (see also de Vignemont 2010).
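To make the notion of reliability weighting concrete, the scheme usually invoked in this literature is maximum-likelihood integration (Ernst and Banks 2002; Alais and Burr 2004). In the notation used here, which is ours and serves only as an illustration, the visual and tactile estimates \hat{s}_V and \hat{s}_T of the same quantity, with variances \sigma_V^2 and \sigma_T^2, are combined as

\hat{s}_{VT} = w_V \hat{s}_V + w_T \hat{s}_T, \qquad w_V = \frac{1/\sigma_V^2}{1/\sigma_V^2 + 1/\sigma_T^2}, \qquad w_T = 1 - w_V,

so that the more reliable (lower-variance) channel receives the larger weight. The combined estimate then has variance

\sigma_{VT}^2 = \frac{\sigma_V^2\,\sigma_T^2}{\sigma_V^2 + \sigma_T^2} \leq \min(\sigma_V^2, \sigma_T^2),

which is never worse than that of the better single channel; apparent "overruling" corresponds to the limiting case in which one variance is much larger than the other.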
28.3.2 Body Schema and Peripersonal Space Many neurons in a brain circuit involving the ventral intraparietal (VIP) area and the ventral premotor cortex (PMv) feature tactile RFs, mainly around the monkey’s mouth, face, or hand. These tactile RF are supplemented by visual and sometimes auditory RFs that respond to the area up to ~30 cm around the body part (Fogassi et al. 1996; Rizzolatti et al. 1981a, 1981b; Graziano et al. 1994, 1999; Duhamel et al. 1998; Graziano and Cooke 2006). Importantly, when either the body part or the eyes are moved, the visual RF is adjusted online such that the tactile and the visual modality remain aligned within a given neuron (Graziano et al. 1994). When one of these neurons is electrically stimulated, the animal makes defensive movements (Graziano and Cooke 2006). Because of these unique RF properties, the selective part of space represented by this VIP–PMv circuit has been termed the peripersonal space, and it has been suggested to represent a defense zone around the body. Note that the continuous spatial adjustment of the visual to the tactile RF requires both body posture and eye position to be integrated in a continuous manner. Two points therefore become immediately clear: first, the peripersonal space and the body schema are intimately related (see also Cardinali et al. 2009a); and second, as the body schema, the representation of peripersonal space includes information from several (if not all) sensory modalities. As is the case with the term “body schema,” the term “peripersonal space” has also been defined in several ways. It is sometimes used to denote the space within arm’s reach (see, e.g., Previc 1998). For the purpose of this review, “peripersonal space” will be used to denote the space directly around the body, in accord with the findings in monkey neurophysiology. Different approaches have been taken to investigate if peripersonal space is similarly represented in humans as in monkeys. One of them has been the study of patients suffering from extinction. These patients are usually able to report single stimuli in all spatial locations, but fail to detect contralesional stimuli when these are concurrently presented with ipsilesional stimuli (Ladavas 2002). The two stimuli can be presented in two different modalities (Ladavas et al. 1998), indicating that the process that is disrupted by extinction is multisensory in nature. More importantly, extinction is modulated in some patients by the distance of the distractor stimulus (i.e., the ipsilesional stimulus that extinguishes the contralesional stimulus) from the hand. For example, in some patients a tactile stimulus to the contralesional hand is extinguished by an ipsilesional visual stimulus to a much higher degree when it is presented in the peripersonal space of the patient’s ipsilesional hand than when it is presented far from it (di Pellegrino and Frassinetti 2000). Therefore, extinction is
modulated by two manipulations that are central to neurons representing peripersonal space in monkeys: (1) extinction can be multisensory and (2) it can dissociate between peripersonal and extrapersonal space. In addition, locations of lesions associated with extinction coincide (at least coarsely) with the brain regions associated with peripersonal spatial functions in monkeys (Mort et al. 2003; Karnath et al. 2001). The study of extinction patients has therefore suggested that a circuit for peripersonal space exists in humans, analogous to that in the monkey. The peripersonal space has also been investigated in healthy humans. One of the important characteristics of the way the brain represents peripersonal space is the alignment of visual and tactile events. In an fMRI study in which participants had to judge if a visual stimulus and a tactile stimulus to the hand were presented from the same side of space, hand crossing led to an increase of activation in the secondary visual cortex, indicating an influence of body posture on relatively low-level sensory processes (Misaki et al. 2002). In another study, hand posture was manipulated in relation to the eye: rather than changing hand posture itself, gaze was directed such that a tactile stimulus occurred either in the right or the left visual hemifield. The presentation of bimodal visual–tactile stimuli led to higher activation in the visual cortex in the hemisphere contralateral to the visual hemifield of the tactile location, indicating that the tactile location was remapped with respect to visual space and then influenced visual cortex (Macaluso et al. 2002). These influences of posture and eye position on early sensory cortex may be mediated by parietal cortex. For example, visual stimuli were better detected when a tactile stimulus was concurrently presented (Bolognini and Maravita 2007). This facilitatory influence of the tactile stimulus was strongest when the hand was held near the visual stimulus, both when this implied a normal and when it implied a crossed hand posture. However, hand crossing had a very different effect when neural processing in the posterior parietal cortex was impaired by repetitive TMS: now a tactile stimulus was most effective when it was delivered to the hand anatomically belonging to the side of the body at which the visual stimulus was presented; when the hands were crossed, a right-hand stimulus, for example, facilitated a right-side visual stimulus, although the hand was located in the left visual space (Bolognini and Maravita 2007). This result indicates that after disruption of parietal processing, body posture was no longer taken into account during the integration of vision and touch, nicely in line with the findings about the role of parietal cortex for posture processing (see Section 28.2.5). A more direct investigation of how the brain determines if a stimulus is located in the peripersonal space was undertaken in an fMRI study that independently manipulated visual and proprioceptive cues about hand posture to modulate the perceived distance of a small visual object from the participants' hand. Vision of the arm could be occluded, and the occluded arm was then located near the visual object (i.e., peripersonally) or far from it; the distance from the object could be determined by the brain only by using proprioceptive information. Alternatively, vision could be available to show that the hand was either close or far from the stimulus.
Ingeniously, the authors manipulated these proprioceptive and visual factors together by using a rubber arm: when the real arm was held far away from the visual object, the rubber hand could be placed near the object so that visually the object was in peripersonal space (Makin et al. 2007). fMRI activity due to these manipulations was found in posterior parietal areas. There was some evidence that for the determination of posture in relation to the visual object, proprioceptive signals were more prominent in the anterior IPS close to the somatosensory cortex, and that vision was more prominent in more posterior IPS areas, closer to visual areas. Importantly, however, all of these activations were located in the SPL and IPS, the areas that have repeatedly been shown to be relevant for the representation of posture and of the body schema. Besides these neuroimaging approaches, behavioral studies have also been successful in investigating the peripersonal space and the body schema. One task that has rendered a multitude of findings is a cross-modal interference paradigm, the cross-modal congruency (CC) task (reviewed by Spence et al. 2004b). In this task, participants receive a tactile stimulus to one of four locations; two of these locations are located “up” and two are located “down” (see Figure 28.1). Participants are asked to judge the elevation of the tactile stimulus in each trial, regardless of its side (left or right).
FIGURE 28.1 Standard cross-modal congruency task. Tactile stimuli are presented to two locations on the hand (often index finger and thumb holding a cube; here, the back and the palm of the hand). In each trial, one of the tactile stimuli is presented concurrently with one of the visual distractor stimuli. Participants report whether the tactile stimulus came from an upper or a lower location. Although they are to ignore the visual distractors, the tactile judgment is biased toward the location of the light. This influence is largest when the distractor is presented at the same hand as the tactile stimulus, and is reduced when the distractor occurs at the other hand.
However, a to-be-ignored visual distractor stimulus is presented with every tactile target stimulus, also located at one of the four locations at which the tactile stimuli can occur. The visual distractor is independent of the tactile target; it can therefore occur at a congruent location (tactile and visual stimulus have the same elevation) or at an incongruent location (tactile and visual stimulus have opposing elevations). Despite the instruction to ignore the visual distractors, participants’ reaction times and error probabilities are influenced by them. When the visual distractors are congruent, participants perform faster and with higher accuracy than when the distractors are incongruent. The difference of the incongruent minus the congruent conditions (e.g., in RT and in accuracy) is referred to as the CC effect. Importantly, the CC effect is larger when the distractors are located close to the stimulated hands rather than far away (Spence et al. 2004a). Moreover, the CC effect is larger when the distractors are placed near rubber hands, but only if those are positioned in front of the participant in such a way that, visually, they could belong to the participant’s body (Pavani et al. 2000). The CC effect is also modulated by tool use in a similar manner as by rubber hands; when a visual distractor is presented in far space, the CC effect is relatively small, but it increases when a tool is held near the distractor (Maravita et al. 2002; Maravita and Iriki 2004; Holmes et al. 2007). Finally, the CC effect is increased during the whole body illusion (induced by synchronous stroking; see Section 28.2.2) when the distractors are presented on the back of the video image felt to be the own body, compared to when participants see the same video image and distractor stimuli, but without the induction of the whole body illusion (Aspell et al. 2009). These findings indicate that cross-modal interaction, as indexed in the CC effect, is modulated by the distance of the distractors from what is currently represented as the own body (i.e., the body schema) and thus suggest that the CC effect arises in part from the processing of peripersonal space. To summarize, monkey physiology, neuropsychological findings, and behavioral research suggest that the brain specially represents the space close around the body, the peripersonal space. There is a close relationship between the body schema and the representation of peripersonal space, as body posture must be taken into account to remap, from moment to moment, which part of external space is peripersonal.
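As a concrete illustration of how the CC effect described above is quantified, the following minimal sketch computes it from reaction times in the congruent and incongruent conditions, separately for distractors near to and far from the hand. The toy numbers and variable names are ours and are not taken from any of the studies cited.

import numpy as np

# Toy reaction times (ms), one list per condition; in a real experiment these would be
# the tactile elevation judgments described in the text.
rt = {
    ("congruent", "near"):   [520, 540, 510, 535],
    ("incongruent", "near"): [610, 640, 600, 625],
    ("congruent", "far"):    [530, 545, 525, 540],
    ("incongruent", "far"):  [565, 580, 555, 570],
}

def cc_effect(data, distance):
    # Cross-modal congruency effect: mean incongruent RT minus mean congruent RT.
    return np.mean(data[("incongruent", distance)]) - np.mean(data[("congruent", distance)])

print(cc_effect(rt, "near"))  # ~92 ms: distractors near the hand, i.e., in peripersonal space
print(cc_effect(rt, "far"))   # ~33 ms: distractors far from the hand

An analogous difference can be computed for error rates; the key empirical point is that the near-hand value exceeds the far value, and that this difference follows whatever the brain currently treats as near the body (rubber hands, tool tips, or the body seen in the whole body illusion).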
28.3.3 Peripersonal Space around Different Parts of the Body
All of this behavioral research—like the vast majority of neurophysiological, neuropsychological, and neuroimaging research—has explored peripersonal space and the body schema using stimulation to and near the hands. The hands may be considered special in that they are used for almost any kind of action we perform. Processing principles revealed for the hands may therefore not generalize to other body parts. As an example, hand posture, but not foot posture, has been reported to influence the mental rotation of these limbs (Ionta et al. 2007; Ionta and Blanke 2009; but see Parsons 1987). Moreover, monkey work has demonstrated multisensory neurons with peripersonal spatial characteristics only for the head, hand, and torso, but neurons with equivalent
characteristics for the lower body have so far not been reported (Graziano et al. 2002). The peripersonal space representation may thus be limited to body parts that are important for the manipulation of objects under (mainly) visual control. To test this hypothesis in humans, body schema–related effects such as the CC effect, which have been conducted for the hands, must be investigated for other body parts. The aforementioned study of the CC effect during the whole body illusion (Aspell et al. 2009; see also Section 28.2.2) demonstrated a peripersonal spatial effect near the back. The CC effect was observable also when stimuli were delivered to the feet (Schicke et al. 2009), suggesting that a representation of the peripersonal space exists also for the space around these limbs. If the hypothesis is correct that the body schema is created from body part–specific representations, one might expect that the representation of the peripersonal space of the hand and that of the foot do not interact. To test this prediction, tactile stimuli were presented to the hands while visual distractors were flashed either near the participant’s real foot, near a fake foot, or far from both the hand and the foot. The cross-modal interference of the visual distractors, indexed by the CC effect, was larger when they were presented in the peripersonal space of the real foot than when they were presented near the fake foot or in extrapersonal space (Schicke et al. 2009). The spatial judgment of tactile stimuli at the hand was thus modulated when a visual distractor appeared in the peripersonal space of another body part. This effect cannot be explained with the current concept of peripersonal space as tactile RFs encompassed by visual RFs. These results rather imply either a holistic body schema representation, or, more probably, interactions beyond simple RF overlap between the peripersonal space representations of different body parts (Holmes and Spence 2004; Spence et al. 2004b). In sum, the peripersonal space is represented not just for the hands, but also for other body parts. Interactions between the peripersonal spatial representations of different body parts challenge the concept of peripersonal space being represented merely by overlapping RFs.
28.3.4 Across-Limb Effects in Spatial Remapping of Touch
The fact that visual distractors in the CC paradigm have a stronger influence when they are presented in the peripersonal space implies that the brain matches the location of the tactile stimulus with that of the visual one. The tactile stimulus is registered on the skin; to match this skin location to the location of the visual stimulus requires that body posture be taken into account and the skin location be projected into an external spatial reference frame. Alternatively, the visual location of the distractor could be computed with regard to the current location of the tactile stimulus, that is, with respect to the hand, and thus be viewed as a projection of external space onto somatotopic space (i.e., the skin). This remapping of visual–tactile space has been more thoroughly explored by manipulating hand posture. As in the standard CC task described earlier, stimuli were presented to the two hands and the distractors were placed near the tactile stimuli (Spence et al. 2004a). However, in half of the trials, participants crossed their hands. If spatial remapping occurs in this task, then the CC effect should be high whenever the visual distractor is located near the stimulated hand. In contrast, if tactile stimuli were not remapped into external space, then a tactile stimulus on the right hand should always be influenced most by a right-hemifield visual stimulus, independent of body posture. The results were clear-cut: when the hands were crossed, the distractors that were now near the hand were most effective. In fact, in this experiment the CC effect pattern of left and right distractor stimuli completely reversed, which the authors interpreted as a "complete remapping of visuotactile space" (p. 162). Spatial remapping could thus be viewed as a means of integrating spatial information from the different senses in multisensory contexts. However, spatial remapping has also been observed in purely tactile tasks that do not involve any distractor stimuli of a second modality. One example is the temporal order judgment (TOJ) task, in which participants judge which of two tactile stimuli occurred first. Performance in this task is impaired when participants cross their hands (Yamamoto
and Kitazawa 2001a; Shore et al. 2002; Röder et al. 2004; Schicke and Röder 2006; Azanon and Soto-Faraco 2007). It is usually assumed that the performance deficit after hand crossing in the TOJ task is due to a conflict between two concurrently active reference frames: one anatomical and one external (Yamamoto and Kitazawa 2001a; Röder et al. 2004; Schicke and Röder 2006). The right–left coordinate axes of these two reference frames are opposed to each other when the hands are crossed; for example, the anatomically right arm is located in the externally left hemispace during hand crossing. This remapping takes place despite the task being purely tactile, and despite the detrimental effect of using the external reference frame in the task. Remapping of stimulus location by accounting for current body posture therefore seems to be an automatically evoked process in the tactile system. In the typical TOJ task, the two stimuli are applied to the two hands. It would therefore be possible that the crossing effect is simply due to a confusion regarding the two homologous limbs, rather than to the spatial location of the stimuli. This may be due to a coactivation of homologous brain areas in the two hemispheres (e.g., in SI or SII), which may make it difficult to assign the two concurrent tactile percepts to their corresponding visual spatial locations. However, a TOJ crossing effect was found for tactile stimuli delivered to the two hands, to the two feet, or to one hand and the contralateral foot (Schicke and Röder 2006). In other words, participants were confused not only about which of the two hands or the two feet was stimulated first, but they were equally impaired in deciding if it was a hand or a foot that received the first stimulus. Therefore, the tactile location originating on the body surface seems to be remapped into a more abstract spatial code for which the original skin location, and the somatotopic coding of primary sensory cortex, is no longer a dominating feature. In fact, it has been suggested that the location of a tactile stimulus on the body may be reconstructed by determining which body part currently occupies the part of space at which the tactile stimulus has been sensed (Kitazawa 2002). The externally anchored reference frame is activated in parallel with a somatotopic one, and their concurrent activation leads to the observed behavioral impairment. To summarize, remapping of stimulus location in a multisensory experiment such as the CC paradigm is a necessity for aligning signals from different modalities. Yet, even when stimuli are purely unimodal, and the task would not require a recoding of tactile location into an external coordinate frame, such a transformation nonetheless seems to take place. Thus, even for purely tactile processing, posture information (e.g., proprioceptive and visual) is automatically integrated.
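The conflict between anatomical and external coordinates described above can be made explicit with a small sketch. The function below is our own toy formalization, intended only to illustrate the remapping step, and is not a model taken from the cited studies.

def external_side(anatomical_side, hands_crossed):
    # A touch is initially coded anatomically (left or right hand); its side in
    # external space additionally depends on the current posture of the limbs.
    if not hands_crossed:
        return anatomical_side
    return "left" if anatomical_side == "right" else "right"

# Uncrossed hands: anatomical and external codes agree.
assert external_side("right", hands_crossed=False) == "right"

# Crossed hands: the two reference frames disagree -- the kind of conflict thought to
# produce the crossed-hands deficit in tactile temporal order judgments.
assert external_side("right", hands_crossed=True) == "left"

In the TOJ task, both codes appear to be activated in parallel, and it is their disagreement under crossed postures, rather than the touch itself, that slows and distorts the order judgment.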
28.3.5 Is the External Reference Frame a Visual One? The representation of several reference frames is, of course, not unique to the TOJ crossing effect. In monkeys, the parallel existence of multiple reference frames has been demonstrated in the different subareas of the IPS, for example, in VIP (Schlack et al. 2005), which is involved in the representation of peripersonal space, in MIP, which is involved in arm reaching (Batista et al. 1999), and in LIP, which is engaged in saccade planning (Stricanne et al. 1996). Somewhat counterintuitively, many neurons in these areas do not represent space in a reference frame that can be assigned to one of the sensory systems (e.g., a retinotopic one for vision, a head-centered one for audition) or a specific limb (e.g., a hand-centered reference frame for hand reach planning). Rather, there are numerous intermediate coding schemes present in the different neurons (Mullette-Gillman et al. 2005; Schlack et al. 2005). However, such intermediate coding has been shown to enable the transformation of spatial codes between different reference frames, possibly even in different directions, for example, from somatotopic to eye-centered and vice versa (Avillac et al. 2005; Pouget et al. 2002; Cohen and Andersen 2002; Xing and Andersen 2000). Similar intermediate coding has been found in posture-related area 5, which codes hand position in an intermediate manner between eye- and hand-centered coordinates (Buneo et al. 2002). Further downstream, in some parts of MIP, arm reaching coordinates may, in contrast, be represented fully in eye-centered coordinates, independent of whether the sensory target for reaching is visual (Batista et al. 1999; Scherberger et al.
2003; Pesaran et al. 2006) or auditory (Cohen and Andersen 2000). In addition to these results from monkeys, an fMRI experiment in humans has suggested common spatial processing of visual and tactile targets for saccade as well as for reach planning (Macaluso et al. 2007). Still further downstream, in the motor-related PMv, which has been proposed to form the peripersonal space circuit together with VIP, visual RFs are aligned with hand position (Graziano and Cooke 2006). These findings have led to the suggestion that the external reference frame involved in tactile localization is a visual one, and that remapping occurs automatically to aid the fusion of spatial information of the different senses. Such use of visual coordinates may be helpful not only for action planning (e.g., the reach of the hand toward an object), but also for an efficient online correction of motor error with respect to the visual target (Buneo et al. 2002; Batista et al. 1999). A number of variants of the TOJ paradigm have been employed to study the visual origin of the external reference frame in humans. For example, the crossing effect could be ameliorated when participants viewed uncrossed rubber hands (with their real hands hidden), indicating that visual (and not just proprioceptive) cues modulate spatial remapping (Azanon and Soto-Faraco 2007). In the same vein, congenitally blind people did not display a TOJ crossing effect, suggesting that they do not by default activate an external reference frame for tactile localization (Röder et al. 2004). Congenitally blind people also outperformed sighted participants when the use of an anatomically anchored reference frame was advantageous to solve a task, whereas they performed worse than the sighted when an external reference frame was better suited to solve a task (Röder et al. 2007). Importantly, people who turned blind later in life were influenced by an external reference frame in the same manner as sighted participants, indicating that spatial remapping develops during ontogeny when the visual system is available, and that the lack of automatic coordinate transformations into an external reference frame is not simply an unspecific effect of long-term visual deprivation (Röder et al. 2004, 2007). In conclusion, the use of an external reference frame seems to be induced by the visual system, and this suggests that the external coordinates used in the remapping of sensory information are visual coordinates. Children did not show a TOJ crossing effect before the age of ~5½ years (Pagel et al. 2009). This late use of external coordinates suggests that spatial remapping requires a high amount of learning and visual–tactile experience during interaction with the environment. One might therefore expect remapping to take place only in regions that are accessible to vision. In the TOJ paradigm, one would thus expect a crossing effect when the hands are held in front, but no such crossing effect when the hands are held behind the back (as, because of the lack of tactile–visual experience in that part of space, no visual–tactile remapping should take place). At odds with these predictions, Kobor and colleagues (2006) observed a TOJ crossing effect (although somewhat reduced) also behind the back. 
We conducted the same experiment in our laboratory, and found that the size of the crossing effect did not differ in the front and in the back [previously unpublished data; n = 11 young, healthy, blindfolded adults, just noticeable difference (JND) for correct stimulus order: front uncrossed: 66 ± 10 ms; uncrossed back: 67 ± 11 ms; crossed front: 143 ± 39 ms; crossed back: 138 ± 25 ms; ANOVA main effect of part of space and interaction of hand crossing with hand of space, both F1,10