Atoms of Mind
W.R. Klemm
Atoms of Mind
The “Ghost in the Machine” Materializes
W.R. Klemm
College of Veterinary Medicine and Biomedical Sciences
VMA Bldg. Room 107
4458 TAMU
College Station, TX 77843
USA
[email protected]

ISBN 978-94-007-1096-2
e-ISBN 978-94-007-1097-9
DOI 10.1007/978-94-007-1097-9
Springer Dordrecht Heidelberg London New York

Library of Congress Control Number: 2011927167

© Springer Science+Business Media B.V. 2011
No part of this work may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, microfilming, recording or otherwise, without written permission from the Publisher, with the exception of any material supplied specifically for the purpose of being entered and executed on a computer system, for exclusive use by the purchaser of the work.

Printed on acid-free paper

Springer is part of Springer Science+Business Media (www.springer.com)
The human mind is a vast inner universe. Like everything else in the cosmos, mind materializes from atoms – atoms beautifully orchestrated by the laws of chemistry, mathematics, and physics. Those laws themselves are too rational, elegant, and powerful to have arisen spontaneously by chance from nothingness. Only a Master Creator could have produced such an intelligent design.
Other Books By Bill Klemm

Better Grades, Less Effort
Core Ideas in Neuroscience
Blame Game. How To Win It
Thank You Brain for All You Remember. What You Forgot Was My Fault.
Dillos. Road Kill on the Long Road of Evolution
Global Peace Through the Global University System
Understanding Neuroscience
Brainstem Mechanisms of Behavior
Science, The Brain, and Our Future
Discovery Processes in Modern Biology
Applied Electronics for Veterinary Medicine and Animal Physiology
Animal Electroencephalography
To all those who supported, enabled, and inspired me to pursue a career of trying to learn how the brain works:
parents, wife, and children
teachers and professors
neuroscientists, past and present
colleagues and competitors

Everybody I meet has something to teach me. My brain tries to learn from them all.
This 1,400 g of mush we call the human brain is the most magnificent creation in the universe. This brain engages the world on its own levels of conscious, subconscious, and even non-conscious mind. This brain creates the only reality it can truly know by what it attends to and detects. Such attention is immeasurably enriched by the self-generated creation of a conscious mind, operating as an avatar on behalf of the body and of itself. It sleeps, it dreams, and most important of all, when awake and fully human, it can be a free-will mind of its own.
Preface
Some pre-publication reviewers have said this book is not scholarly enough. Others said it was too scholarly. To the scholars I say, “I don’t intend this to be a book only for scholars, because human mind is too important to be left to specialists.” I would add, “If you can overlook not being dazzled with brilliance, you will nonetheless find provocative ideas, and if you do research in this area, some of these ideas can drive your research.” This applies particularly to topics such as the nature of information coding, free will, consciousness, and dreaming. To those who think the book is too scholarly I say, “Hell, I am trying to understand and explain the human mind, the most complicated thing in the known universe. I have a right to expect readers to have some science background. A few college-level science courses that include biology should suffice.”

Why write such a book? … because consciousness is the cardinal experience of all humans, scholars and non-scholars. We all have a most personal vested interest in knowing what this unique capability of mind is all about. While it is true that some other higher species may experience some degree of consciousness, every indication is that these animals do not engage in introspection, as all humans are compelled to do.
How to Read This Book

I, like most authors, would like you to read this book from front to back. In our not-so-humble view of our “deathless prose,” we authors think you will miss too much if you skip around. This book is not meant to be a primer on psychology or neurophysiology. There are plenty of books that do that. This book is about how the brain creates mind and how it engages biology to think. The chapter titles reflect the flavor of how I approach this subject. No single book that I know of, technical or otherwise, approaches the issue of thought in this way.

You can, of course, skip around, because each chapter has a distinct theme. How much you skip around will depend on your time, inclination, and educational
background. Even so, no matter how much education you have, I think it is certain that every chapter presents at least some material in ways you may not have considered.

Chapter 1 opens with a discourse about science and religion. Religion is universal in all human cultures. Whether rational or not, religious beliefs are an inherent property of human mind. The great clash of science and religion arose in the nineteenth century with the theory of evolution. In the twentieth century, we learned most of the fundamentals of how brains work. In the twenty-first century, expected insights about the material nature of mind will make it very hard to avoid a reassessment of “old time” religion. To set the stage for such a reassessment, the chapter presents an orientation around what brains do and a couple of basic principles about how brains react to the world.

Chapter 2 stimulates reflection on what it means to think, not as a psychologist would do, but as a blood-and-guts physiologist (like me) would do. These are two quite different ways of looking at brain function. Given the plethora of psychology books, there is no need for me to take that approach.

Chapter 3 defines “thought” in physiological terms. Here is where I explain that the carrier of thought is atoms, more precisely sodium and potassium atoms that have become ions by giving up an electron. These ions, not electrons, provide the pulses of electrical current that are the electrical carriers of thought.

Though Chaps. 2–4 may contain more academic content than some readers care about, it would be helpful to at least scan this material because it sets the groundwork for all the rest of the book. The theories of consciousness that are later developed will not be properly understood without understanding the general ideas of these chapters. You can gloss over some of the details, but at least try to get the general ideas. This is, after all, a book on how minds are created and how they work. Nobody should expect reading about that to be easy. But it should be gratifying. Humans crave understanding when the facts and ideas are not too complicated. We are especially driven to understand ourselves, our inner soul so to speak. Unfortunately, understanding ourselves is harder than even rocket science. Yet I maintain that ordinary human minds can understand mind, with sufficient effort to learn and with clear teaching that arises from other ordinary minds. Here then in this book is my ordinary mind’s attempt to provide clear explanation.

Chapter 4 presents textbook-like summaries of anatomy, physiology, and biochemistry. But the whole presentation is in the context of thinking and does not treat these ideas as independent ends unto themselves.

In Chap. 5, explanations begin to be illustrated by specific kinds of thinking where we know some of the underlying biology. This includes things such as how we localize sounds or position in space, how we recognize faces, how the brain computes movement trajectories, and how values are attached to actions. In all such kinds of thought, it will become clear that nerve impulses flowing in defined interacting circuits are central to thinking processes.

Chapter 6 moves us beyond individual nerve cells (“neurons”) and their local circuits to more global functions that help the brain’s various parts operate
cooperatively and synergistically. These ideas help to show how “mind” is more than the sum of brain parts.

No one who has reason to pick up this book will want to miss Chaps. 7 and 8. This material is the climax of all that has gone before. These chapters explore what we know about the biology of consciousness. The vexing problem of “free will” is tackled. Finally, four quite different theories for consciousness are presented, concluding with the ideas that I think are most consonant with all that has been presented in the earlier chapters. I present my own ideas on such things as how consciousness is produced and why we have dream sleep.

Think about it and enjoy.
Acknowledgements
No book of this kind can be written without a heritage of the writings of great thinkers. I have tried to acknowledge these sources as much as it seemed practical for a book that can be understood by non-specialists. Certainly many people did not get their fair share of recognition. For that I apologize.
Contents
1 The Quest
   Mysticism & Religion Versus Reason & Science
   What Brains Do
   Deconstructing and Representing Sensory Information
   The Brain as “Information” Processor
   The Brain as a Timing Device
   The Brain as a Decision-Making System
   References
2 Thinking About Thinking
   Defining Thought Biologically
   First Principles
   The Brain’s Three Minds
   Brains as Liquid-State Electronic Devices
   Brains vs. Computers
   The Currency of Thought
   Brain Creation of Consciousness
   Hallucinatory Consciousness
   Dream Consciousness
   Human Mind Is in the Brain
   Circuits and Networks
   Manifestations of Thought
   Biochemistry
   Electroencephalogram (EEG)
   Brain Scans
   Behavior
   The Brain as a System
   Nonlinearity Matters
   Cell-Level Consequences of Nonlinearity
   Cognitive Consequences of Nonlinearity
   Inhibition Matters
   Bodies Think Too
   Physiological and Behavioral Readiness
   Triggering Consciousness
   Where Consciousness Comes from
   References
3 Kinds of Thought
   Non-conscious Thought
   Spinal Cord
   Brainstem
   Other Functions of the Non-conscious Mind
   Subconscious Thought
   Cerebellum
   Limbic System
   Reward
   Subconsciously Driven Behavior
   Bias
   Access by the Conscious Mind
   Unmasking the Subconscious
   Existential Emotions
   Conscious Thought
   What It Means to Be Conscious
   Dreams Are Made of This
   Conscious Identity
   Conscious Influences on the Subconscious
   The Wholeness of Multiple Minds
   Making Two Minds Into One
   Making Four Minds Into One
   References
4 Carriers and Repositories of Thought
   Brain Structure
   Properties of Neurons
   Receptive Fields
   Labeled Lines
   Neuronal Circuits and Networks
   Topographical Mapping
   Cortical Columns
   Connectivity
   Brain Physiology
   Post-synaptic Field Potentials
   The Nerve Impulse
   Impulses in Shared Circuitry
   Rate Code
   Complex Spikes
   Interval Code
   Serial Dependency in Interspike Intervals
   Compounded Voltage Fields
   Stimulus-Bound Oscillation
   Clustered Firing and Oscillation
   Biochemistry
   Release of Transmitter
   Receptor Binding
   Second Messengers
   Elementary Thinking Mechanisms
   Analog Computing
   Gating
   Oscillation
   What Do Oscillations Do?
   Rhythmic Change in Excitability
   References
5 Examples of Specific Ways of Thinking
   Time Processing
   Sound Localization
   Locating Body Position in Space
   Relations to Phase of Hippocampal Theta Rhythm
   Spatial Scale-Sensitive Neurons
   Multiple Place Fields for Each Neuron
   Grid Cells
   Face Recognition
   Visual Motion Computation
   Attaching Value to Actions
   Common Denominators
   References
6 Global Interactions
   Memories
   Coding for Memory
   Consolidation
   Location of Stored Memories
   Keeping Memories from Being Jumbled
   Network Plasticity
   Modularity
   Module Interactions
   Cerebral Lateralization
   Combinatorics
   The Electroencephalogram: Its Rise and Fall, and Recent Rise
   The Importance of Oscillation and Synchrony
   Synchrony
   Function of Synchrony
   EEG Coherence and Consciousness
   Phase Shifting During Movements
   Phase Shifting During Thought
   Cross-Frequency Coherence
   Ultraslow Oscillations
   References
7 On the Nature of Consciousness
   The Value of Consciousness
   Sense of Identity
   Maps in the Brain
   The Binding Problem
   How I Think We Think When Conscious
   Working Memory Biology
   Sleep vs. Consciousness
   A Humpty-Dumpty Theory for Why We Dream
   A Summarizing Metaphor
   Fitting Known Phenomena into the New Explanation
   Compulsions
   Free Will
   Free Will Debates
   The Zombie Argument
   A New Critique of Zombian Research
   Follow-up Studies
   Twelve Interpretive Issues
   Proposal for Next Generation of Experiments
   Common-Experience Examples of Free Will
   Personal Responsibility
   The Purpose of Free Will
   References
8 Theories of Consciousness
   Bayesian Probability
   Chaos Theory
   The Problem of Fast Transients
   Fractal Dimension
   The Take-Home Message About Chaos Theory
   The Quantum Theory of Consciousness
   A Brief Description of Quantum Theory
   Specific Possible Explanations of Consciousness
   Quantum Metaphors
   Conclusions About Quantum Theories of Consciousness
   Circuit Impulse Pattern Theory of Consciousness
   A Little Common Sense Please
   Neocortex as the Origin of Consciousness
   CIP Representations of Consciousness
   The Created and Remembered “I” of Consciousness
   What CIPs of Consciousness Represent
   Engagement of Meta-circuits
   Consciousness as Brain-Constructed CIP Avatar
   Learning by the Avatar
   The Avatar and Its Sense of Self
   How Does the Avatar Produce Consciousness?
   Implications of the Brain-Constructed Avatar
   Unleashing the Avatar
   How the Avatar Knows It Knows
   Testability of the CIP Avatar Theory
   Specific Test Designs
   References
9 Conclusions
Index
1 The Quest
I sit here watching my dog, Zoe, dream. I know she is dreaming because she exhibits signs that neuroscientists have determined are associated with dreaming in humans. Actually, the bodily signs of dreaming were first discovered in cats, not humans. The signs include twitching of limbs, darting of eyes beneath the eyelids, and changes in facial expression. If there were any doubt about Zoe’s dreaming, as soon as she awakens she sometimes runs to the window as if to see her dream continue in the real world.

I have to wonder how Zoe’s little brain represents the real world when she is awake and how it re-creates an inner world during her dreams. I doubt that Zoe, like any dog, understands the full distinction between the real world and her dream world. She probably does know that it can be entertaining to dream. Being half hound, she is definitely lazy, and dreaming gives her the opportunity to indulge all sorts of experiences without exerting herself. Maybe that’s why she seems to seek out opportunities to sleep and dream.

Zoe is a rather smart dog. I know her little brain is thinking when she is awake. No doubt that brain thinks when she dreams. The issue for me is: how does her brain do its thinking? How do my brain and yours do it?

Over the last 100 years, great strides have been made in understanding how the brain is built with nerve cells (neurons) and fiber pathways and supporting cells called glia. Scientists know much about how neurons carry and propagate information and how they transmit that information to other neurons. They know a lot about which parts of the brain specialize in specific functions. This book will briefly explain such things.

I have been blessed to do neuroscience research for roughly 50 years, the era when so many great things have been discovered. Most of the key facts needed to understand the mind have been discovered: neurons, nerve impulses, synapses and their electrical functions, neurotransmitters, receptor molecules and second-messenger systems associated with the junctions between neurons (synapses), circuits, and networks.

In this time, I have come to reach a widely shared view that brain creates mind, yet mind programs brain, which in turn influences the properties of mind that the
brain generates. Brain and mind are inextricably linked. The beauty of such a system is that brain and mind constitute a learning system in which the system itself can be enhanced by the learning. Brain is like a computer that can program itself and even change its “hardware” to accommodate the changes.

What scientists do not have is an overarching theory to explain thought, especially conscious thought. Neuroscientists have not been able to construct an equation equivalent to Einstein’s equation for the equivalence of energy and mass or Schrödinger’s equation for quantum mechanics. Nor is any prospect in sight for finding such an equation. Mind is probably not even reducible to an equation.

Neuroscience textbooks generally tell readers much more than they want to know yet fail to explain what really matters – how brains think. Typical textbooks devote a thousand or more pages without explaining facts in the context of thought. Many textbooks don’t even use the word “thought” or try to define it. Yet what can be more important to understand? The quest to explain the mind may be beyond the realm of science. But the quest will go on. I think that the Holy Grail of brain science, perhaps of all science, is to discover how the brain produces thought, especially conscious thought.

Despite the fact that the last 100 years or so have been the Golden Age of neuroscience, our understanding is still incomplete. Great discoveries have indeed been made, as manifested in the dozens of neuroscience Nobel Prizes (Langley 1999) that have been awarded for research involving the nervous system or in physical sciences that contributed to important nervous system discoveries. Nineteenth century biology taught us about evolution. Twentieth century neuroscience taught us the basic mechanisms of brain function. Twenty-first century neuroscience will teach us how mind works. The explanation is likely to be materialistic, showing that mind is not something “out there” but “in here,” inside the brain. Moreover, I contend that the mind exists “in here” in our brain, not as some kind of spiritual ghost but as a physical property of brain. Mind surely seems to come from atoms.

The impact of the theory of evolution on spiritual and religious beliefs continues to be profound to this day, as evidenced by all the angst and debate over teaching evolution in schools. How then will we come to grips with the realization that there is no ghost in our brain, that our mind is generated by brain matter, that mind is in fact matter? Along with twenty-first century science, we will need a twenty-first century religion. Science and religion may be compatible, as long as we assume that science reveals how God works. With evolution and with materialistic mind, even God has to have a method. To think God just snaps his fingers on day six is simplistic in the extreme. The method for creating a conscious mind capable of free will is the great marvel of creation.

This book describes my view of what “mind” is, how it emerges from brain, and how it “thinks” at various levels of operation. These levels include non-conscious mind (as in spinal/brainstem reflexes and neuroendocrine controls), subconscious mind, and conscious mind. In this book’s attempt to explain conscious mind, there is considerable critique of arguments over the widely held view of scholars that free
will is an illusion. Finally, the book summarizes current leading theories for consciousness (Bayesian probability, Chaos theory, and quantum mechanics (QM)), followed by a theory based on patterns of nerve impulses in circuits that are dynamically interlaced into larger networks. Though none of these theories is universally accepted, all clearly point to the likelihood that twenty-first century neuroscience will disclose conscious mind to be rooted in anatomy and electrochemistry. To complete such an explanation may require new discoveries about the material nature of our world. Dark matter, dark energy, string theory, or parallel universes come to mind. Science may reveal that some of the things we now consider spiritual are actually natural materialistic forces that have not yet been discovered.
Mysticism & Religion Versus Reason & Science

The presupposition of mind having a material, biological basis may be offensive to a segment of the religious community. To them I say that one of the least mysterious ways God works in the world is through the laws of chemistry and physics that govern the universe and all of biology. Educated believers simply must believe that these laws were created by God as a way to create the universe and even human mind. Otherwise, what are the laws for?

Others may say that the mind is just too complex to have arisen through evolution. To them I say that the nervous system has been around on the order of 600 million years. The human brain may date back at least as far as a million years. That is a long time for natural selection forces to promote a brain with all the advantages of human competence and conscious self-awareness.

My own bias is that science has the capacity to study what we traditionally call spiritual matters. We coin the word “spiritual” to represent things that we cannot explain in materialistic terms. But that does not explain what spiritual is – just what it is not. Many of the phenomena that we call spiritual may well arise from materialistic phenomena that scientists have not yet explained. One neuroscientist, Paul Nunez, for example, has suggested that some yet-to-be-discovered information field might interact with brains such that brains act like a kind of “antenna,” analogous to the way the retina of the eye can be thought of as an antenna that detects the part of the electromagnetic spectrum we call light. With 6 billion people in the world, that would mean that there would have to be 6 billion packets of unique personal information floating around out there in the ether.

Modern physics, especially QM and the discoveries of dark matter (Fig. 1.1) and dark energy, has already shown that we don’t really know what “material” is. QM is so weird, as for example entanglement and tunneling (see Chap. 8), that it might even be a basis for what we would otherwise think of as non-material consciousness. String theory posits 11 parallel dimensions or universes. There is the possibility that our own conscious mind could be either preserved in real time in analogous form or mirrored in another universe.
Fig. 1.1 Black and white version of a false-color rendition of dark matter (hazy areas on the right and left of center) associated with colliding galaxy clusters (Photo courtesy of NASA)
The lay public, and even many religiously oriented scientists, want to think of mind in the context of something spiritual, something “out there,” outside the brain. But there are no facts to support such a notion. There is no objective, scientific way (at present) to verify or refute religious ideas, such as the existence of a creator God or the existence of a spiritual soul that influences and is influenced by the mind. Science has no evidence for the soul that it can test by experiment and the traditional methods of science. The brain, likewise, cannot test the question of God with its usual operational mode of sensory evidence and reason. Therefore, ideas on God seem to just emerge mysteriously from conscious intrinsic brain operations in the minds of those who proselytize. This seems to be a biological imperative, evidenced in all cultures, from the most primitive to the most advanced. Indeed, some theorists suggest that religious belief has biologically adaptive value and is a product of evolution’s natural selection forces.

Religion is the human device for explaining the inexplicable. Why do we exist? How did we originate? What is the purpose of life? We have few factual answers. The human brain feels compelled to generate answers. When provided with information that senses can detect, brains can analyze and reach conclusions and decisions. But what does the brain do with incomplete information? As with so many situations where the brain tries to operate with incomplete information, the brain “makes it up.” That is, the brain constructs what it thinks will “fill the gaps.” Our predilection for filling in the gaps expresses itself in children with their notions of fairy tales, the Easter bunny, and Santa Claus. Because the brain, in all human
cultures, is not satisfied with missing religious information, brains make up belief systems (this also includes theoretical physics).

One of the ways to illustrate how brains fill in the blanks of missing information is to have people read text in which many of the words have been removed. The brain can actually do a fairly good job of getting the meaning of text with removed words, depending of course on how many and which words were removed. Another example comes from how the conscious mind interprets ambiguous figures. Both alternative images in an ambiguous figure, such as the vase/face illusion, are incomplete, and the brain imposes missing information to make sense of the ambiguity.

The strongest support for atheism comes from science, which is why many people believe that science and religion cannot be reconciled. This book reveals that science has not explained everything about mind and leaves open the God question and spiritual issues such as near-death experiences, the soul, and the like. Attempts to reconcile religion and science are typically based on tortuous logic and, in my opinion, are futile at the present. Our notions about a creator God are too primitive and our understanding about science is too incomplete. Materialistic scientists know that there are physical realities they do not understand, yet they don’t seem willing to admit that atheism is not well defended by logic.

Physicists, especially, are prone to be enamored with sophisticated mathematical explanations for creation, especially spontaneous creation of multiple parallel universes. But mathematics doesn’t create anything. Math describes, not creates. Usually, it does not even explain. Take Einstein’s glorious equation, E = mc². It describes the relationship of energy to matter, but does not explain how one converts to the other. Stephen Hawking (Hawking and Mlodinow 2010), the heir to Einstein’s scientific stardom, and many of his fellow physicists, like to say that you don’t need God to explain how in the beginning hydrogen, helium, and a few atoms of lithium spontaneously had the capacity to create the universe – and us – via the laws of chemistry and physics. These fundamental laws are so finely tuned as to enable the spontaneous creation of multiple universes and, in ours, the stars, planets, and life itself. Are we to believe that all of this is accident, coming from nothing? Hawking elaborates by saying that the laws of gravity and quantum mechanics alone can explain creation of something out of nothing. Aside from the fact that even physicists don’t fully understand the weirdness of QM, we are still left with the issue that gravity and QM are something. They are not nothing. What created them? What created the laws of chemistry and physics? Does QM explain how QM was created? The alternative view is that the laws of chemistry and physics were created by God as the way to create a universe and at least one earth full of living things.

The widespread nature of religion in most cultures in most historical periods leads us to conclude that the notion of spirituality has been built into the biology of humans. Some neuroscientists postulate a “God spot” in the brain. No such spot has been found. Nor do I expect that it exists. Formal studies of brain function in religious people when they pray or have other religious experiences indicate that the spiritual feelings arise from simultaneous action of a number of brain areas that normally participate in non-spiritual thoughts and feelings.
Religious ideas are typically thought of as outside the realm of science, being created by family, folklore, and culture. While many religions have a notion of one creator God, their concepts of what that God must be like are quite different. Even within the same religion, the concept of God varies. But the spontaneous multiverse theory of physics is not universally accepted by physicists either. I cannot resolve this debate with facts. We are left to believe what we want to believe.

In this book, however, the intent is to elucidate “mind.” In the process, I hope to show that it is materialistic. Neuroscientist Mario Beauregard and journalist Denyse O’Leary have written a whole book to argue the opposite point (Beauregard and O’Leary 2007). The Spiritual Brain documents many apparent mystical experiences, but that does not prove that such experiences have no material basis. These authors use the existence of such mental phenomena as intuition, will power, and the medical “placebo” effect to argue that mind is spiritual, not material. Their argument seems specious. They do not explain how spirit can change neuronal activity. They dismiss the point that mind can affect brain because it originates in brain and can modify and program neural processes because mind itself is a neural process.

Science may someday be able to examine what we today call spiritual matters. Consider the possibility that “spirit” is actually some physical property of the universe that scientists do not yet understand. Take the case of dark energy. In 1998, two teams of researchers deduced from observing exploding stars that the universe is not only expanding but doing so at an accelerating rate. Forces of gravity should be slowing down expansion, and indeed do seem to hold each galaxy together. But the galaxies are flying away from each other at incredible, accelerating speed. The only sensible way to explain accelerating expansion is to invoke a form of energy, “dark” energy that we don’t otherwise know how to observe, that is pushing galaxies farther apart in a nonlinear way. Some of that dark energy must be within us. But where? What does it do?

What about “dark matter?” Astronomers see light being bent, presumably by gravity, yet this light bending occurs in regions of space where there is no observable matter to generate the gravitational force. This unseen matter is also inferred because it is the only way to account for the rotational speed of galaxies, orbital speed of galaxies in clusters, and the temperature distribution of hot gas in galaxies. Quantitatively, dark matter dominates the matter in the universe (Fig. 1.2). Recent discoveries make it very likely that the vast majority of “stuff” in the universe is “dark energy” (70%) and “dark matter” (25%), neither of which is like the ordinary matter (5%) that we sort of understand. Some recent estimates suggest these figures are too low. In fact, the latest analyses suggest that 83% of the matter in the universe is dark matter. Moreover, galaxies differ in their amount of dark matter, depending on the size of the galaxy (Bhattacharjee 2010).

To me, the really interesting questions deal with possible interactions of dark matter and detectable matter. Are they totally independent? Or do they interact in some way? Is any dark matter inside of us? Is regular matter mirrored in dark matter? Is any part of us mirrored in dark matter? Similar questions could be asked about dark energy.
Fig. 1.2 The ordinary matter of atoms with quarks and electrons only constitutes about 5% of all the “stuff” in the universe. The other 95% is known to exist as dark matter and dark energy
Finally, there is the string theory of parallel universes. No experimental evidence has yet been discovered for string theory, but it is the best candidate for unifying quantum physics and general relativity. String theory holds that ultimate reality exists not as particles but as minuscule vibrating “strings” whose oscillations give rise to all the particles and energy in the universe – and, nobody mentions, in our brain! Mathematically, string theory only works correctly if there are 11 dimensions or “universes.” The requirement for oscillation should resonate with our emerging understanding of the role of oscillation in brain function (see Chap. 4). If such universes exist, where are they “out there?” Or are they embedded “in here?” Or does the matter of our bodies simultaneously exist in more than one universe? Perhaps what happens in our own inner universe of the brain is mirrored in another universe.

These exotic ideas are gradually coming within the scope of experimental science. The new Large Hadron Collider particle accelerator on the Swiss-French border is designed in part to test string theory, among other things. If the theory is correct, the collider should generate a host of exotic particles we never knew existed. Another line of evidence might come from the Planck satellite to be launched by the European space satellite consortium. Some string theory models predict that there is a specific geometry in space that will bend light in specific ways that the satellite is designed to detect.

String theory is not accepted by all physicists. But most agree that the known facts of physics do not fit any alternative unifying theory. Whatever theory emerges from accumulating evidence, it will, like Darwin’s theory of evolution, revolutionize our thinking about life and religion. The most obvious impact of the new science may be on our understanding of the mind-brain enigma. Explaining mind in terms of current scientific understanding is intellectually frustrating and unsatisfying. New science may help us make more sense of mind, scientifically and even in terms of religion. Indeed, religion may come to be partially defined by science.
Is it religious heresy, for example, to consider the possibility that the “Holy Spirit” is a form of God-created dark energy that interacts with the energetics of the brain? While it is not appropriate for science to judge what we call spiritual matters, it is entirely appropriate for science to explore unseen physical domains yet to be discovered and understood. Such research serves the interest of science and perhaps even of religion. The final religious revelation may come from science. However, scientists will probably be the last to make a connection, because they often resist considering the implications of their discoveries for religion. They either reject religion or ignore the conflict.

Modern physics, especially QM and the discoveries of dark matter and dark energy, has already shown that scientists don’t really know what “material” is. Civilized thought has come a long way since the ancient days of Greek philosophers. But maybe we should revisit their view that there is “true” reality hidden by what we think is reality. Today, physicists are starting to see previously unseen realities. Such unseen realities may well include an unknown kind of matter and energy that give rise to mind. Maybe there is a counterpart mind, operating in parallel, that electrodes and amplifiers cannot detect.

Many scientists are materialistic, do not believe in “spirits” or God, and insist that the mind is created by the laws of chemistry and physics. Such scientists gloss over the question of where the laws of chemistry and physics come from. Such a belief system is biased, logically unwarranted, and arrogant. Such scientists violate the true spirit of science. Scientists do not know everything, and they don’t understand everything they think they know.

The near-death experience is a glaring example of something that seems to be spiritual yet very real. The mind-body enigma reaches a zenith in the reports of out-of-body experiences in people who have recovered from being clinically dead. Though electrical brain waves were not always recorded during near-death experiences, it is a good bet that when clinical signs indicate this transient death, the brain waves cease. Science has no way to explain how there can be brain function when there is no electrical activity in the brain. But there are just too many reports of such experiences for science to ignore. Books have been written about people who have been resuscitated from cardiac arrest and who report bizarre visions of tunnels or bright lights, or feel themselves hovering over their body, or feel sensations of overwhelming love. Sam Parnia, a physician at New York Presbyterian Medical Center, claims that about 10% of patients who recover from cardiac arrest report some kind of cognitive process while they are clinically dead. That is just too many people to ignore. Though science cannot dismiss such reports, it cannot do much about investigating them either. Science has no theory and no tools to examine such phenomena. Nonetheless, there is interest in obtaining more unequivocal evidence that such experiences are real. Perhaps in the science of the future, we may discover enough about dark matter, dark energy, or multi-dimensional universes that will help explain the mystery of the near-death experience.

Moral behavior and religious beliefs may grow out of an innate biological need for what we call “conscience.” Where does this sense of right and wrong come
from? At the most basic level, conscience is based on taboos, and these are evident in all societies, especially the most primitive. Some taboos are said to be born of biological advantage, such as the taboo against incest, which reduces the chance of spreading defective genes throughout a population. Other taboos are, of course, clearly learned from prevailing peer pressure and the collective culture.

Where do these taboos originate? Religionists might insist that they come as edicts from God, as captured in visions and revelations recorded in Scripture. Skeptics might say that such supposed edicts arise from hallucinations or perverse ideation of people who function as prophets or priests, as exemplified by Mayan/Aztec notions of serving their gods by human sacrifice. Injunctions to “love thy neighbor as thyself” are said to come directly from the mouth of God. You could just as easily claim that this is a logical deduction of a conscious mind that makes an assumption that there is a creator God and that God loves his creation. If you don’t make such an assumption, your belief system might revolve around “So what? Kill the bastards.” Indeed, much of human history seems to have been dominated by such a belief system.

Explaining religion in terms of science will be hard for many to find satisfying. Our schools have taught that science is cold and impersonal. Teachers seldom discuss science as a manifestation of God’s action in the world. Theologians are even less likely to think of science that way. The fundamental difference between religious and scientific ways of looking at the world is that scientific conclusions are supposed to be based on experiment and replicable data. In religion, believers are free to construct their belief system independent of verifiable fact. For the time being, scientists are confined to speculating that religious belief is constructed by the conscious mind. That is, our minds make a set of assumptions and create a belief system that logically derives from it, as illustrated by the explanation mentioned earlier for “love thy neighbor.”

Coupled with the cultural evolution of taboos and morals has been the development of the notion of a spiritual “soul” and afterlife. Such a concept creates theoretical problems, for now we must grapple not only with the relationship of brain to mind but also of mind to soul. Soul and mind may be similar, but they may not operate in the same way. Certainly, the way this book will suggest that minds work in this world offers, at present, no explanation for an afterlife. Options include the possibility that organic human mind: (1) may be mirrored in another parallel universe, or (2) may be duplicated in this world via indestructible dark matter, dark energy, or some yet-to-be-discovered features of QM or general relativity.

When examined in the light of clear scientific evidence, fundamentalist religious beliefs are increasingly hard to defend. A literal interpretation of everything in the Bible is just not logically defensible. The conflict between religion and science seems to be magnified by modern neuroscience. This difficulty may only grow as neuroscience reveals more and more about how brains think, especially conscious thinking. Likewise, scientists may have great difficulty in accepting those religious beliefs that are not supported by experimental evidence, and this problem is exacerbated by new discoveries supporting a materialist view of mind.
You will see that this book is about the scientific, not spiritual, basis of mind. That is not to say there is no such thing as soul, spirit, Holy Spirit, or other “supernatural” existence that interacts with our conscious and subconscious minds. But science cannot address such matters now, because its tools and methods cannot test the assumptions of religious faith.

All the major religions, from Buddhism, Hinduism, and Judaism to Christianity to Islam, appeared well before the advent of science. Religions hold that God communicates with humans via the prophets in language comprehensible to an ancient age. God speaks to humans today through ministers, priests, rabbis, imams, and other “holy men” in the language of today. But God also speaks to humans in the language of science, which is unfortunately a language not widely used. Many of those who do understand that language, the scientists, are atheists or agnostics, because religion seems trapped in the primitive state of its origins. Unlike science, religion does not invite challenge and re-examination.

Science is on the verge of incredible new discoveries. These range from reconciling quantum mechanics with general relativity to understanding dark matter and dark energy and perhaps to proving the existence of multiple, parallel universes. Scientists of religious faith know better than anyone that the magnificence of God’s creation is yet to be fully revealed.

Will discoveries about a materialistic basis of mind require us to abandon notions about a creator God? No. Religious people today who accept evolution believe that it is God’s method for creating life. When we are confronted with compelling evidence, such as provided in this book, that mind IS matter, religious people should find it no more difficult to assume that even God has to have a method for creating the human mind, especially a mind that can freely choose, including the choice to grow spiritually.

I endorse the advice of neuroscientist Paul Nunez: We must continuously remind the faithful of how much we know, and the scientists of how little we know.
I think it is time for a twenty-first century religion, but I have little idea how to do that. It is also time for a twenty-first century neuroscience – this book makes some suggestions to move that along.
What Brains Do

I think that the best way to understand the biology of thought is to begin by considering what it is that brains do. First, I emphasize that brains are embodied. Everything they do is done in the context of the body. Five things stand out as fundamental. Brains: (1) deconstruct and represent sensory information, (2) process information, (3) track time and sequence their processes, (4) make decisions and issue commands, and (5) generate consciousness.

Many biological phenomena are involved in these basic processes. Neuroscientists talk incessantly about such things as neurotransmitters, molecular receptors, and gene inducers and repressors. But what I want to emphasize in this
These processes are accomplished by nerve impulses, which are voltage spikes created by flow of certain electrically charged atoms that move through a neuron and spread from neuron to neuron. It is impulses that code for and spread the messages of bodily sensations. It is impulses that flow through neuronal assemblies in brain to determine what, why, how, and when the body needs to do something. It is impulses that execute actions. The active agents of mind are nerve impulses. It is impulse patterns that constitute "thought." This point will be explored in more detail in Chap. 2, under the topic of "The currency of thought."
Deconstructing and Representing Sensory Information
What is most amazing is that incoming sensory information is represented by nerve impulses that break it down into small fragments – deconstructed, so to speak. The Nobel Prize was given to David Hubel and Torsten Wiesel in 1981 for the elegant way they proved that visual scenes are deconstructed in the brain, wherein very small fragments in the scene are parceled out as impulse representations to individual neurons scattered throughout the visual cortex. In their particular studies, images of small lines were projected onto the eyes of monkeys, while at the same time the experimenters recorded spike discharges in the visual cortex as they moved an electrode up and down. Some cells responded to the line, while others responded only if the line were rotated slightly. Through painstaking study it became apparent that every neuron in the visual cortex had a preferred line orientation to which it responded while being silent to other orientations. Similarly, color responsiveness was shown to be distributed among highly selective neurons. Other studies have shown that deconstruction processes operate with other sensations as well.
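To make the idea of deconstruction concrete, here is a minimal sketch in Python (my illustration, not taken from Hubel and Wiesel's methods). Each model neuron is given a preferred orientation and a Gaussian tuning curve; the tuning width and firing rates are invented numbers. The point is only that a single line stimulus ends up represented as a pattern of firing distributed across many selective cells.

import math

def tuned_response(stimulus_deg, preferred_deg, max_rate=50.0, width_deg=20.0):
    """Firing rate (spikes/s) of a model neuron with a Gaussian orientation
    tuning curve. Orientation is circular with a 180-degree period, so a bar
    at 170 degrees is close to one at 10 degrees."""
    diff = (stimulus_deg - preferred_deg) % 180.0
    diff = min(diff, 180.0 - diff)          # shortest angular distance
    return max_rate * math.exp(-(diff ** 2) / (2.0 * width_deg ** 2))

# A small "cortex": model neurons whose preferred orientations tile 0-180 degrees.
preferred = [i * 15.0 for i in range(12)]    # 0, 15, 30, ... 165 degrees

stimulus = 37.0                              # orientation of the presented line
population_code = [tuned_response(stimulus, p) for p in preferred]

for p, rate in zip(preferred, population_code):
    bar = "#" * int(rate)                    # crude firing-rate histogram
    print(f"preferred {p:5.1f} deg -> {rate:5.1f} spikes/s {bar}")

Neurons tuned near the stimulus orientation fire briskly while the rest stay nearly silent, so the single line is, in effect, parceled out across the population.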
Sensory Representations of Deconstructed Stimuli David Hubel (1926–) and Torsten Wiesel (1924–)
Much of what we know about the physiology of the primary visual cortex can be attributed to the 1981 Nobel Prize-winning studies of Canadian David Hubel and Swede Torsten Wiesel, a renowned pair of researchers who worked at the Wilmer Institute of Ophthalmology and later at Harvard Medical School. Hubel and Wiesel's research focused on four main areas: the receptive fields of individual cells of the lateral geniculate nucleus (LGN), which projects to the primary visual cortex; the grouping of cells into functional layers and columns; the development of new methods involving lesioning to trace pathways; and finally, the environmental influence on the early development of vision.
Taken as a whole, their work helped to prove that the brain represents the outside world by extracting fragments of the sensation and representing them with impulse firing patterns in multiple, distributed parts of the brain. In turn this has led to the theory that perception of the original sensation must necessarily involve some sort of "binding" mechanism whereby the distributed fragments are reconstituted as a virtual representation of the original sensation. The nature of these mechanisms remains under study.
As Hubel recalls in his Nobel Laureate speech, the team began experimenting in their first area within a week of their arrival at the Wilmer Institute. Though at first they suffered from a lack of appropriate equipment, the team was able to make some novel innovations to compensate for these shortcomings. Additionally, their dedication was undeniable: Hubel recalls nights when the team was so absorbed in their experiments that work lasted until the early morning hours and would only be put on hold when Wiesel would begin to talk to Hubel in Swedish. Within about a month, they had made their first big discovery, which led them to theorize orientation, direction, and speed selectivity as properties of the receptive fields of cells in the primary visual cortex, which receives its visual input via the LGN of the thalamus, a relay station for visual information being passed on to the cortex. In simple terms, these theories state that cells respond to the specific orientation (angle), direction of movement, and speed of an object that is moved across the visual field. Corresponding to the discovery of receptive field properties of these cells was Hubel and Wiesel's research on how the cells are arranged. Experimentation led the team to classify cortical cells into two main categories: simple and complex cells.
Simple cells are mainly in layer IV of the cortex and have receptive fields characterized by a specific orientation preference; they respond best to stimuli when an object is closest to this orientation and have specific "on" and "off" regions. Complex cells, however, lack distinct "on" and "off" regions in their receptive field, and although they are even more selective about object orientation than simple cells, they can respond to stimuli presented throughout their receptive fields and additionally display directional selectivity. The ability of complex cells to respond to stimuli throughout their fields suggests that complex cells receive input from a number of simple cells. Hubel and Wiesel also conceived the model of the orientation column, after performing a number of tedious electrode penetrations that measured cells in the visual cortex. An orientation column is exactly what its name suggests: a column of cells stretching through layers of the visual cortex that corresponds with a certain favored orientation. Neighboring orientation columns vary by only a few degrees in their favored orientation, such that parallel movement through the visual cortex, which Hubel and Wiesel also modeled, yields a steady linear output between the areas where orientation reverses. What is now known as the cortical module was also first described by Hubel and Wiesel. These modules, as Hubel described them, are equipped with all they need to view and process their part of the visual landscape, including two complete ocular dominance columns as well as two complete sets of all possible angles of orientation. This discrete and compact unit is neatly repeated across the visual cortex for each area of the visual field. Information from such units, and indeed from across many cortical areas, is the very information that appears to be bound together when forming the image of the "mind's eye," to accomplish the feat of perception. Details about the concept and mechanisms of binding, however, are still vague. One other concept explored and elaborated upon by Hubel and Wiesel is the idea of a "critical period" of visual development in early postnatal weeks and months. Evidence supporting this notion was provided in experiments where the team sutured shut one eye of a number of postnatal monkeys and kittens at various stages of development and for various lengths of time. It became clear from these experiments that the environment can modulate visual development during such a "critical period," so that learning experiences determine neural circuit capabilities during certain stages of brain development. However, it should also be noted that within that range of capability, circuits can be re-arranged by new learning. Hubel and Wiesel's various discoveries remain a wellspring of information on the processes of the visual cortex. Perhaps more importantly, however, their discoveries have provoked new rounds of research as scientists eagerly seek to learn more of the intricacies of sensation and perception.
Sources:
"David H. Hubel Nobel Lecture." Nobelprize.org. The Nobel Foundation. 2 June 2008. http://nobelprize.org/nobel_prizes/medicine/laureates/1944/gasser-lecture.html
Bear, M. (2001). Neuroscience: Exploring the brain (2nd ed.). Pennsylvania: Lippincott Williams & Wilkins.
Magoun, H., & Marshall, L. (1998). Discoveries in the human brain. New Jersey: Humana Press, Inc.
If such sensory information becomes stored in memory, it should be apparent that the memory itself is also distributed in multiple parallel circuits. Recall of the memory would necessarily involve a somewhat reverse process that would permit reconstruction of the memory from its distributed elements. Memory, then, is not so much a thing in a place, but rather a process in a population, as neuroscientist E. Roy John concluded many years ago.
The Brain as "Information" Processor
All that has been discovered about brain anatomy, biochemistry, and physiology leads to a prevailing view that the brain is an information processor. Information is thought by most neuroscientists to be encoded by the neural pathways, biochemical processes, and electrical activity that act as carriers of the information. These themes will be revisited at various points in the book.
The Brain as a Timing Device
When scientists get stuck in trying to understand something, they create metaphors, as I am about to do to illustrate timing in the brain. You might think of the brain as a clock, the old-fashioned kind, not a digital clock. Clocks consist of multiple gears of different sizes. One large gear may turn slowly, but a smaller gear to which it is connected turns faster. Each gear oscillates, that is, it turns through a repeated cycle. Each gear's oscillation has a defined time relationship, or phase, to all the other gears in the system. The whole series of gears is constructed to achieve uniform passage of time, as manifested in the hand movements of the clock. Even though any given gear may not be running on that time, it has a definable time relationship to it and makes its own contribution to the overall clock function. The movement of clock hands is not evident from observing the independent motion of most of the gears. Scientists would thus call clock time an "epiphenomenon," or "emergent property," and that is the way neuroscientists commonly think about thought.
This is most particularly evident when it comes to trying to understand conscious thought. The key issue about consciousness is not so much whether it is an epiphenomenon but that it emerges from underlying neuronal activity, the most immediately important of which are what I call "circuit impulse patterns" (CIPs), as will be explained in due course. The emergent property of a clock is not a fixed point in time but continuously progresses – as does the process of thought. In the brain's emergent property of thought, each pattern of activity also has a definable time relationship to all the other patterns, but the time relationships may not be fixed. In the brain, the oscillation frequency and phase relations of electrical activity shift within and among oscillating circuits. I contend that such changes will change the nature of the thought and indeed are a key component of thought itself (explored further in Chap. 4). There is another aspect of timing that ought to be mentioned. Patients with devastating brain disorders such as Parkinson's and Huntington's diseases greatly underestimate the passage of time. Poor timing is a characteristic of several psychiatric conditions, including schizophrenia, autism, and attention deficit disorder. We are now discovering that soldiers returning from Iraq and Afghanistan with traumatic brain injury show signs of faulty timing. The ability to track time correctly seems to decline in all people as they age. Diseases that disrupt time tracking can create a mixed-up world of jumbled events, making it difficult to make sense of the world. David Eagleman (2004), a neuroscientist at Baylor College of Medicine, has been studying the time-tracking deficits of patients with various neurological disorders. No doubt, the timing dysfunctions contribute to the cognition problems in these patients. One thing about timing that receives little attention is the role of consciousness. The subconscious mind can focus attention for only about 2.5 s. With conscious effort, however, focus can be sustained for many minutes. I suspect that sustained concentration arises from oscillatory circuitry, most likely operating at high speed. As events occur in real time, the brain (particularly the cerebellum, basal ganglia, and prefrontal cortex) time-stamps them and tracks the sequence of events. One advantage of this network function, in addition to archiving the order of events, is to identify cause and effect; i.e., what comes first is most likely to be a cause of what follows. This may be done mostly subconsciously.
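The clock metaphor can be put in toy form. In the sketch below (frequencies, phases, and weights are arbitrary assumptions, loosely labeled after familiar EEG rhythms), several oscillations of different frequency and phase are summed into one composite signal; shifting the phase of just one oscillator changes the composite even though no individual "gear" changed its speed. That is the sense in which the emergent pattern depends on timing relationships among the parts.

import math

def composite(t, components):
    """Sum of oscillators; each component is (frequency_hz, phase_rad, weight)."""
    return sum(w * math.sin(2 * math.pi * f * t + ph) for f, ph, w in components)

# Illustrative "gears": theta-, alpha-, and gamma-like rhythms (arbitrary values).
base = [(6.0, 0.0, 1.0), (10.0, 0.5, 0.7), (40.0, 1.0, 0.3)]
# The same gears, but with the 40 Hz oscillator shifted by half a cycle.
shifted = [(6.0, 0.0, 1.0), (10.0, 0.5, 0.7), (40.0, 1.0 + math.pi, 0.3)]

for ms in range(0, 50, 5):                     # sample a 50 ms stretch
    t = ms / 1000.0
    print(f"{ms:3d} ms   original {composite(t, base):+.3f}   "
          f"phase-shifted {composite(t, shifted):+.3f}")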
The Brain as a Decision-Making System
Brains evolved to make decisions (Fig. 1.3). Of course, in primitive animals, such decisions are made at a non-conscious level, because primitive organisms do not have a cerebral cortex, which is necessary to sustain conscious awareness (see Chap. 2). At this point I should explain why I am using the term "non-conscious" when many people would use the word "unconscious." The reason is to make the distinction that conscious and subconscious mind can become unconscious, as when one is given anesthesia or knocked out by a blow to the head.
Fig. 1.3 This diagram illustrates how I think about "thinking," which is everything between detecting features of the environment with sense organs and the muscle groups that create the behavior. The diagram's stages run from signal detection and monitoring (attending), through the basic cognitive processes of evaluation (imputing meaning), memory, and attitude/emotion, to decision, planning, sequencing, and finally the motor command (behavior)
Non-conscious mind always exists as long as life itself does, and does not come and go with changes in physiological state. This is explained further in the next chapter. Even single-cell organisms, such as the Paramecium, which have no brain, nonetheless have sensory and motor capabilities that allow them to "decide" whether to approach something that may be important (like food) and to avoid things that may be harmful (such as noxious chemicals). The goal of thinking is to make appropriate decisions. Once made, a brain directs the implementation of the decision, sometimes by the simple means that we call reflexes. For complex behaviors, brains must plan and sequence the commands necessary to orchestrate behavior, specifying when and how to move. Note that all of this occurs in the context of the environment and the stimuli that triggered thought in the first place. Only in simple animals, like invertebrates, is there a simple "command neuron" that executes decisions by driving whole sets of muscles into coherent behavior. Command neurons elicit complex sequences of behavior involving flying, walking, swimming, and feeding. For this reason, they are commonly also called "central pattern generators." In the crayfish, for example, activation of a single command neuron can evoke a complex escape response that involves dozens of muscles.
In higher animals, there are no clearly identified command neurons, but specific groups of neurons can trigger preprogrammed motor acts. Such acts include swallowing, coughing, vomiting, yawning, and a variety of courtship behaviors, many of them driven by movement pattern generators in the brainstem. More commonly, decisions are made in multiple circuits, and the planning and sequencing needed to implement them are produced by related circuits that operate simultaneously in parallel. But what of thoughts that are internally generated in the absence of stimulation or even in the absence of a need for decisions to direct movement? Such conditions remind us that the brain is, in this sense, self-sustaining. Nonetheless, the brain still makes decisions, such as what to remember and for how long, how to feel emotionally, what to believe, and what to want. Brains may also perform the usual planning and sequencing that includes deferring motor commands until a more appropriate time. If we accept the brain's central role in decision making, the next issue is how brains make decisions. At the cell level, decisions are made in the synapses, the junctions between neurons where chemical communication occurs. Less obvious is an explanation for how brains make decisions at the circuit level. Brain circuits interface with each other in various ways. They often operate in parallel, sharing with each other the "information" going on in each respective circuit. Each circuit has an output of some type, either to glands and muscles or to other circuits. Each circuit's "decisions" are thus influenced by other circuits and likewise influence the decision-making of those circuits with which it has an interface. The interface among circuits has two features that complicate understanding of how circuits make decisions. First, the output of one circuit to another is not necessarily mediated by one "output" neuron. There may be multiple output neurons from one circuit to another, with the contacts between them made at multiple points in the receiving circuit. Likewise, a "target" circuit may be simultaneously supplying input to the circuit from which it is receiving input. Reciprocal connections among circuits are common in brain. They provide the way for one circuit to be informed of what another circuit is doing. Reciprocity also allows feedback, so that consequences of a decision made by one circuit can be used to inform an ongoing decision-making process in another circuit. A classic example is how decisions made in "motor cortex" are modulated by feedback from cerebellar circuits.
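As a cartoon of this reciprocal influence, the sketch below treats two circuits as single activity values that repeatedly exchange output, one excitatory and one inhibitory; the weights and update rule are invented solely for illustration. Each circuit's next state depends in part on what the other circuit just did, which is all that is meant here by feedback-informed decision making.

# Two mutually connected "circuits," updated in discrete time steps.
# All numbers are illustrative; the point is only the reciprocal influence.

def clamp(x, lo=0.0, hi=1.0):
    return max(lo, min(hi, x))

a, b = 0.2, 0.8            # initial activity of circuit A and circuit B
w_ab, w_ba = 0.6, -0.5     # A excites B; B feeds inhibition back onto A
drive_a = 0.5              # steady external input to A (e.g., a sensory signal)

for step in range(10):
    new_a = clamp(0.5 * a + drive_a + w_ba * b)   # A hears B's last output
    new_b = clamp(0.5 * b + w_ab * a)             # B hears A's last output
    a, b = new_a, new_b
    print(f"step {step}: A={a:.2f}  B={b:.2f}")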
References
Beauregard, M., & O'Leary, D. (2007). The spiritual brain. New York: Harper One.
Bhattacharjee, Y. (2010). Inventory asks: Where is all the non-dark matter hiding? Science, 327, 258.
Eagleman, D. M. (2004). The where and when of intention. Science, 303, 1144–1146.
Hawking, S., & Mlodinow, L. (2010). The grand design. New York: Random House.
Langley (1999). The Nobel Prize. New York: Barnes and Noble Books.
2
Thinking About Thinking
I will not be reviewing thinking from psychological or philosophical points of view, but rather from the mechanistic view of how the brain accomplishes thinking. That is, I will consider how external and internal “information” is encoded, processed, and acted upon in terms of neural circuitry. In the process, I hope to show that thought is a physical reality, not something that is necessarily mystical or unknowable. Brain researchers have learned a lot, especially in recent years, about how brains in general think. One thing is clear: thinking comes from brains. So before we can explain how people think we have to know a little about brains. Don’t worry. We are not going to make you go through a lot of tedious anatomy and physiology. But a couple of things are fundamental. So let’s begin with those. Scientists who study the nervous system (“neuroscientists”) typically approach “biology of thought” issues from the perspective of brain structures and functions that give rise to thought. Some neuroscientists (called cognitive neuroscientists) are more focused on how the brain produces thought in the context of stimuli, goals, and tasks. But few neuroscientists study thought per se, because we don’t have good theories or tools, or even vocabulary, for doing that.
Defining Thought Biologically
Before we can get much further, we must try to define "thought," which confronts us with the vocabulary problem just mentioned. I hate it when an author starts out with a definition. But here I am doing just that. Thinking can mean different things to different people. For example, a minority believe it is fallacious to think the brain thinks. They call this the mereological fallacy (Bennett and Hacker 2003, 480p). The point is that a brain can think only as one part of a whole body. While this view is fashionable among some philosophers, many neuroscientists consider the position a contrived provocation, especially when extended to the point that the firing of neurons cannot constitute "thought." A neuroscientist only wants an operational definition of "thought" and can find it in a petri dish, where a brain slice thinks in the sense that it can process information and deliver an output from electrical stimulation of an input pathway, for example.
Anyway, the mereological argument is not central to the thesis of this book. For convenience, this book assumes that thinking is what brains do, and is most evident when brains generate a conscious state. I consider thinking to mean what the brain does to analyze, process, decide, and remember. Thinking includes both the conscious working of the brain and the workings that go on beneath the radar of consciousness. Webster's dictionary defines thought as "the process of thinking," which is a circular definition. Other definitions include words such as "reasoning," "imagination," "conception," and "consideration" – all of these are abstract nouns. None of these definitions treats "thought" as a real or tangible object. In this book, I shall attempt to present my ideas about thought not in abstract, philosophical terms but as tangible biology. What matters most here is the process of thinking. By the end of this book, I hope to have shown what neuroscientists think this process entails, that is, how thoughts are generated and sustained, and how well thoughts govern not only bodily action but also mentalistic processes such as beliefs, ideas, choices, decisions, and even consciousness. Many people tend to think of mind as "something out there" rather than something going on "in here," in the brain. Most neuroscientists consider that notion to be nonsense. A conventional way to think biologically about thought derives from experiments in which one records the nerve impulse discharge response to excitatory sensory input. The source of the input, such as the features of an object seen by the eyes, is represented by the train of spikes, pulses of electricity. Representation is a key word, one that has implications throughout this book. Different neurons may represent, that is, be selectively responsive to, only one feature of the stimulus, such as its color, contour, or motion, as I discussed earlier in my summary of the work of Hubel and Wiesel. Getting the whole stimulus represented in the brain requires the information in each participating neuron to become functionally bound together. How binding occurs is not known, but it is an active area of widespread interest. So you might say, as many do, that "thought" emerges from the constellation of neurons in particular circuits in the brain. A thought is not possessed by a single nerve impulse nor even by a train of impulses. However, the pattern of impulses in a network of neurons is possessed by the network and is a property of the network. The brain is a collection of intimately interacting networks, and the properties of these networks make up the properties of the system, in this case a thinking system. Alternatively, we can think of the brain as representing information in spatially coherent oscillation patterns that engage large areas of brain. But this view is not mutually exclusive with the one just mentioned. In both cases, information is represented and processed by patterns of nerve impulses flowing through distributed and interacting circuits. Neurons in many brain areas connect with each other to form multiple circuit loops in which many connections are reciprocal. According to the latest thinking of Walter Freeman (2009), this functional anatomy produces brain function in the form of state variables.
The ongoing succession of changing state variables in the brain is produced by dynamical interaction between body, brain, and environmental stimuli (I would include brain-generated thought that acts like "stimuli"). Rather than the brain being a passive receptacle for receipt of information from the world, brain dynamics support purposive action in which the brain directs its sense organs as needed to detect, abstract, interpret, and learn from sensory experience. Such a system can generate goals and intent (I would add even free will – see Chap. 7 on Free Will Debates). Each perception is the outcome of a preceding action and at the same time serves as the condition for a following action. This serves to remind statisticians that brain function is highly governed by Markovian serial dependencies (see later coverage of such dependencies in trains of nerve impulses in Chap. 4). I will argue that "thoughts" are abstractions, represented as patterns of nerve impulses coursing through various paths and networks, that can represent certain physical entities. They certainly have a physical basis. They emanate from the anatomy, biochemistry, and physiology of the brain. Thoughts must have a physical "carrier," that is, some physical representation of the thought. Can we regard "thoughts" as patterns of activity within certain groupings of neurons? Why not? Whether or not this is altogether appealing, this definition may be as close as we can get to a noncircular definition. Actually, the idea of patterns of neural activity needs considerable refinement, which is a major purpose of this book. One fundamental extension is clarification of time and space. Thoughts, i.e., patterns of impulse activity, are distributed spatially and across time. Different parts of the brain generate different patterns of activity, yet these parts are all more or less connected and capable of influencing each other. The timing interactions among these various patterns of activity are, I believe, central to understanding the nature of thought. Scientists typically think of "mind" as something that emerges from brain operations, yet remains "in here." This view applies to each of the three kinds of mind I consider in this book: conscious, subconscious, and non-conscious. I will explain the differences in some detail later. Maybe we should think of mind as the collective processes of brain operation. Admittedly, this is not a very satisfactory definition of mind, but at least it does not limit the idea of thought to conscious mind only. Many brain operations proceed just below the surface of consciousness, and the results of that subconscious thinking even pop in and out of consciousness. If you think thought is intangible, how do you explain away the clear signs that thought comes from biology? The brain origin of mind and its thoughts seems indisputable. If you change the brain, you change the mind. If you damage the brain, you damage the mind. If you shut down the brain, you shut down the mind. Animals think too, especially higher animals, like primates. Even our pets give clear signs of the ability to think (although my dog sometimes acts as if it only has three neurons). Whatever "mind" is, it begins in the womb from a brain serving as a "blank slate." The thoughts the brain generates, and the increasing capacity for generation of new thought, derive from the senses and experience that begin in the womb. As Thomas Aquinas said around 700 years ago, "There is nothing in the intellect that was not first in the senses."
This notion is not totally true, given the brain's amazing ability to generate quite abstract and unique thought. But the brain does not operate in a mentalistic vacuum. It builds on what it has experienced and learned. Brains imagine and create thoughts that have not been directly experienced. In addition to individual experience, most scientists (Gangestad and Simpson 2007, 448p) would say that the human brain has evolved since it first appeared in primordial form perhaps some two million years ago, shaped by natural forces that selected the genes that have been passed on to us. Thus, the human brain of today has inherited neural processes for generating certain kinds of thought (such as fear of snakes, predator avoidance, sexual attraction, aversion to incest, ability to infer the mind of others, desire to forage and hoard, and socialization and cooperation). Most importantly, the human brain has evolved intelligence. The neural processes supporting such thought categories are more or less built into circuitry that generates nerve impulse patterns biased to produce the corresponding kind of thought. Brains operate with propagating pulses of electricity (nerve impulses), modified by chemicals (neurotransmitters) in the junctions between neurons. A few impulses and squirts of neurotransmitter do not constitute much of a thought. They are just impulses and squirts, though the neurotransmitter systems in neuronal junctions are thought to be the repository for long-term memory. When orchestrated through the circuitry of neuronal assemblies, the mundane takes on the life of mind. Explaining how the impulse patterns yield conscious awareness of thought is the real challenge.
First Principles
Physicists think the "Holy Grail" of science is to unify general relativity theory and Quantum Mechanics (QM). But life scientists think that the Grail is to explain the human mind. I suspect that the general public would vote for the latter, because everybody has a personal stake in understanding who we are and why and what we do. Moreover, many people are perturbed by the thought that materialistic explanations for thought conflict with their religious beliefs. I think there is not one mind, but three highly integrated minds (Fig. 2.1). The lowest form of mind is non-conscious and occurs in the spinal cord and brainstem to mediate simple reflexes, visceral control, and the regulation of hormone systems. Non-conscious mind is that which governs our simple body functions, such as regulating heart rate, blood pressure, and spinal reflexes. These operations all occur at the brainstem and spinal cord levels. This mind is clearly a physical phenomenon and has been abundantly explained by science in terms of anatomy, physiology, and biochemistry. No one speaks of this mind as "emerging" from brain function. It IS brain function. Keep this "in mind" as we later ponder the functions of subconscious and conscious minds. The representations in non-conscious mind take the form of patterns of propagated nerve impulses, flowing in designated pathways (i.e., hardwired circuits) that have evolved to provide servo-system regulation of certain basic functions necessary to sustain life. These representations are not accessible to conscious mind. For example, one is not aware of his own blood pressure or state of hormone release.
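A minimal sketch of what servo-system regulation means here: a hard-wired negative-feedback loop that nudges a regulated variable (blood pressure, say) back toward a set point, with no awareness of the process. The set point, gain, and disturbance are made-up numbers for illustration only.

set_point = 100.0      # target value of the regulated variable (arbitrary units)
value = 100.0          # current value, e.g. "blood pressure"
gain = 0.3             # how strongly the reflex corrects each detected error

for step in range(12):
    if step == 3:
        value -= 20.0                      # sudden disturbance (e.g., standing up)
    error = set_point - value              # detected deviation from the set point
    value += gain * error                  # corrective command, proportional to error
    print(f"step {step:2d}: value = {value:6.2f}")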
Fig. 2.1 The brain has “three minds,” each overlapping and interacting with the other. All objective evidence indicates that all these minds arise from anatomical, physiological, and biochemical processes of the brain
Much of what scientists have learned about the brain has come from research on the non-conscious mind. I assert that this information must be fundamentally relevant to whatever processes create conscious mind.
The Brain's Three Minds
Ever since the discovery a little over 100 years ago that neurons exist as distinct units, scientists have been trying to figure out how brains work. Scientists have learned about nerve impulses, the architecture of neural networks, and what happens biochemically and electrically at the junctions ("synapses") between neurons. We can now say with good assurance that thinking involves impulses, neurotransmitters, post-synaptic membrane receptors and biochemical signaling amplifiers, and post-synaptic ionic currents in multiple, parallel and recursive circuits. But it is not enough to say that these things are involved in or mediate thought. Such words are too glib and have little explanatory power. The knowledge accumulated over the last 100 years has, however, made it possible to hope that a true theory of thought is at hand. Whatever theory of thought emerges, it must explain and unify at least five major categories of thought:
Non-conscious thought
Subconscious thought
Conscious thought
Hallucinatory thought
Dream thought
One reason it is so difficult to unravel the mind-brain enigma is that theorists tend to approach the problem from the wrong end. By trying to explain consciousness, theorists immediately get caught up in philosophical or religious, not scientific, issues and become trapped by their premises. Rather than starting by explaining conscious mind, I find it more fruitful to take the opposite approach, beginning with operations at the simplest levels, first non-conscious mind and then subconscious mind, eventually leading to an attempt to explain conscious mind (which I do in the last chapter). At this point, I need to establish some ground rules for how we should use language to describe "awareness." This term carries a lot of anthropomorphic baggage, usually implying that awareness is a conscious operation. We speak, for example, of ants being aware that there is food in the kitchen. Ant brains can do many impressive things, including detecting food in the kitchen, but they are not "aware" of such a fact. When describing non-conscious and subconscious operations, we are safer saying that the brain "detects" stimuli and situations. To say that someone is "consciously aware" is redundant. If one must refer to some subconscious detection, it is more correct to say that the subconscious detects or "is informed" of certain events or situations. To illustrate the point, the non-conscious processing of the solitary tract nucleus enables those neurons to detect sensations from the vagus nerve with which it connects. The nucleus should not be described as "aware" of vagus nerve input. Likewise, the deep nucleus of the cerebellum detects influence from the cerebellar cortex, but cannot be "aware" of it. A higher level of detection can be illustrated with fear of snakes, for example. The amygdala generates a sense of fear upon being notified that snakes are near. Even animals can be afraid of snakes; in that sense they detect the snake stimuli. But we have to ask, "Are they aware that they have detected a snake?" That is, do they have a sense of self, and do they know that they know about snakes and why they are to be avoided? Level of awareness no doubt depends on which species of animal is involved. A human, for example, surely has a different level of awareness about snakes than does a chicken. Let us now consider subconscious mind. This is the mind with buried memories, unrecognized desires, compulsions, and assorted emotions. This mind operates when we sleep and operates without conscious recognition throughout our wakefulness. This mind, like non-conscious mind, can be explained by anatomy, physiology, and biochemistry, though such understanding is far from complete. Again, no one speaks of subconscious mind as emerging from brain function. It too IS brain function. Why then should conscious mind be any different? Yet most theorists seem to think that conscious mind is fundamentally different. Conscious mind is said to "emerge" from brain function but not be equivalent to it. The problem is that nobody knows what is really meant by saying that conscious mind is an emergent property of brain. Most brain scientists do, however, accept that conscious mind comes from the brain. But some seem to have difficulty in accepting that this mind also IS brain function.
Conscious mind cannot describe its processes, but it does know that those processes are going on and that they have consequences that can be altered – by conscious mind itself. The most fundamental aspect of conscious mind is the “sense of self.” That is, conscious mind knows it exists, residing separate from subconscious mind and able to be aware of at least some of what it knows and thinks. Such awareness extends across time, from past to future, integrating the present. Some theorists like to emphasize its autobiographical nature, but that just refers to memory of things that happened to oneself in the past. I think other primates, even dogs and cats, do that. Animals like these also seem to have a sense of self, but certainly not in the same way humans do. Most do not recognize themselves in a mirror. Throughout this book I will operate from the premise that conscious mind IS brain function. I will lobby my colleagues to think of it that way in the hopes that experiments will be developed that can explain just what brain functions produce conscious mind and how those functions create consciousness. For now, let me assert that a whole mind consists of three interacting “minds,” each of which IS brain function – albeit different manifestations of brain function.
Brains as Liquid-State Electronic Devices
The processing unit in the brain is the neuron, and in a typical case, the neuron's cell membrane branches out into multiple projections like limbs on a tree. These branches are typically polarized electrically in the sense that some of the branches can generate ionic current that flows into the other branches of the neuron. Details can be found in any general biology or physiology book. What needs emphasis here is that the brain is a liquid-state electronic device. Unlike solid-state computer chips, where electric current is carried by electrons flowing through metal conductors, the brain's currents are in the form of charged atoms (ions) flowing through water. The important ions in brains (sodium, potassium, and calcium) are atoms that have lost one or more electrons, giving them a positive charge. The "Atoms of Mind" are, of course, all the atoms that make up neurons. But the atoms most directly responsible for thought are the ionized forms of sodium and potassium. When atoms like sodium and potassium dissolve in water, they give up electrons. Where do the released electrons go? Few people ask that question, but it seems clear that they do not flow from neuron to neuron like electrons in copper wire. Most likely, electrons in tissue are not free to flow, because they become captured by organic molecules, especially by intracellular proteins (which account for much of the net negative charge inside of resting cells). Calcium ions need to be mentioned, because they help to promote neurotransmitter release and certain "second messenger" systems inside of neurons. But the thought content of mind, as it operates in real time, is contained in patterns of nerve impulses, and these are created by the flow of sodium and potassium ions. Some of the greatest research of all time, in my opinion, is the Nobel Prize-winning work of Alan Hodgkin and Andrew Huxley. They did not stumble on the discovery of how nerve impulses were created; rather, they systematically set out to identify the mechanisms.
Based on the work of Adrian (see sidebar in Chap. 4) and others, Hodgkin and Huxley developed a complex plan to determine the ionic carriers that created nerve impulses. Based on electronic instrumentation innovations, which were pretty clever for the time, they impaled giant squid axons with electrodes arranged to detect the various currents that appeared during an impulse. Salts were suspected for a variety of reasons, not the least of which was that they ionize when placed in water and thus could constitute an electric current. What the two demonstrated was that the initial phase of an impulse was generated by a flow of sodium ions into a neuron, followed by a termination phase involving the flow of potassium ions out of the neuron. They also explained why this happens, which you can find in a textbook. They developed equations to quantify this flow. These findings might seem counter-intuitive, but they are abundantly documented. The important point for our purposes here is that a nerve impulse is created by the flow of sodium and potassium ions into and out of an activated neuron. As these ions move through tissue fluid, they are in fact carriers of electric current and generators of voltage from the ionic currents that flow through the resistance of tissue and its fluids. Ohm's law (voltage = current × resistance) applies. Under typical electrical recording procedures used experimentally, it is the voltage aspect of impulses that is recorded. This flow of ions creates voltage fields around a neuron, and the voltage associated with impulses destabilizes adjacent neuronal membrane. This serves to trigger changes in the permeability of the neuron's membrane, and when this change occurs in certain terminals of the neuron, chemicals can be released into the gaps (synapses) between one cell and another, serving either to stimulate or inhibit the target neuron. Positively charged calcium ions are important to the process of releasing transmitter and also to the biochemical reactions in synaptic targets. This need not concern us here. Neurons are organized into distinct circuits, where ionic currents flow in specific spatial patterns. Some circuits in the brain are markedly malleable, selectively changing in response to the kinds of input they receive. Also, a given neuron is not always exclusively tied to a given circuit. It may be recruited into multiple circuits, again depending on inputs and ongoing activity in other circuits with which it is in contact. Thus, thinking is an electro-chemical process in the central processing units (neurons) of the brain and their associated circuitry. Neurons are to brain as transistors are to integrated circuits. Beyond this point, however, the comparison of neurons to transistors becomes fallacious.
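As a rough sketch of the kind of equations Hodgkin and Huxley produced, the following uses their published squid-axon parameters with simple forward-Euler integration; the injected current, step size, and run length are arbitrary choices made just to show a spike arising from sodium inflow and ending with potassium outflow.

import math

# Hodgkin-Huxley squid-axon model (standard published parameters).
C = 1.0                                   # membrane capacitance, uF/cm^2
g_na, g_k, g_l = 120.0, 36.0, 0.3         # maximal conductances, mS/cm^2
e_na, e_k, e_l = 50.0, -77.0, -54.4       # reversal potentials, mV

def a_m(v): return 0.1 * (v + 40.0) / (1.0 - math.exp(-(v + 40.0) / 10.0))
def b_m(v): return 4.0 * math.exp(-(v + 65.0) / 18.0)
def a_h(v): return 0.07 * math.exp(-(v + 65.0) / 20.0)
def b_h(v): return 1.0 / (1.0 + math.exp(-(v + 35.0) / 10.0))
def a_n(v): return 0.01 * (v + 55.0) / (1.0 - math.exp(-(v + 55.0) / 10.0))
def b_n(v): return 0.125 * math.exp(-(v + 65.0) / 80.0)

dt = 0.01                                 # time step, ms
v, m, h, n = -65.0, 0.05, 0.60, 0.32      # approximate resting state
i_inj = 10.0                              # injected current, uA/cm^2 (assumed)
spikes = 0
above = False

for step in range(int(50.0 / dt)):        # simulate 50 ms
    i_na = g_na * m**3 * h * (v - e_na)   # inward sodium current drives the upstroke
    i_k = g_k * n**4 * (v - e_k)          # outward potassium current terminates the spike
    i_l = g_l * (v - e_l)                 # small "leak" current
    v += dt * (i_inj - i_na - i_k - i_l) / C
    m += dt * (a_m(v) * (1.0 - m) - b_m(v) * m)
    h += dt * (a_h(v) * (1.0 - h) - b_h(v) * h)
    n += dt * (a_n(v) * (1.0 - n) - b_n(v) * n)
    if v > 0.0 and not above:             # count upward crossings of 0 mV as spikes
        spikes += 1
    above = v > 0.0

print(f"membrane potential after 50 ms: {v:.1f} mV, spikes fired: {spikes}")

Setting the injected current to zero leaves the voltage sitting near rest, which is the model's way of saying that without sufficient depolarizing current there is no impulse.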
Brains vs. Computers
I once team-taught a graduate electrical engineering course with a group of engineers and a mathematician who worked with electronic "neural networks." They had hoped that my expertise on how the brain works would help teach their students how to design better computers.
It did not take long for all of us to realize this was a pretty naïve idea. Although "neural network" technology can emulate some of what brains do, such as rudimentary learning, there are just far too many differences between brains and computers. The differences are qualitative, not just a matter of speed and memory capacity. Nowhere is the difference more clear than in the ways computers and brains "think." As stated, a computer thinks with a steady stream of electrons flowing through conductive wires and semi-conductor material. Computer circuits are "hard-wired," that is, built into the system and not reconfigurable unless there is preplanning with programmed instructions. How, when, and where information flows is predetermined by the hard-wired circuitry or by programmed instructions. This is only partially the case in the brain. Some circuits there are also hard-wired by in-born anatomical connections among neurons, but many are reconfigurable by experience and can even self-organize. Most significantly, brains can program themselves through learning. Even more astonishing is that this learning by brains can actually change some of their structure and connecting pathways, creating new capabilities. A brain thinks with pulses of ionic flow ("spikes") that are separated from each other by variable intervals that are electrically silent. "Information" content of a spike train is represented by when spikes start and stop, how many spikes there are per unit of time, and the sequence of intervals, or equivalently the sequential appearance of spikes in successive adjacent time periods. How, when, and where spike trains flow may or may not be predetermined by the brain's hard-wired circuitry. As with computer networks, information may be gated to flow or not flow in various networks and their sub-circuits. Both kinds of systems can learn to develop preferential pathways for information flow. Learning in computers results only when nodes in a circuit are programmed with certain weighting factors for certain kinds of input. Brain circuit pathways are built in by genetics, but some circuits are constructed directly by input, without third-party mediation. A most basic difference is that neurons are not digital. They are analog devices that generate their own electrical current that flows into, through, and outside their cell membranes through micropores (ion gates) that allow flow of the charged atoms that constitute the current. These nerve impulses are quasi-digital in that they occur as isolated pulses. But the forces that generate these pulses are analog and non-linear. Impulses are triggered by slow and graded changes in membrane current in and near the cell body of neurons. These can summate algebraically, from multiple inputs to a given neuron. If these currents are depolarizing, like shorting a battery, and of sufficient magnitude, the neuronal membrane becomes unstable and responds by blasting off impulses. Slow, graded changes in membrane current occur prominently in the synaptic junctions. Electrical inputs into a given synapse may be depolarizing or hyperpolarizing, and the algebraic summation of opposing synaptic currents should be regarded as a fundamental kind of cell-level thinking. The net summed current, if depolarizing, can trigger output, whereas hyperpolarizing current blocks output, or in other words is inhibitory.
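The algebraic summation described above can be sketched with a leaky integrate-and-fire neuron, a deliberately oversimplified stand-in rather than anything proposed in this book: excitatory and inhibitory synaptic inputs add to and subtract from the membrane voltage, a leak pulls it back toward rest, and only when the running sum crosses a threshold does the cell emit a quasi-digital spike. All of the constants are illustrative.

import random

random.seed(1)

v_rest, v_thresh, v_reset = -65.0, -50.0, -65.0   # mV, illustrative values
v = v_rest
leak = 0.1            # fraction of the deviation from rest that decays each step
spike_times = []

for t in range(200):                       # 200 time steps of 1 ms each
    excite = random.randint(0, 5)          # excitatory inputs this step, each +1 mV
    inhibit = random.randint(0, 2)         # inhibitory inputs this step, each -1 mV
    v += 1.0 * excite - 1.0 * inhibit      # algebraic summation of opposing currents
    v -= leak * (v - v_rest)               # passive leak pulls v back toward rest
    if v >= v_thresh:                      # net depolarization reached threshold
        spike_times.append(t)              # fire an all-or-none impulse
        v = v_reset                        # membrane resets after the spike

print("spike times (ms):", spike_times)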
The Currency of Thought
If there is no impulse flow, there is no on-going thought. For example, one can inject an anesthetic into a carotid artery and stop all impulse traffic – and the corresponding thoughts – in the area of cerebral cortex supplied by that artery. There is latent thought, however. In the above example, those anesthetized cortical circuits still have a capacity for thought stored in their synapses and connection pathways. However, thought itself is not expressed. The relationship of synaptic anatomy and biochemistry to human mind, compared with nerve impulses, can be likened to the relationship of potential energy to kinetic energy. The one represents the capacity for mind while the other reflects mind in action. Thoughts, for example, can reside in latent form in the microanatomy of neural circuits and their associated synaptic biochemistry, or they may be expressed and on-going in the form of the circuit impulse patterns (CIPs). When a person is awake, the CIPs constitute an actively deployed, "on-line" mind that interacts with the world. This is also the mind that programs what goes into storage (memory) for later deployment as "up-dated" on-line mind. Once triggered into being, impulses spread throughout a neuron like a burning fuse and extend into all the cell terminals, which provides a way for one neuron to communicate with many others at roughly the same time. Actually, because a given neuron has numerous terminals, its impulses may reach hundreds of target neurons. Brains have two basic kinds of cells: neurons and cell types that support them, called "glia." As far as we know, the primary cause of brain function and mind comes from neurons. Neurons are organized into circuits, formed either under genetic control or under the influence of environmental stimuli and learning. Some of these circuits are in a constant state of flux, turning off and on, becoming more or less active, and changing their constituent neurons, impulse firing patterns, and routing pathways. A given neuron can be recruited into more than one circuit, as shown for the center neuron in Fig. 2.2, which participates in all four circuits shown. Activity may be going on in parallel in all the circuits shown, and the circuits may interact with each other via feedback. The constellation of these multiple processes underlies what I call thinking. Some circuits are hard-wired and perform a predictable behavior when activated. An obvious example is the knee-jerk reflex. Most people have had a physician tap on a knee tendon to observe the magnitude of the knee jerk. Other neuronal circuits are malleable and can be constructed on the fly, so to speak, depending on the needs of the moment. Such circuits also are a repository for learning and memory. Scientists have traditionally studied neurons in their live state by using small electrodes to record voltage changes, both the graded synaptic changes and the pulsatile nerve impulses. If a relatively large electrode, about a third the diameter of a dime, for example, is placed on the scalp, it will sense the voltage changes of thousands of neurons in the region of brain closest to the scalp. This is what we call the electroencephalogram or EEG. The EEG thus can monitor thinking states of large populations of neurons in the outer mantle of brain, the neocortex, which is the part of the brain that provides most of the electrical signal at the scalp, does the most sophisticated thinking, and gives rise to conscious awareness.
Fig. 2.2 Simplified neural network. This figure is central to all that follows in this book. The key ideas are that neurons are organized into circuits in which information is propagated in the form of temporal patterns of impulses. At any given instant, the impulse coding may be combinatorial, that is, contained as the impulse patterns in all the neurons of the circuit at a given moment. Neurons have cell bodies (circles in drawing) and membranous processes that project to other neurons. Note that these processes may branch so that a given neuron can act on multiple targets in multiple circuits. Impulses (direction of impulse flow shown by line arrows) propagate outward from cell bodies to act on target neurons. Such circuitry illustrates parallel distributed processing with feedback. The contact-point gaps, known as synapses, are regions where neurochemical transmitters are released to bind with receptor molecules on target cell bodies and their small membranous processes known as spines (not shown). Such binding facilitates or inhibits information flow, depending on the nature of the transmitter and its molecular receptors
If the electrode is very small, like a micro version of the tip of a sharpened pencil, and is thrust directly into the brain, it will detect the net electrical activity of a dozen or more neurons in the immediate vicinity. Scientists call this "multiple-unit activity," because the activity comes from multiple neurons. If you insert into the brain an electrode that is less than the diameter of a human hair, the electrode may detect only the voltage of its nearest neighbor, a single neuron. If you impale a single neuron with such a microelectrode, the electrode will detect not only the impulses from that neuron but also reveal the slow, graded excitatory and inhibitory modulations of the voltage across that neuron's cell membrane. If, in addition, you use a glass capillary microelectrode and suck a small patch of membrane into the tip, you can even detect ionic currents flowing through individual ion channels as they open and close in response to input to the neuron. These micro-methods take us further away from understanding the larger matter of thinking. In that sense, such methods teach us more and more about less and less. This is important to emphasize, because nerve impulses are the only things that are propagated throughout circuitry. Neurotransmitters (see below) operate in the junctions between adjacent neurons, but they do not propagate their signal over more than a few microns.
Post-synaptic receptors and biochemical amplifier systems are also confined to synapses. Post-synaptic voltage changes do propagate for a few microns of space, but do not move the many millimeters, even meters, that can be accomplished by impulses as they self-generate along the axons of neurons. The idea of nerve impulse patterns as information representation is key to developing an expanded understanding of mind. Detecting and quantifying impulses in a single neuron is not sufficient. Thought is represented by the spike trains from all the neurons in a given circuit at roughly the same time. Thus, whatever code neurons use for thinking, it must be some kind of combinatorial code (more about this later in Chap. 4; for now, see Fig. 2.2). Such nerve impulse patterns are the currency of thought, presumably at all levels of mind. Is it not conceivable that both subconscious and conscious minds include impulse pattern representations that extend beyond the fixed circuitry of spinal cord and brainstem to include dynamic assemblies of neurons whose functional connections come and go in the course of neural processing? No doubt, the richness of combinatorial coding in dynamic assemblies would be greatest in conscious mind. Subconscious mind can also include certain servo-system operations, such as regulating emotions and their influence on brainstem neuroendocrine controls and on the neurons regulating such visceral functions as heart rate, blood pressure, and digestive functions. Subconscious operations also include a wide range of movements that have become so well-learned by neural circuitry that the controls are automatic and can be performed in zombie-like fashion without consciousness. But in all subconscious operations, it seems reasonable to assume that the representation of information, and the operations on it that generate responses or actions, occur in the form of combinatorially coded circuit impulse patterns. Impulses can be triggered by stimulation or certain chemicals. A neuron at rest, like all cells, is an electrical battery, polarized, with the inside of the cell electrically negative relative to the outside. The "battery" of neurons, however, can be discharged (depolarized), which is manifest as a brief reversal of the voltage on the order of about 1 ms. Neurons propagate their spike discharges of electricity through circuits such as those shown in Fig. 2.2. As impulses reach the junctions between neurons (synapses), they generate voltage fields in the synapses, causing chemicals (neurotransmitters) to be released to modify information flow. The circuits are embedded in the extracellular voltage fields that they generate, although many regions of the circuitry may be electrically insulated by surrounding glia cells. The layout of a given circuit may change as it receives particular input from other circuits in the brain and spinal cord: some neurons may drop out of the circuit, while others may be recruited. Many neurons are shared by multiple circuits. How do all these impulses, voltage fields, chemical releases into synapses, and dynamically changing circuitry give rise to thinking? Well, they are the process of thinking. I like to say that thinking is equivalent to the CIPs. Why CIPs instead of what is happening in the synapses? First, thinking is a dynamic process and its "messages" are carried and distributed in real time through CIPs. What happens in the synapses becomes manifest in the CIPs.
The chemical and microanatomical changes that occur in synapses represent the memory storage and processing of thought. The expression of thought is most evident in CIPs.
In any given neuron, the impulse patterns take the form of rate and rate change in firing, onset and offset of firing, and sequential order of intervals. I think of CIPs as instruction sets for performing such brain operations as detection, integration, decision-making, and commands. The ideas that I develop in this book are abundantly supported by the research of others and bolstered by my own experiments, both at the cell level and at the "mind" level. Research in the last several years is exploding with evidence for the above view. For example, I realized in the early 1980s that nerve impulses do not occur randomly and that they contain a code, not only in the rate of firing, but also in their interval patterns. At that time a few other scientists had arrived at the same conclusion. More recently, I realized that oscillation and synchrony of the more global electrical voltage fields were important. These new insights are not mine alone. In the last 10 years, numerous researchers have been providing experimental data to support both ideas. Now the time is right to make these thoughts explicit and simply explain how they are supported by experiment. These ideas of CIPs and oscillation of circuit activity (see Fig. 2.3) are central to much of what follows in this book and to my idea about consciousness that is developed in the last chapter.
Fig. 2.3 Illustration of the idea of CIPs. In this small example circuit, each neuron generates a certain temporal pattern of spikes that affects what happens in the target neuron. For example, the inhibitory neuron #2 shuts down activity in #3, which nonetheless may reactivate when the inhibition wears off or when excitation comes from another circuit with which it interfaces. Collectively, all the neurons in the circuit constitute a CIP for a time epoch. When embedded within a network of interfacing circuits, such a CIP may become part of a more global set of CIPs. Such CIPs are regarded as a representation of specific mental states. The meaning of this representation may lie in the combination of spikes in all the circuit members, contained in the form of some kind of combinatorial code
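To make the combinatorial-code idea in Fig. 2.3 a bit more concrete, the toy sketch below (hypothetical numbers chosen only for illustration) reduces each neuron’s spike train in one time epoch to a string of 1s and 0s, one digit per time bin, and treats the CIP as the ordered combination of those strings across all neurons in the circuit. Even three neurons and ten bins allow over a billion distinct patterns, which is the sense in which a combinatorial code is so rich.

import numpy as np

rng = np.random.default_rng(0)

# Toy circuit: 3 neurons, one epoch of 10 time bins (all numbers arbitrary).
n_neurons, n_bins = 3, 10
spikes = rng.random((n_neurons, n_bins)) < 0.3   # True wherever a spike occurred

# The "CIP" for this epoch: the joint, ordered spike pattern across all circuit members.
cip = tuple(tuple(int(s) for s in row) for row in spikes)
for i, train in enumerate(cip, start=1):
    print(f"neuron {i}: {train}")

# The combinatorial point: the number of possible joint patterns grows explosively.
print("possible distinct patterns:", 2 ** (n_neurons * n_bins))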
Why and how should “information” be captured, processed, and propagated as a combinatorial feature of CIPs? First, a real nervous-system circuit does not operate in isolation. Many neurons in a given circuit have reciprocal connections with neurons in other circuits. Thus, the spike train of a given neuron embodies within the time distribution of its impulses the influence of other neurons in its parent circuit and the inputs from other circuits. The CIPs most relevant to conscious thinking come from neurons in the cerebral cortex, and the circuits of these neurons are closely packed in small columns oriented more or less perpendicular to the brain surface. With such close packing, the electrical currents resulting from impulses in each column’s circuitry readily summate as a collective consequence of the impulse activity in all members of the circuit. Often, this collective combinatorial effect drives frequency-specific oscillations of the whole circuit and neighboring circuits. These ideas underlie a recurrent theme in this book that will culminate in the last chapter’s discussion of the nature of consciousness.

Neurons come with a wide variety of firing patterns. Even in a single brain area, intrinsic properties vary widely. In the hippocampus, for example, one cell type produces a short train of spikes that habituates when stimulated with a short depolarizing pulse, but only a single spike in response to a superthreshold pulse of current. Another type fires bursts of pulses in response to long and strong current pulses, but only a single spike in response to weak stimulation. Another type also fires bursts in response to weak but long stimulus pulses. Another type is similar, but its bursts are very stereotyped. Yet another type fires rhythmic bursts of spikes spontaneously, without stimulation (Izhikevich 2007).

CIP representations are undoubtedly the stuff of non-conscious and subconscious minds. But what about conscious mind? Numerous tomes over the centuries have attempted to explain conscious mind from arcane and esoteric perspectives. Herein I propose another way of thinking about consciousness that may prove helpful. These patterns are the primary representation of the information, the processing, and the instruction sets that we can collectively call non-conscious or subconscious mind. This mind is strictly physical, not too unlike a computer chip except in the nature of the current carriers. So what then is conscious mind? Is it generated as a combinatorial code of electrochemical CIP processes operating in multiple, dynamically changing circuit patterns? Probably. But nobody knows how this mind has an awareness process that is so different from subconscious mind.

Brain circuits are arranged not only in series but also in parallel. Multiple operations can go on simultaneously in multiple parallel circuits. This kind of processing is especially prominent in the cerebral cortex, the outer mantle of cells that surrounds the rest of the brain. The various regions of cortex are highly interconnected with one another. Many of these areas are mapped for both sensations and motor output. Moreover, the mapped regions are reciprocally connected to each other, so that input into one region can be fed back into the region that supplied the input.
Fig. 2.4 When the brain receives stimuli, such as images and sound, it extracts key features and distributes nerve impulse representations in parallel to different parts of brain that specialize in processing the respective types of information. In this case, sensory information about name and face are “deconstructed” and contained as CIP representations in different parts of cortex (especially speech centers and visual cortex). These areas exchange information with each other, as well as with other parts of brain (not indicated here) that might participate in supporting roles involving movement, emotion, memory, etc. When that information is consciously recalled, CIP representations are again activated simultaneously to reconstruct the original stimuli
The upshot of such arrangements is that any given mapped region of cortex can get near-instantaneous feedback on the effect of its operations. Let me illustrate what happens to sensory input. Suppose you see a picture of a person and at the same time hear someone say the name of the person in the picture. That information registers as neural CIPs widely distributed in the respective parts of the cerebral cortex that are hard-wired for sound and vision. Figure 2.4 shows that during recall of such information, information about my name, for example, is retrieved from the speech centers, as well as from multiple other areas in the cerebral cortex (in the illustration, I only show a few arrows in order to keep the diagram simple). Likewise, visual information of my face is resurrected from multiple areas of the visual cortex. In short, sensory input is deconstructed, distributed and stored widely, and retrieved in a way that binds it all together to reconstruct the original stimuli. Such deconstruction and re-construction of information no doubt occurs at all levels of mind. But, of course, what intrigues us most is what happens when brain becomes aware of the results of the re-construction, as in consciousness.
Brain Creation of Consciousness

These deconstruction/reconstruction processes can occur subconsciously or consciously. How the brain creates its conscious mind is not entirely understood. “Consciousness” is often equated with “mind,” and for centuries philosophers and scientists have grappled with what has been called the mind-brain problem. In the nineteenth century, people thought of mind as a “ghost in the machine,” and perhaps most people regard it that way today. That is, people accept that mind seems to come from brain, but mind has a ghost-like quality and may not seem to be a physical entity. Mind seems inextricably linked with vague notions of spirit or soul. By the twentieth century, science began to show that conscious mind might have a material basis. In the twenty-first century, science may be able to explain that material basis. My stance is that by studying CIPs and oscillatory circuitry, we will come to see that mind is not a ghost, but is matter.

In this regard, what we know about non-conscious mind is very well established. Much of the non-conscious mind emanates from the brainstem and its peripheral connections (Klemm and Vertes 1990). If we accept evolutionary theory, this knowledge is surely relevant to explaining subconscious mind and conscious mind. Non-conscious mind governs our simple body functions, such as regulating heart rate, blood pressure, and spinal reflexes. These operations are all carried out at the brainstem and spinal cord levels. This mind is clearly a physical phenomenon and has been abundantly explained by science in terms of anatomy, physiology, and biochemistry. No one speaks of this mind as “emerging” from brain function.

Next we come to subconscious mind. This is the mind with buried memories, unrecognized desires, compulsions, assorted emotions, and even a great deal of subconscious decision making. This mind operates when we sleep and operates without conscious recognition throughout our wakefulness. This mind, like non-conscious mind, can be explained by anatomy, physiology, and biochemistry. Again, no one speaks of subconscious mind as emerging from brain function. It too IS brain function.

For now, let me assert that a whole mind consists of three interacting “minds,” each of which IS brain function, as is whole mind. I contend that the same principles apply to the brain’s creation of conscious mind.
Hallucinatory Consciousness

Conscious thought is sometimes hallucinatory. Hallucination is unreal thought, as when we imagine hearing voices or seeing sights that are not there. Though erroneous, such thought is nonetheless consciously realized. People who hallucinate are consciously aware of such thoughts but may not be aware of their unreality.
Is there a subconscious counterpart? I don’t think anybody knows. What we do know is that hallucinations are characteristic of insanity, particularly the hearing of voices and seeing of non-existent images that occur in schizophrenia. One has to be schizophrenic to know what these conscious experiences are like, but we can surmise their imaginary nature from self-reports by schizophrenics. Science cannot explain schizophrenia. Only a few clues are provided by the silent self-talk and imagined scenes that we all experience. Normal people hallucinate when they dream. Often, the dreamer knows at the time of the dream that the dream is just that, a dream and not real. So dreaming, and perhaps schizophrenia, are the brain’s way of staying busy inventing events and story lines. In Chap. 8, I present a new theory that I think explains dreaming. Another important shared feature of normal people and schizophrenics is that they both hear voices. Of course, the voices heard by normal people are usually their own self-talk, whereas schizophrenics hear voices other than their own. Schizophrenic hallucinations are especially problematic because the patient believes the alien hallucinations are real, and they may cause the person to engage in destructive behaviors.

Back in the 1970s, Princeton psychologist Julian Jaynes caused quite a stir with his book that proposed that the human brain, as it evolved the capacity for consciousness, first began with hallucinations (Jaynes 1972). Imaginary sounds and sights began to be perceived in consciousness, and later consciousness evolved to the point where hallucinations could be seen to be unreal. Proof for such conjecture is not possible, and I don’t think his arguments are compelling. He even went so far as to claim that all religions began from founding prophets whose claims of hearing God or angels were hallucinations. That could be the case, but it does not support the notion that everybody in the time of the prophets hallucinated. Today, people would say that hearing God speak to them is crazy. One wonders why this was less suspect in the days of the prophets.

Jaynes’ notion has several problems. One is the unlikely possibility that in the short span of a couple thousand years of recent history, humans switched from schizophrenic-like to conscious beings. Worse yet for Jaynes’ argument is the fact that billions of today’s evolved humans who do not hallucinate still hold religious beliefs of one sort or another. Mentally normal people still believe at least some of what their prophets may have hallucinated about. This line of thought could lead us elsewhere into the topic of the biology of beliefs, religious and otherwise, that arise as a complex consequence of experience, memory, and reason. Books on the biology of belief exist (Lipton 2005; Shermer 2000), though the understanding is quite incomplete and beyond the scope of this book.

According to Jaynes, schizophrenia is the prototype of normal human mental function, and it remains as a vestige in modern humans. He claims that in the first human cultures, no one was considered insane because everyone was insane. While this idea seems bizarre, it does seem likely that one function of normal consciousness is to prevent and correct hallucinatory tendencies that may be inherent in primitive brains.
As human brains evolved to become bigger, with more neocortex, conscious thinking became effective and powerful, and more capable of constraining and teaching subconscious operations. Jaynes postulates that hallucinations arise in the right hemisphere and in normal humans are suppressed by the dominance of the left hemisphere (and vice versa in left-handed people). He cites a few EEG studies that show a difference of electrical activity in the two hemispheres, but there are few modern studies using sophisticated quantitative EEG that address this question. Of special interest is time-locked activity (coherence) among various regions. Since schizophrenics hear voices, hallucinate, and have disordered logic, this suggests that various parts of brain are not coordinating well. Schizophrenic patients do have abnormal EEG coherence in both resting and stimulus conditions, suggesting more diffuse, undifferentiated functional organization within hemispheres (Wada et al. 1998).

I think that consciousness, as a state of mind, is not what is at issue here. People who hallucinate, whether because of a brain abnormality such as schizophrenia or because they are having normal dreams, are still consciously aware of their hallucinations. We should also bear in mind that mentally normal people can be consciously aware of hearing voices, particularly self-talk chatter, and music in their “mind’s ear.” The line separating normalcy and insanity may be finer than we like to think.
Dream Consciousness

Dream thought falls into a similar category. Dreams may be total hallucinations or grounded in reality. Common experience teaches that dreams are a special form of consciousness. Though we are behaviorally asleep, the dream content is a conscious experience, though we may not remember it after awakening. Such forgetting is a memory consolidation problem, not a consciousness issue. Many people have dreams where they not only are aware of the dream experience but are also aware that the events are not real but part of a dream. Dreams have to be a special form of consciousness, maybe not too different from ordinary consciousness.

Let us consider animal dreaming. Anybody who has ever watched a sleeping dog bark and paddle its feet can have little doubt that it is chasing a critter in its dreams. Sleeping dogs will even sometimes twitch their noses, suggesting that they also have olfactory hallucinations. All higher mammals, and to a lesser extent birds and higher reptiles, show multiple sleep episodes in which bodily signs are identical to those of human dreaming: an EEG of low-voltage, high-frequency activity, rapid eye movements, irregular heart and respiratory rates, and spastic twitches of muscle.

Are higher animals thinking consciously when they are awake? No one can know (except the animals), but there are many books that argue both sides of the possibility.
Given that animals have less developed brains than humans, according to Jaynes’ view we might think that the waking state of higher animals would be perpetual hallucination. I doubt it, but don’t know how to disprove it. Their dreaming indicates that their brains have the capacity for non-linguistic hallucination, but that is not proof that hallucination is the default mode of operation in the awake state. Since higher animals, especially performance-trained animals, can exhibit a great deal of adaptive, purposive awake behavior, this suggests that they are not hallucinating.

Physiologically, we know that dreaming is the hallmark of advanced animal evolution. It is fully developed only in mammals, whose brains have a well-developed neocortex capable of “higher thought.” Paradoxically, babies spend more of their sleep time in dreams than do adults, yet their neocortex and fiber-tract connections are poorly developed compared to adults. I have an explanation for that in Chap. 8.

Why do we dream? Books have been written on the subject, and we still don’t know. We do know that dreaming is a necessity. Many animal and human experiments show that the brain does not function normally if it is not allowed to enter the physiological state that enables dreaming. A whole array of reasons for dreaming has been suggested, and they are not necessarily mutually exclusive: (1) to ensure psychic stability, (2) to perform off-line memory consolidation of events of the preceding day, or (3) to restore the balance of neurotransmitters that has been disrupted by ordinary non-dream sleep. Dreaming may also just be an inevitable side effect of re-organization of subconscious mental processes. Maybe, like our dogs and cats, we dream because of an inevitable physiological drive state that is just the brain’s way of entertaining itself. In any case, dreams can be very good indicators of what is on our minds, though the pronounced symbolism in dreams may require a good deal of introspection and analysis to interpret. Why are dreams so often symbolic rather than literal, though both types occur? Nobody knows, but maybe symbolism is a result of subconscious thinking trying to become manifest in the special consciousness of the dream state.
Human Mind Is in the Brain

As far as contemporary science can determine, there is no evidence that minds are floating around in space. Each mind is confined to and not separable from its brain. The brain is the vessel that not only contains mind but also generates it. Of course, the products of mind, its ideas, feelings, and thoughts, can be shared with the world outside a given brain through speech, writing, and observable deeds. In that way, many minds can contribute to the evolving nature of any given single mind as that mind experiences and learns from worldly encounters.

What troubles many scholars is the question of how consciousness can affect brain. However, that is only a problem if you think of conscious mind as some sort of out-of-body “ghost in the machine.” The problem goes away when you realize that conscious mind IS matter. That is, mind affects matter, because mind is itself matter, expressed in processes to be fully elaborated in this book.
Once a given thought, for example, is initiated in the material processes of brain, those same processes can change the brain so it can regenerate the thought and integrate it with other thoughts, past, present, and future.

One question cannot be answered by today’s science. Is there such a thing as an individual soul, embedded or otherwise entangled with mind? Most people in the world believe there is, and this is the basis for the world’s religions. The soul, by most people’s definition, is not a material thing, so it makes no sense to try to explain “soul” via what science has revealed about the material nature of mind. This book has a focus on explaining the material basis for mind, as scientists understand it today. But recall the earlier comments about known material realities, such as dark energy and dark matter, that scientists do not understand.

If we accept the brain’s central role in all thought, the next issue is: how do brains make decisions? At the cell level, decisions are made at synapses, the junctions between neurons where chemical communication occurs.
Circuits and Networks

Less obvious is the answer to how brains make decisions at the circuit level. First, it is helpful to review the basic kinds of circuits in brains. There are only four basic kinds (Fig. 2.5).
[Fig. 2.5 diagram: four simple circuits, labeled Divergent, Convergent, Parallel, and Reverberating]
Fig. 2.5 Four basic types of brain circuits, shown in simplest form. Open circles represent the cell body of a neuron. Lines indicate their fibers that propagate nerve impulses, and terminal branches indicate the synaptic junction with a target neuron
These circuits interface with each other in various ways. They often operate in parallel, sharing with each other the “information” going on in each respective circuit. One way to illustrate inter-circuit interactions is with a Venn diagram, in which the overlap of different circuits represents those features of processing that are shared among all the circuits. Each circuit has an output of some type, either to glands and muscles or to other circuits. Each circuit’s “decisions” are thus influenced by other circuits and likewise influence the decision-making of those circuits with which it has an interface.

Note that the circuit diagrams above suggest how single neurons connect with each other. Most action in the brain is based on large populations of neurons. Thus, we should extend our view of these elementary circuit designs as applying to many neurons in a network, in which the nodes of the network can be laid out in such patterns of divergence, convergence, etc.

Many factors further complicate our understanding of neuronal networks. First, the influence of one node in the network on another is not a simple “on/off” or “yes/no” signal. Rather, what is transmitted from one node to another is a temporal pattern of nerve impulses. Moreover, not all the fibers in the “cable” that connects one node to another are sending the same temporal pattern of impulses. Further, the pattern of connecting activity dynamically changes. Finally, the brain should be thought of as a network of networks, wherein a given network (or for simplicity, one circuit) typically connects with other networks (or circuits). There may be multiple points of access and egress within a given network. A given “target” circuit may be simultaneously supplying input to the circuit from which it is receiving input. Reciprocal connections among networks are common in brain. They provide a way for one circuit to be informed of what another network is doing. Reciprocity also allows feedback, so that consequences of a decision made by one network can be used to inform an ongoing decision-making process. A classic example is how decisions made in “motor cortex” are modulated by feedback from cerebellar circuits.

The upshot of all this is that the brain is so complex that its neural networks may not be realistically amenable to adequate scientific exploration. Many very smart people in computer science, bioinformatics, mathematics, and engineering work in the area of neural network analysis. Yet their work is severely constrained by the complexity of the network processes in brain and by the insufficiency of their analytical tools.
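One simple way to picture a “network of networks” is as an adjacency matrix: rows and columns stand for circuits (or network nodes), and an entry marks a connection from one to another. The short sketch below uses made-up connections, purely for illustration, to flag which pairs of circuits are reciprocally connected, the arrangement that allows the feedback described above.

import numpy as np

# Hypothetical connectivity among five circuits (1 = sends input to, 0 = no connection).
circuits = ["A", "B", "C", "D", "E"]
adj = np.array([
    [0, 1, 1, 0, 0],   # A -> B, A -> C
    [1, 0, 0, 1, 0],   # B -> A, B -> D
    [0, 0, 0, 1, 1],   # C -> D, C -> E
    [0, 1, 0, 0, 0],   # D -> B
    [1, 0, 0, 0, 0],   # E -> A
])

# Reciprocal pairs: i projects to j AND j projects back to i.
reciprocal = [(circuits[i], circuits[j])
              for i in range(len(circuits)) for j in range(i + 1, len(circuits))
              if adj[i, j] and adj[j, i]]
print("Reciprocally connected circuit pairs:", reciprocal)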
Manifestations of Thought

Thoughts, as we commonly generate and experience them, are complex mixtures of more basic elements. If we can identify what the elements of thought are, we have at least some chance of developing an all-encompassing theory of thought. Some of these elements are found in the latent or stored form of mind, such as microanatomy and biochemistry.
Biochemistry

Biochemists like to grind up brain, from sacrificed animals of course, and examine its chemistry as an index of what the animal had been thinking. This approach has led to many major discoveries, such as the existence of about 100 biochemicals, called neurotransmitters, that mediate synaptic communication among nerve cells. However, this cannot tell us much about what brains are thinking at any given moment. That information is carried, in real time as people say, by patterns of nerve impulses. But the thinking represented by impulses causes biochemical changes, especially in the synapses. In turn, these biochemical changes may serve as a repository of the “information” signaled by the impulses and may affect subsequent discharge patterns of impulses.

Related approaches include collecting neurotransmitters from localized regions of brain in a live animal while it is performing a given behavior. This is done with implanted double cannulae, in which perfusion fluid is pushed through one cannula and pulled out through the other, picking up along the way chemicals that have been released by nerve cells in the vicinity of the cannula tip. This approach can tell us a lot about the biochemical processes that are supporting thinking in real time, but obviously only a minute region of brain can be monitored this way. There are many other biochemical techniques that are too complicated to cover here and beyond the scope of this book. In general, biochemical analyses are not direct indicators of thinking, but rather indicators of the biochemical dynamics in neurons as they participate in “thinking.” Certain biochemicals in synapses are the storage reservoir of thought – that is, memory.

A metaphor for comparing the relative roles of impulses and biochemicals in the function of mind could be 18-wheeler trucks and the Interstate highway system. The highways link various cities together, serving as a communication network, much like the axons and dendrites that connect neurons constitute the networks of the brain. Each truck represents stored information. No communication or exchange of goods occurs if the 18-wheelers sit parked at the various warehouses around the country. The trucks are a reservoir, having the potential for distribution. Only when the trucks start moving in the highway network does communication occur. In the nervous system, what moves – that is, what is propagated throughout the networks – are nerve impulses.
Electroencephalogram (EEG)

The EEG is a voltage waveform reflecting underlying impulse activity summed over many neurons. This kind of signal was first discovered by coupling electrodes on the scalp to high-gain amplifiers that drove pen-and-ink displays. Similar signals can be seen from electrodes implanted within the brain, but these are called “field potentials” because they are not obtained from on top of the head. Think of such a signal as a plot of voltage (in microvolts) as a function of time.
Fig. 2.6 Illustration of the compounded nature of the EEG, as recorded from the hippocampus. Small waves are seen “riding on top of” larger and slower ones. Top: small, slow-frequency “theta” waves (about 6 waves per second, but not so obvious here because the total time shown is less than 0.4 s). Bottom: high-frequency “gamma” waves (about 4 per 100 ms) (From Colgin et al. 2009; Note: EEG frequencies are usually designated in engineering Hertz units (Hz), as if their frequency were constant. In reality, a given EEG frequency “jiggles”; the cycle time is not absolutely uniform, and this is evident in the signals shown in this figure. “Waves per second” is the better term)
The waveform is generated by ionic current flowing through the resistance of tissue, body fluids, and, in the case of scalp recordings, the skin. Tissue resistance has a capacitance component that shunts some of the high-frequency amplitude, so that higher-frequency components are not fully represented in what is seen under typical recording conditions (see Fig. 2.6). As you will see later, this is not a trivial point, because higher thought processes are associated with higher frequencies in the EEG.

The EEG provides a near-instantaneous index of the brain’s electrical activity in the region of the sensing electrode. However, that activity is hard to interpret because the signal seen is a composite of all sources of current in the region: postsynaptic and action potentials of multiple neurons and membrane potentials of supporting (glial) cells. As a result, the EEG is not a pure waveform but is rather compounded from voltages of different frequencies. Think of the EEG as a wiggly line that is a mixture of large slow wiggles with intermingled and superimposed faster-changing wiggles. A similar mixing of frequencies is seen everywhere in the brain, but is most conspicuous in areas, such as the cortex, where oscillation at several frequencies in the same general area is prominent. It is evident that time resolution is excellent, to the level of a few milliseconds. This time resolution is not possible with the other popular way of studying brain function non-invasively: brain scans.
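As a rough illustration of the two points just made (and not a model of real tissue), the snippet below builds a synthetic “EEG” by mixing a large, slow theta-like wave (about 6 per second) with a small, fast gamma-like wave (about 40 per second), then applies a simple first-order low-pass filter to mimic how capacitive shunting attenuates the high-frequency component far more than the slow one. All amplitudes and the cutoff are invented values for the demonstration.

import numpy as np

fs = 1000.0                      # samples per second
t = np.arange(0, 1.0, 1.0 / fs)  # one second of synthetic signal

# Synthetic "EEG": big slow theta-like wave plus small fast gamma-like wave (arbitrary amplitudes).
theta = 50.0 * np.sin(2 * np.pi * 6.0 * t)    # ~6 waves per second
gamma = 10.0 * np.sin(2 * np.pi * 40.0 * t)   # ~40 waves per second
eeg = theta + gamma

# Crude first-order (RC-style) low-pass filter standing in for capacitive shunting by tissue.
cutoff_hz = 15.0
alpha = (1.0 / fs) / ((1.0 / (2 * np.pi * cutoff_hz)) + (1.0 / fs))
filtered = np.zeros_like(eeg)
for i in range(1, len(eeg)):
    filtered[i] = filtered[i - 1] + alpha * (eeg[i] - filtered[i - 1])

# Compare relative amplitude at each frequency before and after filtering.
def band_amplitude(signal, freq):
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
    return spectrum[np.argmin(np.abs(freqs - freq))]

for f in (6.0, 40.0):
    print(f"{f:5.1f} per second: before {band_amplitude(eeg, f):8.1f}, after {band_amplitude(filtered, f):8.1f}")

The fast component loses far more of its relative amplitude than the slow one, which is why high-frequency activity tends to be under-represented in ordinary recordings.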
Brain Scans

The original brain scan technique, called positron emission tomography (PET), involved injecting a radioactive substance into the blood. Since brain areas that are more active get more blood flow, the radioactivity level there can be greater and thus indicate “hot spots” of activity in response to stimuli or mental task performance, for example. The radioactive feature of the technique has caused it to fall out of favor for routine brain scans, and it is being supplanted by magnetic resonance imaging (MRI), which poses no health hazard (that we know of). With MRI, a giant magnet surrounds the subject’s head and forces hydrogen atoms to align. When the brain is hit with a strong radio signal, the atoms are knocked out of alignment, and the rate at which they return to the aligned state provides a detectable signal. These signals increase when the level of blood oxygen goes up, indicating which parts of the brain are most active. Because MRI is much safer than PET scans, it is used for repeated scans on the same subjects under different cognitive conditions. A more recent refinement, called functional MRI (fMRI), uses special computers to increase the speed at which scanning is done. Even so, compared with EEG, the scan is slow. The value for research purposes is just the opposite of that for the EEG: the spatial resolution is excellent (on the order of millimeters), but the speed is slow (on the order of seconds).

Brain scanning, especially with fMRI, is THE hot area of neuroscience. The instrumentation is extremely expensive, but every brain research center wants to have one. All this popularity is misplaced in my view. Aside from the time resolution problem, fMRI scanning has numerous other problems. The first is the misuse of statistics (Vul et al. 2009). This kind of scanning is usually used to identify correlations between a specific cognitive task and increased activity in certain brain areas. But a survey of 54 randomly selected fMRI studies revealed that many had grossly inflated correlations between brain “hot spots” and the cognitive task. The process by which one determines the subset of voxels in an image to use in correlation calculations is often suspect. The matrices can consist of hundreds of thousands of numbers derived from specific brain areas or areas seeming to show activation. These pre-selected numbers are then used to calculate a pair-wise correlation coefficient across subjects, often from the average response across trials, using either the average activity of some number of adjacent voxels or the peak activity in the population of voxels. Then a separate correlation may be calculated for the cognitive task and those voxels whose activity exceeds a certain arbitrary statistical threshold (some of which will qualify for use in the analysis just by chance). With huge numbers of voxels involved, it is easy to generate inflated correlation values. Over half of the 54 fMRI publications had such flawed methods. The authors of the meta-analysis of the 54 papers used such procedures on a simulated analysis of pure noise and found a correlation coefficient of 0.9 (1.0 is perfect correlation). The basic cause of such misleading correlations is that the data selected for analysis are inter-dependent.
Multiple t-tests are performed to compare voxel activity in one task with another or with the control state. Many MRI researchers do not properly correct for this kind of statistical error. Another problem is the reliability of the numbers. Test and re-test reliability between repeated trials can be as low as zero, even for voxels which on average seem to show an association with the cognitive task.

But the most serious limitations are the ones typically glossed over. Even if and when we can believe the reported correlations of seeing areas of increased activity (“hot spots”) in certain brain areas during specific cognitive tasks, serious physiological interpretive problems arise. The fMRI scans measure blood flow change, which parallels oxygen consumption. But even with “significant” effects, the magnitude of blood flow change is small, often less than 5%. We infer that this is caused by increased electrical activity. But what kind of activity? Graded postsynaptic potentials or nerve impulses? Or the ultraslow electrical changes in glial cells? Recent fMRI studies that included acquiring neuronal activity data at the same time revealed that nerve impulses used only a small proportion of the total oxygen consumption (Alle et al. 2009). The vast bulk of the fMRI signal therefore comes from postsynaptic potentials, not impulses. Of course that is also true of EEG signals. But how can fMRI signals tell us much if messaging and the results of information processing are expressed in nerve impulses?

It does seem that fMRI correlates with EEG-like field potentials more than with nerve impulses. The reason that fMRI correlates better with field potentials than impulses is that both are “average” measures of activity of large populations of neurons. At any given instant, many individual neurons may be functioning as exceptions to the over-all population activity. Note that EEG is summed activity, mostly from postsynaptic potentials. With visual cortex responses in cats, increasing stimulus intensity increased high-frequency field potentials, impulse activity, and fMRI signals (Niessing et al. 2005). Similar studies in two human neurosurgical patients showed correlations between field potentials, neuronal firing, and fMRI signal in the auditory cortex (Mukamel et al. 2005).

We might wonder why major changes in cognitive task demands don’t produce more robust fMRI responses. As mentioned above, I believe that active cognition is achieved mostly by impulse patterns, which are under-represented in a brain scan. One reason is that patterns of inter-spike intervals can undergo major change without a change in the total number of impulses, and therefore presumably no change in total oxygen demand would occur. Patterning of impulses can be more important than the number of impulses (see Chap. 4), and thus brain scans are not capturing the neural events most directly relevant to thinking.

It should also be obvious that hot spots only indicate the possible location of neurons that support a given mental process but are not likely to show how they do it. Most brain researchers realize this, which is probably the reason they term hot spots “regions of interest.” That is an admission that they must be restrained in drawing conclusions about what hot spots mean. Brain-scan hot spots don’t indicate whether the activated neurons exert excitatory or inhibitory influences on their targets.
We don’t know if increased MRI activity is coming from synaptic processes in inhibitory or excitatory neurons, which produce opposite effects. There is also the problem that any change in activity is a correlate of a given thought, but not necessarily a part of the cause of a cognitive process. For example, during a mental task, several areas may show as hot spots, and researchers typically regard these as indicating a system of connections responsible for a given mental function.

A recent advance in fMRI is the technique of “functional connectivity” analysis (Rogers et al. 2007). These statistical methods test for correlations of activity in brain areas under various mental performance conditions. The focus is on brain areas that show increased activity at the same time (testing for correlations of increased and decreased activity might also be useful, but this is not commonly done). Such analysis might identify brain areas that are necessary for a given function, and longitudinal analysis (also not commonly done) could indicate how changes over time from learning, disease, age, etc. alter the degree of needed connectivity. However, showing that several areas are active at the same time can be misleading. There is no way to tell if one area is driving the others, or if the others mutually facilitate each other. One or more of these hot spots may be incidental to the process being studied with scanning. The activity could have been released from inhibition by one or more of the other hot spots which were actually causally related to the mental process. Thus, some of the hot spots may have nothing to do with causing the mental process under study.

Even when increased activity may be part of the cause, the role played is not self-evident. The same part of the brain may be activated under a variety of conditions. The amygdala, for example, is activated by the sight of snakes, intense odors, or erotic stimuli. Another example is the hippocampus, which is activated by a wide range of emotional stimuli as well as participating in the formation of memories regardless of the content of those memories.

Still other problems exist. While some areas of brain increase activity during a cognitive task, other areas show a decrease. Most scientists ignore areas of decreased activity. Yet those should not be dismissed, because if the decreased activity is in a pool of inhibitory neurons, the result is likely to be a disinhibition that releases activity in a remote site, which is then credited as the cause of the cognitive process when the real cause came from the area of diminished activity. Another issue, told to me by an fMRI expert I visited in Houston, is that an area of increased activity, regardless of task, is often preceded by diminished activity in that same area. Almost no one pays attention to such data, because everyone seems fixated on finding areas and conditions of increased activity.
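The circular voxel-selection problem described above can be demonstrated with a small simulation on pure noise, in the spirit of the Vul et al. critique. All the numbers below are invented for the demonstration: random “voxel activity” and a random “behavioral score” are generated for a group of subjects, only the voxels whose noise happens to correlate strongly with the score are kept, and the correlation is then reported for that hand-picked subset. The reported value is large even though nothing real is present.

import numpy as np

rng = np.random.default_rng(1)
n_subjects, n_voxels = 20, 50_000

behavior = rng.standard_normal(n_subjects)             # a purely random "cognitive score"
voxels = rng.standard_normal((n_voxels, n_subjects))   # purely random "voxel activity"

# Pearson correlation of every voxel's activity with the behavioral score.
def corr(a, b):
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float(np.mean(a * b))

all_r = np.array([corr(v, behavior) for v in voxels])

# Circular analysis: keep only voxels that already correlate strongly, then average them
# and correlate that average with the same behavioral score.
selected = voxels[all_r > 0.6]
inflated_r = corr(selected.mean(axis=0), behavior)

print(f"voxels 'passing threshold' by chance: {len(selected)}")
print(f"correlation reported for the selected subset: {inflated_r:.2f}")
print("true underlying correlation: 0 (the data are pure noise)")

Selecting the voxels with one data set and computing the reported correlation on an independent data set would avoid the inflation.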
Behavior

Most of us judge what other people are thinking by their behavior. If someone yells at me, I assume that he is thinking about his reasons to be mad at me. If a crook robs a store, he is probably thinking about getting money, how to get away, and how he wants to spend it.
Anyway, scientists call this ability to “read” another person’s mind from observed behavior “theory of mind.” We presume that others have a mind, based on what they do and how our own mind would be operating in similar circumstances. Actually, a theory of mind has been attributed to many species of animals, though obviously their capacity for imputing mind to other animals is far more limited than our own.

There is also the issue that you cannot always know for certain what other people are thinking from what they do. Even their speech, which is a behavior, can be misleading. They may mean one thing while saying another. They may even lie, even to themselves. I won’t pursue this further here, because behavior is mostly beyond the scope of how brains think. I will, however, come back to behavior later when I explore how behavior is a feedback device for brain, and how the brain uses feedback from behavior and its consequences to alter its thinking.

The simplest element of thought is so simple we would not ordinarily think of it as a “thought.” Let us begin, for example, with the idea that many thoughts arise from sensory experience, and the first stage of such thought is the way that nerve cells create an impulse-based representation of an environmental stimulus. For example, if you touch a hot stove, nerve fibers in the finger generate impulses that are sent into the spinal cord and brain. This initial registration of the stimulus can be thought of as “tagging” the stimulus in the form of the impulse pattern, in this case in a pathway that goes from the fingers directly to the spinal cord. This tag pattern is one fundamental element of thought, and in this case the element will be a building block for subsequent elements that lead to the brain’s response that we can more clearly classify as the thought: “Damn! That hurts.”

As thought grows from the thought elements that are activated by touching a hot stove, we now have impulses spreading into the divergent circuits in the spinal cord, thalamus, and sensory cortex. As the thought of “Damn! That hurts” grows, the representational tag now becomes one of circuit impulse patterns (CIPs). It is the CIP that represents the thought. More than that, I would argue, the CIP is the thought. Thinking of CIPs as thoughts is somewhat of a stretch if we limit our view to conscious thought. I will get to that later, but for now I want us to view non-conscious and subconscious thoughts as CIPs. That should be more intuitive and easier to accept. The CIPs are an abstract representation of a complex thought. If they represent the thought, is it not possible that they are the thought?
The Brain as a System

A brain is an information-processing system. While there are a few maverick scientists who don’t believe this, the vast majority of neuroscientists think the evidence for information processing is overwhelming. Thoughts, whether conscious or subconscious, arise from this system and can be remembered by it.
Neuronal information is moved around in multiple, parallel pathways, the circuits of which are juxtaposed and commonly overlapping. A given neuron may be recruited into more than one circuit. Neurons and their clusters, called nuclei, are typically connected reciprocally, so that output from one place to another in the brain can be processed and fed back to the source of input.

While it is tempting to think of a hierarchical organization in “top-down” terms, wherein the cerebral cortex “supervises” the hierarchy of subsystems within the nervous system, the matter is not that simple. For instance, neurons in the spinal cord can carry out mundane control functions for their respective body segment while the cortical neurons are “free to think higher thoughts.” But the assignment of rank order to subsystems in the brain and the spinal cord is not as obvious as it may seem. Although the part of the brain that provides intelligence, the neocortex, ranks above the reflex systems in the brainstem and spinal cord, there are practical limits on the degree of control exerted by the neocortex. If neocortical control were absolute, for example, people would not succumb to the dizziness and ataxia associated with motion sickness and the vestibular system of the brainstem. If cortical control were absolute, people would be able to suppress the pain that is mediated in the thalamus. They could stave off sleep indefinitely by keeping the reticular activating system active. This is clearly not the case. Thus, rather than relying on a permanent “supervisor” neuron or population of neurons, the nervous system functions as a hierarchy of semiautonomous subsystems whose rank order varies with situational stimuli. Any subsystem may take part in many types of interrelationships. Whichever subsystem happens to dominate a situation, it is independent only to a certain extent, being subordinate to the subsystem above it and modulated by inputs from its own subordinate subsystems and from other subsystems whose position in the hierarchy is ill-determined. This design feature of the mammalian nervous system provides maximum flexibility and is probably the basis for the brain’s marvelous effectiveness.

Even for a function that we habitually think of as top-down, such as attention, the actual process may be bottom-up. György Buzsáki (2006) points out that the effect can be produced by gain control from primitive subcortical structures. The neurotransmitters acetylcholine and norepinephrine are released from brainstem sources, and these enhance the sensitivity of cortical circuits to sensory input. This becomes manifest in enhanced cortical gamma rhythms, which are strongly associated with attentiveness and complex thinking. What excites these brainstem structures? For one, intense sensory input suffices (see my comments about the Readiness Response later in this chapter). But because the cortex and these brainstem structures are reciprocally connected, thought processes emanating from the cerebral cortex can accomplish the same thing, as for example happens when we think about some idea that really excites us or some intense phobia. Cerebral circuits can maintain an autonomous, self-organized activity independent of input. Buzsáki explains that the activity and thinking processes of brain are a “synthesis of self-generated, circuit-maintained activity and environmental perturbation.”
If different brain areas and systems can be autonomous, how can they interact with each other? First, they are connected via fiber tracts that convey information from one area to another. Thus, information in one area becomes shared with other brain areas. The sharing comes not only because the anatomy of the circuits overlaps, but also because the impulse activity in the various areas may become time-locked (coherent). Such coherence seems to be achieved through oscillations that become entrained to certain frequencies (see Chap. 6).
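A hypothetical sketch of what “time-locked” means in practice: below, one pair of signals oscillates at the same frequency with a fixed phase relationship (entrained), while a second pair oscillates at the same frequency but with an independently drifting phase. Even a plain correlation separates the two cases; the formal coherence measures discussed in Chap. 6 do the same thing as a function of frequency. All frequencies and drift levels are arbitrary choices for the illustration.

import numpy as np

rng = np.random.default_rng(2)
fs = 1000.0
t = np.arange(0, 2.0, 1.0 / fs)
freq = 40.0                      # a "gamma-band" oscillation, for illustration only

# Entrained pair: same frequency, fixed phase offset between the two "areas".
a1 = np.sin(2 * np.pi * freq * t)
a2 = np.sin(2 * np.pi * freq * t + 0.5)

# Non-entrained pair: same frequency, but the second signal's phase drifts randomly.
drift = np.cumsum(rng.standard_normal(len(t)) * 0.3)
b1 = np.sin(2 * np.pi * freq * t)
b2 = np.sin(2 * np.pi * freq * t + drift)

def corr(x, y):
    return float(np.corrcoef(x, y)[0, 1])

print(f"entrained pair, correlation:      {corr(a1, a2):.2f}")
print(f"phase-drifting pair, correlation: {corr(b1, b2):.2f}")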
Nonlinearity Matters

A linear response or linear system is one wherein a steady increase in input leads to a proportional change in output. A graph of such output is a straight line, making it easy to predict the output for given levels of input. In the instance of a nonlinear response, there is no direct proportion between input and output. Responses may be discontinuous or non-related, or may relate in non-proportional, non-linear ways (Fig. 2.7).
Cell-Level Consequences of Nonlinearity

The inherent definition of nonlinearity precludes the possibility of directly predicting the amount of response exhibited by a nonlinear process during a given passage of time. Aside from some unpredictability, additional consequences of nonlinearity exist.
[Fig. 2.7 diagram: examples of nonlinear input–output relationships, including an exponential curve (y = x^n), a sine curve (y = sin(x)), a logarithmic curve (y = ln(x)), and a dose–response curve, with behavior, nerve impulses, the EEG, and receptor binding as biological examples]
Fig. 2.7 Different kinds of non-linearity seen in the nervous system
In the case of second messengers, for example, nonlinearity allows for much more effective and efficient operation of the nervous system. One second messenger can activate many other molecules in the course of a signal transduction pathway, and each of these molecules can activate many more molecules. The result is an exponential relation between second messenger concentration and cellular response, so that just a few second messengers can be used to evoke a greatly magnified cellular response. Amplifying a response across time in turn increases the speed at which it is elicited, which is vital considering how central response time is to effective nervous system functioning.

There are other practical benefits to nonlinearity. Consider the release and binding of neurotransmitter, which takes the form of an S-shaped curve. Once neurotransmitter concentrations reach the saturation point where all stereospecific binding sites are filled, the body would be squandering valuable resources if it released neurotransmitter in a linear manner. As such, once binding sites are filled and neurotransmitter concentration reaches a level of excess, the body may begin a pathway of feedback inhibition where the neurotransmitter “left over” from the filled binding sites serves to inhibit (“down-regulate”) the molecules that initiate its production.
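A back-of-the-envelope way to see the exponential amplification is to assume, purely for illustration, that each activated molecule at one stage of a signaling cascade activates some fixed number of molecules at the next stage; the final response then grows as a power of that fan-out. The fan-out and stage count below are invented numbers.

# Illustrative-only arithmetic for cascade amplification: each activated molecule
# activates "fanout" molecules at the next stage of the pathway.
fanout = 20        # hypothetical molecules activated per molecule, per stage
stages = 4         # hypothetical number of stages in the cascade

second_messengers = 5
activated = second_messengers * fanout ** stages
print(f"{second_messengers} second messengers -> {activated:,} activated molecules "
      f"after {stages} stages (a gain of {fanout ** stages:,}x)")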
Cognitive Consequences of Nonlinearity

Nonlinear consequences at the cell level show up in thinking. For example, an S-shaped curve that holds great implications is the learning curve, depicted below (Fig. 2.8).
Fig. 2.8 This curve indicates that when initially exposed to material to learn, an individual’s mastery of it is somewhat gradual, until a certain point is reached and the pace at which the material is mastered accelerates. However, eventually an asymptote is reached where more time spent learning has no effect on performance tests for mastery, meaning that maximum mastery has been attained. Further attempts at learning the material become, at this point, relatively futile
Fig. 2.9 This curve, which models retention of material over time, approximately depicts Hermann Ebbinghaus’s formula for forgetting, given by R = e^(−t/S), where R is memory retention, S is the relative strength of memory, and t is time. Material is forgotten at a predictable, albeit nonlinear rate over time, although a number of variables including the difficulty of the material and physiological fluctuations of an individual may affect the exact depiction of this curve. Interestingly, though, studies have shown that retention in each material-specific curve can be improved over time by conscious review. This observation holds many implications, especially in the realm of the education system, but outside of it as well. For instance, it might be used to support the concept of “refresher courses” as a means of augmenting knowledge for individuals in a professional career
A graph of forgetting, as opposed to learning, is also non-linear, but the shape is quite different (Fig. 2.9).
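Taking Ebbinghaus’s formula at face value, the snippet below evaluates R = e^(−t/S) for a few retention intervals and two hypothetical memory strengths (the strengths and times are arbitrary, and t and S must simply share the same time units); it shows both the nonlinear decay and the slower forgetting of a stronger memory.

import math

# R = e^(-t/S): retention R after time t, for relative memory strength S.
for strength in (5.0, 20.0):
    retentions = [(t, math.exp(-t / strength)) for t in (0, 1, 5, 10, 30)]
    line = ", ".join(f"t={t}: R={r:.2f}" for t, r in retentions)
    print(f"S={strength}: {line}")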
Inhibition Matters

Some neurons are inhibitory. That is, their only effect on their targets is to produce inhibition by driving the resting potential of target neurons away from firing threshold. When a neural pathway contains inhibitory elements, non-linearity is introduced (Fig. 2.10). If an inhibitory neuron acts to suppress activity in an excitatory chain, it is said to “disfacilitate” it. If an inhibitory neuron inhibits an inhibitory neuron that acts in an excitatory chain, it is said to “disinhibit” the target and may thus lead to increased output activity.

Another thing inhibitory neurons do is serve as crucial nodal points in feedback circuits (Fig. 2.11). The inhibition may be a negative feedback on the target, a feed-forward inhibition of the target, or a lateral inhibition on neurons in parallel pathways. Inhibition not only selects pathways of neuronal chains, but can also select whole assemblies of neurons. Slight differences in synaptic strengths between the inputs to an inhibitory neuron can determine what happens in whole populations of related neurons.
Fig. 2.10 Inhibitory neurons introduce non-linearity. Top: a chain of excitatory neurons (black) produces a steady increase in excitation. Middle and bottom: the presence of inhibitory neurons (gray) introduces unpredictable effects on output that vary with the details and strengths of the connections (Reproduced with permission from Buzsáki (2006))
Fig. 2.11 Left, feedback inhibition: activation of an excitatory cell (triangle-shaped) activates an inhibitory neuron that feeds back inhibitory influence to suppress activity in the cell that excited it. Middle, feed-forward inhibition: activation of an excitatory cell is damped when a parallel input path includes an inhibitory neuron. Right, lateral inhibition: activity in parallel pathways can be suppressed or shut off when an excitatory neuron activates inhibitory neurons that supply input to the parallel pathways. The superimposed triangles representing principal cells in the parallel pathways indicate neurons that excite each other or are simultaneously excited by the same input (Reproduced with permission from Buzsáki (2006))
Whole competing assemblies can be isolated, and the same network can produce different output patterns at different times, depending on the time-and-space distribution of inhibitory influences. Inhibitory influences also modulate the firing pattern of impulses to determine whether target neurons fire in bursts or in more or less steady streams of impulses. Flexibility in inhibitory influences is of special importance at the highest levels of neuron function in the cerebral cortex. These neurons will not normally get locked into excitatory overdrive (epilepsy is a notable exception). Likewise, these neurons will not get frozen into a state of unresponsiveness.
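A minimal numerical sketch of disinhibition, with made-up weights: an excitatory source drives a target neuron, an inhibitory interneuron damps the target, and a second inhibitory neuron that silences the first releases the target’s activity. The “rates” are just non-negative numbers; the sketch captures only the sign logic, not any physiological realism.

def rate(net_input):
    """Toy firing 'rate': net input clipped at zero (a neuron cannot fire negatively)."""
    return max(0.0, net_input)

excitatory_drive = 10.0
w_excite = 1.0      # excitatory weight onto the target (arbitrary)
w_inhibit = 0.8     # interneuron's inhibitory weight onto the target (arbitrary)

def target_activity(disinhibiting_neuron_active):
    # The interneuron is itself shut down when the disinhibiting neuron fires.
    interneuron = rate(excitatory_drive - (10.0 if disinhibiting_neuron_active else 0.0))
    return rate(w_excite * excitatory_drive - w_inhibit * interneuron)

print("target with interneuron active (inhibition):     ", target_activity(False))
print("target with interneuron silenced (disinhibition):", target_activity(True))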
Bodies Think Too

Thinking is not de-contextualized. Brain is embodied. If “thought” is the neural activity within certain circuitry associated with a body part, then one could argue that a simple knee-jerk reflex is a thought. What I really want to emphasize is that most of the brain’s thinking occurs from the reference point of the body and its relationship to its inner parts and to the outside world. Biologically speaking, the brain exists primarily to help make the body work right and to make behavior appropriate and successful in the context of the real world in which bodies operate.

We can have subconscious thought, as for example emotional kinds of thought that affect our body and behavior. Our heart may race or palpitate, cold sweat may appear, we may blush, we may become sexually aroused – all can occur subconsciously. We can even have subconscious responses in our dreams that affect our body (recall the earlier example of foot paddling and barking in sleeping dogs).

In the simple case of spinal reflexes, “thought” inevitably is mediated through the body, in which a train of nerve impulses arising from a nerve fiber in the patellar tendon travels up the nerve to the spinal cord, where the impulses activate nerve cells that project back to the muscle to which the tendon is attached. Thus, you might say that the spinal cord thinks, non-consciously of course, the equivalent of: “My tendon has been stretched and to get my leg back to normal position, I must contract the thigh muscles.” Such thought is obviously embodied.
Physiological and Behavioral Readiness

Many of us have been embarrassed by friends teasingly sneaking up behind us and startling us into jumping or letting out a little scream of surprise. All of us react similarly to such startling stimuli – our head turns toward the stimulus, our heart rate picks up, our muscles tense, and our mind assumes a heightened sense of awareness. These reflexive reactions allow us to quickly make appropriate behavioral responses to environmental contingencies.

I fondly remember the pioneer researcher in the study of this “orienting reflex,” Endre Grastyán, whom I visited in Pécs, Hungary, in the 1970s. It was there I met his student, György Buzsáki, who was later to become a research pioneer more famous than his mentor.

Neurons in the central core of the brainstem govern the orienting reflex. This system engages a constellation of sensory, integrative, and motor responses to novel or intense stimuli. The brainstem core is ideally situated to monitor and respond to a variety of stimuli, because its cells receive inputs from all levels of the spinal cord. When brainstem core neurons are stimulated by sensory input of any kind, they relay excitation through numerous reticular synapses and finally activate widespread zones of the cerebral cortex, enhancing consciousness and arousal level.
If this stimulation occurs during sleep, it can disrupt sleep and trigger consciousness. Concurrently with cortical activation, muscle tone is enhanced, preparing the body for forthcoming movement instructions. At the same time, the limbic system is activated, which allows new stimuli to be evaluated in the context of memories, and neurons of the hypothalamus and the autonomic nervous system mobilize the heart and other visceral organs for so-called fight-or-flight situations. This conglomeration of responses makes an animal or person ready to respond rapidly and vigorously to biologically significant stimuli, including the pranks of mischievous friends (Fig. 2.12).

The central core of the brainstem produces what I call a “Readiness Response” (Klemm 1990). Such readiness creates the capacity for conscious thought by awakening the brain, mobilizing it, and making it alert. It is the brain’s way of saying to itself, “Wake up brain, you have some incoming information you need to deal with.” Without it, the brain remains in a comatose state, even if the primary sensory pathways are functioning normally (Ropper 2008).
[Fig. 2.12 diagram: “The Brain’s Consciousness Triggering System,” showing sensory nerves (cranial and spinal) feeding the brainstem central core and thalamus, which in turn drive the sensory cortex and the rest of the neocortex]
Fig. 2.12 The major pathways in the brain that are crucially involved in the genesis of consciousness: neocortex, thalamus, and the central core of the brainstem. Sensory inputs enter specific thalamic nuclei, which project the information to the sensory cortex, a small strip of the neocortex. All the rest of the neocortex gets diffuse input from sensory nerve collaterals that activate the central core of the brainstem, which in turn provides widely distributed excitatory drive to all parts of the neocortex. This brainstem influence is the essential part of the consciousness-triggering system
range and significance of the response. In fact, it was during my visit with Grastyán that I realized his "orienting response" ideas did not capture the full range of associated activities. It is well established that consciousness, however it "emerges," arises from the interaction of the cerebral cortex and the brainstem core. When the core is active, it provides an excitatory drive for the whole cortex. In a sense, the reticular formation can be said to "arouse" cortical cells to be more receptive to sensory information arriving over the primary sensory pathways. That same arousal effect operates on internally generated images, memories, and thoughts. Conversely, depression of reticular activity leads to behavioral sedation. Destruction of the reticular formation causes permanent loss of consciousness and coma. No more fundamental relationship among the three kinds of mind can be found than in the functions of the brainstem, because it is the seat of the non-conscious mind and links intimately with the other two kinds of mind. The brainstem and spinal cord enable the other two kinds of mind. This idea was popularly captured in Paul MacLean's concept of the "Triune Brain" (Fig. 2.13) (MacLean 1990), although he did not explicitly apply the idea to the three kinds of mind as I have done earlier in this book. MacLean's triune brain was originally intended to be a model for the evolutionary development of the brain. It holds that there are three distinct brains, each of which has a unique set of functions that become increasingly complex with progression along the hierarchy of brains and evolution. First is the "reptilian brain." It carries out autonomic processes as well as instincts and is involved in survival functions, such as eating, escaping predators, reproduction, and territorial defense. Reptilian structures are those deepest in the brain, including the brainstem and cerebellum. The next brain is the limbic system, or the paleomammalian brain, which wraps around the reptilian brain and includes such structures as the hippocampus, amygdala, thalamus, and hypothalamus. Emotions, memories, and the subconscious value judgments stemming from emotion and memory fall within the domain of
Fig. 2.13 Paul MacLean’s triune brain concept, showing gross structural changes at different evolutionary stages of brain development
limbic system control. Most evolved is the third brain, the neocortex, which is evident in higher mammals, especially primates. Attributable to the neocortex are the complex functions that distinguish primates and especially humans from other animals: language, logical and rational analysis, thought and abstraction, and advanced learning and memory. Its anatomical domain includes the two cerebral hemispheres that envelop the two other brains, and especially the prefrontal cortex. Application of the "triune brain" concept to the three minds leads to some interesting comparisons. The Reptilian Brain, for instance, because it controls autonomic functions and instinctual urges, is comparable to the non-conscious mind. The subconscious mind certainly includes the limbic system, which creates motivational drives and emotions that in turn regulate judgments just below the level of consciousness. Subconscious mind also includes the basal ganglia, those multiple clusters of neurons that create a complex network for subconscious controls over movement. Finally, the neocortex is the seat of the conscious mind, though it operates only in concert with arousal drives from the brainstem. MacLean's triune brain theory explicitly attributes consciousness to the neocortex. In MacLean's theory, it is easy to see that, as essential as the neocortex and limbic system are to life as we experience it, it is the Reptilian Brain that is essential to life itself, since it controls vital functions. The Reptilian Brain's non-conscious mind is also responsible for the function of the upper two levels of mind in other ways. For instance, the brainstem contains many neurons that activate the cortex and, in the process, trigger consciousness. We know this from several lines of evidence, but the classic study of Moruzzi and Magoun stands out as a landmark in the history of neuroscience (Moruzzi and Magoun 1949). Their paper, by the way, inspired my own interest in pursuing a career in neuroscience. What they discovered was that mild electrical stimulation of the core of the brainstem of experimental animals created behavioral alertness and "activation" of the EEG, as is seen when an animal is awake and alert.
Ascending Reticular Arousal System
Giuseppe Moruzzi (1910–1986) and Horace Magoun (1907–1991)
As a student of such renowned neuroscientists as Lord Adrian (see sidebar in Chap. 5) and Frédéric Bremer, a pioneer of sleep and wakefulness research, Giuseppe Moruzzi had the pedigree for great discovery. So when the Rockefeller Foundation sponsored a visiting professorship that united Moruzzi's skills, namely his expertise in the use and interpretation of EEGs, with Horace Magoun's knowledge of states of sleep and wakefulness and his interest in the brain's "waking center," it is not surprising that the result was a paper of great insight that is now a citation classic.
Moruzzi, an Italian professor from the University of Pisa, and Magoun, an American professor from Northwestern University, began their collaboration in 1948. Originally, their intention was to study inhibition pathways of motor-cortex discharges in the cerebellum of anesthetized cats, using stimulating electrodes placed in the cerebellum and the brainstem reticular formation, with the EEG used to measure provoked discharges. When they stimulated the reticular formation, however, they unexpectedly observed what appeared to be a flattening of the cortical wave. When they used more amplification, they saw that the waveforms were the high-frequency, low-amplitude waves typically seen during waking states. This led them to consider the reticular formation as part of a pathway that activated the entire cortex, which in turn provoked them to conduct stimulation experiments testing the role of the reticular formation in arousal. Several important conclusions were drawn from their experiments. Foremost is the existence of a so-called "ascending reticular activating system" (ARAS), also known as the "wakefulness center," that generates an arousal reaction upon stimulation that is analogous to the arousal reaction generated by sensory stimulation. Magoun and Moruzzi also pointed out that the reticular formation receives many messages of sensory input from the main sensory pathways and weights these messages before projecting them to thalamic neurons and the cortex. Another conclusion was that wakefulness appears to result from background activity in the ARAS. The existence of the ARAS proved that wakefulness is an internally regulated property of the brain that results cumulatively from such control functions as the regulation of neurotransmitter activity and synaptic inhibition, and the activation of excitatory systems. Knowledge of the ARAS holds important clinical implications. For example, in their experiments, Magoun and Moruzzi were able to induce waking states or comas from, respectively, stimulation and inhibition or destruction of the reticular formation. It follows that when someone is put under anesthesia, the activity of the ARAS is suppressed. Over-arousal stemming from ARAS activity has been implicated in ADD and ADHD, and behavior states and internal clock regulation both appear to stem from ARAS activity. These are just a few instances of the expansive possibilities of ARAS influence; as research continues, we will inevitably continue to see just how broadly ARAS activity affects our minds and bodies. Many scientists today think Moruzzi and Magoun should have gotten the Nobel Prize for this work. They were passed over, it is felt, because their discovery now seems so obvious. But it wasn't obvious to anybody else until they demonstrated it. Their discovery excited most neuroscientists of their era. I know it excited and inspired me to become a neuroscientist.
Sources:
Dell, P. C. (1975). Creative dialogues in sleep-wakefulness research. In G. Adelman, J. Swazey, & F. Worden (Eds.), The neurosciences: Paths of discovery (pp. 554–560). Cambridge: The MIT Press.
The ADD/ADHD Support Site. (2008). Reticular activating system: The ADHD brain and behavior. Retrieved June 30, 2008, from http://www.attentiondeficit-add-adhd.com/reticular-activating-system.htm
Neylan, T. C. (1995). Physiology of arousal: Moruzzi and Magoun's ascending reticular activating system [Electronic version]. Neuropsychiatry Clinical Neuroscience, 7(2), 250.
Moruzzi, G., & Magoun, H. W. (1981). Citation classic – Brain-stem reticular-formation and activation of the EEG [Electronic version]. Current Contents/Life Sciences, (40), 21–21.
Donald Lindsley and colleagues provided the corroborating evidence that lesions of the central core of the brainstem caused coma, while lesions of the surrounding fiber tracts did not. These key ARAS experiments, performed over 50 years ago, seem to have been forgotten by today's neuroscientists. Many modern textbooks don't even mention them. The brainstem's scope of functions extends still further. Various populations of neurons in the brainstem are nodal points between sensory input and motor output. These populations govern consciousness and alertness, as mentioned, but they also govern the responsiveness to sensory input, activation of many visceral and emotive systems, the tone of postural muscles, and the orchestration of primitive and locomotor reflexes. Particularly important to this constellation of responses is activation of the reticular formation and the periaqueductal grey region. Also engaged during activation are brainstem nuclei whose neurons release specific neurotransmitters: raphe (serotonin), locus coeruleus (norepinephrine), and substantia nigra (dopamine). We can think of a readiness response as including behavioral and mental arousal (Fig. 2.14). When the animal is aroused by sensory input, all relevant systems are activated by reflex action. More than that, the brainstem also mediates most of the other components of readiness by generating a global mobilization that can include enhanced capability for selective attention, cognition, affect, learning and memory, defense, flight, attack, pain control, sensory perception, autonomic "fight or flight," neuroendocrine stress responses, visuomotor and vestibular reflexes, muscle and postural tone, and locomotion. Each of the changes associated with a readiness response prepares us to face our environment in different ways. When the cerebral cortex is excited, the resultant enhancement of consciousness and arousal level allows us to better observe our environment. Muscle tone enhancement prepares the
Fig. 2.14 Diagram of the physiological components of the readiness response
body for forthcoming movement instructions. Activation of the limbic system allows new stimuli to be evaluated in the context of memories, and activation of neurons in the hypothalamus and autonomic nervous system mobilizes the heart and other organs for so-called fight-or-flight situations. This conglomeration of responses makes an animal ready to respond rapidly and vigorously to biologically significant stimuli. These multiple reflex-like responses are for the most part very obvious during startle and orienting reactions of either animals or humans. For example, consider orienting. If you hear a sudden, loud noise, most likely you will reflexively turn your head toward the sound and become tense. Other, less evident responses may occur, including visceral changes, such as an immediate rise in pulse rate and blood pressure. Less intense stimuli may not evoke a full-blown readiness response because the brain can quickly determine whether or not a response of great intensity is appropriate to the stimulus. Another good example to which most people can relate is found in a sleeping cat that is suddenly startled into arousal by a dog barking nearby. The cat leaps to its feet, orients to the dog, and becomes extremely tense (including arching of the back and extension of the limbs). The hair will rise and the cat will hiss and prepare to lash out with its claws at the dog. Clearly, the cat is mobilized for total body response to the threat.
How can the brainstem accomplish all of these responses? It was mentioned earlier that various neurons in the brainstem are nodal points between sensory input and motor output. This is mainly evidenced in the brainstem reticular formation neurons, which receive collateral sensory inputs from all levels of the spinal cord and from such diverse sources as skin receptors of the body and head, Golgi tendon organs, aortic and carotid sinuses, several cranial nerves, olfactory organs, eyes, and ears, in addition to extensive inputs from various other brain regions, particularly the neocortex and limbic system (Starzl et al. 1951). Such input can be a major influence on behavior, which makes the brainstem core neurons ideally situated to monitor and respond to a variety of stimuli that can be biologically significant. For example, the cortical and limbic-system activities that are associated with the distress of a newly weaned puppy probably supply a continuous barrage of impulses to the brainstem, which in turn continually excites the cortex to keep the pup awake and howling all night. The role of the brainstem core in these arousing responses can be demonstrated by direct electrical stimulation at many points within the brainstem reticulum. Such stimulation activates the neocortex (indicated by low-voltage, fast activity [LVFA] in the EEG), the limbic system (rhythmic 4–10/s [theta] activity in the hippocampus), and postural tone (increased electrical activity of muscles). Additionally, many visceral activities are activated via spread of brainstem core excitation into the hypothalamus. All readiness response components seem to be triggered from the brainstem core and some of its embedded nuclei (Hobson and Brazier 1980; Steriade and McCarley 1990). Evidence that the ARAS performs an important function in readiness includes: (1) humans with lesions in the brainstem core are lethargic or even comatose, (2) surgical isolation of the forebrain of experimental animals causes the cortex to generate an EEG resembling that seen in sleep, (3) direct electrical stimulation of the brainstem core has unique abilities to awaken sleeping animals and to cause hyperarousal in awake animals, and (4) brainstem core neurons develop a sustained increase in discharge just before behavioral and EEG signs of arousal. Some recent studies have implicated cholinergic neurons in the pons in the EEG arousal component of the readiness response. These neurons appear to be under tonic inhibitory control of adenosine, a neuromodulator that is released during brain metabolism. This may relate to the stimulant properties of caffeine and theophylline, which act by blocking adenosine receptors. Note that, in addition to the bodily activation, if one is asleep at the time of the stimulus, the brain will be jolted into conscious awareness and prodded to be more aware, more attentive, and to think more effectively.
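Because the text above indexes brainstem-driven arousal by EEG signatures (cortical LVFA, hippocampal 4–10/s theta), a brief sketch of how such band measures are computed may help readers unfamiliar with EEG analysis. The Python snippet below is purely illustrative: the signal is synthetic, and the sampling rate and band limits are my own assumptions rather than values taken from any of the studies cited here.

# A minimal sketch (not from the book) of how EEG "activation" can be quantified:
# compare power in the slow delta band against theta (4-10/s) and faster activity.
import numpy as np
from scipy.signal import welch

fs = 250.0                                   # assumed sampling rate, Hz
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)

# Synthetic "hippocampal" trace: a 7-Hz theta rhythm plus broadband noise.
eeg = np.sin(2 * np.pi * 7 * t) + 0.5 * rng.standard_normal(t.size)

f, pxx = welch(eeg, fs=fs, nperseg=1024)     # power spectral density
df = f[1] - f[0]

def band_power(lo, hi):
    """Integrate the PSD between lo and hi Hz."""
    mask = (f >= lo) & (f < hi)
    return pxx[mask].sum() * df

print("delta (1-4 Hz):", band_power(1, 4))
print("theta (4-10 Hz):", band_power(4, 10))   # dominant here, as in an aroused hippocampus
print("fast (20-50 Hz):", band_power(20, 50))

In a real recording, the relative dominance of these bands, rather than their absolute values, is what distinguishes an activated from a deactivated EEG.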
Triggering Consciousness

The consciousness that such anatomy can generate has to be switched on. Otherwise, we would remain in perpetual sleep or coma. Once triggered, consciousness is dynamic, bobbing up and down like a raft in an ocean of ideas and feelings. The processes by which consciousness is sustained are distinct from those that trigger it.
As in waking from sleep or from anesthesia, consciousness can just "pop up." Surely, something must trigger this. The suddenness may be an illusion, in that the activation process could have taken longer than we think, but we are too groggy to remember what happens in the groggy state. It may be akin to the problem of remembering dreams. Physiological monitoring can show that you had them, but you often can't remember what they were about. What triggers consciousness, whether from waking in the morning or from emerging from anesthesia? As I just explained, increased activity from the brainstem triggers behavioral readiness and consciousness. What increases activity in the brainstem core? Sensory input certainly does. That is why it is hard to go to sleep in a noisy and bright environment. Or when you wake up in the middle of the night worrying about a personal problem or thinking about a work task, all the conscious mental activity keeps you awake because the neocortex is reciprocally connected to the brainstem and keeps re-exciting it. In general, there is a barrage of brainstem activity immediately prior to any form of arousal. I have recorded such antecedent barrages of multiple-unit activity during stimulus-induced behavioral arousal in the reticular formation of rats and rabbits, species that presumably don't have robust consciousness because of their poorly developed cortex. Reduction of brainstem activity, in turn, correlates with decreased levels of arousal that may lapse into coma. Consider the possibility that conscious mind is not so much triggered as it is released. Arousal seems to be produced by activation of the ARAS in an indirect way. Though the original idea was that consciousness results from a global excitation of the neocortex, there is clear evidence that the excitation is indirect and results from a release from inhibition (Yingling and Skinner 1977).
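The claim that a "barrage" of reticular multiple-unit activity precedes arousal can be operationalized as a simple comparison of spike counts in a pre-arousal window against a baseline window. The sketch below uses simulated Poisson spike counts and a crude z-score; the firing rates, window sizes, and test are hypothetical, chosen only to illustrate the kind of comparison involved, not to reproduce any actual experiment.

# Hypothetical sketch: does multi-unit firing in the window just before arousal
# exceed a quiet baseline, as described for reticular-formation recordings?
import numpy as np

rng = np.random.default_rng(1)
bin_ms = 100
baseline = rng.poisson(lam=8, size=50)       # spike counts per 100-ms bin, quiet period
pre_arousal = rng.poisson(lam=14, size=10)   # the last second before the arousal event

mean_b, sd_b = baseline.mean(), baseline.std(ddof=1)
z = (pre_arousal.mean() - mean_b) / (sd_b / np.sqrt(pre_arousal.size))

print(f"baseline rate: {mean_b / (bin_ms / 1000):.1f} spikes/s")
print(f"pre-arousal rate: {pre_arousal.mean() / (bin_ms / 1000):.1f} spikes/s")
print(f"z-score of the antecedent increase: {z:.2f}")  # a large z suggests a sustained barrage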
Where Consciousness Comes from

I still haven't said exactly where consciousness comes from. Presumably, in lower mammals at least, the ARAS may trigger arousal and the readiness response without much accompanying consciousness. How does an activated brainstem-cortex call up its conscious mind? Is it automatic? We certainly don't seem to have voluntary control over this. We can't just say: "I want my conscious mind to go away for a while." The closest thing we can say is, "Now I lay me down to sleep." By definition, when we do go to sleep, we lose consciousness (dreaming is an exception that I will explore later in Chap. 8). Associated with the loss of consciousness is a decline in brain metabolism. A common interpretation for why we sleep is that it provides rest for the brain. In other words, consciousness makes demands on the brain. Direct measures of glucose consumption by the whole brain of humans have shown that overall brain metabolism decreases by some 25% and oxygen consumption declines by about 16% during sleep. The exception is during dream sleep, when metabolism increases (Boyle et al. 1994).
I would say that calling up conscious mind is automatic. When we wake up after a night's sleep, conscious mind just appears. The same is true when you come out of anesthesia. It is an amazing thing. We go from oblivion to suddenly being aware – and aware that we are aware. If either the brainstem core or neocortex is non-functional, as during anesthesia or brain damage, there will be no conscious mind. If your body is alive in such situations, that in itself is proof that your non-conscious mind that controls your heart and breathing is still operational. We can only assume that the subconscious mind is also still operating, but perhaps at an impaired level varying with the amount of brain damage. Other relevant "ancient" experiments from the Moruzzi/Magoun era include studies revealing that sensory pathways are still intact in the anesthetized brain and that the propagation of sensory information may even be greater than in the conscious state (French and King 1955). Recording electrodes were implanted into various points along the sensory pathways and the thalamus. Responses to stimuli showed not only that sensory activation could still evoke brain responses, but that the magnitude of response was often larger than could be obtained without anesthesia. In other words, the sensory information was received, but obviously not perceived. This, by the way, is a key point often missed even by scientists: pain occurs only in the consciousness. A species that cannot generate consciousness can respond to noxious stimuli, but such animals cannot feel pain. Modern research has made it abundantly clear that consciousness emerges from distributed processes within multiple regions of the cortex. This has been confirmed by brain imaging and by brainwave coherence during consciously performed tasks (see Synchronization in Chap. 4 and EEG Coherence and Consciousness in Chap. 6). A consensus is starting to build among many neuroscientists that synchronization of electrical activity among widely distributed parts of the cortex is central to the conscious state, though in ways that are by no means understood. Currently, the idea is that neurons in shared oscillatory circuits have an impact that stands out from the unsynchronized activity of other neurons. Whatever is being held in consciousness at the time persists only so long as the linked oscillation persists. Most recently, emphasis is being placed on high-frequency oscillations in the cortex, in the beta and gamma range, as the index and probable cause of the enhanced thinking capabilities that occur as part of the readiness response (Uhlhaas et al. 2009). Synchrony among multiple oscillatory circuits also seems to be a core operation of higher-level thinking.
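The synchronization evidence mentioned here is usually quantified as coherence between pairs of recording sites. The following Python sketch computes magnitude-squared coherence between two synthetic "cortical" channels that share a 40-Hz gamma component; the data and parameters are invented for illustration and are not drawn from the studies cited above.

# Sketch (synthetic data): coherence between two channels that share a gamma rhythm.
import numpy as np
from scipy.signal import coherence

fs = 500.0
t = np.arange(0, 20, 1 / fs)
rng = np.random.default_rng(2)

shared_gamma = np.sin(2 * np.pi * 40 * t)            # common 40-Hz oscillation
ch1 = shared_gamma + rng.standard_normal(t.size)     # each channel adds its own noise
ch2 = 0.8 * shared_gamma + rng.standard_normal(t.size)

f, cxy = coherence(ch1, ch2, fs=fs, nperseg=1024)

idx_40 = np.argmin(np.abs(f - 40))
idx_120 = np.argmin(np.abs(f - 120))
print("coherence near 40 Hz:", round(cxy[idx_40], 2))    # close to 1: shared oscillation
print("coherence near 120 Hz:", round(cxy[idx_120], 2))  # near 0: no shared activity

High coherence confined to a particular band between distant sites is the kind of measurement behind the synchronization claims discussed in Chaps. 4 and 6.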
References

Alle, H., Roth, A., & Geiger, J. R. P. (2009). Energy-efficient action potentials in hippocampal mossy fibers. Science, 325, 1405–1408.
Bennett, M. R., & Hacker, P. M. S. (2003). Philosophical foundations of neuroscience. Malden: Blackwell Publishing.
Boyle, P. J., et al. (1994). Diminished brain glucose metabolism is a significant determinant for falling rates of systemic glucose utilization during sleep in normal humans. The Journal of Clinical Investigation, 93(2), 529–535. doi:10.1172/JCI117003.
Buzsáki, G. (2006). Rhythms of the brain. Oxford: Oxford University Press.
Colgin, L. L., et al. (2009). Frequency of gamma oscillations routes flow of information in the hippocampus. Nature, 462, 353–357. doi:10.1038/nature08573.
Freeman, W. J. (2009). Consciousness, intentionality, and causality. In S. Pockett, W. P. Banks, & S. Gallagher (Eds.), Does consciousness cause behavior? (pp. 73–105). Cambridge: MIT Press.
French, J. D., & King, E. E. (1955). Mechanisms involved in the anesthetic state. Surgery, 38, 228–238.
Gangestad, S. W., & Simpson, J. A. (Eds.). (2007). The evolution of mind. New York: Guilford.
Hobson, J. A., & Brazier, M. A. B. (Eds.). (1980). The reticular formation revisited. New York: Raven Press.
Izhikevich, E. M. (2007). Dynamical systems in neuroscience (pp. 309–310). Cambridge: MIT Press.
Jaynes, J. (1972). The origin of consciousness in the breakdown of the bicameral mind. New York: Houghton Mifflin.
Klemm, W. R. (1990). The readiness response. In W. R. Klemm & R. P. Vertes (Eds.), Brainstem mechanisms of behavior (pp. 105–145). New York: Wiley.
Klemm, W. R., & Vertes, R. (1990). Brainstem mechanisms of behavior. New York: Wiley.
Lipton, B. (2005). The biology of belief. Santa Rosa: Mountain of Love/Elite Books.
MacLean, P. D. (1990). The triune brain in evolution: Role of paleocerebral functions. New York: Springer.
Moruzzi, G., & Magoun, H. W. (1949). Brain stem reticular formation and activation of the EEG. EEG Clinical Neurophysiology, 1, 455–473. (Reviewed in Moruzzi, G., & Magoun, H. W. (October 5, 1981). Current Contents, untitled.)
Mukamel, R., et al. (2005). Coupling between neuronal firing, field potentials, and fMRI in human auditory cortex. Science, 309, 951–953.
Niessing, J., et al. (2005). Hemodynamic signals correlate tightly with synchronized gamma oscillations. Science, 309, 948–951.
Rogers, B. P., et al. (2007). Assessing functional connectivity in the human brain by fMRI. Magnetic Resonance Imaging, 25(10), 1347–1357. doi:10.1016/j.mri.2007.03.07.
Ropper, A. H. (2008). Chapter 268: Coma. In A. S. Fauci et al. (Eds.), Harrison's principles of internal medicine (17th ed.) [electronic version]. New York: McGraw-Hill.
Shermer, M. (2000). How we believe. New York: Holt.
Starzl, T. E., Taylor, C. W., & Magoun, H. (1951). Collateral afferent excitation of the reticular formation of the brainstem. Journal of Neurophysiology, 14, 479–496.
Steriade, M., & McCarley, R. W. (1990). Brainstem control of wakefulness and sleep. New York: Plenum.
Uhlhaas, P. J., et al. (2009, July 30). Neural synchrony in cortical networks: History, concept and current status. Frontiers in Integrative Neuroscience. doi:10.3389/neuro.07017.2009.
Vul, E., et al. (2009). Puzzlingly high correlations in fMRI studies of emotion, personality, and cognition. Perspectives on Psychological Science, 4(3), 274–290.
Wada, Y., et al. (1998). Aberrant functional organization in schizophrenia: Analysis of EEG coherence during rest and photic stimulation in drug-naive patients. Neuropsychobiology, 38(2), 63–69.
Yingling, C. D., & Skinner, J. E. (1977). Gating of thalamic input to cerebral cortex by nucleus reticularis thalami. In J. E. Desmedt (Ed.), Attention, voluntary contraction and event-related cerebral potentials.
Progress in clinical neurophysiology (Vol. 1, pp. 70–96). Basel: Karger.
3 Kinds of Thought
Non-conscious Thought

Non-conscious thought is the kind that is not accessible to consciousness. It is the kind that is associated with spinal and cranial nerve reflexes and with various primitive brainstem operations governing such things as breathing, heart rate, hormone functions, and certain stereotyped movements (Klemm and Vertes 1990). Why then consider non-conscious thought in a book that purports to explain conscious mind? Why, indeed, consider this as thought at all? First, non-conscious mind technically fits my definition of mind. Second, non-conscious mind performs many vital functions in an automatic fashion, thus reducing the information-processing burden placed on the other two kinds of mind. Third, non-conscious thought is achieved by processes that are amenable to study, results of which have enabled scientists to understand many of the core ideas of how the nervous system works (Klemm 2008a). Finally, I think non-conscious mind is the default state of animal existence – including humans. Brains are designed to reliably produce adaptive responses to environmental conditions by way of non-conscious reflexes and control systems. It takes special neural circuitry, available only in the neocortex of mammals and optimized in humans, to produce consciousness and its capacity for the highest level of complex brain function.
Spinal Cord

I contend that a simple knee-jerk reflex is a kind of thought. This seems to defy common sense. But bear with me, because exploring this idea is crucial to understanding what a thought is, and I will later attempt to explain conscious thought on the basis of what we know about non-conscious thought. Thought certainly does not have to be something our brain generates in the consciousness. We can have subconscious thought, as for example emotional kinds of thought that we may not be aware of and cannot explicitly describe, yet
nonetheless affect our beliefs, attitudes, feelings, and behavior. We can have thoughts in our dreams, although in my view dreams are thoughts that you are consciously aware of during the dream even though your brain is mostly shut off from outside input and movements that would normally be associated with the thought. In the simple knee-jerk case, "thought" consists of a train of nerve impulses arising from nerve fibers in the patellar tendon of a muscle group in front of the thigh. The impulses travel up the nerve to the spinal cord, where they activate nerve cells that project back to that same muscle group to make those muscles contract. Thus, you might say that the spinal cord thinks, non-consciously of course: "My thigh-muscle tendon has been stretched and to get my leg back to normal position, I must contract the thigh muscles." Now that may not seem like much of a "thought." But consider a slightly more complex spinal reflex, called the flexion reflex. An example of this reflex is when you step on an object that causes pain, like a tack or nail in a loose board. Before you even generate a conscious thought in your brain that "this hurts," your spinal cord is already processing the painful information. Nerves in the foot are activated to send impulses to the cord, where these in turn activate nerves that go to the flexor muscles on the back side of the leg. When these muscles contract, your leg flexes and thus moves away from the noxious source. If your leg had extended, an opposite behavioral effect would have occurred in which you would have maintained the leg in contact with the noxious source rather than withdrawing from it. This so-called "withdrawal reflex" results then from a thought, all non-conscious, which says in effect: "I have stepped on something harmful and to keep from feeling prolonged pain, I must withdraw my leg from the tack. To do that, I must activate the muscles that can flex the leg and make the leg pull away from the noxious stimulus." But this thought, primitive as it is at this stage, continues to develop. At almost the same instant, the leg opposite to the injured foot will probably extend. If the opposite leg does not extend, you might fall down when you flex the leg with the injured foot. Extending the opposite leg also makes it more likely that flexing the leg of the injured foot will actually achieve withdrawal from the noxious stimulus. So, now our non-conscious thought processes, all still mediated in the spinal cord, have been extended to include the instructions: "If I don't extend my opposite leg while flexing the leg with the injured foot, I will fall down, so I will tell the extensor muscles of the opposite leg to contract." The thought continues to grow, as the nerve fibers that brought the painful information into this spinal network also send this information, in the form of nerve impulses, into the brain. The information is first routed to a particular part of a structure called the thalamus, which is topographically organized so that the brain knows which part of the body the information is coming from. The brain at this point now knows which foot has been injured. Information is then sent to the part of the brain that is necessary to think at a conscious level, the cerebral cortex. Moreover, the part of the brain that gets this information is also topographically organized so that it too knows which foot is injured. Now the brain knows at a conscious level which foot is injured.
Moreover, the brain now realizes that this injury “hurts.” At this point non-conscious thought has melded into conscious awareness.
Other parts of the brain may now get activated to develop the thought further. For example, the sensory nerve fibers in the spinal cord that were carrying noxious-stimulus information to the thalamus have branches that go to a large group of small neurons in the central core of the brainstem known as the reticular formation. As explained earlier with the “Readiness Response,” these neurons have a larger thought function of activating the entire cerebral cortex. In the case of our primitive pain reflex, thought now expands to include making the brain more consciously alert – “pay attention and look out for further hazards.”
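The withdrawal reflex just described can be caricatured as a fixed input-to-output mapping that needs no cortex at all, which is the sense in which the spinal cord "thinks." The toy Python function below is my own illustration of that mapping; it is not anything taken from the neuroscience literature, and the command strings are invented for the example.

# Toy model of the spinal withdrawal reflex with crossed extension: a fixed,
# non-conscious mapping from a noxious input to motor commands.
def withdrawal_reflex(noxious_side: str) -> dict:
    """Return motor commands for a painful stimulus to the left or right foot."""
    other = "right" if noxious_side == "left" else "left"
    return {
        noxious_side: "contract flexors (withdraw the injured leg)",
        other: "contract extensors (support the body so you do not fall)",
        "ascending branch": "collaterals relay the event upward for conscious pain and alerting",
    }

print(withdrawal_reflex("left"))

The point of the caricature is that the "decision" to withdraw one leg and stiffen the other is fully specified by the wiring; consciousness is only informed after the fact.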
Brainstem

The brainstem at its lower end (the medulla) is continuous with the spinal cord. The medulla houses cranial nerves that influence certain vital functions, such as brainstem reflexes, breathing, and cardiovascular functions. While brain structures like the cerebral cortex may seem more important because they are associated with consciousness and the higher-level, complex thinking that distinguishes humans, their functions would be impossible without properly functioning non-conscious mind structures. The brainstem regulates a wide range of vital functions, such as heart rate, blood pressure, certain digestive functions, and orientation to external stimuli. Malfunctioning of these reflexes is often seen in comatose patients, and examination of malfunctioning reflexes can be used to determine what damaged brain anatomy may be inducing coma. The pupil reflex, which causes the pupils to constrict when exposed to light, is especially useful for this purpose. A specific reflex known as the spino-bulbo-spinal reflex is also governed by the brainstem. This reflex involves reciprocal relations between the spinal cord and brainstem that provide a means of orienting to the environment. When a dorsal spinal nerve root is stimulated, this stimulus ascends to the brainstem and then back through the motor roots of the spinal cord. Urination is a familiar process that is a spino-bulbo-spinal reflex. Signals from the bladder indicating fullness pass through the spinal cord to the brainstem, which then sends a signal back down to the bladder to void. The "myoclonic jerk" is another example of a spino-bulbo-spinal reflex. Whether you know it or not, you are probably familiar with this reflex; it is a quick and involuntary muscle twitch most often experienced while falling asleep and appearing often during dreaming. There is another brainstem reflex that I have enjoyed studying over the years (Klemm 1990), known as the immobility reflex (IR). This is a stimulus-induced behavioral inhibition that used to be called "animal hypnosis" or "death feint." It has also been called "playing possum," because opossums are especially prone to do this. The response is a global constellation of reflexes that suppress the usual spinal reflexes and impose whole-body immobility. The system is headquartered in the brainstem. At the same time that immobility is triggered, the brain is not shut down, and in fact is activated. As a convenient analogy, think of a car engine racing along while the clutch keeps the wheels from turning. A similar function is
produced during dreaming by part of the brainstem that suppresses movement while allowing the forebrain to generate dreams. Aside from vital functions and reflexes, the brainstem exhibits regulating roles in a diverse array of other functions that relate to conscious thought. One of these is pain perception (Gebhart and Randic 1990). Pain, of course, can only be perceived in the consciousness, but to regulate pain perception, localized areas of the brainstem can modulate the pain sensations that are transmitted from the spinal cord. Certain parts of the brainstem have also been shown to generate natural analgesia by neurons that release endogenous opiates (endorphins). The two types of sleep, known as slow-wave sleep (SWS) and rapid eye movement (REM) sleep, can also be modulated by parts of the brainstem (Vertes and Robert 1990). Clusters of neurons that promote SWS have been identified in the brainstem in stimulation experiments. Centers with roles in REM generation have also been found in the brainstem, as well as areas that modulate REM sleep, including the region responsible for myoclonic twitches. Sleep regulation and some respiratory control are influenced by the middle section of the brainstem, known as the pons (located just under the cerebellum). The pons also serves as a connection point between the brainstem and cerebellum, and acts as a relay station between the spinal cord and cerebellum. The uppermost portion of the brainstem is the midbrain, which may be involved in orientation and sexual control functions. Running through the center of the brainstem is the reticular formation, which is also involved in control of vital functions, as well as orientation, environmental response functions, sleep regulation, and modulation of painful stimuli.
Other Functions of the Non-conscious Mind

The maintenance of balanced bodily functions, known as homeostasis, is also partially attributable to the widespread controls of the brainstem. Homeostatic systems include control over hormones, especially those under control of the master gland of the body, the pituitary, which is directly regulated by the anterior part of the brainstem, the hypothalamus. Another important non-conscious-mind control system operates via nerves that belong to the so-called "autonomic nervous system," also arising out of neurons in the hypothalamus. This system regulates the function of most viscera, mobilizing their function when needed and giving the organs a "rest" whenever possible. We even have non-conscious functions from nerve cells embedded in our internal organs. Some of these control the muscles in our arteries and digestive tract. Others even exist in organs such as the ovary and the liver. Those functions are not fully understood, but researchers have recently discovered that neurons in the liver mediate signaling interactions between the liver and the pancreas (Imai 2008). Obesity activates a protein in the liver that induces proliferation of insulin-producing cells in the pancreas. This effect is mediated by neurons in the liver that are incorporated into an obviously non-conscious inter-organ communication system.
Autonomic Nervous System (ANS)

Visceral control by the ANS is accomplished non-consciously by control centers located mostly in the hypothalamus and brainstem. These control centers exert neural control over visceral organs such as the heart and digestive tract, which have intrinsic contractions and secretions for proper functioning. Most visceral organs receive a dual and antagonistic innervation (Fig. 3.1). This arrangement allows viscera to be activated or deactivated, and the nervous system controls normally operate to produce a homeostatic balance of activity. Both the smooth muscle and the glands of viscera are subject to this control.
[Fig. 3.1 diagram: autonomic outflow from the hypothalamus, medulla, and spinal cord to the eye, salivary glands, heart, lungs, spleen, digestive tract, skin, adrenal gland, bladder, and genitals]
Fig. 3.1 Neural control systems for visceral function. Note that sympathetic pathways (emerging from the middle region of the spinal cord) have synaptic junctions outside the brain and spinal cord, either in a series of ganglia (cell clusters) lying alongside the spinal cord or in certain ganglia in the thoracic and abdominal cavities. The adrenal gland is an exception, in that its nervous tissue is the functional equivalent of a group of postsynaptic neurons. Parasympathetic neurons (emerging from the brainstem and the caudal end of the spinal cord) have their synapses outside the brain, located in the target organs themselves (Reprinted from Klemm 1996)
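The push-pull arrangement of dual innervation can be thought of as a lookup: each organ carries two opposing instructions, and the prevailing state of the organism decides which one is expressed. The short Python sketch below is my own illustration of that idea, with only a few organs included and the effect strings taken from Table 3.1 below; it is not a physiological model.

# Toy illustration of dual, antagonistic innervation: the current state selects
# which of each organ's two opposing instructions prevails.
DUAL_INNERVATION = {
    "heart":           {"sympathetic": "rate increased",        "parasympathetic": "rate decreased"},
    "lungs":           {"sympathetic": "bronchioles dilated",   "parasympathetic": "bronchioles constricted"},
    "digestive tract": {"sympathetic": "peristalsis decreased", "parasympathetic": "peristalsis increased"},
    "eyes":            {"sympathetic": "pupils dilated",        "parasympathetic": "pupils constricted"},
}

def visceral_commands(emergency: bool) -> dict:
    """Pick the sympathetic column in an emergency, the parasympathetic column otherwise."""
    division = "sympathetic" if emergency else "parasympathetic"
    return {organ: effects[division] for organ, effects in DUAL_INNERVATION.items()}

print(visceral_commands(emergency=True))    # "fight or flight" settings
print(visceral_commands(emergency=False))   # "rest and digest" settings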
A significant portion of the system lies outside the brain and spinal cord. There are two divisions of the ANS, poorly named as sympathetic and parasympathetic. As a simplification, they can be regarded to have mutually opposing action, with the sympathetic division designed to mobilize the body for emergency, life-threatening conditions, and the parasympathetic division designed to support nurturing, regenerative physiological processes (Table 3.1). The sympathetic system is commonly called the “fight” or “flight” system, while the parasympathetic system could be called the rest-and-digest system. In both divisions of the ANS, the output pathway from the central nervous system has two synaptic relays. The first uses acetylcholine as a neurotransmitter that communicates signals from the nerve cell that releases it to its target smooth muscle or gland. The second relay in the parasympathetic division also uses acetylcholine, but the second relay in the sympathetic division uses norepinephrine, a derivative of epinephrine (adrenalin). In many synapses, there is also co-release of peptide transmitters that can modulate the kinetics, duration, and strength of action on target muscles and glands. Would your body need to do the same things when you are calmly eating a steak and when you recognize that a wild animal were about to attack you? Of course, when under attack you would need sympathetic activation of such visceral functions as increased heart rate, blood pressure, air flow through the lungs, and adrenalin release from the adrenal gland. Note from the table of functions that appropriate blood flow changes also occur. Flow increases in the heart and skeletal muscle, where it is needed, and decreases in the skin, digestive tract, and spleen, where it is Table 3.1 Summary of functions of the autonomic nervous system Sympathetic (emergency) Salivation of mucus Blood vessels constricted Peristalsis decreased Sphincters contracted Secretions decreased Rate increased Contractile force increased Coronary arteries dilated Blood vessels dilated Blood vessels constricted Hair erected Sweat, from palms Bronchioles dilated Contracted Wall dilated Sphincter contracted Ejaculation Pupils dilated Epinephrine released
Digestive Organs
Heart
Skeletal muscle Skin Lungs Spleen Bladder Genitals Eyes Adrenal gland
Parasympathetic (routine) Salivation, watery – Peristalsis increased Sphincters relaxed Secretions increased Rate decreased Contractile force decreased – – – – Sweat, general Bronchioles constricted Dilated Wall contracted Sphincter relaxed Erection Pupils constricted –
Non-conscious Thought
69
not needed. Note also that the hormone, epinephrine, has many of the same functions as the sympathetic division. It thus acts as a reinforcer to prolong and intensify the effects triggered in the nervous system. Ever notice how long it takes you to calm down after being frightened or in an emergency? Conversely, during non-emergency situations, parasympathetic influences dominate, promoting such maintenance functions as digestion, rest for the heart, sexual activity, and appropriate re-distribution of blood. Unlike the case with skeletal muscle control, there is no particular advantage in having millisecond speed, because smooth muscle and glands can’t respond that fast anyway. Second, the action needs to involve diffuse targets. For example, biological emergencies often call for blood to be diverted away from the digestive tract to skeletal muscles. It makes little sense to divert blood away from just one region of the digestive tract. To be most effective, the action needs to occur all at once, not spread out in sequence over time. Finally, this is the kind of response that you don’t want to have turned off immediately- most biological emergencies last longer than milliseconds. In the case of skeletal muscle action, for example, there may be a need for a continuously changing pattern of vigorous muscle contractions, which can only occur on a background of a sustained availability of increase in blood supply. Hypothalamic-regulated neuroendocrine response is intrinsically a gradual process, so it takes care of this requirement. Neurons of the sympathetic system are clustered together in ganglia that lie outside of the brain and spinal cord. Input to these ganglia come from certain cranial nerves and from nerves arising out of the thoraco-lumbar part of the spinal cord. The ganglion cells (called postsynaptic because they are the last ones in the output chain), send axons that distribute widely to most of the visceral organs: heart, lungs, digestive tract, urogenital tract. The axon terminals dump neurotransmitter (norepinephrine) at the junctions with smooth muscle cells and glands. The adrenal gland is an interesting exception in that the postsynaptic neurons constitute the adrenal medulla itself and the secretory product is epinephrine, which is released into the blood stream (and thus, by definition, is a hormone). In the parasympathetic division of the autonomic nervous system, the postganglionic neurons are located in the visceral organs themselves. Why is this an advantage? For tubular organs, such as gut and bladder, the postganglionic neurons form more or less continuously circumferential layers, so that the whole gut can be activated more coherently, as the neurons can uniformly spread their excitation of muscle and gland secretions. This would be harder to do if the neurons were located in the brain or spinal cord. In both systems there are feedback signals, neuronal and hormonal, that inform both brain and local reflex circuits of the consequences of the autonomic output. Because of these controls, animals can adjust rapidly and automatically to environmental conditions, mobilizing for “fight or flight” when needed or relaxing for bodily maintenance functions. Neuroendocrine Response Neuroendocrine response involves hypothalamic- influenced hormonal control of neurons, through regulation of the hormones they release. Because the hormones
released as a result of neuroendocrine response can have nervous, behavioral, or emotional impacts, both the non-conscious and subconscious minds can be greatly influenced by them. Correspondingly, the brain region most involved with neuroendocrine response, the hypothalamus, sits at a blurred junction between the non-conscious and subconscious minds. It has important projections into the brainstem, but most of its influence is accomplished through its projections into an endocrine organ, the pituitary gland, which has significant impacts on both the non-conscious and subconscious mind. As such, some of the effects of hypothalamic-influenced neuroendocrine response will be deferred until the following section, "Subconscious Thought," but for now let us briefly consider their influence on the non-conscious. The visceral functions influenced or completely regulated by neuroendocrine response include sex-hormone regulation, immune response, sleep-wake cycles, body temperature regulation, fluid homeostasis, thirst, and appetite. Overall, such regulation not only provides a mechanism for some of the neural control necessary to drive the ANS, but it also helps accomplish the essential task of maintaining a state of homeostatic balance within the body.

Homeostasis and the Endocrine System

Homeostasis encompasses coordination of the respiratory, digestive, circulatory, and urogenital systems – the same systems which, not by coincidence, are involved in autonomic nervous system responses. Correspondingly, homeostasis is subject to the influence of brainstem and hypothalamic control centers just as the ANS is. There are two main mechanisms for maintaining homeostasis: control over hormone release and direct nervous system control. Control over hormone release often takes the form of responses by endocrine glands, while direct nervous system control is accomplished through a variety of mechanisms that include incorporation of inhibitory neurons within neuronal circuits and the ability to self-tune the sensitivity of neuronal firing so that it remains constrained within optimal ranges. Within the endocrine system, the hypothalamus is particularly responsible for integrating internal and external stimuli to regulate the pituitary, which in turn governs the function of other major endocrine glands via tropic hormones. Feedback information from these glands and target tissues occurs in the form of biochemical and neuronal messages: these messages provide the internal stimuli that influence the brain and the hypothalamus in their regulatory control. Specific receptor-binding sites in the hypothalamus for circulating hormones constitute the exact mechanism of such message-dependent feedback, which can have both physiological and behavioral consequences. For example, injection of estrogen directly into the hypothalamus of cats that have had their ovaries removed can restore sexual behavior and even induce nymphomania. (Because this treatment does not immediately affect the ovary or uterus, it indicates a direct brain effect.) Another principle that this example illustrates is that the level of hormone may alter the probability and intensity of the associated behavior, but does not alter its form. Concerning homeostatic control of the brain itself, it should be obvious that neural activity within circuits must be constrained. Otherwise, circuits could become progressively excited to the point that the activity grows out of control, as indeed it
does during epilepsy. Epilepsy typically originates in neocortex or paleocortex. Normally, seizures are prevented because cortical circuits contain inhibitory neurons which are triggered into activity by excitatory neurons. Many circuits have inhibitory neurons that are activated in parallel with activity in excitatory neurons. Inhibitory neurons can feed back inhibition to neurons along the input pathway or they can “feed forward” to inhibit neurons at more distant points in the circuit. Under normal conditions, neural circuitry is often shaped by experience. As an experience is repeated, excitability grows to facilitate the relevant pathways. Yet, at some point, circuit stability is required and achieved through gain adjustment. Though this is a new field of inquiry, some ideas do seem valid. For one, neurons can self tune. That is, upon repeated experience they increase their firing, but only up to an optimum range. The gain within circuitry is adjusted as needed. For example, if input to the circuit is blocked, gain in the system increases. If input is increased, gain is decreased. These gain adjustments seem to occur in the synapses, in particular by adjustments in the density of molecular receptors on the membranes of target neurons. Clearly, homeostasis is often a joint effort of the nervous and endocrine systems (Fig. 3.2). Neurohormones exemplify this intrinsic link between these two systems: feedback from the nervous system can influence hypothalamic release of hormones, but in turn these hormones and neurohormones also act upon the nervous system when they are released into the blood. The diagram below illustrates this interrelatedness.
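The self-tuning and gain-adjustment idea sketched above amounts to a simple negative-feedback rule: nudge synaptic gain up when firing falls below a set point and down when it rises above it. The following minimal Python loop illustrates that rule with invented numbers; it is a caricature of homeostatic scaling, not a biophysical model, and the parameter names are my own.

# Minimal sketch of homeostatic gain adjustment: firing rate drifts back toward
# a set point whenever the circuit's input is reduced or increased.
def simulate(input_drive, gain=1.0, steps=200, learning_rate=0.05, target_rate=10.0):
    for _ in range(steps):
        rate = gain * input_drive
        # nudge the gain up when the rate is below target, down when above
        gain += learning_rate * (target_rate - rate) / input_drive
    return gain * input_drive

print("rate when input is weak (drive = 5):", simulate(5.0))     # gain rises until rate is ~10
print("rate when input is strong (drive = 20):", simulate(20.0)) # gain falls until rate is ~10

The same end point is reached from either direction, which is the essence of the stability the text describes: experience can strengthen pathways, but the gain adjustment keeps activity within a workable range.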
Fig. 3.2 Diagrammatic illustration of the various components of the brain and hormonal systems that collectively provide homeostasis for the body’s organ systems
Subconscious Thought

Many scientists believe that most of the brain's thinking is subconscious (see section "Free Will Debates" in Chap. 7). Obviously, that is hard to prove. Even so, we have a fair idea where much of the subconscious processing is performed. For emotions, the processing is in the limbic system, which includes the hypothalamus, septum, amygdala, hippocampus, certain portions of the thalamus, and three parts of the cerebral cortex (cingulate, piriform, and entorhinal). Another portion of the subconscious mind includes the basal ganglia (which include the globus pallidus, caudate nucleus, substantia nigra, and subthalamus). Yet another subconscious area is the cerebellum, which automates the coordination of body movements (Fig. 3.3).
Cerebellum

Composed of its own kind of folded and fissured sheet of cortex, the cerebellum is essential for coordinating our body's movements. It connects extensively with the spinal cord, and also with the cerebral cortex and the pons. Over half of the neurons in the CNS are found in the cerebellum, which allows it to conduct the extensive processing necessary for smooth and coordinated body movements. Balance is also extensively regulated by the cerebellum. The balance and movement impairments that result from alcohol consumption are due to the impairment it inflicts on cerebellar functioning. More permanent impairment occurs from cerebellar damage, such as in stroke. Certain cognitive functions, including attention and the processing of stimuli, may also be subject to cerebellar influence. Even intelligence may be somehow correlated with the cerebellum's functioning, as evidenced by the disproportionate reduction in cerebellar size that is associated with Down syndrome (Lawrence et al. 2005).
Fig. 3.3 Vertical slice through the brain, along the midline. Many important structures are lateral to the midline and cannot be seen in this plane
Limbic System

Immediately atop the brainstem lies the limbic system. Referring back to MacLean's triune brain concept discussed in the section on the Readiness Response, the limbic system is the second-most evolved brain system and correspondingly controls more complex functions than the basic, vital functions attributed to the primitive reptilian complex epitomized in the brainstem. The few parts of the cortex that are in the limbic system are of ancient origin and their microstructure differs from the "neo" cortex that is found in higher mammals, most notably primates. While the limbic system does control some of the more instinctual functions, like appetite, thirst, and sleep, it also manages emotion, memory, and homeostatic functions of the subconscious brain. Though many of the mechanisms of the limbic system are carried out below our awareness, the results of these mechanisms often inhabit our consciousness. For example, the limbic system plays a role in learning and memory. While we don't experience the neuronal changes that lead to learning or memory, once something has been learned or a memory has been formed, the new knowledge or recollection doesn't stay forever beyond the reach of our conscious mind. Instead, we can dip just below the surface of our consciousness, into our subconscious, when we wish to retrieve such things. If these memories are well learned, they may be retrieved subconsciously and thus they can influence thinking and behavior without our realization. Prejudices are a good example. Many of the limbic system operations that govern emotions are considered subconscious operations. Likewise, much of the basal ganglia operations in movement control and coordination are considered subconscious. Both limbic and basal ganglia operations, however, can be manipulated and trained by conscious thought. For example, bad attitudes can be supplanted by good attitudes through disciplined conscious-mind correction. Motor skills such as touch typing or throwing a football can be improved by practice and coaching from the conscious mind. Much of learning in general begins as a consciously willed operation (see section "Free Will Debates" in Chap. 7). Our emotions are constructed largely by circuit connections known as the "Papez" circuit. This circuit extends from the association regions of the neocortex through the cingulate gyrus along the midline of the cortex to the hippocampus and amygdala, and in turn to the mammillary bodies of the hypothalamus. We are not aware of the neuronal activity that is involved in the modulation of this circuit, though we may be consciously aware of the emotions that result. The hippocampus, amygdala, thalamus, and hypothalamus are considered the main structures of the limbic system, but the others also serve important functions. The septum mediates the driving force for the rhythmic, oscillatory EEG activity of the hippocampus known as "theta rhythm," which is associated with an aroused brain (I'll have much to say about this rhythm in Chaps. 5 and 6). The hippocampus is crucial to forming so-called explicit memory. It also contributes to emotional behavior, including stress regulation. Recent studies have shown that the advancement of Alzheimer's disease is associated with changes in hippocampus structure, which no doubt is a primary cause of the memory deficits associated with the disease.
Emotional behavior is influenced by the cingulate cortex, although the hippocampus and amygdala are the primary generators of emotion. The piriform lobe of the cortex functions in olfactory processing. Finally, the entorhinal cortex seems to aid memory formation and spatial location and provides a bidirectional transmission link with the neocortex. The amygdala and hippocampus may collaborate in the regulation of some emotions. For example, both the amygdala and hippocampus can stimulate release of corticotropin-releasing hormone (CRH) from the hypothalamus. This hormone regulates another hormone that is responsible for the release of cortisol, the so-called "stress hormone" that mediates stress responses. Most often, the amygdala is associated particularly with control of fear and anxiety. Damage to the amygdala correlates with decreased fear, as well as an overall flattened affect. Supporting evidence for this observation is seen in an often-cited study conducted by neuroscientists Heinrich Klüver and Paul Bucy (Bucy and Klüver 1938), wherein temporal lobe tissue, including the amygdala, was completely removed from the brains of monkeys. Unusual behaviors, including increased oral and sexual interest, increased interest in the observation of objects, and decreased display of fear or other affects, were all witnessed. Various subsequent studies have focused particularly on lesioning or stimulating only the amygdala. Lesioning or damage leads to the observations of fearlessness and flattened affect that were seen in the Klüver-Bucy study, as well as decreased aggression. Stimulation has the opposite effects of increasing fear, anxiety, and aggression, as well as attention. The thalamus is a group of many neuronal clusters along the midline of the brain, lying just in front of the brainstem and underneath the cerebral cortex. These clusters generally are topographically segregated for routing various sensations from specific parts of the body to the neocortex. When describing thalamic function, it is not uncommon to hear the thalamus referred to as a "relay station." It has earned this moniker because its primary function is to route and process sensory information received from topographically mapped spinal cord projections to the appropriate sensory cortices for analysis, apparently through parallel processing. Given that consciousness arises from the interaction of the cerebral cortex and the brainstem reticular formation, these relay duties make the thalamus an essential structure in the genesis of consciousness. Permanent loss of consciousness (coma) can result from damage to certain parts of the thalamus. Much like its role as an intermediary in the generation of consciousness, the role of the thalamus in the generation of emotions is that of a liaison. Its cortical connections deliver hypothalamic input to the subconscious mind for complex analysis and interpretation. Other functions are also governed by certain parts of the thalamus. Sleep, for instance, is characterized by relatively synchronized activity of billions of neurons, especially those located in the thalamus and the cortex. Also, its reciprocal connections with the cortex provide the anatomical substrate for oscillations, which we will discuss later. The hypothalamus is a small zone on the ventral surface of the brain. I mentioned before that this area contains several distinct clusters of neurons that are important
for regulating the visceral and hormonal functions that constitute the neuroendocrine response and maintain visceral homeostasis. Vital to such regulation are the hypothalamus’s interactions with the pituitary gland, particularly its use of “releasing factors.” A “releasing factor” is a neurohormone that alters the release of hormones from the pituitary. Such hormones should not be confused with neurotransmitters, which are used to transfer information between adjacent neurons. Neurohormones travel through the blood and adopt hormone-like functions, targeting specific organs. These releasing factors are widely used by the hypothalamus and directed toward the pituitary gland. The hypothalamus connects to a venous system that drains directly into the anterior pituitary gland and utilizes this system to distribute releasing factors. A number of hormones released from the anterior pituitary gland may be regulated by releasing factors, including adrenocorticotropic hormone, follicle-stimulating hormone, luteinizing hormone (the release of which can be stimulated or inhibited), and thyroid-stimulating hormone. Hormones released from the posterior pituitary gland, including oxytocin and vasopressin, can also be controlled by hypothalamic releasing factors. Because interactions between the hypothalamus and pituitary gland are so extensive, the two can be jointly referred to as the hypothalamic-pituitary axis.

The subconscious functions influenced or completely regulated by the neuroendocrine response affect diverse behaviors, ranging from copulatory behaviors to eating behaviors. It is not uncommon for the hypothalamus to exert multiple forms of influence in each of these domains. For example, in the case of eating behaviors, a study conducted by A. W. Hetherington and S. W. Ranson (1942) involved producing small lesions in different areas of the hypothalamus of rats. These experiments demonstrated that the lateral hypothalamus holds sway over appetite, while the ventromedial hypothalamus exerts some control over satiety. Rats that had their lateral hypothalamus lesioned developed symptoms of anorexia, whereas rats that had their ventromedial hypothalamus lesioned would eat to the point of obesity. Thus, different areas of the hypothalamus control appetite differently. Still other hypothalamically controlled mechanisms serve to regulate appetite and weight, such as hypothalamic processing of the hormone leptin, which is produced by adipose (fat) cells. Accumulation of adipose tissue causes leptin levels in the blood to rise. Leptin receptor molecules in the hypothalamus in turn initiate responses that lead to increased metabolic rate and decreased appetite (Kalra et al. 1999).

The hypothalamus teeters on the ledge between the non-conscious and the subconscious. While it does participate in many direct subconscious functions such as those above, it also has a very important role in the non-conscious control over the ANS. Its range of ANS control is quite broad, despite the fact that much of the ANS lies outside of the brain. Some of this regulation is accomplished indirectly through the same neuroendocrine responses that regulate the subconscious. These responses may affect endocrine tissues in the ANS or elsewhere. The neurohormonal control provided by the hypothalamus is vital, since neural control of autonomic viscera needs to be slower
to act, more undifferentiated, more divergent, and longer lasting than neural control of skeletal muscle. Generally, the neuroendocrine response allows for these types of control. ANS regulation can also be accomplished by means of direct projections of cells from the hypothalamus into structures dictating autonomic functions, such as the brainstem and spinal cord.
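The leptin example above is a classic negative-feedback loop, and the logic of such loops can be made concrete with a toy simulation. The sketch below is purely illustrative: the constants, units, and update rules are my own inventions, not physiological values; it shows only the qualitative idea that rising fat mass raises leptin, leptin suppresses appetite and raises metabolic rate, and the loop settles toward a set point.

```python
# Toy negative-feedback loop loosely inspired by leptin signaling.
# All constants and units are arbitrary illustrations, not physiological data.

def simulate_leptin_loop(days=60, fat_mass=12.0):
    history = []
    for day in range(days):
        leptin = 0.5 * fat_mass                    # more adipose tissue -> more leptin in blood
        appetite = max(0.0, 10.0 - 0.8 * leptin)   # hypothalamic response: leptin suppresses appetite
        metabolism = 4.0 + 0.3 * leptin            # and raises metabolic rate
        fat_mass += 0.1 * (appetite - metabolism)  # intake above expenditure adds fat, and vice versa
        history.append((day, round(fat_mass, 2), round(leptin, 2)))
    return history

for day, fat, lep in simulate_leptin_loop()[::10]:
    print(f"day {day:2d}: fat mass {fat:5.2f}, leptin {lep:5.2f}")
```

Running the sketch shows fat mass drifting toward the value at which appetite and metabolism balance, which is all a homeostatic set point amounts to in this simplified picture.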
Reward

Of particular interest to all of brain function, both conscious and subconscious, is the reward system, that part of the brain that mediates positive reinforcement to make us feel good, feel happy. This system is driven especially, but not exclusively, by neurons that release dopamine and norepinephrine as neurotransmitters. These neurons originate from the ventral tegmental area and the locus coeruleus, respectively, in the brainstem. With introspection, we can consciously become aware of the consequences of this reward system. That is, we can know when we feel good and what makes us feel that way.
James Olds (1922–1976)
What makes cocaine addictive? What gives us an emotional “high” after laughter? The answer is the same to both questions: both cocaine and laughter trigger the brain’s “reward system.” The reward center is responsive to a variety of positive reinforcers, which may vary from person to person. Among these reinforcers are money, special foods (like chocolate), addictive chemicals, praise, and countless others.
James Olds is credited with illuminating what we know about the “reward system,” which is contained in a so-called medial forebrain bundle that passes through the lateral hypothalamus. This system functions just as its name implies: stimulating it produces a “reward,” some pleasurable sensation that tends to cause us to seek out whatever stimulus causes it.

In the 1970s, Olds visited my lab and told me some things about his discovery of this system that I don’t think he ever said publicly. When I asked him how he made his discovery, he said that he and his research partners were studying the effects of electrical brain stimulation on the behavior of rats. Although the rats were tethered by a lightweight wire through which the stimulus was delivered to an electrode implanted in the brain, they were free to move about in an open field. One day, stimulation caused a rat to stop moving and look around, as if it were trying to find where the stimulus was coming from. Olds said the rat even looked as if the stimulus made him feel good. If the rat could smile, he would have. This made Olds think the stimulus might be pleasant and be a positive reinforcer.

Olds was trained in a psychology laboratory, so the idea of positive reinforcement came naturally to him. If the stimulus acted as a reinforcer, rats could be trained with it. The simplest way would be to deliver the stimulus only when the rat strayed into a particular part of the open field. This is similar to the operant conditioning strategy that B. F. Skinner made famous. Olds quickly discovered that he could make the rat stay in any corner of the field by delivering the stimulus only when the rat moved into that area. Rats quickly learned where Olds wanted them to go. They seemed to look for the stimulus and learned where they needed to go to get it.

It turned out that the electrode was accidentally misplaced in this particular rat. After sacrifice and histological examination of the brain, Olds found that the electrode was off target by a millimeter or so in the lateral hypothalamus, actually in a fiber tract region known as the medial forebrain bundle. From then on, in other rats, Olds made it a point to implant stimulating electrodes in this fiber tract. He also studied reinforcement learning more precisely by using operant-conditioned lever pressing. Rats studied in this way would work very hard at lever pressing to get delivery of their electrically stimulated “reward.” Some would lever press hundreds of times in a short session, to the point of fatigue.

Olds’ wife, Marianne, aided him in anatomically mapping the reward system as well as in determining its pharmacological properties. From these collaborations, it was determined that the neurotransmitter norepinephrine appeared to be directly involved in the functioning of the brain’s reward system by inhibiting adjacent areas of the brain. Many other substances can also be associated with reward pathways, such as amphetamines, cocaine, nicotine, and various other drugs.
Learning and memory were important research areas for Olds. His research on these topics reflects the intersection of his training as a psychologist and his interest in neurophysiology. Using a single-neuron recording method, Olds observed the neuronal changes in various brain areas of rats as auditory conditioned learning occurred. From these observations he derived a model for the sequence of classically conditioned learning and the corresponding brain pathway changes that occur at each step. First, when a rat initially shows interest in the stimulus, the hypothalamus and “emotional” centers of the brain change their activity. These changes are followed by alterations in the responses of the reticular activating system that correlate with growing arousal directed toward the stimulus. Movement activity and thalamic responses increase next, as purposive, conditioned behavior emerges. Finally, activity changes in the auditory and frontal cortices as conditioned behavior is fully achieved. This conditioned-learning model shows that different sections of the brain develop learning at different rates. These conditioning experiments also showed that the changes associated with conditioned learning occur only within the pathways associated with the learning cue (in the auditory experiment, this corresponds to the auditory cortex) and the unconditioned, natural “reward” pathways that are relied on to generate the conditioning.

Olds never received a Nobel Prize for his work, although many speculate that he would have if he had not died at a relatively young age. Certainly his research has resulted in many insights, not the least of which is the value of interdisciplinary research.

Sources: Olds, J. (1975). Mapping the mind onto the brain. In G. Adelman, J. Swazey, & F. Worden (Eds.), The neurosciences: Paths of discovery (pp. 375–400). Cambridge: MIT Press. Thompson, R. F. (2008). James Olds, May 30, 1922–August 21, 1976. Washington, DC: The National Academies Press. Retrieved July 10, 2008, from http://www.nap.edu/html/biomems/jolds.html
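Olds’ corner-training procedure amounts to a simple operant rule: stimulate whenever the animal happens to be in the target region, and the animal comes to spend more and more time there. The sketch below is a purely illustrative simulation of that contingency; the preferences, learning rate, and trial counts are invented for the example and are not taken from Olds’ experiments.

```python
import random

# Toy operant-conditioning model of corner training in an open field.
# The simulated rat wanders among four corners; rewarding "stimulation" in the
# target corner nudges up the probability of choosing that corner again.
# All numbers are illustrative assumptions, not experimental values.

def train(target=0, trials=500, learning_rate=0.05, seed=1):
    random.seed(seed)
    prefs = [1.0, 1.0, 1.0, 1.0]            # unnormalized preference for each corner
    visits_to_target = 0
    for _ in range(trials):
        r = random.uniform(0, sum(prefs))
        corner, acc = 0, 0.0
        for i, p in enumerate(prefs):        # sample a corner in proportion to preference
            acc += p
            if r <= acc:
                corner = i
                break
        if corner == target:                 # reinforcement delivered only here
            prefs[corner] += learning_rate   # positive reinforcement strengthens the choice
            visits_to_target += 1
    return prefs, visits_to_target

prefs, hits = train()
print("final corner preferences:", [round(p, 2) for p in prefs])
print("visits to the stimulated corner:", hits)
```

By the end of training, the preference for the stimulated corner dwarfs the others, which is the behavioral signature Olds exploited: the place where reinforcement is delivered is the place the animal keeps returning to.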
Positive reinforcement motivates behavior, both subconsciously and consciously. Brains of trained seals and children both learn through positive reinforcement, often, I presume, subconsciously. The way that humans can consciously access this reward system (I have no idea of the extent to which seals are conscious) comes from a tight coupling between the prefrontal cortex and the reward system. For example, stimulation of the prefrontal cortex can excite dopamine neurons in the ventral tegmental area, which contributes fibers to the medial forebrain bundle. Under non-stimulated conditions, Ming Gao and colleagues at Yale showed that 67% of the neurons in the reward area fired impulses spontaneously in a slow oscillation that was highly synchronous with the activity of prefrontal cortex neurons (Gao et al. 2007). Something was binding them together. This coherent firing was even
seen in anesthetized rats, and it could be abolished by transecting fiber tracts just behind the prefrontal cortex. Later in Chap. 6, I have a large section devoted to the role of synchronous activity. Here, I have introduced the issue of how the reward system interacts with higher thought processes in affecting behavior. Such interaction may well involve synchrony between the subcortical reward system and the cortex, though that has not been investigated.

Science writer Kathleen McGowan has raised the specter that the brain is hard-wired for bad behavior, such as the “seven deadly sins” (McGowan 2009). Such behavior occurs because it seems to be positively reinforcing. She points out that the brain must often deal with the conflict between what the reward system impels us to do and the conscious-mind veto operations of our better judgment. There is research indicating the operation of a “conflict detector” in the anterior cingulate cortex.
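As a rough illustration of what “highly synchronous slow oscillation” means operationally, the sketch below generates two noisy signals that share a common slow rhythm and measures their correlation. Everything here (the 0.5 Hz frequency, the noise levels, the zero-lag correlation index) is an invented example of the general idea, not the analysis used by Gao and colleagues; real spike-train analyses are far more involved.

```python
import numpy as np

# Illustrative synchrony measure between two noisy signals that share a common
# slow oscillation (loosely analogous to coupled reward-area and prefrontal
# activity). Frequencies and noise levels are invented for the example.

rng = np.random.default_rng(0)
t = np.arange(0, 60, 0.01)                      # 60 s sampled at 100 Hz
slow_wave = np.sin(2 * np.pi * 0.5 * t)         # shared 0.5 Hz "slow oscillation"

reward_like = slow_wave + 0.8 * rng.standard_normal(t.size)
prefrontal_like = slow_wave + 0.8 * rng.standard_normal(t.size)
independent = rng.standard_normal(t.size)       # control signal with no shared rhythm

def sync(x, y):
    """Zero-lag Pearson correlation as a crude synchrony index."""
    return float(np.corrcoef(x, y)[0, 1])

print("coupled pair:     ", round(sync(reward_like, prefrontal_like), 2))
print("uncoupled control:", round(sync(reward_like, independent), 2))
```

The coupled pair shows a clearly positive correlation despite the noise, while the control pair hovers near zero; “something binding them together” is, in this toy version, simply the shared slow component.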
Subconsciously Driven Behavior

Too often, perhaps, our attitudes and behaviors are driven by the subconscious mind, operating in seeming independence from the conscious mind. Everyone has probably experienced this phenomenon without realizing it. For example, whether you have learned how to type without looking at the keyboard, learned a musical instrument, or learned any complex movement behavior such as riding a bicycle, you know that you first learned it consciously and now can perform it without consciously thinking about how you do it. When performing a new behavior, the brain has to use many neurons in many different locations. But as the behavior becomes perfected and ingrained, it can be performed with far fewer neurons. This has been confirmed in a variety of brain-scan studies.

The same thing happens with attitudes and emotions. If repeatedly rehearsed, they become ingrained and can be driven underground, below the radar of consciousness. This causes us to have unthinking “knee-jerk” responses to events. This can be the basis for such things as religious or political prejudices. Such beliefs are so deeply ingrained that people know they may be asking for trouble if they bring up religion or politics in casual conversation.

People have a built-in bias for holding all kinds of beliefs. Once a belief is established, we tend to embrace it steadfastly, denying any evidence it might be wrong – sometimes even when evidence clearly shows it to be wrong. People prefer to believe what they already believe. Most people don’t like ambiguity or discrepancies. We are far more likely to seek out evidence supporting our beliefs than evidence that would negate them. Children are clearly prone to believe what they are told, especially if it comes from someone who is a trusted authority. That is why all societies stress schooling of the young so children can be indoctrinated into the belief systems of the culture. That is why dictators create youth corps.

Here is another example of subconscious thinking: bar hoppers have long known that people of the opposite sex seem to become more attractive as the evening – and drinking – wear on. A University of Missouri researcher, Ronald Friedman (2005), reported a study showing that male undergraduates were far more likely to
consider women attractive if they had just been exposed to alcohol-related words, such as “keg,” “wasted,” and “booze.” Moreover, they didn’t even know they had seen the words, because the words were flashed on a disguised computer screen for only 40 ms. Findings like this build on a long tradition of cognitive research that makes it clear that our attitudes and behavior can be driven by subconscious forces.

An even more compelling example of subconscious processing operating below the radar of consciousness is the pathological condition known as “blindsight” (Farah 1995). This condition was first discovered by Lawrence Weiskrantz in the 1970s in patients who had suffered extensive damage to the visual cortex, a large expanse of tissue at the back of the brain. In such patients, large regions of the visual field will not be consciously recognized. In the blind field, the patients say that they see nothing, no sensation of light or dark or of color. Yet if a patient is asked to guess what is in the blind field, the rate of correct responses is much greater than chance. If instructed to reach for an object in the blind field, the patient will usually reach in the right direction. If asked to report verbally what shape an object is, the patient will usually be wrong. But if asked to guess among multiple-choice shapes, the performance is much better. Of course, in such patients the recognition of what is in the blind field is never perfect, but the fact that better-than-chance recognition occurs at all indicates that a significant amount of visual processing is going on without conscious awareness. Even better blindsight performance can be achieved in monkeys with experimentally induced lesions of the visual cortex. The difference presumably is that humans have more ego involvement; that is, they are loath to admit to unreasonable things. To humans, it is unreasonable to indicate that you see something when you don’t. Monkeys presumably don’t have this hang-up.
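What counts as “much greater than chance” can be made precise with a simple binomial calculation. The sketch below is a generic worked example with made-up numbers (25 correct out of 50 four-alternative forced choices), not data from any blindsight study; it only shows how one would judge whether guessing alone could plausibly produce such performance.

```python
from math import comb

# How unlikely is forced-choice performance this good under pure guessing?
# Numbers below are hypothetical, chosen only to illustrate the calculation.

n_trials = 50      # forced-choice trials in the "blind" field
n_correct = 25     # hypothetical number of correct responses
p_chance = 0.25    # four-alternative choice -> 25% correct by pure guessing

# Probability of getting at least n_correct right if the subject were only guessing
p_value = sum(comb(n_trials, k) * p_chance**k * (1 - p_chance)**(n_trials - k)
              for k in range(n_correct, n_trials + 1))

print(f"{n_correct}/{n_trials} correct at chance level {p_chance:.0%}: "
      f"probability under pure guessing ≈ {p_value:.2e}")
```

With these illustrative numbers, guessing alone would produce 25 or more correct answers only a tiny fraction of the time, which is the statistical sense in which blindsight performance exceeds chance even though the patient reports seeing nothing.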
Bias

The way most of us become aware that a subconscious mind operates is when we are forced to confront biases and prejudice. We all have certain biases and prejudices, even if they are not racial, ethnic, or religious. Biases often become consciously realized only when overwhelming logic and evidence force us to recognize that our attitudes or behavior are not entirely rational. Bias in others is much easier to spot than in ourselves.

A formal study of the operation of bias has been reported by Silvia Galdi et al. (2008). They attempted to answer the question of whether people can think they are undecided about a political issue even after they have already subconsciously made up their minds. In other words, how well do people know their own minds? Apparently not well. When confronted with new information, the subconscious mind sizes things up immediately, while the conscious mind takes a more studied approach. A good example is the first impression formed subconsciously when meeting a new person. If we did not have a mechanism for fast sizing up of new events, our conscious brain would be overwhelmed. The downside, of course, is that fast first impressions may well be wrong. Conscious evaluation takes time.
The Galdi team measured automatic responses to political issues by computer analysis of a person’s immediate (“knee-jerk”) association between a given attitude object and words with positive or negative meaning. Then these measures were compared with the person’s self-reported attitude. In the study of political attitudes, there was little correlation between automatic associations and reported attitude. In other words, people were poorly aware of how little they knew about their own subconscious beliefs. The place in the political arena where this confusion is most evident is in voting exit polls. While the voter may well have voted as claimed, the reasons given for voting a certain way are often suspect. This is entirely consistent with my own cynical automatic response, based on seven decades of interaction with people: namely, people often have two reasons for what they do, the one they admit to and the real one.

In the Galdi study, one practical finding was that so-called “undecided” voters had really made up their minds subconsciously. Automatic associations for a given political issue in “undecided” subjects correlated with their consciously reported beliefs and future choices. In other words, automatic associations predicted future choice, even though the subject claimed a decision had not been made. In such cases, subconscious biases seem to be directing the ultimate decision before the subject is aware that a decision has been made based on bias. The opposite was true for subjects in the “decided” group: there, the correlation between automatic associations and conscious reports did not occur. We assume that the decision in the “decided” group was arrived at through conscious reflection, which may have included analysis of earlier subconsciously held beliefs. Over time, the conscious decisions are likely to influence future automatic associations.
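Measures of “knee-jerk” association of this kind are typically derived from reaction times: responses tend to be faster when the attitude object is paired with words that match the person’s automatic evaluation. The sketch below computes a crude association score from entirely hypothetical reaction times; it illustrates the logic of such a measure only and is not the Galdi team’s actual analysis.

```python
# Crude automatic-association score from reaction times (milliseconds).
# A subject categorizes words while an attitude object (say, a political issue)
# is paired with either positive or negative words; faster responses in the
# "paired with positive" block suggest an automatically positive association.
# All reaction times and scales below are hypothetical.

paired_with_positive = [612, 598, 640, 575, 603, 590]
paired_with_negative = [705, 688, 720, 699, 710, 684]

def mean(xs):
    return sum(xs) / len(xs)

# Positive score -> automatically positive association; negative -> the reverse.
association_score = mean(paired_with_negative) - mean(paired_with_positive)

self_reported_attitude = 0.0   # hypothetical self-report: -1 (against) to +1 (for), 0 = "undecided"

print(f"automatic association score: {association_score:.1f} ms")
print(f"self-reported attitude:      {self_reported_attitude:+.1f}")
print("An 'undecided' self-report can coexist with a clearly nonzero automatic score.")
```

The point of the toy example is the dissociation itself: a subject can report being undecided while the timing of their automatic responses already leans one way.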
Access by the Conscious Mind

Can the subconscious mind be accessed by the consciousness? Certainly, but it requires an uncommon level of introspection. Such access often requires the aid of a psychological therapist. On the other hand, what we think and do consciously serves to program the subconscious mind, though we may not be aware of it. An example known to everyone is the process of learning to type on a computer keyboard without looking at the keys. Depending on the learning protocol, you may first consciously learn the middle row of the keyboard for the left hand (A, S, D, F, G) and the right hand (H, J, K, L). You practice this over and over, and eventually you can hit the right keys without looking. Then you repeat the process for the upper row and then the lower row. Voila! You have learned to touch type (except for numbers and punctuation). After doing this for a while, the whole process becomes subconscious. In fact, an experienced typist may not even be able to verbalize the layout of the keyboard they use to type at high speed.

This same idea applies to learned habits. I have two teenage granddaughters who play with their hair constantly. They don’t even know they are doing it. The same idea applies to prejudices and biases, which are also over-learned until they become ingrained subconsciously. What about the kind of exchange between minds
Fig. 3.4 Conscious and subconscious minds are separate manifestations of brain function. But these minds are not independent. Each influences and programs the other
wherein the conscious mind accesses the subconscious (Fig. 3.4)? We have all experienced situations where ideas and facts just seem to pop into our heads. Interesting physiological evidence abounds to help explain this. For example, a cluster of nerve cells in the brainstem, the locus coeruleus (L.C.), has recently been found to adjust the functioning of the prefrontal cerebral cortex (the part where “heavy” conscious thinking occurs). Pharmacological manipulation of the release of the L.C.’s neurotransmitter, norepinephrine, in the cortex alters performance on cognitive tasks in a nonlinear but predictable way. When L.C. firing is continuous, task performance is impaired. When activity is phasic (i.e., off and on), task performance is optimized (Minzenberg 2008).

Ever get a feeling or attitude for no apparent reason? Under these conditions, if you reflect consciously, you may realize why that attitude or emotion is there. The conscious mind can then act upon that feeling or attitude, and consequently accept, reject, or modify it. Most of us think our conscious mind directs our behavior. We consciously decide what to think and how to act and then instruct our brain how to respond. Or, as Star Trek’s Captain Jean-Luc Picard decides on a course of action and tells his crew: “Make it so.”
This is the notion of free will. However, willed action is not entirely free. Actions can spring directly out of the subconscious. In fact, many scientists think that all actions spring forth from the subconscious mind. In short, they believe there is no such thing as free will. I will explore the pros and cons of such beliefs in Chap. 7.

It is certainly demonstrably true that subconscious thinking has a huge influence on conscious behavior. The most obvious expression is in the bias of conscious thinking. Everyone has certain biases, about politics, religion, people of different races and cultures, and, of course, personal preferences such as favorite foods and drink. The physical taste of grapefruit, for example, is always the same. Some people like it, others don’t.

A common experimental approach for evaluating subconsciously driven behavior is to use priming stimuli that are apparently unrelated to a task, yet nonetheless influence performance without conscious realization by the subject (Custers and Aarts 2010). For example, when subjects are asked to solve a puzzle, a subgroup that was first primed by reading achievement words (such as win, achieve, solve, etc.) outperformed a comparison group that did not receive such priming. Other studies have identified other ways subconscious cues can affect behavior. People entering an office become more competitive when they see a leather briefcase on the desk. People talk more softly when looking at a picture of a library. They clean a table more thoroughly if the scent of a cleaning agent is in the air. Fluid consumption increases after exposure to drink-related words. Helping behavior increases after subliminal priming with the names of friends or helping occupations (such as doctor, nurse, etc.). Conclusions based on such findings have been faulted on the grounds that the subjects were consciously aware of the prime stimuli, even if they did not realize the primes were intended to influence their behavior. Subliminal stimuli, which are not consciously realized, would make a better case for subconsciously driven behavior. For example, subliminal exposure to achievement words improves subsequent task performance.

One of the more recent and unexpected examples of subconscious bias comes from a study on the influence of touch. In these experiments, researchers measured the effects of holding heavy or light objects, solving rough or smooth puzzles, and touching hard or soft objects (Ackerman et al. 2010). A typical test design was to have volunteers pretend to conduct a job interview, with the applicants presenting résumés on light or heavy clipboards. Those with the heavy clipboards were judged more competent. In another test, clipboard weight affected sociopolitical judgments made on a clipboard survey asking whether specific public issues needed more or less government funding. Clipboard weight exerted a gender effect: men allocated more money in the heavy-clipboard condition, while women’s decisions were unaffected by board weight. In a third study, two sets of impressions were obtained from volunteers who read a passage about a social interaction that could be interpreted variously as friendly or contentious, and competitive or cooperative. Immediately before reading, subjects put together puzzle pieces that were rough (covered with sandpaper) or smooth. Subjects working the rough puzzle judged the social interactions in the
story as more strained and harsh. Another test revealed that roughness affected decision-making in social situations. A further test, comparing the effects of hardness and softness, involved volunteers watching a magic act to guess the trick. The object used in the trick was either soft (a blanket) or hard (a rock). Subjects manually examined the object to verify there was nothing unusual about it. Actually, the magic act was postponed (never performed), so that volunteers would immediately watch an interaction between two people, one pretending to be a boss and the other an employee. Subjects rated the employee in terms of attitude (rigid, unyielding, strict, etc.). People who had manipulated the hard object provided more negative impressions. A final test evaluated the effects of sitting on a cushioned or hard chair. Volunteers were tested on an impression task, as in the study just mentioned, and also in a simulated car-buying negotiation. Subjects who sat in a hard chair judged the pretend employee as more stable and less emotional than did subjects who sat in soft chairs. In the simulated task of negotiating with a car dealer, the buyer’s initially offered price was the same, irrespective of what kind of chair the buyer sat in. But among those buyers who made a second offer when the dealer rejected the initial one, the offer changed less when the buyer had been sitting in a hard chair than in a soft chair.

The explanation for such biases is that these touch stimuli have subconscious metaphorical connotations: heaviness, for example, is associated with ideas of seriousness and importance. Roughness carries the connotation of difficulty, coarseness, or unpleasantness. Hardness conveys such notions as rigidity or toughness. These findings may well have practical applications for situations involving negotiators, pollsters, job seekers, marketers, and others. Any way you look at it, subconscious experiences can influence conscious behavior without the subjects realizing it.

I want to emphasize that the subconscious operates as a filter of attitudes, beliefs, emotions, and memory to influence conscious decision making. The filter is constructed from unconscious processing of past learning and present experience. This filtering effect is fundamental to human nature. Frequently, behavior is irrational, because we don’t think consciously about why we are doing or feeling certain things. For example, we know of otherwise intelligent people who do stupid things; a whole book has been written on this phenomenon. Why are people so often irrational? The subconscious mind makes choices that are not always properly analyzed and vetoed or altered in the conscious mind. The conscious mind can discipline subconscious thinking because through consciousness we can be more objective, introspective, discerning, analytical, and more in control. Getting the conscious mind to do what it is capable of is another matter. Otherwise, we can act like zombies.

The implications for willed behavior are disturbing. The subconscious mind is a major generator of willful behavior. Goals are pursued because they have some reward or positive-reinforcement value. The reward system in the brain can certainly operate subconsciously. It is an open question whether we consciously pursue rewards; maybe we delude ourselves into thinking we decided consciously to seek a given reward.
Likewise, brain systems that mediate avoidance of discomfort can reside as “low” in the brain’s hierarchy as the brainstem. These facts, however, are not proof that similar reward- or avoidance-seeking cannot be driven by conscious intent, choice, or decision. Possible cooperative action of the subconscious and conscious mind in the generation of willed behavior is considered in Chap. 7.

The French philosopher Jean-Paul Sartre (1905–1980) persuaded many scholars to accept the notion that there is a seamless connection between consciousness and subconsciousness. Sartre believed we are our subconscious mind too. Our individual essence and human responsibility do not stop at the edge of consciousness. The subconscious mind is especially important because it uses its store of unrecognized memories to influence our attitudes, feelings, thinking, decisions, and behavior. Together, these two minds do constitute who we are as distinct personalities. Sometimes the two minds are conflicted. You can be a subconscious racist, but reject such attitudes consciously. You can say one thing and do quite another.

The bad part about reflex attitudes, emotions, and beliefs is that subconscious processing is not readily accessible for the correction or veto the conscious mind can provide. All new information is passed through the filters of the subconscious memory of past events. Thus we often respond quickly, in a way that agrees with past experience, without the apparent need to “think about it.” Failure to “think about it,” however, consigns us to stereotyped attitudes, emotions, beliefs, and behaviors that may not be in our best interests. Think about your close friends and relatives. You know them well enough to recognize many predictable mannerisms, attitudes, feelings, thoughts, and behaviors. They can do the same with you.
Unmasking the Subconscious

Our brains probably could not handle the information load if everything going on in the subconscious were simultaneously available to the conscious mind. Nonetheless, our behavior emerges from the interactions of conscious and subconscious mind. How much interaction occurs depends on how introspective and self-aware we are. Many scholars like to think of the conscious mind as the tip of the iceberg, holding that the vast majority of what our brain does is subconscious. Some say that 90% of brain operations are subconscious. I don’t know where they get such numbers. There is no evidence that I know of to justify assigning any percentage. But common experience tells us that much of what we do is not well thought out consciously.
Existential Emotions

Destructive emotions, such as anxiety, guilt, anger, and shame, should concern us the most. In their book Passion and Reason (Lazarus and Lazarus 1994, 321p), Richard and Bernice Lazarus state that these are existential emotions in that they derive from the meanings and ideas about who we are, our place in the world, even life and death.
They say that “We have constructed these meanings for ourselves out of our life experience and the values of the culture in which we live.” We learn and create these meanings both consciously and subconsciously. They go on to reinforce the importance of conscious understanding of our emotions: “If we understand ourselves, and our emotions, we are much more likely to make wise decisions about our lives, which reflect both the realities we face as well as our hopes, the will to struggle, and a degree of optimism that we can prevail against adversity.”

Failure to recognize reality, or denial of it, can be destructive, even dangerous. Such failure certainly leads to poor choices and decisions. These ideas were the focus of my 2008 book, Blame Game. How To Win It (Klemm 2008b). The thesis was that too often when things go wrong, humans go into denial and deception. It takes conscious awareness, analysis, and discipline to move to deliverance. Cognitive therapy, where conscious-mind analysis is used to reappraise disturbing emotions, can help people recognize dysfunctional thinking and help re-program the brain. Such therapy can weaken the hold of a wide variety of disturbing emotions that have been programmed into the amygdala and other subconsciously operating systems. Drug companies are also working on drugs that may be able to do the same thing. The memory is the problem. Without intervention, the negative effects of learned bad emotions persist as long as the (subconscious) memory persists.

That brings up this point: we have many memories that operate at the subconscious level. Such memories are called implicit or procedural. If you don’t believe that, think of how Magic Johnson did his famously complex moves in making lay-up shots in a crowd of defenders. Do you really think his brain consciously figured all this out in a split second? Or how about a be-bop jazz musician like Charlie Parker, who ran off strings of notes so fast that fellow musicians in the audience could not track the notes in their own consciousness. They had to tape-record the solos and play them back at slow speed to transcribe Parker’s solos to sheet music.

Important processes affecting memory occur not only during the special form of consciousness that we call dreaming, but also when we have fallen into the pit of deep-sleep oblivion. I outlined some of the evidence for this conclusion in my earlier book, Thank You Brain For All You Remember. What You Forgot Was My Fault (Klemm 2004, 312p). I mentioned earlier that back in the 1950s, scientists demonstrated that even during anesthesia, sensory information gets into the brain. In fact, the brain’s responses to electrical stimulation in certain pathways can be greater than those that occur in a fully awake state. Even the complex processing of language occurs, at least to some extent, during subconsciousness. In modern times, brain scans of unconscious patients have shown that the language areas of the cerebral cortex can be activated much as in normal, conscious patients.

When a person becomes emotional, a parallel shift to irrationality often occurs. Rigorous reason is a property of the conscious mind, and subconscious emotions can overwhelm conscious processing. A hyper-emotional person does not listen well and tends to hear only those comments that are consistent with the current
emotions. Emotion distorts reality. A two-thousand-year-old quote from Aristotle makes the point about as well as it can be stated:

Under the influence of strong feeling we are easily deceived. The coward under the influence of fear and the lover under that of love have such illusions that the coward owing to a trifling resemblance thinks he sees an enemy and the lover his beloved. And the more impressionable the person is, the less is the resemblance required. Similarly, everybody is easily deceived when in anger or influenced by any strong desire, and the more subject one is to these feelings the more one is deceived (Karrass 1974, 280p).
As to the issue of forgetting bad subconscious memories, Sigmund Freud rose to fame on the hypothesis that the brain can repress such memories. Today, magnetic resonance imaging (MRI) studies seem to confirm the idea, in that brain activity increases in certain areas, such as the prefrontal cortex, when memories are being repressed. In other words, repression seems to be an active process, one that presumably can be learned. Though this line of research is in its infancy, many clinical problems could perhaps be ameliorated if we knew more about how the brain actively suppresses memories. Examples include phobias, post-traumatic stress disorder, anxiety disorders, and paranoia. Even counter-productive negative emotions might be helped, such as the learned response of low self-esteem. Most present-day research seems to focus on finding drugs that help us forget bad memories, as described in an article by Greg Miller (2004). Drug companies don’t make money on cognitive strategies. What we really need is a better understanding of how the conscious mind can be taught to suppress unwanted memories in ways that actually resolve a problem rather than just driving it underground.

Sigmund Freud, though he had several crackpot ideas by today’s standards, laid much of the theoretical foundation for recognizing and understanding the role of the subconscious. It was Freud who pioneered the notion that much, if not most, of our thinking and behavior is driven by subconscious processes. Perhaps Freud’s greatest contribution was to show that each of us has a subconscious mind that is always operating, but doing so beneath the radar of our conscious awareness. This subconscious mind can process information, make decisions, and even drive our attitudes, emotions, and behavior without our knowledge.

Intuitively, we know that we are conscious and that our conscious mind can interact with the subconscious. We often, for example, have ideas or emotions surface into our consciousness that must have sprung up from our subconscious mind. Ever get a feeling or attitude for no apparent reason? Under these conditions, if you reflect consciously you may realize why that attitude or emotion is there. The conscious mind can then act on that feeling or attitude, accepting, rejecting, or modifying it.

I consider the conscious mind the teacher of the subconscious. It is the conscious mind that teaches the subconscious codes of behavior, how to think, to believe, to speak intelligibly, to use one or more languages, to read music and play an instrument, to ride a bike, to drive a car. The conscious mind serves to program the subconscious, as well as to modulate and even veto ideas generated by the subconscious. So why does the conscious mind so often fail to check our behavior and make sure it always serves our best interests? Two reasons come to my conscious mind: (1) the conscious brain may not
be fully aware of what the subconscious is doing until it is too late, and (2) the conscious mind may not have learned what is best and how to impose its will on subconscious drives.

We often think of conscious mind and language as synonymous. Language, however, is limited. Consciousness is not limited to language. Our brain devotes only a small fraction of its 100 billion or so neurons to language. We can be aware of vastly more than we can conveniently or adequately describe. Most of the brain is responsive to images, and images often evoke subconscious-mind metaphors that penetrate to the depths of our being. One practical application of this power of images is that it is much easier to remember pictures than it is to remember words (Klemm 2004). Another application is that the image of Coca-Cola’s logo evokes a deep-seated preference for that drink over Pepsi, even though blind taste tests show that most people think Pepsi tastes better.

A more serious example of the use of pictures to touch people deeply is in the design of Children’s Hospital in Pittsburgh. Marianne Szegedy-Maszak has described how this hospital was designed to comfort the children and their parents (Szegedy-Maszak 2005). Using a technique patented by Gerald Zaltman, an emeritus business professor from Harvard, the hospital design team had children, parents, and staff members cut out pictures that they somehow associated with the hospital, even though they might not be able to explain why they picked the pictures they did. The people were then interviewed for nearly 2 h to explore the thoughts, feelings, and associations that were triggered by the pictures, producing a stream of metaphors. When all the interviews were examined, several core themes emerged. The main metaphor was transformation. Supporting metaphors were control, connection, and energy. So how did the designers translate these metaphors into interior design? The hospital features murals of butterflies, the quintessential symbol of transformation. All rooms look out upon a huge garden, a symbol not only of transformation but also of energy and connection.

Odors have powerful metaphorical connotations that extend far beyond simple recognition of smell. Odors evoke a whole context of associations and feelings. The smell of cherry pie evokes for me memories of myself as a 3-year-old child and of my kindly grandma DeLong, who was always baking things that she knew I liked. “I like gamma’s pie” captured more than just my liking cherry pie. It expressed, in ways that I could not with language, how much I was comforted by grandma and how much it meant to me that she cared about what I liked and wanted. Certain odors, such as lemon or lavender, evoke comforting memories for certain people. “Aroma therapy” can be an effective approach to treating psychosomatic disease.
Conscious Thought

Conscious mind is the hallmark of human existence. This kind of mind is also the hardest to understand. A whole chapter is devoted later to conscious mind, but the purpose here is just to introduce the topic and explain that conscious mind is one of our three kinds of mind.
Consciousness has been defined in many ways. Consciousness may be like pornography: hard to define, but “you know it when you see it.” Defining it formally is much more problematic. E. Pacherie suggests there are two ways to think about consciousness. The first idea is that consciousness is a state in which one is conscious (aware?) of an object, property, or state of affairs. This strikes me as a circular definition, which can also be found in many dictionary definitions. The second definition is that consciousness is a state in which one “has a representation of that state as a specific attitude toward a certain object, property, or situation.” It seems to me that this is simply saying that consciousness is a state in which you are aware that you are aware. I think a more useful permutation of this idea is that consciousness is a state of self-representation. We operate in the world around us in the context of self-awareness.

These notions are perhaps easiest to comprehend if consciousness is regarded as a neurophysiological avatar, generated as a neural representation of self, aware of events in the environment in the context of itself. Such an avatar could be a self-aware active agent of the embodied brain, an argument that I pursue in Chap. 8.

What is sensed in the consciousness? Consciousness senses first its own ego, its own identity. It is the sense of “I.” It senses much of what the brain is thinking, such as beliefs, wishes, decisions, plans, and the like. Moreover, consciousness can sense how it is teaching the subconscious brain, whether in terms of specific cognitive capabilities, motor skills, ideas, attitudes, or emotions.
What It Means to Be Conscious

Neither philosophers nor neuroscientists have good answers to fundamental questions about consciousness. These include such questions as “What is consciousness? Where does it come from? How do conscious mind and brain interact?” Originally, people believed that mind was some kind of spirit separate from brain, a “Ghost in the Machine.” This philosophy acquired academic standing in large part because the prominent philosopher René Descartes, in the mid-seventeenth century, popularized the idea that the mind is something separate from and external to the brain. This early view captured the imaginations of many thinkers down to recent times. But in terms of today’s science, such dualism, as it is called, is considered untenable by most scientists. If there is some sort of force field operating outside our head to control what the brain does, there is no scientific instrument that can detect it. Scientists long ago generally abandoned the notion that the mind is something that “sits out there,” outside the brain, monitoring and adjusting the brain’s activity. The modern view is that mind does not exist independently of the brain – at least there is no scientific evidence to that effect. When the brain ceases to function normally, as in coma, anesthesia, or ordinary (non-dream) sleep, the mind temporarily disappears.

So how do neuroscientists think of “conscious mind?” The common view is that mind is constructed by well-known neural processes, such as registering sensory input, comparing it with stored memories in terms of content and affect, and making
“decisions” about appropriate responses. This mind may operate below the level of consciousness, so we are left with the enigma of explaining the subconscious-conscious difference. Conscious mind emerges when the neural processes are expressed, perhaps re-created, in terms of sounds, sights, or other sensations that are associated with the events triggering the neural processes. For example, if I hear a gunshot outside my door, my subconscious mind registers the sound, detects the location in space from which it came, and compares that kind of sound with other kinds of sound that the subconscious mind has experienced and remembered. Conscious mind provides the brain with a way to know what the brain is doing at the present moment and to adjust and respond to the sound processing. That is, I know that my brain has determined the sound originated outside my door and that the sound is from a gun. Simultaneously, I may also become aware of my memory that guns can be dangerous, that there are bad people out there who shoot people, and that I may therefore need to call the police and make some attempt to protect myself. The trick for brain researchers is to figure out how the brain manages to let me in on what it is doing.

For now, let us agree on one thing. The brain uses tactile sense, muscle sense, and vision to construct a representational image of its body and its interaction with the outside world. This is done consciously and subconsciously, but the net result is that the brain’s mind map knows where everything is in the body and knows how to command selective action, with at least one of its three minds. Senses also create a mind map of the world outside the body. Thus, we can say that sensation creates the mind, which, once created, now has the power to do other things than just register sensation. Moreover, the brain can even re-organize its body map if it has to. For example, in studies of monkeys that had a nerve in the arm experimentally severed, the part of the cortex that controlled the originally innervated muscles re-wired itself to take into account that those muscles no longer seemed to exist (Merzenich et al. 1983).

Mind evolves in real time because brain evolves in real time. It evolves not only over millions of years, as pre-human and even human brains generally got bigger, but also within each person, as brain circuitry is sculpted by maturation and experience from birth to death. Teenagers do not have the same brain and are not the same people they were before puberty. Some teenagers change almost overnight from a lovable cherub into a rebellious monster. The brain is maturing in very observable ways up to at least age 30 (Dosenbach et al. 2010). For example, fiber tracts and their connectivity still progress up to that time. Microscopic changes in synapses and their biochemical machinery occur throughout life. What this means is that what we experience and learn changes the brain. Those changes influence changes in our mind.

Even lower animals, at least mammals, have the neural capacity to sculpt subconscious mind from sensory experience. Lower mammals, for example, have well-defined cerebral cortices with areas that map bodily sensations. The size, shape, and relative location of the topographic map vary with species. The size of a mapped area generally corresponds to the extent to which the peripheral body part is used or is important. For example, in humans, a relatively large expanse
of cortex is devoted to the thumb, whereas in pigs a large amount is devoted to the snout. Similar principles apply also to the size of cortical tissue that is mapped to the body for commanding complex movements. Primates have many neurons devoted to instructing movement of the fingers, but relatively few neurons devoted to making back muscles move.

When we compare species, we have to look not only at topographical mapping but at the total expanse of cortical tissue. The human cortex is only about 15% thicker than that of the macaque monkey, but it is at least ten times greater in area (Rakic 1998). We know from computers that there is more than a linear relationship between the number of computing elements and computational capacity. At some point, increasing the number of computing elements gives rise to qualitative differences in capability. The same principle must hold also for the brain’s computing elements, neurons. At some point, given enough properly functioning neurons, you reach critical mass for producing consciousness. In case you are not compelled by this “critical mass” and threshold idea, recall that the DNA of humans and chimpanzees is 98.4% identical and their amino-acid sequences are 99.6% identical (Goodman 1992).

Not only is the number of neurons important to consciousness, but also how they are used. The mind map can be individually sculpted by unique experience. James Shreeve (2005) has summarized some of the studies that show the brain to be very changeable by experience. For example, blind people who read Braille show a great increase in the size of the region of cortex that represents the right index finger, the finger used to read Braille. Violin players have an analogous spread of the cortical region that is associated with the fingers of the left hand. London taxi drivers have an enlarged rear portion of the hippocampus, a brain area involved in spatial orientation. Learning how to juggle increases the amount of grey matter in two cortical areas involved in vision and movement control. When newly trained jugglers stop practicing, these areas shrink back towards normal.

One good way to study the relationship of conscious and subconscious mind is to study the subconscious state of sleep and the transitions of consciousness to and from sleep. It is no accident that real sleep occurs only in advanced brains. Brain indicators of sleep do not occur in fish, amphibians, or primitive reptiles. Only in advanced reptiles, birds, and mammals do physiological signs of sleep occur, and full-blown sleep occurs only in advanced mammals. Scientists learned decades ago that neurons do not shut down when you go to sleep. Many neurons fire impulses just as actively during sleep as during consciousness, and some neurons are even more active in sleep. What most likely changes is the degree of interaction and communication among neurons. A group at the University of Wisconsin in Madison (Massimini et al. 2005) recently reported that as humans go to sleep, the communication among cortical neurons breaks down. The researchers recorded brain waves (EEGs) from many scalp sites over the cortex while at the same time stimulating a small patch of right frontal cortex with transcranial magnetic pulses. The responses to this stimulation at various other cortical locations indicate how well information is being spread and communicated throughout the cortex. What they found was that, when awake, the stimulus triggered responses in sites near the stimulation site and also in similar structures on the
opposite side of the brain. However, during sleep the stimulation evoked responses only at the stimulation site. In other words, during sleep, brain areas seem to stop talking to each other. Consciousness may well require such communication. The investigators have not yet performed the comparable experiments during the dream stages of sleep, when a kind of consciousness is present. My bet is that wider communication between areas is restored during dreaming. Another unstudied, but probably important, dimension to this matter is the degree of timing coherence of brain activity in different parts of the brain. I would suspect that activity at specific frequencies is much more phase-locked and coordinated at multiple cortical sites during wakefulness and dreaming than during non-dream phases of sleep (a toy illustration of what such phase-locking means appears at the end of this section).

The classical philosophers David Hume, John Locke, and Immanuel Kant clarified the issues relating conscious mind to the material world (Macphail 1998, 256p). Kant made forceful arguments that all human knowledge begins with experience, but not all knowledge is derived from experience. This leaves us with the likely idea that conscious mind can create new understandings from experience. Thus, we have a philosophical basis for affirming that mind, body, and environment interact, each influencing the other. We also have a basis for affirming the important role of learning in the development of brain and mind. Not all learning has to be conscious, of course. Indeed, implicit, subconscious learning is a well-established phenomenon in both animals and humans. We have no good way to know how much of our learning and brain processing is subconscious, though the prominent cognitive neuroscientist Michael Gazzaniga makes the claim that 98% of what our brains do is subconscious (Gazzaniga 1998, 201p). Where does one get such a number? This is reminiscent of the widely accepted myth that we only use 10% of our brain capacity. Nobody I know can show where or how such a number is derived.

Religious arguments do not help much in trying to understand the material basis of mind. But let us not assume the material basis of mind is necessarily incompatible with religion. Science and religion seem separated by a territorial line, but in reality this line may be a broad, fuzzy landscape that can be crossed from either side. E. O. Wilson, in his book Consilience (Wilson 1999, 367p), would say that the misunderstandings arise not from fundamental differences, but from ignorance of the fuzzy boundary. Religious people in particular are attracted to this view, because it embraces the idea of a non-materialist soul, which is a mandatory part of their belief systems. Descartes was among the first to assert that the soul cannot have material properties. Yet a close reading of comments attributed to both Christ and St. Paul, for example, reveals that in the afterlife the soul does have its own body, albeit one that differs from what we know about bodies.
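As promised above, here is a toy illustration of what phase-locking at a specific frequency means. One common index averages the phase differences between two sites; if the offset is stable, the index approaches 1, and if the phases drift independently, it approaches 0. The signals and parameters below are invented for the demonstration; this is not the analysis used by Massimini and colleagues.

```python
import numpy as np

# Illustrative phase-locking index between two signals at a single frequency.
# All signals and parameters are invented; the sketch only demonstrates the
# difference between a stable phase relationship and a drifting one.

rng = np.random.default_rng(42)
t = np.arange(0, 10, 0.002)          # 10 s sampled at 500 Hz
f = 10.0                             # example 10 Hz (alpha-band) rhythm

phase_a = 2 * np.pi * f * t
locked_b = phase_a + 0.6 + 0.2 * rng.standard_normal(t.size)                     # stable offset plus jitter
drifting_b = 2 * np.pi * f * t + np.cumsum(0.3 * rng.standard_normal(t.size))    # slowly wandering phase

def phase_locking(phase_x, phase_y):
    """Mean resultant length of the phase difference: 1 = perfectly locked, 0 = none."""
    return float(np.abs(np.mean(np.exp(1j * (phase_x - phase_y)))))

print("locked pair:  ", round(phase_locking(phase_a, locked_b), 2))
print("drifting pair:", round(phase_locking(phase_a, drifting_b), 2))
```

The speculation in the text amounts to predicting that indices like this, computed between cortical sites, would be higher during wakefulness and dreaming than during non-dream sleep.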
Dreams Are Made of This

All human brains dream. Those few people who claim not to dream are wrong. They dream, but just don’t remember dreaming, as has been documented in experiments
where people are awakened immediately after their brain and body signs indicate they were dreaming. Invariably, under these conditions they report having a dream interrupted, even if they can’t remember their dreams during a normal night of uninterrupted sleep. Moreover, we know that higher animals and people must dream. Forced deprivation of dreaming causes irritability, emotional upset, and dysfunctional thinking. In people, dream deprivation can drive one to the edge of insanity.

There must be some physiological reason for dreaming, but scientists don’t know for certain what it is. Speculation abounds. Maybe dreaming is needed to restore chemical or circuitry balance that has been in high gear all during the wakefulness period. Maybe dreaming, which occurs most frequently after one has gotten the rest provided by the stage of sleep where one “falls into the pit,” is a form of protecting the brain from falling too far into oblivion, a form of rehearsal to get ready for the next day of wakefulness (note: people who sleep at night dream the most in the early morning). I present a theory for dreaming in Chap. 8.

One physiological fact does seem well established. Dreaming is a form of off-line learning, where memories of the day’s events are being consolidated from temporary memory to more permanent memory. Many studies have shown that deliberate disruption of dreaming impairs the formation of permanent memories of the events of the day that preceded the disruption. Memories are formed by repeated firing along specific distributed circuits in the brain. Perhaps these firing patterns are augmented and sustained during dreaming. Dream content is likely a by-product of that activity, not its cause. This does not mean that what we dream about is what our brain is trying to remember. Common experience teaches that many of our dreams have no apparent relationship to what happened the previous day. How can we explain this paradox? We can’t. However, dreams would not necessarily have to reflect the events being remembered. For example, perhaps activity in a memory circuit, A, triggers activity in another circuit, B, which is involved in generating dream content.

But what has this to do with the issues of conscious and subconscious mind? People who remember many of their dreams know that dreams are a special kind of consciousness. During vivid dreams we are “consciously” aware of our dream and what is going on in it, and we may even be active participants. Some people like to think of dreams as hallucinations. They are hallucinations in the sense that dreams are made up and may not reflect real events. If you had the same kind of cognitive experiences during alert wakefulness, you would be called crazy. But you are not crazy if you know that your dreams are just dreams and not real.

Some dreams may be the way that the subconscious mind can grab the attention of the conscious mind. The consciously awake mind, as California neuroscientist Ben Libet likes to emphasize (see Chap. 7), is preoccupied with control and veto power. I would add that the conscious mind is also preoccupied with being aware of what may need control and veto. There are many things that the conscious mind does not want to think about, which no doubt include things that the subconscious mind is trying to surface. Dreaming is a way for the subconscious mind to make thought available to the
conscious mind for more rigorous analysis and interpretation. This is another way of stating the understanding of dreaming that Freud gave to the world. So, we should pay attention to our dream content. That is not easy, and numerous books have been written to help us interpret dreams. For our purposes here, it may suffice just to point out that the subconscious mind is very real and is in constant interaction with the conscious mind, even when the conscious mind wants to sleep.

In the last chapter of the book, I talk about how experiences and thoughts serve to program the brain. It should be evident that whatever the brain experiences and learns during our waking hours can be deposited, and probably re-formulated, at very deep levels of the subconscious.
Conscious Identity

How does all this relate to a conscious sense of "I?" Well, that is not clear, but it may not be such a big leap to move from the brain having a conscious body map to a more abstract mapping of self, produced by improved communication among the mapped sites. Does the brain know that it must exist? Subconsciously, the brain may not "know," but it does enable the conscious mind to be aware of the body and outer-world maps. Maybe in somewhat similar fashion the brain makes the conscious mind aware of itself. The last part of the book makes these ideas more explicit.

Where is the "I" that monitors much of this mapping? It surely cannot be found in any one place in the brain. It must therefore emerge from distributed dynamic processes, no one of which constitutes "I." As these distributed processes shift their coupling among the circuits that generate them, they also create shifts in our conscious state of "I" being. "I" become happy or sad, energized or tired, focused on this or that.

Personal growth depends on recognition of personal weakness. We can know when we feel defensive, or angry, or overly critical, or judgmental, or upset – and we can know when we try to make excuses. We can also recognize the situational causes or triggers of undesirable attitudes and behavior. Further, we can realize that avoiding those causes or triggers will not always work, that real change must come from within. Conscious intent and planning will guide us in making those changes from within.
Conscious Influences on the Subconscious

A core issue in this book is the role of the conscious mind in controlling the brain. To say that the conscious mind is "ineffectual" at controlling the brain flies in the face of all we know about learning, brain plasticity, psychotherapy, and even religion. Perhaps Gazzaniga became biased in his view of subconscious dominance because of his pioneering work with so-called "split-brain" patients, whose left and right brain hemispheres were unable to communicate consciously because the huge interconnecting fiber tract had been surgically severed to reduce the spread of epilepsy. The left brain (in right-handed subjects) has a conscious "interpreter" that
seeks to understand and make explicit internal and external events. This interpreter, in Gazzaniga's view, is likened to a "spin doctor" that deceives us into thinking that we are personally in charge. However, conscious perceptions do occur in the right hemisphere; they simply cannot be verbalized without connection to the left hemisphere's speech center. We can, and do, express those conscious right-hemisphere processes through such non-verbal venues as music, object and spatial recognition, and symbolic representations, as in art. Thus, Gazzaniga could have said that there is a non-verbal spin doctor in the right hemisphere. A compelling example is how the right brain can consciously recognize its mis-interpretations of optical illusions, such as ambiguous figures that contain embedded and "hidden" images (e.g., vase-face illusions).

Thus, the point for my purposes in this book is to acknowledge that each hemisphere has an interpreter that interacts with the brain that generates it. Interpreter and brain mutually train and change each other. The challenge for us all is to teach and discipline our inner interpreters so that they don't make so many mistakes. Even Gazzaniga finally concedes in the last sentence of his book that "maybe we can drive our automatic brains to greater accomplishments and enjoyment of life."

Folklore, common sense, and psychology all implicate the conscious mind in mediating, programming, and even controlling what the subconscious mind does. I will discuss some of the experiments on this in the section on free will in Chap. 7, but here I will describe a recent study that makes the same point without the obfuscations of free-will controversy. In this study (Van Gaal et al. 2008), a group in the Netherlands examined subconscious inhibitory control and found that the frontal cortex mediated the control behavior. They used a Go/No-Go task while also using EEG evoked-response monitoring to track the fate of the No-Go (inhibitory) signals in the brain. Others had reported that such signals produce an evoked response peak 200–500 ms after stimulus onset and that the waveforms clearly distinguish between Go and No-Go trials.

In the behavioral task, subjects were to respond as fast as possible to a visual Go signal but to withhold their response when they perceived a No-Go cue preceding the Go signal. In one group of trials, the No-Go cue was presented so faintly and so close in time to the Go signal that it was not consciously perceived. At longer intervals, the cue was perceived consciously. With and without conscious perception of the No-Go cue, the task could be performed correctly. This suggests that subconscious processes can drive conscious behavior, something that has been demonstrated by others in other ways. What is important here is that, even subconsciously, inhibition of the behavior arises from the prefrontal cortex. Scalp recordings revealed that No-Go trials were associated with reproducible evoked-response peaks over the prefrontal cortex, even in the trials where the No-Go cue was not consciously perceived. However, the waveforms differed qualitatively depending on whether the No-Go cue was consciously perceived. The size of the frontal evoked response
correlated with the impact of the unperceived cue on the behavior; that is, whether complete inhibition was produced or whether inhibition occurred but was delayed. The latencies and spatial profile of the evoked responses suggest that the unperceived cue makes its way to the prefrontal cortex, where it can trigger the same inhibitory control that the cortex mediates when the cue is consciously perceived.

Learning and conscious use of memory are the pinnacle of conscious mind function. Yes, all three minds can learn. Non-conscious mind, for example, can create supersensitive reflexes, especially under certain pathological conditions; persistent high blood pressure in the absence of artery obstruction can be an example. Subconscious learning results from subconscious registration of cues and associations, especially those affecting emotions; biases and emotional reactions, for example, can be learned without our awareness. But consciousness expedites the subconscious learning process, and once the learning is accomplished, the brain can use it more efficiently and with less effort.

The subconscious mind uses its store of unexpressed memories to influence our attitudes, emotions, beliefs, choices, and behavior – both conscious and subconscious. As was suggested in Fig. 3.4, the process is reciprocal: our behavior, good, bad, or otherwise, creates informational feedback and learning experiences that get deposited in the subconscious and thus become a part of us. For example, abuse of young children may not result in their having any complete conscious memory of the abuse. But those memories may still be etched in places like the amygdala and other emotional parts of the brain. Though not normally accessible to the conscious mind, these emotional parts of the brain can still be active in the subconscious mind and can influence how we feel and act beyond our awareness. Like learning to ride a bicycle, much of what we have made ourselves from our conscious thinking and behavior has been driven underground into our subconscious.

Subconscious memories and thinking routinely get expressed as "subliteral" meanings in human communication. That is, much of what we say has an obvious literal meaning and a less obvious, and sometimes very different, subconscious meaning. This idea applies to body language, of course, but it easily extends to encoded talk. The idea is related to euphemism, "political correctness," "double-talk," and of course, "reading between the lines."
The Wholeness of Multiple Minds

Making Two Minds Into One

Since both subconscious and conscious mind have access to each other, they can interact. A healthy personality is one in which this interaction is seamless, intimate, and extensive. Perhaps one of the best examples of such interaction is the well-known placebo effect in medicine. If you think that a medicine you are taking will help your health, it may often be of help even if the medicine is a fake. This is exactly why all human drugs have to be tested prior to marketing in experiments in which one age- and
sex-matched control group gets a placebo. The conditions should be double-blind, so that neither the patient nor the experimenter knows whether a given person is getting the placebo or the real drug. In most such studies, some 35–75% of patients receiving the placebo get some benefit, although the effect does not usually last very long. The placebo effect may also underlie apparent benefits of alternative medicine, such as acupuncture, herbal medicine, or homeopathic medicine (where dilution is so great that only a few molecules of the drug are actually administered).

Three recent studies show that believing that one is getting a medication, when in fact it is really a placebo, actually causes changes in brain activity. People given sugar pills or some inert substance under the guise of medication can improve medically and show accompanying signs of changed brain activity. In one study at UCLA, patients who responded clinically to a placebo showed EEG changes in the same brain areas that were affected in patients receiving a real anti-depressant medication. In another study at the University of British Columbia, PET brain imaging revealed that injections of a simple salt solution, at the concentration at which salt normally occurs in the body, increased release of the neurotransmitter dopamine just as well as the drug commonly used for that purpose. In yet another PET-scan study, by Swedish and Finnish researchers, a placebo relieved pain as well as a prescription pain killer did and heightened brain activity in the same brain region as the real pain killer.

In summarizing these studies, Robert Hotz concludes that "Each study in its own way is a testament to the mind's inexplicable and often baffling power to affect health" (Hotz 2002). Western medicine has not come to grips with this mysterious reality. Professor Daniel Moerman at the University of Michigan – Dearborn is quoted as saying that these are "fundamental neurological processes" and that "it is pretty clear they are open to manipulation by belief and meaning and ritual and someone waving a magic syringe." Maybe our doctors waving magic syringes have not come as far as we thought from the witch doctor or shaman of the past.

In other words, we used to think that the placebo effect was some sort of psychological phenomenon, but now we see that it has a physical basis. I'm not surprised. All psychological phenomena have a physical basis. The brain affects the mind, and the mind affects the brain. Even professional medical people and scientists sometimes have trouble realizing that brain-mind interaction is a two-way street.
Making Four Minds Into One

As mentioned earlier, there are also the so-called "split brain" experiments showing that the right side of the brain performs different functions from the left side. The left side performs language, math, and analytical operations, while the right side is more engaged in art, spatial relationships, music, and emotions. The respective functions are performed at both conscious and subconscious levels. In that sense, we can think of having four minds, two in each hemisphere.
In normal, unoperated people, the functions of the two sides, both conscious and subconscious, are coordinated by bidirectional communications between the two sides. Both brain-wave and magnetic resonance imaging studies show that many mental tasks are performed simultaneously by both sides of the brain, even though a task may be preferentially handled by one side. Nobody really understands how the minds in the two sides coordinate and jointly process mental tasks, but it is clear that they do. Most likely, this coordination provides extra circuitry for shared processing.

Many experiments have demonstrated how widely distributed the brain areas engaged in a task are. I think that this widespread distribution of neural processing is what underlies the completeness of human consciousness. A full sense of self emerges only when the neural circuit activity in multiple areas becomes shared and coordinated. It is as if these circuits recognize that there is a unity to all the information being processed by their neighbors, that it is all associated with the same bodily sources of information – that it is all about the sense of self. Possibly self-awareness is created in the subconscious and only becomes manifest in consciousness. I am loath to pursue this much further lest I fall victim to the following error:

Consciousness is like the Trinity. If it is explained so that you understand it, it has not been explained correctly (LeDoux 2002, p. 416).
Nonetheless, I would not be true to my original intent for this book if I stopped at this point. The next three chapters will set the stage for a more daring pursuit of this Holy Grail of consciousness.
References

Ackerman, J. M., Nocera, C., & Bargh, J. A. (2010). Incidental haptic sensations influence social judgments and decisions. Science, 328, 1712–1715.
Bucy, P. C., & Klüver, H. (1938). An analysis of certain effects of bilateral temporal lobectomy in the rhesus monkey, with special reference to "psychic blindness." Journal of Psychology: Interdisciplinary and Applied, 5, 33–54.
Custers, R., & Aarts, H. (2010). The unconscious will: How the pursuit of goals operates outside of conscious awareness. Science, 329, 47–50.
Dosenbach, N. U. F., et al. (2010). Prediction of individual brain maturity using fMRI. Science, 329, 1358–1361.
Farah, M. (1995). Visual perception and visual awareness after brain damage: A tutorial overview. In C. Umiltà & M. Moscovitch (Eds.), Attention and performance XV: Conscious and non-conscious information processing (pp. 37–75). Cambridge: MIT Press.
Friedman, R. (2005). Automatic effects of alcohol cues on sexual attraction. Addiction, 100, 672.
Galdi, S., Arcuri, L., & Gawronski, B. (2008). Automatic mental associations predict future choices of undecided decision-makers. Science, 321, 1100–1102.
Gao, M., et al. (2007). Functional coupling between the prefrontal cortex and dopamine neurons in the ventral tegmental area. Journal of Neuroscience, 27, 5414–5421.
Gazzaniga, M. S. (1998). The mind's past. Berkeley: University of California Press.
Gebhart, G. F., & Randic, A. (1990). Brainstem modulation of nociception. In W. R. Klemm & R. Vertes (Eds.), Brainstem mechanisms of behavior (pp. 315–352). New York: Wiley.
Goodman, M. (1992). Reconstructing human evolution from proteins. In J. Jones et al. (Eds.), The Cambridge encyclopaedia of human evolution (pp. 307–312). Cambridge: Cambridge University Press.
Hetherington, A. W., & Ranson, S. W. (1942). The spontaneous activity and food intake of rats with hypothalamic lesions. The American Journal of Physiology, 136, 609–617.
Hotz, R. L. (2002, February 18). Healing body by fakery. Los Angeles Times, p. 14.
Imai, J. (2008). Regulation of pancreatic β cell mass by neuronal signals from the liver. Science, 322, 1250–1254.
Kalra, S. P., et al. (1999). Interacting appetite-regulating pathways in the hypothalamic regulation of body weight. Endocrine Reviews, 20(1), 68–100.
Karrass, C. L. (1974). Give and take. New York: Thomas Crowell.
Klemm, W. R. (1990). Behavioral inhibition. In W. R. Klemm & R. P. Vertes (Eds.), Brainstem mechanisms of behavior (pp. 497–533). New York: Wiley.
Klemm, W. R. (1996). Understanding neuroscience. St. Louis: Mosby.
Klemm, W. R. (2004). Thank you brain for all you remember. What you forgot was my fault. Bryan: Benecton.
Klemm, W. R. (2008a). Core ideas in neuroscience. Bryan: Benecton. E-book: http://neurosciideas.com.
Klemm, W. R. (2008b). Blame game. How to win it. Bryan: Benecton.
Klemm, W. R., & Vertes, R. P. (Eds.). (1990). Brainstem mechanisms of behavior. New York: Wiley.
Lawrence, C. J., et al. (2005). Neurobiology of autism, mental retardation, and Down syndrome: What can we learn about intelligence? In C. Stough (Ed.), Neurobiology of exceptionality [electronic version] (pp. 125–142). New York: Springer.
Lazarus, R. S., & Lazarus, B. N. (1994). Passion and reason. Making sense of our emotions. New York: Oxford University Press.
LeDoux, J. (2002). Synaptic self. How our brains become who we are. New York: Viking.
Macphail, E. M. (1998). The evolution of consciousness. New York: Oxford University Press.
Massimini, M., et al. (2005). Breakdown of effective cortical connectivity during sleep. Science, 309, 2228–2232.
McGowan, K. (2009, September). Seven deadly sins. Discover, pp. 49–52.
Merzenich, M., et al. (1983). Progression of change following median nerve section in the cortical representation of the hand in areas 3b and 1 in adult owl and squirrel monkeys. Neuroscience, 10, 639–665.
Miller, G. (2004). Learning to forget. Science, 304, 34–36.
Minzenberg, M. J. (2008). Modafinil shifts human locus coeruleus to low-tonic, high-phasic activity during functional MRI. Science, 322, 1700–1702.
Rakic, P. (1998). Cortical development and evolution. In M. S. Gazzaniga & J. S. Altman (Eds.), Brain and mind. Evolutionary perspectives (pp. 34–42). Strasbourg: Human Frontier Science Program.
Shreeve, J. (2005, March). Corina's brain. All she is … is here. National Geographic, pp. 6–31.
Szegedy-Maszak, M. (2005, February 28). Mysteries of the mind. U.S. News and World Report, pp. 53–61.
Van Gaal, S., et al. (2008). Frontal cortex mediates non-consciously triggered inhibitory control. Journal of Neuroscience, 28(32), 8053–8062.
Vertes, R. P. (1990). Brainstem mechanisms of slow-wave sleep and REM sleep. In W. R. Klemm & R. Vertes (Eds.), Brainstem mechanisms of behavior (pp. 535–583). New York: Wiley.
Wilson, E. O. (1999). Consilience. The unity of knowledge. New York: Vintage Books.
4
Carriers and Repositories of Thought
Brain Structure

Brains contain circuits and connecting pathways within and between circuits. Every circuit contains a group of interconnected neurons that can be recruited as a functional unit for containing the neural signals that constitute any given thought. Thus, the brain thinks with circuitry—within circuits, between circuits, and among circuits. Collectively, the circuits constitute a network, the network of mind.

At the turn of the twentieth century, before neurons were discovered to exist as distinct cells, the nervous system was thought to be what scientists called a syncytium. That is, all neurons were thought to be fused, much like a basketball net, with the knots in the net representing the neuronal cell bodies. Such a circuit would be very homogeneous. Activity at any one zone within the net would spread almost immediately and unchangeably to all other parts of the net. Something like this does exist in primitive living creatures such as the hydra. However, in all higher animals, nerve cells do exist as separate and distinct units. Even though connected to each other by close contact points, called synapses, circuits of such elements can become complex, involving a mixture in which neurons have varying degrees and kinds of influence on the neurons to which they are connected.
Histology and the Neuron Doctrine
Camillo Golgi (1843–1926) and Santiago Ramón y Cajal (1852–1934)
The neuron doctrine declares that neurons are distinct and separate cells. The improvement of microscopes permitted the observations necessary for the formulation of this theory, but even more important were the advances in histological methods that permitted detection of individual neurons. Neurons do not take up histological stains like other cells, and therefore were not seen in true form until two Europeans made noteworthy staining discoveries.

Working at a home for incurables in Italy and equipped with only the simplest of scientific apparatus, the Italian physician Camillo Golgi discovered around 1873 a staining method he called the black reaction. This technique relied upon the reaction of potassium dichromate and silver nitrate to yield silver chromate, and while it only stained neurons that lacked the coating of insulation called myelin, it did so with a precedent-setting clarity that attracted the attention of prominent scientists, sparking new insights and earning Golgi a position at the University of Pavia.

Golgi's technique also attracted the attention of Santiago Ramón y Cajal, an innovative and motivated young Spaniard who adopted Golgi's silver staining technique. Ramón y Cajal soon improved upon the technique by intensifying it and applying it to thicker tissues obtained from infants or embryos prior to myelination of their nerve cells. By choosing nonmyelinated tissues, Cajal was able to abolish the impediment caused by myelinated cells not staining, thus allowing a more thorough and reliable method of staining. Applying this improved technique to various types of nervous tissue, Cajal was led to the conclusion that each nerve cell was an independent unit. None of his stains provided evidence for the theory of a reticulum of nerve axons, as was advocated by many of his contemporaries, including Golgi. Armed with this conviction and with his slides for evidence, he translated the
journals chronicling his work and presented his findings at an anatomist conference in Berlin, where they were met with wide acceptance.
The "neuron doctrine" was thus born, and it has subsequently been nourished by the work of many scientists. Both Golgi and Ramón y Cajal were awarded the Nobel Prize for their histological contributions leading to this theory. Notably, however, although Golgi was a key to the theory's development, he never truly accepted it. Rather, he obstinately interpreted his stains as evidence that a neuron's axons form a "neural net," or reticulum of branches. Correspondingly, he also sided against notions of cerebral localization and instead advocated that the brain functions on a holistic scale. Nevertheless, both the "neuron doctrine" and the concept of localization remain important concepts in modern neuroscience.
Properties of Neurons

Receptive Fields

All sense organs are not only stimulus-specific; they also detect stimuli from limited regions of space, known as "receptive fields." For a given sensory neuron, its receptive field is the region of space in which the presence of an appropriate stimulus will alter the firing of that neuron. Receptive fields have been identified for neurons that detect sound, light, and bodily sensations such as touch, pressure, and temperature. Each sensory system has a specific receptive field correlating with the environmental space—within or outside the body—that it monitors. The size of that field varies
with how much innervation is devoted to it. Field size inversely relates to acuity: the smaller the receptive fields, the finer the spatial detail that can be resolved. Innervation of the hand, for example, is much denser (i.e., the receptive fields are smaller) than that of the skin on the back.

Receptive fields can be defined in terms of single neurons; at this level of reception there is often overlap of the terminal axon arborizations of individual neurons, and consequently overlap of their receptive fields. This concept of overlap applies also to spinal nerves: each dorsal root of a spinal nerve receives input from a defined region of the body, called a dermatome, and adjacent dermatomes have overlapping receptive fields. It is commonly observed that the primary neurons associated with a given receptive field converge their output onto a smaller number of target cells, or secondary central neurons. From these neurons, further projections go to other neurons in the central nervous system.

Receptive fields are dictated by anatomy. They cannot be altered by experience. As such, the sensory messages carried by sensory neurons are constrained by the size and location of the field. For example, sensory neurons in the fovea of the retina have very small receptive fields, and they thus provide high resolution of small visual objects, such as small type in a textbook. Other retinal sensory neurons have larger receptive fields, and the most light-sensitive elements, the rods, have the largest fields and provide the least resolution. Moreover, similar constraints apply when the messages are projected into the brain. For example, sensory neurons in the nasal half of the retina project what they detect with the original resolution, crossing over at the optic chiasm to join axons from the temporal half of the other eye's retina before passing into the thalamus and then on to the visual cortex. Neurons in the temporal (lateral) half of the retina project into the hemisphere on the same side of the brain.
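To make the inverse relation between receptive-field size and acuity concrete, here is a minimal sketch in Python. The field widths, spacings, and touch positions are invented purely for illustration, not physiological measurements: a densely innervated, small-field "fingertip" strip can tell two nearby touches apart, while a sparsely innervated, large-field "back" strip cannot, because both touches fall within the same receptive fields. (The rule that two touches are distinguishable only if they activate different neurons is itself a simplification; real discrimination also exploits relative firing rates.)

```python
# Toy model: a 1-D skin strip innervated by neurons with evenly spaced,
# fixed receptive fields. Field width is the only free parameter.

def responding_neurons(stimulus_positions, field_centers, field_width):
    """Return, for each stimulus, the set of neurons whose receptive
    field (center +/- width/2) contains that stimulus position."""
    hits = []
    for s in stimulus_positions:
        hits.append({i for i, c in enumerate(field_centers)
                     if abs(s - c) <= field_width / 2})
    return hits

def two_point_discrimination(p1, p2, field_centers, field_width):
    """In this toy model, two touches are distinguishable only if at
    least one neuron responds to one touch but not the other."""
    a, b = responding_neurons([p1, p2], field_centers, field_width)
    return a != b

# "Fingertip": many neurons, small fields (arbitrary units of distance).
finger_centers = [i * 1.0 for i in range(50)]
# "Back": few neurons, large fields covering the same strip.
back_centers = [i * 10.0 for i in range(5)]

for name, centers, width in [("fingertip", finger_centers, 2.0),
                             ("back", back_centers, 20.0)]:
    resolved = two_point_discrimination(22.0, 25.0, centers, width)
    print(f"{name}: touches 3 units apart distinguishable? {resolved}")
```

Running the sketch prints True for the fingertip-like strip and False for the back-like strip, mirroring the familiar difference in two-point discrimination between those body regions.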
Labeled Lines

Information in the nervous system has to be brought in from the sensory organs, moved around, and ultimately may be passed on as movement commands. Circuit anatomy often dictates the path by which information flows, and in many cases such circuitry is dedicated to carrying specific kinds of information. Historically, a theory of dedicated pathways was developed around sensory function, and the flow of sensory information into the brain was attributed to so-called "labeled lines," or dedicated pathways. That is, the sensory world is coded in large part by the identity of neurons, as defined by their central pathways and connections and the type of stimulus to which they are most responsive. To the brain, neurons have labels, and their pathways are labeled lines that conduct messages equivalent to "something is going on in line so and so." For example, the pathway followed by visual stimuli runs from the retina in the eye through the optic nerve to specific parts of the thalamus and then to discrete locations in the occipital cortex (Fig. 4.1). Connections made by each type of sensory receptor terminate in different sections of the brain, and all the associated input information is registered in distinct, specific circuitry.
Fig. 4.1 Diagram illustrating how circuitry can constitute a carrier for specific kinds of information. Labeled lines run from the skin through the spinal cord and thalamus to the body-senses cortex, and from the retina through the thalamus to the visual cortex. Thus, one pathway is a carrier of skin sensation, while the other is a carrier of visual input. In short, anatomy helps the brain identify the kind of stimulus

Each sensory system correspondingly has its own modality, which is the complete collection of information conveyed by a specific class of sensory receptors. There are, however, exceptions to this norm, which will be discussed below.

There has been some controversy over the applicability of labeled-line theory to taste and olfactory perception. It does seem to be true that cells that detect tastes and odors have chemicals to which they preferentially respond, but they may also detect high concentrations of certain other chemicals. More than that, in taste and odor perception, a given sensation may be the result of a combination of many taste or olfactory chemical inputs, rather than one particular taste or odor input. This accounts for the fact that, while we have a limited variety of taste and olfactory receptor cells, we can distinguish a myriad of different taste and olfactory perceptions. The terms "bouquet effect" or "cocktail effect" are commonly used to refer to this complex mixing of input from different chemical receptor types to produce unique sensations. "Bouquet" perception is probably an example of neural coding in a population of neurons, a matter that is discussed extensively in Chap. 6. The "bouquet effect" may still make use of labeled lines. If you consider activity in various pathways
associated with a given function as a "set" of patterns that collectively contribute to that function (such as complex taste perception) in a way that no one component of activity can, these pathways may be thought of as labeled-line pathways that converge and overlap to yield the bouquet effect. Some neurons seem to require these converging combinations of receptor activation in order for an odor to be perceived. Indeed, investigators have recently demonstrated that binary odor mixtures stimulate olfactory cortex neurons that are not stimulated by the individual components alone.

Perhaps the bouquet effect and labeled-line theory can also be used to explain not only perceptions within a given sensory system, but perceptions shared by two sensory systems. For example, in wine-tasting it is common practice to take in the aroma of the wine before sipping it as a means of enhancing flavor perception. This suggests that the olfactory and gustatory systems each have labeled-line pathways that converge to combine input from the two sensory systems, yielding a "bouquet effect" that crosses the boundary between sensory systems. Many studies have indeed implicated taste and odor perception as two sensations that converge to yield flavor, but there is also a very simple way to demonstrate it. Try holding your nose while eating something, and notice how much the flavor is diminished; it may even become unidentifiable. This again corresponds to the philosophy associated with combinatorics, that the whole is qualitatively different from the sum of its parts. Remember this principle later in the book when issues of consciousness are considered.

One other point should be discussed when considering labeled-line theory and sensory perception. There is a rare condition known as synesthesia wherein one type of sensory input elicits a response from not only the expected sensory system but also an unrelated one. Thus, whereas the "bouquet effect" results in two separate sensory system inputs combining to yield one distinct perception, synesthesia results from one sensory system input yielding two distinct perceptions. For example, sounds or words might also be perceived as colors, or sights might also be perceived as smells. Recent research offers two main possible explanations for this condition, although other theories are also being investigated. First, it may be due to the failure of a normal process known as neural pruning, wherein unused neural connections, or connections between different brain areas that are present at birth, are terminated. Over time, learning may result in these extra connections becoming synesthetic. The second explanation is similar; it suggests that cross-activation of different, often adjacent, sensory systems results in synesthesia. Taken in the context of labeled-line theory, either of these explanations would correspond to labeled-line pathways from a certain type of sensory receptor branching to reach both the sensory system with which they correlate and another, unrelated sensory system. It should be noted that the first explanation emphasizes the important point that although labeled-line pathways are traditionally thought of as innate, they can to an extent be altered by learning.

The labeled-line theory can also be extended from detection of a stimulus to the response to a stimulus. In simple spinal reflexes, for example, sensory information flows in dedicated pathways, and a reflex movement response likewise may flow in other dedicated paths.
An obvious example is when you touch a hot stove. The painful stimulus evokes impulse traffic in a pathway that automatically directs movement instructions back to the arm muscles needed to flex the elbow and lift the hand away from the hot stimulus.
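The labeled-line idea—that the meaning of a message lies in which line is active rather than in the impulses themselves—can be sketched in a few lines of code. The pathway names and the "bouquet" combinations below are invented solely for illustration; the point is that the readout consults only the identity of the active lines, and that a combination of lines can be read out as a perception that no single line signals on its own.

```python
# Each afferent pathway is a "labeled line": the brain reads the modality
# from the identity of the active line, not from the spikes themselves.
LINE_LABELS = {
    "optic_nerve": "vision",
    "cochlear_nerve": "hearing",
    "dorsal_column": "touch",
    "spinothalamic": "pain/temperature",
}

# A crude "bouquet" lookup: certain combinations of chemosensory lines are
# read out as a perception that none of the lines signals on its own.
BOUQUETS = {
    frozenset({"sweet_line", "floral_odor_line"}): "ripe peach flavor",
    frozenset({"sweet_line", "sour_line"}): "lemonade flavor",
}

def interpret(active_lines):
    """Report what 'something is going on in line so-and-so' means,
    first checking for combination (bouquet) codes."""
    combo = BOUQUETS.get(frozenset(active_lines))
    if combo:
        return combo
    return [LINE_LABELS.get(line, "unknown modality") for line in active_lines]

print(interpret({"optic_nerve"}))                     # ['vision']
print(interpret({"sweet_line", "floral_odor_line"}))  # 'ripe peach flavor'
```

The dictionary lookup stands in for anatomy: the same spike train means "light" or "touch" depending solely on which labeled pathway carries it.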
Another, much more dramatic example comes from experiments conducted in the 1940s by Roger Sperry. He severed a frog's optic nerve, rotated the eye 180°, and allowed the cut nerve fibers to regenerate. The regenerating fibers followed the same pathways they had originally followed, which caused the visual perceptive field of such frogs to be rotated 180°. This led Sperry to develop the chemoaffinity hypothesis, which states that axons form precise connections with their targets based on chemical markers. But the resulting behavior was also remarkable. Now, when such frogs saw a bug above the nose, they would strike downward to get it. Likewise, when a bug was below the nose, they would strike upward.
Neuronal Circuits and Networks

Neurons are organized into pathways in which they connect to one another. Nerve impulses circulate in these circuits. The circuits may be fixed ("hard-wired") or they may be recruited on the fly for temporary purposes. Circuits are linked together to form networks (Fig. 4.2), sometimes involving hundreds, even thousands, of neurons. A given neuron in the brain may participate in numerous such networks. Many of these networks are "hard-wired," anatomically prescribed during embryogenesis and early childhood. But even these hard-wired networks are subject to modification with age and experience. Most notable among such anatomical changes is the demonstration that myelination of neurons in the human brain is a long, drawn-out process, not complete until the early 30s.
Fig. 4.2 Every part of the brain is connected to every other part, not directly (as diagrammed on the left) but indirectly (as on the right)
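The point of Fig. 4.2—that sparse direct wiring still lets every region influence every other region indirectly—can be illustrated with a short sketch. The region names and connections below are hypothetical; a breadth-first search simply confirms that each node can reach all the others through intermediate nodes.

```python
from collections import deque

# Hypothetical sparse wiring: no region connects directly to every other,
# yet every region is reachable from every other through intermediaries.
connections = {
    "A": ["B"],
    "B": ["C", "D"],
    "C": ["A"],
    "D": ["E"],
    "E": ["A"],
}

def reachable_from(start, graph):
    """Breadth-first search: every region reachable from 'start',
    directly or through any number of intermediate regions."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

for region in connections:
    others = reachable_from(region, connections) - {region}
    print(f"{region} can influence: {sorted(others)}")
```

With only six directed links among five nodes, every node can still influence every other one—an analog of the indirect, "everything connects to everything" wiring on the right side of the figure.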
Many of the hard-wired circuits subserve non-conscious mind, such as spinal and cranial nerve reflexes and the hypothalamic-pituitary axis that controls hormone release. Many circuits of this type consist of as few as three neurons and thus hardly qualify as networks. Especially for subconscious and conscious functions, the brain uses huge groups of neurons, and the networks are typically adaptive. That is, neurons can be recruited or dropped out on an as-needed basis. The brain learns how to allocate its resources. When a new task is being learned, many neurons and networks may be required, but as the learning becomes established, the number of neurons and circuits required diminishes.

Because everything in the brain is connected to everything else, at least indirectly, there is the possibility for neurons in different parts of the brain to be recruited into a variety of circuits. The same neuron may well participate in numerous circuits, each of which sustains a given thought, feeling, or behavior. Life experiences and on-going thought processes can dynamically create new circuits. One example of this dynamic flexibility comes from recent memory research showing that the representation of a memory is held by a select subset of eligible neurons. In a study of fear memory in the lateral amygdala of mice, Jin-Hee Han and colleagues observed that neurons compete for inclusion in the memory-trace circuitry, with the "winning" neurons being determined by their relative amount of CREB protein at the time of learning (Han et al. 2007). Neuroscientists have known for many decades that during embryonic development, neurons compete for registration in the hard-wired circuitry of the brain. Now, it seems that neurons also compete for registration in the circuitry that supports a given memory trace.

One of the best indications of adaptive networks has come in recent years from brain imaging studies, which map changes in regional blood flow, which is proportional to the amount of neuronal activity. A variety of brain-scan experiments involving visual and auditory processing, as well as higher cognitive functions, show that if a required task is difficult, there is much more neural activity in more areas than if the task has become easier through familiarization and learning. As a task becomes rehearsed and learned, the amount of circuitry needed to accomplish it apparently decreases, because fewer brain areas "light up" in the brain scan. One of the great challenges for modern neuroscience is to learn how neurons know when they need more help and how they recruit helper neurons into task-specific networks. Likewise, how do neurons know when they have all the help they need and can therefore release neurons from their task network so that they become available for other purposes?

Most brain functions are accomplished through parallel and extensively distributed networks that permit recurrent feedback. The habenula is a good case in point (Klemm 2004). The habenula consists of two tiny clusters of neurons that sit on top of and along the midline of the thalamus and mediate diverse functions that include subconscious responses to painful stimuli, memory, motor activity, sexual and maternal behavior, stress, affective states (anxiety, depression, and reward phenomena), sleep, and eating and drinking behavior. The anatomical connections that make such diversity possible are arranged so that habenula neurons can participate in the many circuits
Fig. 4.3 The habenula is a pair of small clusters of neurons that lies along the midline. It has medial and lateral divisions (mHb and lHb). Both divisions have multiple reciprocal connections with many other parts of the brain that have major functions in subconscious thinking (From Klemm 2004)
that support the diverse functions and behaviors just mentioned. Habenula neurons receive substantial input from multiple emotion-controlling parts of the brain and send output to a large midline nucleus in the brainstem that in turn relays habenula output to multiple areas of the limbic system and to key nuclei in the brainstem. These circuits provide enormous opportunities for multiple modes of operation. Based on the known hard-wiring, it seems clear that neurons in the medial and lateral Hb participate in multiple circuits (Fig. 4.3). Experimental manipulation of Hb and interpeduncular nucleus (IPN) neurons produces a huge range of behavioral and physiological effects, indicating that these neurons are shared components of multiple-function networks.

The anatomy clearly suggests that the habenula (Hb) and interpeduncular nucleus (IPN) circuitry provides an anatomical substrate for oscillation and recurrent feedback. Consider, for example, the IPN connectivity. IPN projections to its brainstem target areas have back projections to the IPN. Hb-IPN pathways provide multiple opportunities for feedback and oscillation, which might be elucidated by synchronization analysis of multiple-unit activity or field potentials from multiple, anatomically linked sites in association with specific functions or behaviors. Unfortunately, such research has never been attempted.
Fig. 4.4 Network connections of the habenula-interpeduncular nucleus system that could support feedback and oscillation. CG central grey, EPN entopeduncular nucleus, lH lateral hypothalamus, R raphe nuclei, S septal nuclei, VTA ventral tegmental area (From Klemm 2004)
Notice that these anatomical connections provide multiple routes for interaction among the nodes in this network. This diagram shows only the known connections of the habenular nuclei. Realize that each of the other brain areas in the diagram has multiple connections, not shown, with other parts of the brain. Careful inspection reveals that such a network arrangement provides an anatomical substrate for oscillatory activity (Fig. 4.4). That is, activity in one node can activate another, which activates another, and so on, with feedback going back to the original node to keep the oscillation going.

I mention this because neuroscientists are biased to think about oscillatory activity in the neocortex. I suppose this is because the EEG shows clear oscillatory activity, and almost all of the voltage signal in the EEG comes directly from the neocortex. But oscillation is also well known in several limbic system structures that have connections with the habenula. These structures are key components of the subconscious mind. If you stick an electrode into any one component of this Hb-IPN system, you rarely see any sign of oscillatory activity. That seems strange, given that the circuit design seems ideal for producing oscillation. Though nobody has studied the matter in this system, I think the answer may lie in the heterogeneity of inputs to given nodes. For example, the lateral IPN gets input from five different places, and it is likely that these diverse sources have varying phase relationships with each other and with the Hb.

The reader may wonder why I have used the Hb-IPN system to illustrate networks. Most neuroscientists hardly ever think about this system, and it therefore has not been studied much — or even at all — as a system. Yet its strategically placed anatomical connections make this system a prime candidate for productive systems-level research. Researchers will get around to it someday.
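To illustrate the bare principle of loop-generated oscillation—activity handed from node to node and fed back to the starting node—here is a deliberately cartoonish sketch. It is not a model of real Hb-IPN physiology: the node names, the single closed loop, and the fixed conduction delay are assumptions made only to show that the anatomy of a closed loop by itself sets an oscillation period.

```python
# Cartoon of recurrent-loop oscillation: a ring of nodes in which each node,
# once activated, excites the next node after a fixed conduction delay.
NODES = ["Hb", "IPN", "raphe", "VTA"]   # loop membership is illustrative only
DELAY = 5                                # time steps per hop (arbitrary units)
STEPS = 45

active_at = {0: "Hb"}                    # which node fires at each firing time
t, current = 0, "Hb"
while t + DELAY <= STEPS:
    t += DELAY
    current = NODES[(NODES.index(current) + 1) % len(NODES)]
    active_at[t] = current

for time, node in sorted(active_at.items()):
    print(f"t={time:2d}: {node} fires")
# Each node recurs every DELAY * len(NODES) = 20 time steps -- an oscillation
# whose period is set entirely by the loop's length and conduction delay.
```

The heterogeneity issue raised above also drops out of this cartoon: if a node received converging loops of different lengths and phases, its firing times would no longer line up into one clean rhythm, which may be one reason single-site recordings show little overt oscillation.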
Fig. 4.5 Behaviors for which multiple circuits share common components in the habenula (Hb) and the interpeduncular nucleus (IPN). This highly schematic diagram is not meant to exclude the fact that the medial and lateral Hb are fundamentally distinct and only weakly interconnected. Also, the diagram does not take into account that these circuits may involve different subnuclei in either the Hb or the IPN (From Klemm 2004)
Many experiments implicate the Hb-IPN axis in a variety of brain functions and behaviors (Fig. 4.5). These include processing of painful stimuli, learning and memory, motor activity, sexual and maternal behavior, stress, affective states (anxiety, depression, and reward phenomena), sleep, and eating and drinking behavior. I propose that these multiple functions arise because the Hb-IPN network provides shared components for the multiple circuits that subserve these different elements of behavior. Making lesions in or electrically stimulating the habenula of experimental animals makes it clear that these various behaviors arise at least in part via processes in the Hb-IPN axis. Many forms of these behaviors occur subconsciously in humans. Of special interest is the question of whether such an axis is involved in communication between subconscious and conscious processing. Animal research cannot answer such a question.

For higher brain functions, such as conscious operations that are enabled via activity in the cerebral cortex, the network organization is even more complex and adaptive. Nobody knows how complex networks in the brain form in response to learning experience, but some interesting research indicates that networks can be self-organizing. Of particular interest is how such non-hard-wired networks can arise while still being resistant to interruption or failure. Redundancy is one possible solution, but that wastes neural resources.
In general, the hard-wiring of cortical circuits does not change in the short term. Short-term learning and memory effects must therefore be represented by changes in activity patterns in the fixed circuitry. In other words, a fixed circuit can contain multiple representations simply by changing the ongoing patterns of activity within the network, without changing the network structure. Some biological systems, such as slime molds, provide a model for how such networks might self-organize in the brain. These organisms grow in the form of a network as part of their foraging "strategy" for new resources. Such systems continually adapt to the environment and do so in a cost-effective way, yet without any centralized control. Using such a biological model, a team of Japanese researchers has constructed a mathematical model that describes how various functional networks can form and be sustained (Tero et al. 2010).
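The flavor of such self-organization can be sketched with a much cruder rule than the published flow-reinforcement model of Tero and colleagues: in the toy network below (nodes, links, and parameters are all invented), links that lie on the paths actually used between a few key nodes are strengthened, unused links decay, and the network prunes itself to a lean, functional structure without any central controller.

```python
from collections import deque
from itertools import combinations

# Invented network: nodes stand for "resource sites", edges for candidate links.
edges = {("a", "b"), ("b", "c"), ("a", "c"), ("a", "d"), ("d", "c"), ("b", "d")}
weight = {e: 1.0 for e in edges}          # link strength, all equal at start
demands = list(combinations("abc", 2))    # node pairs that traffic must connect

def shortest_path(src, dst, alive):
    """Breadth-first shortest path over currently surviving links."""
    prev, queue = {src: None}, deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            path, n = [], node
            while n is not None:
                path.append(n); n = prev[n]
            return list(reversed(path))
        for u, v in alive:
            for nxt in ((v,) if u == node else (u,) if v == node else ()):
                if nxt not in prev:
                    prev[nxt] = node; queue.append(nxt)
    return None

for _ in range(30):                       # rounds of use-dependent adaptation
    alive = {e for e in edges if weight[e] > 0.05}
    used = set()
    for src, dst in demands:
        path = shortest_path(src, dst, alive)
        if path:
            used |= {tuple(sorted(p)) for p in zip(path, path[1:])}
    for e in edges:                       # reinforce used links, decay the rest
        weight[e] = min(1.0, weight[e] + 0.2) if tuple(sorted(e)) in used else weight[e] * 0.7

print({e for e in edges if weight[e] > 0.05})   # the pruned, load-bearing links
```

After a few dozen rounds only the links that actually carry traffic between the demanded node pairs survive—cost-effective, decentralized, and adaptive, which is the general property the slime-mold work makes vivid.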
Topographical Mapping

Major sensory and motor systems are hard-wired and topographically mapped. This is an extension of the labeled-line idea mentioned earlier. That is, the body, both inside and out, is mapped by the nervous system. Major sensory systems map the external world within their own circuitry; likewise, the nervous system contains a mapped control over the muscles of the body. Mapped regions may have different inputs or outputs or may share the same ones. Maps are interconnected so that projections from one map to another trigger a back projection to the first map. Mapping can persist at all levels in a given pathway.

In sensory systems such as vision and hearing, locations in the three-dimensional sensory world are represented by central nervous system neurons in such a way that neighboring locations in the sensory world are represented by neighboring neurons in the nervous system. Likewise, in motor systems, neurons that activate certain muscles have neighboring neurons that activate neighboring muscles. To explain this inner model of the body in terms of neural function, we can think of the model as:

an IMPLEMENTATION (nerve impulse patterns)
of a REPRESENTATION (topographical maps)
of an ABSTRACTION (sensory transduction/motor programs)
of a REALITY (physical stimuli/?)

Note that it is not so obvious what the "reality" basis is for motor programs. The underlying reality must include some kind of combination of muscle and bone anatomy, neural circuitry, and various degrees of intentionality.

We know that topographical maps exist from studies with appropriate monitoring techniques. The mapping idea is most commonly illustrated in terms of the cerebral cortex. Certain areas process vision (visual cortex), others sound (auditory cortex), others language, and so on. But mapping occurs throughout the spinal cord and brain. With microelectrodes that can record responses at various points along a sensory or motor pathway, an observer
can witness the point-to-point projections of activity. Conversely, if one knows, from electrical recordings for example, the anatomical location of a projection, the information flow along the pathway can be mimicked by electrical stimulation or abolished by a lesion strategically placed in the topographically mapped area.

Not all parts of the brain have clear topographical mapping. True, given neurons within these structures do connect to and from specific target neurons elsewhere, but there does not seem to be an orderly body mapping of the kind found in other parts of the brain. Such non-mapped areas include parts of the hypothalamus and basal ganglia. How these regions interact with the mapped systems remains among the great enigmas of neuroscience.

Topographical mapping even occurs within single neurons. Large cells in the cortex are oriented more or less perpendicular to the cortical surface, and inputs to a given cell are arranged topographically at various points along its longitudinal axis. This micro-scale topographic mapping has been studied most intensively in the pyramidal cells of the hippocampus, a primitive part of the cortex.
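As a minimal illustration of what "topographic" means in this scheme, the sketch below (with invented numbers) maps a strip of 100 skin positions onto a row of 20 cortical neurons so that neighboring skin positions always project to the same or to neighboring neurons; the mapping compresses the representation but preserves neighborhood relations.

```python
# Toy somatotopic map: 100 skin positions projected onto 20 cortical neurons.
# The map preserves neighborhood relations: adjacent skin points always map
# to the same cortical neuron or to adjacent ones.
SKIN_POINTS = 100
CORTICAL_NEURONS = 20

def cortical_target(skin_position):
    """Index of the cortical neuron representing this skin position."""
    return (skin_position * CORTICAL_NEURONS) // SKIN_POINTS

# Verify the defining property of a topographic map: neighbors stay neighbors.
for s in range(SKIN_POINTS - 1):
    assert abs(cortical_target(s + 1) - cortical_target(s)) <= 1

print(cortical_target(0), cortical_target(50), cortical_target(99))  # 0 10 19
```

A non-mapped region would correspond to scrambling this function: individual connections would still be specific, but the orderly neighbor-to-neighbor layout would be gone.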
Cortical Columns

The outer mantle of the human brain is called the cortex, or more correctly the "neocortex," because in the evolutionary scheme of things it appeared only as mammals came along. Neocortex is most developed in primates and humans. I stress the neocortex here (and in the last chapter) because, when properly engaged, it is the seat of consciousness.

Anatomically, the neocortex has six layers of cells, and they are arranged in column-like patterns that are perpendicular to the brain surface. The columnar appearance, as seen microscopically, has been confirmed by electrical recordings. That is, the tissue functions as a series of columns, each of which responds to a selected component of a stimulus. The columns are about 1 mm in diameter. A column shares about 2,000 fibers to and from the underlying thalamus and about 100,000 fibers to and from other areas of neocortex. Some layers have large pyramidal-shaped cells, each of which can receive inputs at some 10,000 or so synaptic contact points that arise from cells in adjacent and distant cortical columns and from neurons in the brainstem (see earlier comments on triggering consciousness in Chap. 2). These rich interconnections no doubt provide the essential communication channels to bind information in diverse cortical locations into holistic representations.

Within a column, a clear unity of function is evident. As an example, Vernon Mountcastle suggested that neurons in the visual cortex that are horizontally more than 0.5 mm (500 µm) from each other do not have overlapping sensory receptive fields (Mountcastle 1997). That is, vision seems to be processed by multiple cortical columns that differ in what aspects of the image they have captured. Neurons within a given column work together as a mini-network to process the information the column receives from its small part of the visual field. The coherent representation of a complete image requires coordination of what is happening in each cortical column.
The cortical column concept is central to an understanding of consciousness. This subject will be elaborated in Chap. 7.
Connectivity

No part of the brain operates in isolation. Every part receives input from somewhere and delivers output to specific target areas. Typically, multiple inputs and outputs exist for any given brain area. The most conspicuous feature of a brain sliced in almost any direction is the huge volume occupied by white-matter fiber tracts. Roughly half of the brain substance is white matter. There are tracts connecting cortex to brainstem, tracts connecting subcortical structures both within and across hemispheres, and tracts connecting all parts of the cortex within and across hemispheres. These intracortical connections (Fig. 4.6) are of special interest because they are no doubt a key to understanding the conscious mind.
Fig. 4.6 Dense network of axonal pathways connecting multiple areas of cortex, within and across hemispheres. Such maps are noninvasively constructed in humans by special imaging techniques (From Hagmann et al. 2008)
Brain Physiology

Post-synaptic Field Potentials

Synaptic regions have morphological specializations of both pre- and post-synaptic membranes and the adjacent cytoplasm. Transmission across synapses is of two kinds, electrical and chemical. A pioneer in the study of synaptic transmission was the Nobel laureate John Eccles.
Synaptic Function
John Eccles (1903–1997)
Nobody likes to be proved wrong, but it is really tough when your reputation is built on a mistake and you have to disavow your beliefs before the very people whom you originally convinced to believe in your mistake. John Eccles first became famous for convincing fellow scientists that all neurons were electrically coupled to each other. The idea was that impulses in one neuron "spark over" to create impulses in adjacent neurons. But then Eccles discovered that this idea would not hold in the face of new data that he himself was among the first to obtain.

Eccles conducted his research on synaptic transmission at the University of Otago, New Zealand, from the mid-1940s to the early 1950s. Later in the 1950s he moved to the Australian National University in Canberra, where one of his daughters also conducted neuroscience research and where he carried out the biophysical work on synaptic transmission for which he was awarded the Nobel Prize in 1963.
During his study of neuromuscular transmission, Eccles utilized glass microelectrodes filled with electrolytes that allowed him to record DC and slowly changing membrane voltages from inside a cell. These microelectrodes were a recent technological improvement developed in the 1940s by the scientists Gilbert Ling and Ralph Gerard, who had refined the electrode tip size to a scale that could impale a neuron without damaging it. Eccles applied these glass microelectrodes to study reflexes of the spinal cord. He used anesthetized cats, placing a stimulating electrode on a nerve that carried sensory information into the cord while looking for electrical responses from the microelectrode as it moved around in various parts of the cord where he knew the sensory input must go.

Whenever the oscilloscope screen showed a sudden deflection that remained steady at around −70 mV (the "resting potential"), Eccles knew that the microelectrode had penetrated a neuron. Then, by stimulating the input nerve, he could monitor how his impaled cell responded to the stimulus. Stimulation resulted in a transient change in polarization; that is, the beam on the oscilloscope showed a little blip of a few millivolts, which then decayed and returned to the original steady state. The change could be up or down, depending on which cell his microelectrode had penetrated. Eccles called these changes "postsynaptic potentials" (PSPs), because he knew they were coming from a neuron that was the target of the sensory neuron he was stimulating. If the oscilloscope beam went up, it meant that the cell was being "depolarized" (becoming less polarized), and if it went down, the cell was being hyperpolarized.

Eccles noted that the larger the stimulus voltage, the larger the response. With depolarizing PSPs, making the stimulus strong enough caused a large PSP that suddenly exploded into a nerve impulse (an "action" potential, as opposed to the resting potential). He thus called such PSPs "excitatory postsynaptic potentials," or EPSPs. But in the case of hyperpolarizing PSPs, there was a limit (about −90 mV); no amount of sensory stimulation could make the polarization any greater. He called these "inhibitory postsynaptic potentials," or IPSPs, because they made it harder for a target neuron's membrane voltage to reach the instability point where impulses could be discharged.

These PSP observations created a dilemma for Eccles' original adamant stance that neurons were coupled electrically. Why the dilemma? The PSP data clearly showed a delay of several milliseconds between the time of the stimulus and the appearance of a PSP. This was true even in cases where he knew there were no intervening neurons between the stimulated neuron and the recorded neuron (as with the neurons that mediate the so-called monosynaptic knee-jerk reflex). If the transmission were electrical, the PSP should appear almost instantaneously with the stimulus. It followed that the delay must be attributed to undiscovered chemical mechanisms of coupling
in the synapse. Eccles was forced by his own experiments to recant his own theories. But in so doing, he made some of the most fundamental discoveries in the whole history of neuroscience.

Discovery of PSPs was not the limit of his genius. Again using microelectrodes and studying IPSPs, Eccles noted that when chloride ions accidentally leaked into a neuron being recorded, an IPSP-like change occurred, even though a sensory nerve had not been stimulated. This result implicated ionic movement as the force that changes membrane potential. His predecessors, Hodgkin and Huxley, had shown that the nerve impulse resulted from sodium ion influx, followed by potassium ion efflux. Eccles showed that chloride influx hyperpolarizes neurons so that they are less likely to fire impulses. In short, Eccles had discovered a mechanism of inhibition: the inhibitory transmitters later discovered by others create their effect by opening ionic gates in the postsynaptic membrane that allow negative chloride ions in the extracellular fluid to flow into a neuron, causing an IPSP. Other researchers later established that EPSPs are caused by the opening of other postsynaptic membrane gates that allow extracellular sodium to enter.

A similar technique of stimulating known pathways and recording intracellular responses of target cells also led Eccles and a co-researcher, Masao Ito, to construct a complete "circuit diagram" of the neurons in the cerebellum. Similar approaches later led others to discover the circuitry of neocortical columns (see Chap. 8). In his later years, Eccles took on the task of illuminating the mind-brain problem and developed a quantum theory of consciousness focusing on the application of quantum mechanics at the level of the synaptic cleft (see the section on Quantum Theory of Consciousness in Chap. 8 for more details).

Sources:
Eccles, J. C. (1975). Under the spell of the synapse. In G. Adelman, J. Swazey, & F. Worden (Eds.), The neurosciences: Paths of discovery (pp. 159–179). Cambridge, MA: The MIT Press.
Ito, M. (1997). John C. Eccles (1903–1997). Nature, 387, 664.
The Nobel Foundation. (1972). Nobel lectures, physiology or medicine 1963–1970. Amsterdam: Elsevier Publishing Company.
Electrical synapses are prominent in invertebrates, but they also exist in mammals. In an electrical synapse, there is direct electrical coupling between a presynaptic neuron and its post-synaptic target. That is, ions flow directly from one cell to the other. Little-to-no modification of signal occurs in electrical synapses.
Their purpose seems to be the mediation and spread of activity throughout circuits that control automated behaviors that need to occur reliably and quickly. Chemical synapses, on the other hand, couple pre-synaptic and post-synaptic neurons via the release of chemical signals in which a “transmitter” chemical is released from a pre-synaptic neuron, whereupon it interacts with specific receptor molecules on its post-synaptic neuronal membrane. Chemical transmission provides enormous capacity for processing information as it passes from neuron to neuron. Release of the chemical transmitter can produce a graded change in the membrane voltage of the postsynaptic neuron. If that potential is depolarizing (that is, reduces the resting membrane potential, making it less polarized), we call it an excitatory postsynaptic potential (EPSP) because at some point a nerve impulse may be triggered. A threshold may be reached that triggers a sudden non-linear voltage spike, which we call the nerve impulse. The impulse is a voltage spike caused by a sudden opening of sodium channels (pores) in the membrane. Sodium rushes in because of its concentration gradient (more is outside the cell than inside) and because of the electrical charge (net charge in the interior of a resting neuron is negative, attracting the positively charged sodium). The internal negativity is attributed to intracellular proteins, because several amino acids are negatively charged. Synapses are nodal points of circuitry, in that a given neuron will receive hundreds or more inputs from multiple sources at more or less the same time. Thus, any neuron has the possibility of being a “decision or choke-point” in the flow of information through a network. Indeed, many circuits contain inhibitory neurons that release a hyperpolarizing transmitter that has the function of temporarily interrupting impulse flow in a circuit. Thus, such inhibitory synapses may act as a pacemaker and cause the circuit to oscillate. A key feature of chemical neurotransmission is that as long as the membrane potentials are below threshold for firing impulses, the membrane potential can summate inputs. That is, if neurotransmitter at one synapse causes a small depolarization, a simultaneous release of transmitter at another point located elsewhere on the same neuron will summate with it to cause a larger depolarization. This so-called “spatial” summation mechanism is complemented by “temporal” summation, wherein successive releases of transmitter onto one synapse will cause a progressive increase in depolarization as long as the presynaptic changes occur faster than the decay rate of the membrane potential changes in the postsynaptic neuron. Neurotransmitter effects last several times longer than presynaptic impulses, and thereby allow summation and prolongation of effect. Thus, the EPSP differs from impulses in a fundamental way: it summates inputs and expresses a graded response, as opposed to the “all-or-none” response of impulse discharge. At the same time that a given postsynaptic neuron is receiving and summating excitatory neurotransmitter, it may also be receiving “conflicting” messages that are telling it to shut down firing. These inhibitory influences are mediated by inhibitory neurotransmitter systems that cause postsynaptic membranes to hyperpolarize, as reflected in an inhibitory postsynaptic potential (IPSP).
Such effects are generally attributed to the opening of selective ion channels that allow either intracellular potassium to leave the postsynaptic cell or extracellular chloride to enter. In either case, the net effect is to add to the intracellular negativity and move the membrane
potential farther away from the threshold for generating impulses. When EPSPs and IPSPs are generated simultaneously in the same neuron, the output response will be determined by the relative strengths of the excitatory and inhibitory inputs. Output “instructions,” in the form of impulse generation, are thus determined by this “algebraic” processing of information. Inhibition is the critical mechanism in producing organized operation (and thinking) in the nervous system. Think what it would be like if the brain had no ability to inhibit its own activities. In that case, at the first sign of excitation, there could be runaway excitation, much like what is experienced in epileptic seizures. Indeed, epilepsy is caused by removal of normal inhibitory influences, a process that is typically called disinhibition. Inhibitory mechanisms operate at all levels of the nervous system, from the reflex-induced inhibition of spinal reflexes, to the rhythmic control over respiration and heart rate in brainstem centers, to regulation of cortical activity by the thalamus. Of particular interest for understanding thinking processes is the inhibition in the thalamus, especially inhibitory processes in the part of the thalamus known as the nucleus reticularis thalami (nrT). Inhibitory neurons here form a thin sheet that surrounds the thalamus. These neurons connect both with large regions of the cortex and with the brainstem reticular formation. Activity between the thalamus and cortex must pass through this nrT. Relaxed mental states are accompanied by so-called alpha brain-wave activity of the cerebral cortex. Alpha activity is rhythmic field potential oscillation in the range of 8–12 waves per second. The pacing of this rhythm is produced by IPSPs generated out of the nrT. When a person becomes more mentally active, more attentive and mentally challenged, the cortical field potentials shift from alpha activity into low-voltage, fast activity. Animal studies have shown that brain activation is also associated with large (up to 1,000 µV) shifts in ultra-slow potentials. In the cortex, these potentials are electronegative, but in the thalamus, they are electropositive. The slow electropositivity in nrT is much larger (up to 10,000 µV) and presumably arises from synchronous hyperpolarizations (i.e. inhibition) of numerous neurons in the nrT. Normally, these are not seen, because most electrophysiological studies use AC amplifiers that do not register these very slowly changing voltages: DC amplifiers are needed. Incidentally, my student and I did a study once on the brain-wave effects of decapitation of laboratory rats, a common way that rats are sacrificed after experiments designed to study brain chemistry. Decapitation not only activates the cortex, as indicated by pronounced low-voltage, fast activity, but also is associated with a massive ultra-slow voltage shift (Mikeska and Klemm 1975). Presumably decapitation is a massive excitatory stimulus, accounting for the ultra-slow response. Ultra-slow field potentials must have profound relevance to mental processes, but almost nothing is known because so few studies have used DC amplifiers (because they are sensitive to artifacts). Intuitively, we might speculate that ultra-slow voltages could create a large and relatively long-lasting extracellular field that can bias the neurons located within it. Large positive fields tend to impose hyperpolarization (make neurons less able to fire), while the opposite is true of large negative fields.
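To make the summation and “algebraic” processing described above concrete, here is a minimal numerical sketch of a leaky integrator that sums EPSPs and IPSPs and fires when a threshold is reached. All of the parameters (decay time constant, PSP sizes, threshold) are hypothetical values chosen only for illustration, not measurements from any particular neuron.

```python
import numpy as np

# Toy leaky-integrator sketch of spatial and temporal summation.
# All parameters (time constant, PSP sizes, threshold) are illustrative only.
dt = 0.1                      # ms per simulation step
t = np.arange(0, 50, dt)      # 50 ms of simulated time
v = np.zeros_like(t)          # membrane voltage relative to rest (mV)
tau = 10.0                    # decay time constant of a PSP (ms)
threshold = 10.0              # depolarization needed to fire (mV)

# Each input: (arrival time in ms, PSP amplitude in mV; negative = IPSP)
inputs = [(5, 6.0), (7, 6.0), (9, 6.0),   # closely spaced EPSPs summate
          (30, 6.0), (42, 6.0),           # widely spaced EPSPs decay in between
          (8, -4.0)]                      # a simultaneous IPSP subtracts

for i in range(1, len(t)):
    v[i] = v[i - 1] * np.exp(-dt / tau)   # passive decay back toward rest
    for t_in, amp in inputs:
        if abs(t[i] - t_in) < dt / 2:     # a PSP arrives at this step
            v[i] += amp

fired = np.where(v >= threshold)[0]
print("Peak depolarization: %.1f mV" % v.max())
if fired.size:
    print("Reached threshold at %.1f ms" % t[fired[0]])
else:
    print("Did not reach threshold")
```

With these illustrative numbers, the three closely spaced EPSPs (minus the IPSP) reach threshold, whereas the two widely spaced EPSPs decay before they can add together, which is the essence of temporal summation.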
Fig. 4.7 Brain waves from human scalp during alert wakefulness and drowsiness, as they appear in pen-and-ink tracings on paper. These pen displays have low resolution for high frequencies because pens can’t move very fast. These signals look quite different on an oscilloscope. In both mental states, much higher frequencies are present when more appropriate electronics are employed
Complexities arise when consideration moves from PSPs at a single synapse to summed voltage fields from multiple synapses (“field potentials” or, if electrodes are on the head, the electroencephalogram) (Fig. 4.7). The electroencephalogram at any instant represents the summation of perhaps thousands of PSPs and other sources of bioelectricity from many synapses, including both EPSPs and IPSPs at those synapses. As shown below, the voltage fluctuations range from distinct rhythms to apparent randomness over time, which supports the notion of PSP nonlinearity. Even the time trajectory of distinct rhythms seems to be driven by non-linear deterministic processes, commonly described in the context of chaos theory (see Chap. 8). EEG patterns change with changes in mental and behavioral state. While such correlates have medical applications and are of great interest in their own right, so far they have not had much explanatory power for how the brain works (Koch 2004). That limitation seems to be changing now that investigators are studying phase relationships of EEG signals. Most of the voltage in an EEG comes from summated PSPs. This raises the question of how PSPs and CIPs relate to each other and to states of thinking. A role for PSPs has recently been implicated in support of working memory, which is that form of short-term memory that lasts for a few seconds. This is important because working memory is what we use in thinking (see later comments on “How I Think We Think When Conscious” in Chap. 7). The prevailing view is that working memory is expressed in the form of ongoing CIPs. The complementary view is that memories, both long- and short-term, are stored as facilitated circuit synapses that can be re-activated with appropriate CIP input. The CIP importance for working memory is suggested by studies in non-human primates that are trained to hold in mind for a few seconds the location or identity of a stimulus. During this time, impulses specific to the stimulus can be seen in multiple cortical areas. Such activity can be sustained in a circuit for much longer times than the few milliseconds of a PSP change. Moreover, many scientists believe that this continual circulation of the CIP representation of a memory is what is necessary to stimulate consolidation into long-term memory.
Gianluigi Mongillo and colleagues (2008) in Paris and Rehovot, Israel, have presented data that implicate calcium kinetics of neurotransmitter action in working memory. That is, memories may be held by calcium-mediated synaptic facilitation (among other things, calcium promotes transmitter release). Presynaptic residual calcium acts as a buffer that is “loaded, refreshed, and read out” by appropriate CIPs. Calcium dynamics operate with a much longer time constant than nerve impulses do. This explanation holds that CIPs are only secondarily involved in stored memory and that the memory is held by synaptic strength patterns within networks. Mongillo and colleagues demonstrated the feasibility of their ideas with computer simulation of the concurrent temporal dynamics of nerve impulses and PSPs that are consistent with the dynamics of calcium currents. Simulations began by “loading one item into working memory” with external excitation from a cluster of presynaptic excitatory impulses (“population spike”). When a presynaptic terminal discharges a nerve impulse, the released transmitter binds to its stereospecific receptor, and one of the consequences is an increased postsynaptic influx of calcium ions. A memory based on synaptic strength patterns can enable CIPs to organize circuits in various ways and support multiple memories that can be selected by appropriate input. During retrieval and rehearsal, the synaptic strength patterns are re-generated, and although they decay rapidly, they leave a longer-lasting synaptic facilitation that can more readily trigger the appropriate CIPs that represent the memory. Such mechanisms have lower metabolic costs than CIPs alone, because the stream of the appropriate CIPs need not be continuous, and sustained impulse traffic and the synaptic transmission it drives demand much more energy than quiescent synaptic storage does. Thus, while memories may be captured and triggered into recall by CIPs, the actual storage and maintenance occur because of synaptic strength patterns. In this way, memories can persist in quiescent form until reactivated and refreshed by appropriate impulse input. A second attractive feature of this model is that a given neural circuit can support representation of different memories without requiring persistent reverberating CIPs for each memory item. If the memory is held in synaptic strength patterns, a strong memory-specific CIP is not needed except during retrieval. Thus, we can hold in mind multiple memories at the same time.
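As a rough illustration of the facilitation idea (this is a sketch in the spirit of the model, not the actual equations or parameters of Mongillo and colleagues), imagine a single facilitation variable, standing in for presynaptic residual calcium, that is loaded by a burst of impulses, decays slowly, and is still somewhat elevated when a later read-out impulse arrives. All numbers below are hypothetical.

```python
import numpy as np

# Toy sketch of memory held in presynaptic facilitation rather than ongoing firing.
# 'u' stands in for residual-calcium-dependent release probability at one synapse.
# Parameters are illustrative only, not those of Mongillo et al. (2008).
dt = 1.0                 # ms per step
T = 3000                 # simulate 3 s
tau_f = 1500.0           # facilitation decays slowly (ms)
U = 0.2                  # increment of u per presynaptic spike
u = np.zeros(T)

# A brief "loading" burst of presynaptic spikes at 100-140 ms, then silence,
# then a single weak "read-out" spike at 2500 ms.
spike_times = list(range(100, 141, 10)) + [2500]

for i in range(1, T):
    u[i] = u[i - 1] - dt * u[i - 1] / tau_f          # slow decay of facilitation
    if i in spike_times:
        u[i] += U * (1.0 - u[i])                     # each spike boosts facilitation

print("u just after the loading burst (t = 150 ms): %.2f" % u[150])
print("u just before the read-out spike (t = 2499 ms): %.2f" % u[2499])
print("A read-out spike arriving while u is still elevated produces an amplified response.")
```

The point of the sketch is simply that the memory trace outlives the impulses that created it, so the circuit need not keep firing to hold the item.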
The Nerve Impulse
I stress the importance of nerve impulses throughout the book, because they propagate to other neurons and muscles over distances of millimeters to tens of centimeters. PSPs also propagate, but only in their immediate locality of microns to a few millimeters. PSPs from one synapse can have a direct electrical field effect only on other synapses that are nearby. If a given PSP is excitatory, for example, it may bias all neighboring synapses toward excitation or tend to offset any neighboring PSPs that are developing inhibitory postsynaptic potentials. Nerve impulses both reflect and drive the brain’s activity. Impulses provide the “commands” that make things happen. They tell neurons what to do, and they make skeletal muscles contract. Most importantly, they can propagate along the entire length of nerve fibers like a burning fuse.
I emphasize impulses as the currency of all kinds of thought because they are propagated. Real-time thinking requires instructions and commands to be sent to various parts of the brain and body. Of course, PSPs are intimately involved in impulse generation, but I regard them more as information processing sites and as reflecting the repository for memory. Memories are “read out” in the form of impulses. What is an impulse? Technically, it is a reversal of resting membrane voltage. All cells have a more or less steady (D.C.) voltage across the cell membrane. The inside of the cell is about 60–70 mV electrically negative with respect to the outside. Neurons (and muscle cells) are unusual in that their membranes can change permeability to certain ions like sodium and potassium, causing a change in ion current across the membrane. A nerve impulse is an obvious nonlinear property of individual neurons. An impulse results when the membrane voltage is depolarized to a certain threshold, at which point the membrane becomes unstable and discharges an “all or none” pulsatile voltage spike, referred to as a nerve impulse (Fig. 4.8).
Fig. 4.8 Graph showing the possible kinds of electrical changes in neurons, as seen when a microelectrode is inserted into the cell body, allowing recording of the voltage difference between the inside and outside of the neuron. Neurons have a “resting” voltage (dotted line). Excitatory inputs move the resting voltage toward a threshold that is unstable and will trigger an “action potential” or impulse. Inhibitory inputs do the opposite, driving the resting potential farther away from firing threshold, thus making the neuron less likely to fire
The most important thing to remember about nerve impulses is that they propagate, especially down the cell membrane extensions called axons to terminal zones where neurochemicals are released onto the membranes of postsynaptic neurons. If the neurotransmitters are excitatory, the impulse propagation is renewed in the next neuron in the chain.
The Nerve Impulse Lord Adrian (1889–1977)
Few of us can call ourselves knights. Even fewer can say that we’ve won a Nobel Prize for innovative research. Lord Edgar Douglas Adrian (1889– 1977), however, could do both, thanks to his devotion to the study of the functions of neurons and their impulse conduction.
Adrian made key discoveries about nerve impulses, even though he had access only to primitive instruments. Combining the amplifying apparatus pioneered by Herbert Gasser with a capillary electrometer, Adrian developed a sensitive and rapid means of detecting impulses. Adrian proved what had long been believed: that electrical impulses are the signals used by both sensory and motor nerve cells to conduct their messages. The technologies also allowed Adrian to record the nerve impulses of individual nerve fibers that result from stretching a muscle. This method of recording from individual fibers is especially significant, given that what we call “nerves” are actually bundles of hundreds, even thousands, of fibers from multiple nerve cells. From his work with individual fibers, Adrian concluded that nerve impulses in a given fiber are of constant size, functioning in an all-or-none fashion. Since impulse size is constant, the perceived intensity of a stimulus depends on impulse frequency and temporal patterning of impulses.
In sensory fibers from muscle cells, Adrian observed that impulse frequency depends on the degree that the muscle is stretched and how quickly it is stretched. His studies also showed that impulse frequency of the skin receptors correlates with pressure applied to the skin, and that continued stimulation of sense organs such as the skin gradually leads to adaptation to a stimulus, wherein impulse frequency gradually decreases over time. Studies like this led to today’s prevailing view that impulses convey information in the brain in terms of the discharge rate. Adrian conducted studies on pain-mediating impulses as well, which led to the conclusion that pain impulses are received in specific areas of the brain. He also established the modern view that the sensory cortex is topographically divided into sections devoted to specific end organs, and that the size of these sections differs among species according to how important a given sensory area is to their lifestyle. For example, in the primary sensory cortex of rats, certain body areas, such as the skin of the paws and the digits, have more representation than other areas, like the trunk of the body. In pigs, the snout gets extra representation. Adrian’s later work involved the study of olfaction and electrical activity in the brain, as observed with the EEG. From his studies on brain electricity, and specifically the pattern of alpha waves known as the Berger rhythm, Adrian laid the groundwork for later studies connecting disorders such as epilepsy with malfunctions in these rhythms. Adrian was too modest in saying that the breakthroughs in research attributed to him were merely something that “just happens in a laboratory if you stick apparatus together and see what results you get.” He was clearly a scientist of great intellect and an excellent experimentalist. His research will forever remain important to the foundations of neuroscience. Sources: Hodgkin, A. L. (1977). Lord Adrian, 1889–1977. Nature, 269, 543–544. The Nobel Foundation. (1965). Nobel lectures, physiology or medicine 1922–1941. Amsterdam: Elsevier Publishing Company.
Impulses in Shared Circuitry
Impulses are traditionally studied in terms of the output of a single neuron. The train of spikes from a single neuron, however, is not as important as researchers like to assume. An individual spike train matters only to the extent that it influences or reflects the throughput of the circuit to which it belongs.
A given neuron receives inputs from hundreds of neurons and may deliver output to a like number. A given neuron can participate in more than one network. What happens if a neuron gets recruited into several networks at the same time? The impulse patterns driven by one network would be mixed up with impulses from other networks. This is a question I have never seen asked in the research literature, so clearly any answer I could give would be pure speculation. However information is coded in the nervous system, it has to be in the form of some kind of population code. Most researchers assume that impulses constitute a rate code. If a neuron is being driven by inputs from more than one network, it would likely fire more than if just one network were involved. But how would the downstream targets know how to interpret such increased firing? How are the targets to know whether the increased activity reflects more intense activity in one network (and which one) or the summation of activity in two or more networks? Some sort of multiplexing might operate. But the target neurons would have to get cue signals from somewhere in order to know what time window to use for sampling the primary input that reflects one network and when to sample the input reflecting the other network. Alternatively, if an impulse train is fed simultaneously into two networks that oscillate at different frequencies, the peaks and valleys of the oscillation chop time into different lengths for sampling. Thus, the networks share the same information, but they read it in different ways (Fig. 4.9). Then there is the question of synchrony. If two networks sharing the same neuron fire at exactly the same time, there might be an occlusive situation where that neuron responds to only one network. If there is some time-locked phase shift, then the separate influences of the two or more circuits could be differentially responded to, perhaps with some sort of multiplexed sampling. Again, information would be shared but in different ways. What if spike-interval coding is operative (see “Interval Code” below)? It is possible that one circuit drives a circuit-shared neuron to emit certain “byte” patterns whereas another circuit drives it to emit other byte patterns. Target neurons in that case would find it easier to distinguish and selectively respond to different byte patterns, and it may not be necessary for the target neuron to “know” which pattern came from where.
Fig. 4.9 Effect of oscillation frequency on transfer of spike train information. Illustrated here is a hypothetical spike train being fed simultaneously to the input of two oscillators, showing how two different oscillating circuits would sample the train differently because their frequencies are different. Here we make the assumption that the train is “read” only during the peak of each waveform
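The sampling idea in Fig. 4.9 can be sketched numerically. In this toy example, the spike train, the two oscillator frequencies, and the “read only near the peak” window are all arbitrary choices made only to show that two uncoupled readers extract different subsets of the same train.

```python
import numpy as np

# Sketch of the idea in Fig. 4.9: two circuits oscillating at different
# frequencies "read" the same spike train only near their oscillation peaks,
# so each extracts a different subset of the spikes.
rng = np.random.default_rng(0)
spike_times = np.sort(rng.uniform(0, 1.0, 40))   # 40 spikes in 1 s (seconds)

def spikes_read(spikes, freq_hz, window_frac=0.2):
    """Keep spikes that fall within a window around each oscillation peak."""
    phase = (spikes * freq_hz) % 1.0              # 0 = peak of each cycle
    near_peak = (phase < window_frac / 2) | (phase > 1 - window_frac / 2)
    return spikes[near_peak]

for f in (5.0, 11.0):                             # two hypothetical oscillators
    read = spikes_read(spike_times, f)
    print("Oscillator at %4.1f Hz reads %2d of %d spikes" % (f, read.size, spike_times.size))
```

The two oscillators sample different spikes from the identical input, which is one way the same information could be shared yet read differently.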
Rate Code
The early research on nerve impulses revealed impulses as information carriers based on the rate of discharge. The more impulses a neuron fires in a given unit of time, the more information it supposedly carries. This “rate-code” idea is especially obvious in sensory neurons. Rate coding also occurs in the brain. It seems, however, to be true that at least for some brain neurons, certain impulses are more important than others. For example, Nicolas Masse and Erik Cook (2008) in Montreal found such evidence in the middle temporal cortical area of monkeys. Their monkeys were trained to release a lever when they detected certain screen-display patterns of movement of white dots inside a grey background patch. Each patch of dots was updated with a fixed interval of 27 ms, producing an oscillatory stimulus that drove a corresponding oscillation in the responding cells in the cortex. Dots within a patch moved a fixed distance either randomly in every direction within the patch or coherently all in the same direction. The monkey’s task was to indicate when the movement was coherent, and correct responses were rewarded. Simultaneous recording of impulses focused on when a given impulse occurred during the overall oscillatory firing pattern: for example, whether it was on the rising phase, at the crest, or on the falling phase of a given oscillatory cycle. Spikes during one of the phases (often the rising phase) were more informative about the coherent motion than other impulses from that same neuron that occurred during other phases of the cycle. How does the brain know which spikes carry the most information? I don’t think anybody knows. In visual and body sensations, the principle of receptive fields predominates, but of course, intensity of stimulus is reflected in impulse discharge rate. For sounds, locating the origin of a sound appears to be represented in the auditory cortex by a rate code; that is, how many impulses a neuron fired after a stimulus depended on where the sound originated in space (Werner-Reiss and Groh 2008). Single-unit recording in auditory cortex of monkeys was used to determine whether the neurons had receptive fields (coding for place in space) or a rate code for sound direction. Most of the neurons recorded showed a rate code, not a place code, for sound azimuth. Most auditory cortex neurons in monkeys responded to every sound location, and there was a small overall bias for locations on the opposite side of the body. Where the sound was located dictated the post-stimulus firing rate pattern.
Complex Spikes
Some neurons can generate spikes so rapidly that the spikes superimpose and create a compounded waveform known as a “complex spike” or “population spike.” This is best studied in the large pyramidal neurons of the hippocampus and in the Purkinje cells of the cerebellar cortex. Input to the dendrites of these cells in the cerebellum comes from the brainstem’s “climbing fibers,” and the overlapping of spikes is caused in part because hundreds of synapses on the same neuron are activated at nearly the same time. Jenny Davie and colleagues (2008) at University College London have recently studied the
mechanisms of complex spike generation in cerebellar Purkinje cells. They find that such spikes can be generated by simply depolarizing the cell body and that dendritic activation is not necessary. The role of the dendritic spikes seems to be limited to regulating the pause in firing that follows the complex spike. Thus the output of cerebellar cortex cells, which goes to the deep nuclei of the cerebellum, is a robust burst of complex spikes generated out of the cell body and axon, followed by a silent pause generated out of the dendritic complex spikes. The consequences of complex spikes for their targets must be profound, given that the voltages and transmitter release are much magnified compared with single spikes. Perhaps pausing is also necessary in order for the discharging neurons to re-set the asymmetric distribution of ions across the membrane that is essential for impulse generation.
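As a simple illustration of how burst-like clusters of spikes and the pauses that follow them can be flagged in a spike train (this is only a crude stand-in, not the analysis used by Davie and colleagues), consider the sketch below; the 5 ms within-burst and 50 ms pause criteria are arbitrary.

```python
import numpy as np

# Simple sketch for flagging burst-like clusters of spikes and the pauses
# that follow them. The 5 ms "within-burst" and 50 ms "pause" criteria are
# arbitrary, and the spike times are fabricated for illustration.
spike_times_ms = np.array([10, 12, 13, 15, 90, 200, 202, 203, 205, 207, 320, 400])

isis = np.diff(spike_times_ms)
in_burst = isis <= 5                       # short intervals mark a burst in progress

bursts = []
start = None
for i, short in enumerate(in_burst):
    if short and start is None:
        start = i                          # burst begins at spike i
    if not short and start is not None:
        bursts.append((spike_times_ms[start], spike_times_ms[i]))
        start = None
if start is not None:
    bursts.append((spike_times_ms[start], spike_times_ms[-1]))

for b_start, b_end in bursts:
    later = spike_times_ms[spike_times_ms > b_end]
    pause = later[0] - b_end if later.size else np.inf
    print("Burst from %d to %d ms, followed by a %.0f ms pause%s"
          % (b_start, b_end, pause, " (long pause)" if pause >= 50 else ""))
```

Nothing in this toy code captures the dendritic mechanisms just described; it only shows the burst-then-pause signature one would look for in the output train.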
Interval Code
I and a growing number of others think it is also important when each impulse occurs; that is, we emphasize the temporal pattern of impulses. What is important is not just how many impulses occur during a given time, but when they occur. The obvious example would be that three closely spaced impulses would have more influence on their nerve or muscle target than would three impulses that were spread further apart. This is because PSPs decay within several milliseconds after they are initiated, but they can summate if they are closely spaced in time. My colleagues and I, as well as other investigators, have recorded trains of impulses (“spike trains”) in the brain that have a few recurring clusters of adjacent intervals with an incidence that is much greater than chance. These patterns are said to contain serially dependent discharges: that is, when an impulse occurs, it is influenced by the interval of one or more preceding impulses. In studies in my lab, we found spike trains that have serial dependencies, as determined by classical Markov mathematics, of four or five adjacent intervals. In other words, some clusters of four or five intervals occur far more often than could have happened by chance, indicating that the interval pattern contains information. It is now clear from multiple sources that higher-order neurons generate certain specific patterns of spiking that occur far more often than chance would allow. There are only two ways to interpret such findings: (1) the recurring patterns are chance events that happen despite the odds against them, or (2) there really are some aspects of information processing that are carried in an interval code. The best test is to show that certain interval “bytes” are associated with a specific function or behavior. So far, that has not been accomplished.
Serial Dependency in Interspike Intervals
The vast majority of neuroscientists quantify trains of nerve impulses in one of two ways: they (1) count the number of impulses per unit of time, or (2) count the incidence
of intervals in categories of interval durations. Both measures are usually displayed as histograms, showing the number of impulses in each successive time period or the number of impulses in successive interval-duration “bins.” In my view, both approaches make an unwarranted assumption; namely, that the sequence of intervals is irrelevant. Only a handful of investigators have challenged this assumption, but each has used different approaches to prove that there can be serial order in spike trains. The typical interval histogram just mentioned scrambles whatever order might exist in a spike train and makes it impossible to recognize the serial order. In a review (Klemm and Sherry 1982), Cliff Sherry and I summarized some literature that strongly suggested an importance of interval patterns. For example, in 1950, Wiersma and colleagues showed that changing the pattern of stimulation of the neuromuscular junction of crayfish caused profound differences in the strength of muscle contraction, even though the total number of stimulus pulses in a fixed period was the same. Later, Segundo showed from intracellular recordings of Aplysia ganglia that a train of equally spaced stimuli evoked a successive buildup of postsynaptic potentials that was substantially less than if that same number of stimuli were clustered in pairs. Wakabayashi and Kuroda showed much greater strength of muscular contraction in crayfish when the same number of stimuli were grouped in clusters of 4, 3, or 2 (in that order of effectiveness). Many studies in a variety of species have disclosed spontaneously occurring bursts in spike trains which contain, even to the naked eye, fixed patterns of intervals. For example, Strumwasser reported spontaneous trains in Aplysia in which the first three intervals were progressively shorter, while the last intervals in the burst became progressively longer. He also showed that patterns in one neuron were often tightly coupled to distinct interval patterns in target neurons. Distinct and recurrent interval patterns have been observed in both Aplysia and Helix by Ristanovic and Pasic. Distinct interval patterns reportedly occur in Tritonia command neurons after stimulation. Robertson reported specific interval patterns associated with interneurons and motor neurons that govern flight in locusts. Our review showed that serially ordered interval patterns occur also in vertebrates. Calvin and Sypert observed steady repetitive firing patterns in intracellularly recorded sensorimotor cortical neurons of cats during electrical stimulation. Spike interval patterns changed reproducibly with changes in magnitude of applied current. In another study, by Gottschaldt and Vahle-Hinz, neurons of cutaneous mechanoreceptors in cats changed firing patterns according to frequency of stimulation. Crowe and Matthews reported a bursting pattern in stretch receptors of cat muscle in response to stretching. Ranck had reported a pattern of near-equal duration intervals in spike trains from the diagonal band of Broca during alert wakefulness, while the regular pattern broke up into a more random appearance during sleep. Epileptic discharges in both monkeys and humans can show patterned firing. Then there are studies in which interval patterns existed, yet were not studied because the investigators were not looking for them. For example, I see distinct interval patterns in some of the illustrations that Mountcastle had reported from the sensory cortex of monkeys that had their hands electrically stimulated. Mountcastle saw
these patterns too and became a firm believer in the importance of serial ordering of spike train intervals decades before the current crop of researchers started to “see the light.” If anybody deserves to be called “Mr. Cortex,” it is Vernon Mountcastle. He was among the first to show how sensations are registered in the cortex and has spent a lifetime elucidating how the cortex represents sensation by impulse responses. Statistically rigorous detection of serial order in spike trains requires special mathematical techniques. The usual serial correlation methods don’t work well because for any given interval (such as 1 ms, 2 ms, etc.) there are too few impulse intervals with any given duration to permit robust testing. For example, if one assumes a minimum bin width of 1 ms and intervals up to 1 s, then the first-order matrix for just two dimensions would consist of 1,000 by 1,000 cells. Even if one examined a million intervals, most of the bins would contain counts of only one or zero. One can of course increase the bin width, but that reduces the sensitivity. Even then, for higher sequence order analysis, the number of cells in the matrix becomes unworkable. Cliff Sherry (Sherry et al. 1972) solved this problem by using a relative interval coding scheme, wherein a computer determined whether a given interval was shorter, the same, or longer than its immediately following interval (coded as symbols −, 0, +). Then a transition matrix was constructed to tally transition probabilities for successive intervals. In the simplest digram (two symbol) case, he would tally how often a − was followed by a −, or a 0, or a +, and so on. This describes any serial order present for three adjacent intervals. Matrices for the trigram (three symbols), tetragram (four symbols), and pentagram (five symbols) were generated in a similar manner, specifying sequential relationships of four, five, and six intervals, respectively. As a control procedure, the original spike train can have its intervals shuffled and the same relative interval matrix counts made on the shuffled train. The most obvious statistical test is to calculate by chi-square methods whether the distribution of symbol sets is random or not. It should be obvious, but isn’t even to many scientists, that any serially ordered interval code could hypothetically occur in spike trains, yet go undetected because of the way spike trains are usually quantified (Fig. 4.10). Consider the scenario below: In a variety of studies, Cliff and I observed that certain patterns occurred at much greater than chance incidence in original spike trains but not in the shuffled versions of those trains. We showed statistically significant, non-random deviations in the distribution of percent maximum entropy of specific patterns of intervals (Klemm and Sherry 1981a). We have also seen significant non-random deviations in the relationship between entropy measures and the number of patterns of intervals that changed in a statistically significant manner after ethanol injections (Sherry and Klemm 1980). This caused us to suggest that the brain is a “byte processor,” a living computer that processes information in terms of clusters of adjacent intervals. Serial dependency is rigorously identified by Markov transition mathematics, and we have used this approach in a study of spike trains from rat cerebellar cortex (Sherry et al. 1982).
[Fig. 4.10 comprises two panels: an upper post-stimulus time histogram (impulse frequency plotted against time in msec after the stimulus) and, below it, ten individual spike trains from successive stimulus trials, aligned to the stimulus.]
Fig. 4.10 The usual way of quantifying intervals (above) is to calculate a frequency histogram. This approach prevents detection of any serial ordering of intervals. In the bottom of the figure are shown ten hypothetical spike trains from which the histogram above was calculated. Note that each train contains a “byte” of four sequential intervals, each of which is longer than the next (++++). Real data in different taste neurons show that a neuron that responds to ten different tastes codes the difference with specific temporal patterns of impulses
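The relative-interval coding and shuffling control described above can be sketched in a few lines of code. The spike train below is fabricated so that a particular interval “byte” recurs, and the tolerance for calling two intervals “the same” is an arbitrary choice; this is only an illustration of the approach, not a reproduction of any published analysis.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Sketch of the relative-interval coding described above: each interval is
# compared with the interval that follows it (- shorter, 0 about the same,
# + longer), digram (two-symbol) transitions are tallied, and the original
# train is compared with a shuffled version of itself.
rng = np.random.default_rng(1)

def relative_symbols(intervals, tol=0.5):
    """Code each interval relative to its successor as -1, 0, or +1 (tol in ms)."""
    diff = intervals[:-1] - intervals[1:]
    return np.where(diff < -tol, -1, np.where(diff > tol, 1, 0))

def digram_counts(symbols):
    """3x3 table: how often each symbol (-, 0, +) is followed by each symbol."""
    table = np.zeros((3, 3))
    for a, b in zip(symbols[:-1], symbols[1:]):
        table[a + 1, b + 1] += 1
    return table

# A train built from a repeating "byte" of intervals (5, 10, 20, 40 ms) plus
# jitter, so certain digrams recur far more often than in a shuffled control.
byte = np.array([5.0, 10.0, 20.0, 40.0])
intervals = np.concatenate([byte + rng.normal(0, 0.2, 4) for _ in range(200)])

original = digram_counts(relative_symbols(intervals))
shuffled = digram_counts(relative_symbols(rng.permutation(intervals)))
print("Original digram counts:\n", original)
print("Shuffled digram counts:\n", shuffled)

# Chi-square test of whether the two digram distributions differ
combined = np.vstack([original.ravel(), shuffled.ravel()])
combined = combined[:, combined.sum(axis=0) > 0]      # drop digrams never observed
chi2, p, dof, _ = chi2_contingency(combined)
print("Original vs shuffled: chi2 = %.1f, dof = %d, p = %.3g" % (chi2, dof, p))
```

The original train concentrates its counts in a few digram cells while the shuffled control spreads them out, which is the kind of difference the shuffling procedure is meant to expose.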
Statistically significant Markovian dependencies were observed to at least four sequential intervals, and more in some neurons. Observed incidence of certain groups of adjacent intervals differed from the independence case, as tested by chi-square goodness-of-fit tests. Each of these measures establishes that certain clusters of intervals are not randomly distributed but rather must contain information. Such clusters might be indicative of a byte-processing mode that the brain uses to read and use the information contained within the cluster. When we looked at the
absolute time occupied by such interval clusters or bytes, it appeared that the “memory” of the system seems to last at least 36–45 ms. Sherry and I (Klemm and Sherry 1981b) showed that these serial dependencies are independent of non-sequential interval distribution variability. Our methods for calculation of fractional entropy show that information theory is an appropriate analysis for spike-train intervals. The fractional entropy (% maximum) of interval clusters does not have a Gaussian distribution but rather takes on surprisingly low-entropy distributions according to the specific number and relative durations of adjacent intervals (Klemm and Sherry 1981c). Our intuition was that Markovian dependency order should inversely correlate with entropy; that is, as Markovian order goes up, entropy should go down. However, when we tested the same spike trains with the same matrix procedure, we noted that entropy did NOT correlate, accounting for only 13.5% of the variability found with Markov order. Entropy also did not correlate with measures of central tendency or their variability (Sherry and Klemm 1984). What can we conclude about the apparent unrelatedness of these various measures of information in spike trains? Perhaps an analogy can help. Consider a painting on the wall of a dark room. If we shine a red light on it, only a portion of the painting becomes visible, while if we shine blue light, another portion is seen. Only if all visible-light components are used will the true image be perceived. In the case of spike trains, our attempts to illuminate with measures of central tendency, independence testing, entropy measures, Markovian analysis, etc. produce only isolated views of the signal. We must consider the possibility that a complete and correct view of the signal can only be obtained if all of these approaches can be coalesced into a unitary examination of spike trains. Clearly, no such approach has been developed either for spike trains or any other time series. Our lab was not the only one to make the case for byte processing. Nakahama and colleagues found serial dependencies and Markovian dependencies using serial correlation methods on absolute intervals (Nakahama et al. 1972a, b; Nakahama 1977a, b). Other approaches showing similar results include Sherry’s original use of runs tests to show statistically significant incidence of certain groups of three adjacent intervals. Rudolpher and May (1975) used a different interval category method but likewise showed the existence of Markovian serial dependencies. Gerstein developed pattern-recognition algorithms. The nervous system at its most elemental level, i.e. neurons, seems to have a memory for around four sequential intervals. That could be the reason that the brain as a whole has a working memory for about four items, a matter that is discussed in some detail in the working memory section of Chap. 7. It is true that serially ordered patterns are not always evident in spike trains. How do I explain that? Three ideas come to mind:
1. Some neurons may operate with an interval-pattern code, while others rely solely on a firing rate code.
2. A given neuron may use an interval-pattern code only under certain circumstances.
3. An interval-pattern code may exist but go undetected, either because the definition of what constitutes equal-duration intervals is of necessity assigned arbitrarily (and incorrectly) by the investigator or because interval patterns are stochastic and investigators have not established the probability distributions for all the possible interval patterns.
For some of the simple cases that I mentioned earlier, the physiological significance was obvious. But where interval patterns are stochastic, the acid test for the byte-processing hypothesis has not yet been performed. What is needed is to show that certain statistically unique bytes correlate with some aspect of physiology or behavior. Even so, this hypothesis should not be dismissed, as is typically done. Too many scientists using a variety of rigorous analytical methods have clearly established that spike trains can contain certain statistically significant interval bytes of information. Unfortunately, the evidence for interval coding has been largely ignored for many years. However, today a new generation of scientists is reviving interest in impulse interval patterns. The new approaches focus on when in time each impulse in a spike train occurs. This is equivalent to quantifying intervals among and between spikes, but is more intuitive because the emphasis is on events (spikes) rather than silent periods. The analytical methods are illustrated in the recent studies by Foffani et al. (2009), where spikes were counted (1 ms bin size) in the thalamic ventrobasal complex of the rat during discrimination of tactile stimuli. Their main finding was not only that spike timing provided additional information beyond spike count alone, but that the temporal aspects of the code could be more informative than spike count in the rat ventrobasal complex. The recent report from DiLorenzo et al. (2009) elucidates the significance of temporal coding through the analysis of taste-responsive neurons in the “taste center” of the brainstem, the nucleus of the solitary tract, which is the first relay station along the taste pathway. Some of these neurons are narrowly tuned, responding selectively to one of the four primary tastes (salt, sweet, sour, bitter). Others are broadly tuned, responding to each primary taste as well as to mixtures of two tastes. In all cases, there is a unique temporal pattern of impulses in response to the stimulus. A single neuron that is broadly tuned can represent the entire stimulus domain because each kind of stimulus evokes a unique temporal pattern of impulses. Moreover, the temporal coding of broadly tuned neurons is more efficient for representing multidimensional stimuli. It is now believed that impulse patterns of individual neurons are an important representational mechanism for sensory systems in general (for review see Lestienne 2001). Given that temporal impulse patterns are so efficient for representing multiple inputs, it seems likely that higher-order neurons, as in cerebral cortex, also use interval coding. Each circuit in which such neurons operate thus has a circuit impulse pattern (CIP) that is collectively temporally coded.
Compounded Voltage Fields
Extracellular voltage fields accompany impulse discharge, and these fields often reflect oscillation of the underlying neuronal activity. Oscillations can be observed simultaneously at different frequencies and in different brain areas, which might suggest that they are isolated and independent. They are not, although they do have different source generators. Often, different-frequency voltage fields occur in the same part of the brain and summate to produce a composite waveform of merged frequencies. The voltage field detected at any electrode on the scalp or even in the brain is a mixture of oscillations. High-frequency gamma waves, for example, are nested within alpha and theta waves, and “ride on top of” irregular slow waves. Historically, researchers did not notice this from looking at pen-and-ink traces of EEGs. The nesting of higher frequencies only became evident with the advent of computerized digital sampling and display of brain waves. For years, all of us who did this kind of recording saw high-frequency “ripples,” but we just thought of them as noise or insignificant. It took the insights of Mircea Steriade in Canada to show us we had been overlooking something really important (Steriade et al. 1996a, b, 1998). We are still trying to figure out what this co-existence of slow and fast activity means, but at least we now know not to ignore it. See later comments in Chap. 7 on “Sleep Versus Consciousness.” Fast activity is largely confined to the depolarizing phase of slow waves, both in sleep and anesthesia, and disappears during the hyperpolarization phase. Thus, the slow oscillations seem to time-chop the fast activity, allowing it to pass only periodically through circuitry. The slow waves of sleep might prevent conscious thought because they prevent throughput of gamma activity. The slow waves are paced by recurrent hyperpolarizations in networks in the cortex (they are not generated in the thalamus and can occur in cortex when thalamic connections are severed). This probably explains why primitive animals show mostly fast activity in their EEGs. They don’t have much of a cortex to generate slow waves. What this means, of course, is that the brain has multiple generators of currents at different frequencies, and they are all producing output at the same time. So what we see in the EEG signal (or its magnetoencephalogram counterpart) is a composite of all these simultaneous signals. Each signal may represent a particular aspect of thinking. No researcher has tested this yet, but it may prove useful to track the instantaneous mixture of frequencies as thinking progresses. Wavelet analysis is especially suitable for the frequency analysis because it makes no assumptions about the stationarity of the signal. Unfortunately, few neuroscientists use wavelet analysis. Modern computerized analysis allows a researcher to filter a composite signal into its component frequencies, allowing one to separately investigate the relative abundance of any given frequency band over any short period, say a succession of 1-s epochs. The time relationships of the various frequencies during a given thought task are surely inter-dependent and can probably be quantified by Markovian mathematical approaches that can track serial dependencies.
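As a simple sketch of decomposing a composite signal into conventional frequency bands and tracking band power across 1-s epochs (the signal here is synthetic, and ordinary band-pass filtering is used rather than wavelet analysis):

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

# Sketch of filtering a composite field-potential-like signal into conventional
# frequency bands and tracking each band's power across successive 1-s epochs.
# The signal is synthetic and the band limits are just the conventional ones.
fs = 250                                        # samples per second
t = np.arange(0, 10, 1 / fs)                    # 10 s of signal
rng = np.random.default_rng(2)
signal = (np.sin(2 * np.pi * 2 * t)             # slow (delta-range) wave
          + 0.5 * np.sin(2 * np.pi * 10 * t)    # alpha-range rhythm
          + 0.2 * np.sin(2 * np.pi * 40 * t)    # gamma-range "ripples"
          + 0.3 * rng.normal(size=t.size))      # background noise

bands = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 12),
         "beta": (12, 30), "gamma": (30, 80)}

for name, (lo, hi) in bands.items():
    sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
    filtered = sosfiltfilt(sos, signal)
    epochs = filtered.reshape(10, fs)           # ten 1-s epochs
    power = (epochs ** 2).mean(axis=1)          # mean squared amplitude per epoch
    print("%5s band, mean power across epochs: %.3f" % (name, power.mean()))
```

The same per-epoch band powers could then be fed into whatever time-series analysis one prefers, including the Markovian approaches mentioned above.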
Brain Waves Mircea Steriade (1924–2006)
In the middle of his career, Mircea Steriade found himself at a crossroads: stay in his Communist-ruled home country of Romania, where the government was making free communication with the rest of the scientific world increasingly difficult, or flee to a foreign country, where he would have to make a fresh start but could pursue science uninhibited. He ultimately chose to exile himself from his own country and eventually found himself established as a professor in Quebec, Canada. He made great advancements in our understanding of thalamocortical oscillations and systems neuroscience.
One aspect of Dr. Steriade’s research, studying activity during the sleep-wake cycle, demonstrated the dynamic nature of sleep, which results in thalamocortical oscillations at different frequencies. He also described the neuronal properties and network operations of the thalamocortical system that result in brain rhythms and oscillations during various states of brain activity and wakefulness, including abnormal states such as electrical seizures. Credit is also due to him for attributing the generation of sleep spindles to so-called pacemaker rhythms provided by the thalamic reticular nucleus. Most often, Dr. Steriade is cited for three groundbreaking papers that he and his research colleagues published. The focus of these papers was so-called “slow” oscillations generated by the cortex during slow-wave sleep. This slow-oscillation concept successfully unified observations of delta-frequency patterns, cortical spindles, and K-complexes of sleep. Slow-wave oscillation is marked by an alternation between active “up” states of synaptic activity in thalamocortical and reticular thalamic neurons and silent “down” states of reduced activity.
Toward the end of his career, Dr. Steriade studied how sleep rhythms relate to plasticity. Along with co-researchers, he theorized that these rhythms serve to modulate synaptic plasticity. This holds important implications for memory formation, since memory formation is often considered a result of synaptic plasticity. Many colleagues of Dr. Steriade attested to his tireless work ethic, and indeed such was his dedication that he never retired. He was a true believer in the importance of research and was a mentor to many scientists who will hopefully carry on his passion for research. Sources: Buzsáki, G., & Paré, D. (2006). Mircea Steriade 1924–2006. Nature Neuroscience, 9, 713. Society for Neuroscience. (2008). Obituary: Mircea Steriade. http://www.sfn.org/index.cfm?pagename=memberObituaries_steriade Timofeev, I. (2006). Mircea Steriade (1924–2006). Neuroscience, 142, 917–920.
One approach of interest is the recent report from Timothy Senior (Senior et al. 2008) at Oxford. He and his colleagues studied gamma oscillations in the hippocampus of rats and digitally filtered the combined multiple-unit activity and simultaneously occurring gamma activity. They saw that the phase locking varied with behavioral state and with cell type. Most of the phase locking (~85%) occurred in the interneurons. Phase locking was most evident during alert wakefulness and least during rapid-eye-movement (REM or “dream”) sleep. Hippocampal pyramidal cells fired before the interneurons, presumably driving the interneurons, which, because most of them are inhibitory, probably fed back inhibition on the pyramidal cells and thus created the cyclic activity (similar to the idea illustrated in an earlier figure). The phase difference between pyramidal cell activity and gamma waves was smallest during awake theta behavioral states and largest during REM sleep. Interneuron activity seemed largely independent of behavioral state. All pyramidal cells fired during the trough of gamma oscillations, but during wakefulness, two classes of pyramidal cells were seen. One group fired near the peak of gamma waves, while the other fired during the rising phase of gamma waves. During different kinds of wakefulness, such as exploring novel or familiar environments, many pyramidal neurons fell into two classes: those that were phase-locked to gamma in a given environment and those that were not. For example, 61% of pyramidal cells fired with fixed phase in familiar environments but failed to do so in novel environments. In novel environments, 53% of pyramidal cells were phase locked to gamma but not in familiar environments.
Next, the investigators considered the relationship of theta waves to gamma. They found, for example, that the amplitude of gamma waves was greatest during the falling phase of theta waves, even though gamma waves were always present during every phase of theta. They also examined the behavior of so-called “place cells” in the hippocampus. These cells selectively fire impulses when the animal is physically located at a particular place. Presumably, these cells tell the rest of the brain where the animal is in space. Analysis showed that even when signaling place location, the pyramidal place cells still maintained their phase relationships with gamma waves. Finally, they examined pyramidal cell behavior during movement on a linear track. The cell groups differed during such movement. A similar compounding of theta and high frequencies was recently reported in humans by Le Van Quyen and colleagues (2008). By recording field potentials and action potentials simultaneously, they observed that 120–200 Hz (waves per second) oscillations were riding on theta and that hippocampal pyramidal cells in the CA3 region were firing preferentially during the highest peak of theta waves. Contrary to other studies in rodents, interneurons began discharging earlier than pyramidal cells. Firing of both types of cells was phase-locked to the high-frequency oscillation, which of course strongly adds to the general belief that the field potentials are actually a manifestation of the impulse activity. While the significance of studies such as this is not abundantly clear, they do indicate that different neuronal populations encode and process information in the frequency and phase of their firing patterns. In this case, one pyramidal cell group’s oscillatory firing encodes place information and the other reflects movement trajectories. Since both types of encoding are dynamically linked to real-time events, it seems likely that the timing of these synchronous firings differentially reflects short-term memory of the events.
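One common way to quantify spike-to-field phase locking of the kind discussed above (not necessarily the exact pipeline of the studies cited) is to band-pass the field potential, take its instantaneous phase from the Hilbert transform, read the phase at each spike time, and measure how concentrated those phases are. The sketch below uses synthetic data and arbitrary parameters.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

# Sketch of quantifying spike-to-gamma phase locking: filter the field
# potential in the gamma band, take its instantaneous phase, read the phase
# at each spike, and compute the vector strength (0 = no locking, 1 = perfect).
fs = 1000
t = np.arange(0, 5, 1 / fs)
rng = np.random.default_rng(3)

gamma = np.sin(2 * np.pi * 40 * t)                       # idealized 40 Hz rhythm
lfp = gamma + 0.5 * rng.normal(size=t.size)              # noisy field potential

# Fake spike train biased toward the troughs of the gamma wave
spike_prob = 0.02 * (1 - gamma) / 2
spikes = np.where(rng.random(t.size) < spike_prob)[0]    # sample indices of spikes

sos = butter(4, [30, 80], btype="band", fs=fs, output="sos")
phase = np.angle(hilbert(sosfiltfilt(sos, lfp)))         # instantaneous gamma phase

spike_phases = phase[spikes]
vector_strength = np.abs(np.mean(np.exp(1j * spike_phases)))
print("Spikes: %d, vector strength: %.2f" % (spikes.size, vector_strength))
```

Because the fake spikes were biased toward gamma troughs, the vector strength comes out well above zero; an unlocked train would give a value near zero.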
Stimulus-Bound Oscillation
Back in the 1970s and 1980s, research on “steady state” visual evoked responses was a popular field of study. Typical methods included recording the EEG while subjects viewed a computer screen of counter-phased bars or checkerboards in which dark and light areas reversed at specific frequencies. This kind of stimulation drives circuits in visual cortex into oscillatory activity that corresponds to the frequency of counter-phasing. Response robustness is markedly affected by the size of the bars or checks (“spatial frequency”), their contrast, and by the temporal frequency of reversals. One of the striking things that our lab observed in a study of bar stimuli reversed at 6, 11, and 16 per second was that everyone’s brain could be driven by such stimulation, but that there were marked individual differences in evoked response magnitude (up to 76-fold), in contrast sensitivity, and in responsiveness to spatial and temporal frequency (Klemm et al. 1980). Some subjects could barely follow the 6 per second stimulus, while a few could develop time-locked oscillations at 11 or 16
per second, including responses at the higher harmonics. Responses were lateralized and correlated with handedness, even though this was solely a visual stimulus. Nobody knows what to make of this finding, and it has not been pursued. Responsiveness was reasonably consistent across replicate trials. However, across subjects there were enormous differences in the spectral power developed at the stimulus reversal frequency. These differences could not be explained by differences among subjects in alpha power (before or during stimulation), by degree of attentiveness, or by a person’s subjective impression of how well they were following the stimulation (Klemm et al. 1982). We assume that variations in robustness of response are governed by the degree to which a person’s visual cortex is organized to process rapidly changing bar stimuli in parallel. Parallel processing by edge-detector cells in visual cortex would be expected to generate coherent oscillations that grow in size proportionately to the degree of coherence. Counter-phase frequency stimulation also imposes serial-processing demands, and subjects varied widely in their ability to respond to increasing counter-phase frequencies. While all subjects have a capacity to develop steady-state responses, the wide inter-subject variability suggests fundamental differences in the ability of people to process information serially and in a parallel (Gestalt) fashion. Whether these differences are innate or acquired has never been examined.
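A minimal sketch of measuring steady-state response power at the reversal frequency and its second harmonic from a single epoch is given below. The data are synthetic and this is not the original analysis used in the studies cited above; it only shows the basic FFT-based measurement.

```python
import numpy as np

# Sketch of measuring the steady-state response at the stimulus reversal
# frequency (and a harmonic) from one EEG-like epoch via the FFT.
fs = 250
t = np.arange(0, 4, 1 / fs)                        # a 4-s epoch
rng = np.random.default_rng(4)
f_stim = 11.0                                      # reversals per second

eeg = (2.0 * np.sin(2 * np.pi * f_stim * t)        # driven response at 11 Hz
       + 0.6 * np.sin(2 * np.pi * 2 * f_stim * t)  # second harmonic
       + rng.normal(0, 1.5, t.size))               # background activity

spectrum = np.abs(np.fft.rfft(eeg)) ** 2 / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)

for f in (f_stim, 2 * f_stim):
    idx = np.argmin(np.abs(freqs - f))
    print("Power at %4.1f Hz: %8.1f" % (freqs[idx], spectrum[idx]))
print("Median background power: %.1f" % np.median(spectrum))
```

Comparing the power at the reversal frequency (and harmonics) against the surrounding background is the basic measurement behind the inter-subject differences described above.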
Clustered Firing and Oscillation
Oscillating field potentials are likely caused by clustered impulse firing at one or more points in the network. The correlation between local cortical field potentials and multiple-unit activity has been confirmed by Destexhe and colleagues (1998). Such correlation was seen in various cortical sites of cats in various stages of sleep and wakefulness, and the correlation held even at the high gamma frequencies of field potentials that predominated during alert wakefulness. Thus, it seems reasonable to conclude that synchronous EEG between a pair of scalp electrodes signifies that the units underlying that region are firing synchronously. A main way this has been verified experimentally has been to calculate the temporal correlation of spikes in one neuron with nearby local field potentials. Many studies using simultaneous recordings from two or more closely spaced microelectrodes outside and near single neurons have shown that single neurons give rise simultaneously to impulses and associated slower postsynaptic membrane potentials. Because the slower-frequency postsynaptic potentials summate much more readily than do impulses and are less attenuated by capacitive filtering, the EEG fields do not readily reflect the contribution from impulses. However, the size and patterns of field-potential oscillations do not always indicate what they seem. You would think that large field potentials indicate that large numbers of neurons are firing in oscillatory synchrony. One study has suggested by mathematical modeling that beta-band field potential oscillations in olfactory sensory epithelium that is stimulated with odorants have patterns of amplitude change that do not arise from
synchronous oscillators. Rather, the amplitude is modulated like an AM radio signal (that is, the size goes up and down over time), but the modulation is not caused by changes in the amount of synchronous oscillation. Instead, the AM is produced by multiple asynchronous oscillators. In other words, neural oscillation, at least in this system, can summate from multiple oscillators that are uncoupled or unrelated. In catfish olfactory epithelium, odorants induce AM oscillations, but the patterns are not the same with repeated stimulation with the same odorant (Diaz et al. 2007). These oscillations arise in the epithelium and are not reflections of oscillations in the nearby olfactory bulb, because cutting the olfactory nerve, which stops bulb activity, does not affect the oscillations in the epithelium. In this system, the odorant stimulus (a mixture of three amino acids) induced an immediate 20 Hz oscillation whose frequency declined to about 10 Hz within 6–7 s. The amplitude of each wave was not constant but rose and fell (that is, was amplitude modulated) in different temporal patterns, and the patterns were not consistent with repeated stimulation, even though a rest period of at least 3 min was given between stimuli. Where does the AM come from? Not likely from the properties of the stimulus. Because previous research with this kind of experiment had failed to show any relationship to olfactory neural circuits that could produce such results, the investigators postulated that such results could arise simply from summation of multiple sinusoidal signals in a narrow frequency band that have random phases with respect to each other. Their mathematical simulations showed that such AM oscillating signals were produced by mathematical addition of simulated asynchronous oscillators in a narrow frequency band. Thus, they demonstrated that physiological coupling is not needed. If this kind of observation can be generalized to oscillations of brain local field potentials that are summed from multiple oscillating circuits, we cannot assume that oscillation in the summed signal amplitude arises because the contributing oscillators are operating in synchrony. In other words, the amplitude of the field potential may not mean much in and of itself; the real information carriers are the oscillations in impulse activity of each of the contributing oscillators. Nonetheless, this study is consistent with what is known from many other sources, namely that the nervous system operates with numerous local oscillatory circuits. We know that in many situations the oscillations are coupled, with fixed phase shifts that no doubt reflect the time delays imposed by their coupling. So there is the temptation to conclude, as did the authors of the catfish study, that the AM has no biological meaning. Yet this conclusion is not valid if the summed oscillations are large enough to exert electrostatic effects (see discussion of Adey’s studies later in this chapter). The catfish study does raise interesting issues about how the olfactory mucosa codes stimulus quality. Phase relations of oscillations don’t seem to be a factor, because the phases of the oscillators seem to be randomly distributed. Frequency is no doubt a big part of the coding, because all frequencies for this stimulus were tightly constrained between 10 and 30 Hz and the progressive slowing was consistent with repeated stimulation.
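The central point — that summing many narrow-band oscillators with random, uncoupled phases produces an amplitude-modulated envelope — is easy to reproduce numerically. The following is only a minimal sketch in that spirit, not the authors’ actual simulation; the sampling rate, band limits, and oscillator count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 1000.0                       # sampling rate, Hz (illustrative)
t = np.arange(0, 10, 1 / fs)      # 10 s of simulated signal

# Sum 50 sinusoids drawn from a narrow band (18-22 Hz) with random phases.
# The oscillators are completely uncoupled -- no synchrony is imposed.
n_osc = 50
freqs = rng.uniform(18.0, 22.0, n_osc)
phases = rng.uniform(0, 2 * np.pi, n_osc)
summed = sum(np.sin(2 * np.pi * f * t + p) for f, p in zip(freqs, phases))

# Crude envelope estimate: RMS amplitude in a sliding window of about one cycle.
win = int(fs / 20)
envelope = np.sqrt(np.convolve(summed ** 2, np.ones(win) / win, mode="same"))

# The envelope rises and falls over time (amplitude modulation) even though the
# contributing oscillators were never coupled or synchronized.
print("envelope min/max:", envelope.min().round(2), envelope.max().round(2))
```

Re-running the sketch with a different random seed yields a different AM pattern, which parallels the catfish observation that repeated presentations of the same odorant produced different envelopes.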
Biochemistry

Release of Transmitter

Although finite numbers of molecules (“quanta”) are released from a given vesicle in axon terminals when the membrane depolarizes, the postsynaptic potential response is not linearly related to the amount of presynaptic depolarization (Fig. 4.11). Research by Katz and Miledi revealed a non-linear relationship that produces an S-shaped curve, resembling a “dose–response” curve for pharmaceuticals. The curve indicates that minimal presynaptic depolarizations have no effect; beyond that point, small presynaptic voltage changes produce large, exponential changes in postsynaptic voltages; and at still larger presynaptic voltages, further increases produce diminishing responses and ultimately no further change in postsynaptic voltage. As the number of transmitter molecules increases, the exponential response abates at the point where all binding sites are occupied, whereupon further release of transmitter can no longer have an effect. Communication across synapses is either electrical or chemical. Electrical coupling is prominent in lower animals, but in humans relatively few synapses are coupled this way. Chemical coupling involves release of neurotransmitter chemicals that are specific to a given neuron-to-neuron coupling. Any given synapse has a complete biochemical support system for one or two neurotransmitters. These systems include mechanisms for synthesis and storage in presynaptic terminals, postsynaptic receptors that stereospecifically bind the transmitter, and enzyme systems that destroy surplus transmitter.
Fig. 4.11 When a presynaptic neuron is sufficiently depolarized, it starts to generate impulses. The more closely spaced the impulses generated presynaptically, the greater the postsynaptic response will be. But note that the postsynaptic response (mV) is not a linear function of presynaptic depolarization (mV): region A of the curve shows minimal presynaptic depolarization with no or minimal response, region B shows an exponential increase in response, region C a response that still increases but at a lesser rate, and region D a maximum response reached at high levels of presynaptic potential
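The qualitative shape of the curve in Fig. 4.11 can be captured with a simple logistic function. The sketch below is only a caricature of regions A–D, not a fit to Katz and Miledi’s data; the midpoint, slope, and maximum are arbitrary illustrative values.

```python
import numpy as np

def postsynaptic_response(v_pre, v_half=10.0, slope=2.0, r_max=1.0):
    """Qualitative S-shaped (logistic) transfer from presynaptic depolarization
    (mV) to postsynaptic response; all parameter values are arbitrary."""
    return r_max / (1.0 + np.exp(-(v_pre - v_half) / slope))

for v in [0, 4, 8, 12, 16, 25]:       # mV of presynaptic depolarization
    print(f"{v:>3} mV -> {postsynaptic_response(v):.3f}")
# Small depolarizations (region A) give almost no response, the middle of the
# curve (regions B and C) rises steeply, and large depolarizations (region D)
# saturate near r_max once all binding sites are occupied.
```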
Neurotransmitters confer flexibility upon synaptic communication. The chemicals are released more or less in proportion to the impulse discharge in presynaptic terminals. But since a given postsynaptic neuron receives inputs from up to several hundred terminals, the transmitters released from all these terminals add algebraically.

In the hippocampus, there is a phenomenon called Long-Term, Post-Tetanic Potentiation (LTPTP). The phenomenon was first discovered by noticing an enhancement of synaptic response after termination of a high-frequency stimulus. Because this potentiation outlasts the stimulus, it is a form of memory of that stimulus. A related phenomenon of postsynaptic depression is also known. An electronic neural network would lose its memory whenever its power is turned off unless provision is made for permanent storage. The nervous system also must have a storage mechanism. The evidence to date is that learning causes lasting biochemical and structural changes in the synapses that participated in a given learning experience. What is generally agreed upon is that these changes bias certain synapses and pathways so they can reconstruct the response to an earlier stimulus condition; that is, the memory of a learned event is encoded in electrical activity that resembles the activity originally evoked during learning. The relevant electrical properties that lead to biochemical change are those that occur in the synapses; namely, postsynaptic responses, either IPSPs or EPSPs. If these potentials are evoked often enough at high-enough rates, they become potentiated. Presumably, such potentiation makes it more likely that more permanent changes in synaptic biochemistry will occur. One biochemical basis for LTPTP has recently been reported by Whitlock et al. (2006). They found that LTPTP was associated with phosphorylation of glutamate receptors. The counterpart phenomenon, Long-Term, Post-Tetanic Depression, was not affected by changes in this transmitter receptor, indicating that some other transmitter system must be used for that phenomenon.

Only in recent years has proof been mounting to show that LTPTP is the actual basis for long-term memory. For example, it is necessary to show that LTPTP occurs as an animal undergoes a real-life learning experience. This has now been demonstrated in mice that were conditioned to blink in response to a tone and in rats that had learned to avoid an area where they had been shocked. It is also necessary to show that memory is erased if LTPTP is removed after learning. This has now been demonstrated in rats that had learned to avoid a “shock zone”: injecting into the hippocampus an agent that blocks an enzyme needed to sustain the synaptic changes of LTPTP erased the memory. The leading theory to explain lasting molecular representations of memory is that LTPTP generates genetic transcription proteins that regulate gene expression (Barco et al. 2006). Recent studies in molluscs, fruit flies, and rats establish that in the later stages of LTPTP the expression of certain genes is changed, thus providing a way for the molecular changes associated with memories to become permanent. Proteins that act as transcription factors, known as CREB proteins (cAMP response element-binding proteins), bind to DNA segments and regulate their expression.
CREB proteins are generated secondarily to persistent activation of neural circuitry by learning experiences and the attendant transmitter-receptor binding. These transmitter actions generate second messengers, such as cAMP or calcium ions, which in turn activate protein kinases. Protein kinases move from cytoplasm to the nucleus, where they activate a CREB protein that affects gene transcription. CREB is also implicated in mediating anxiety and drug addiction, which may reflect a learning component in those conditions. Up-regulation of neural cell-adhesion molecules (NCAMs) is also likely to be involved. The form of NCAM that carries numerous sialic acid residues has recently been shown to be up-regulated in the dorsal hippocampus after contextual fear conditioning, and enzymatic cleavage of these sialic acid residues disrupts the memory of the learning (Lopez-Fernandez et al. 2007). There is much more that could be said about the importance of biochemistry for all nervous system functions. There are over a hundred different neurotransmitter systems. Without these systems the brain could not be the magnificent information-processing system that it is. I am admittedly giving biochemistry short shrift in this book because the theories explored to explain the three minds seem to depend most heavily on the electrical properties of the brain. Of course, without biochemistry, there would be no electrical properties and thus no minds.
Receptor Binding

Stereospecific binding is a non-linear process, whereas non-specific binding can be linear over a large range of concentrations. As far as we know, only stereospecific binding has information-processing and communication value in the nervous system. Stereospecific binding is characterized by a molecular receptor with a high affinity for a specific ligand (the molecule that binds to the receptor). This means that when a ligand is present at low concentrations, it will bind almost exclusively at the stereospecific site. Nonspecific binding, by contrast, is characterized by a receptor with low affinity for a specific ligand, meaning that a ligand will typically bind to a nonspecific site only when it is at a high enough concentration to have filled all of the finite number of more discriminating, high-affinity sites. At the point where all stereospecific sites are filled, the system is saturated and adding more neurotransmitter will not produce more conformational change or physiological consequence. Nonspecific binding increases linearly with increasing ligand concentration; a given proportion of ligand will always bind to nonspecific sites due to chance alone. Stereospecific binding, however, levels off when all stereospecific binding sites are filled, resulting in a nonlinear relationship between binding and ligand concentration. It follows that the sum of these two types of binding is also nonlinear (Fig. 4.12).
Fig. 4.12 Stereospecific binding of neurotransmitter (the “ligand”) to postsynaptic membrane receptors produces the physiological action in the target neuron. That action may be excitation, inhibition, or adjustment of sensitivity, depending on the neurotransmitter system involved. The point here is that the stereospecific binding is a non-linear process, as distinct from non-specific binding. The lower portion of the total-binding curve is almost exclusively stereospecific binding. At the upper part of that curve, stereospecific binding sites are saturated
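A common way to sketch the two components in Fig. 4.12 is a saturable (hyperbolic) term for stereospecific binding plus a linear term for nonspecific binding. The Bmax, Kd, and nonspecific slope below are arbitrary illustrative values, not measurements.

```python
import numpy as np

def binding(ligand, b_max=100.0, k_d=5.0, ns_slope=0.8):
    """Total binding = saturable stereospecific component + linear nonspecific
    component. Parameter values are arbitrary, for illustration only."""
    specific = b_max * ligand / (k_d + ligand)   # saturates at b_max
    nonspecific = ns_slope * ligand              # grows without limit
    return specific, nonspecific, specific + nonspecific

for conc in [0.5, 2, 5, 20, 100, 500]:
    s, ns, total = binding(conc)
    print(f"[L]={conc:>5}: specific={s:6.1f}  nonspecific={ns:6.1f}  total={total:7.1f}")
# At low concentrations nearly all binding is stereospecific; once those sites
# saturate, further increases in total binding come only from the linear
# nonspecific component, so the summed curve is nonlinear overall.
```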
Second Messengers

Second messengers are intracellular signaling molecules that couple receptor activation to a cascade of postsynaptic biochemical reactions, ultimately producing activated, phosphorylated proteins that cause the change in cellular activity. These cascades also magnify the intracellular response to neurotransmitter action at the membrane surface: graphing the response shows that the cellular response grows exponentially.
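The amplification implied here can be illustrated with a toy cascade in which each stage activates many molecules of the next. The per-stage gains below are invented for illustration and are not measured values for any real pathway.

```python
# Toy second-messenger cascade: each activated molecule at one stage activates
# `gain` molecules at the next, so the signal grows multiplicatively.
stages = [
    ("receptors bound", 1),
    ("G-proteins activated", 20),        # illustrative gain values only
    ("second messengers produced", 50),
    ("protein kinases activated", 10),
    ("substrate proteins phosphorylated", 100),
]

count = 1.0
for name, gain in stages:
    count *= gain
    print(f"{name:<35} {count:,.0f}")
# A single transmitter-receptor binding event ends up represented by roughly a
# million phosphorylated proteins -- exponential-style gain across stages.
```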
Elementary Thinking Mechanisms

Analog Computing

The nervous system is more like an analog computer than a digital one. Analog computation of changing variables by integration or differentiation is much faster and easier than with digital computers. The various combinations of capacitance and electrical resistance in nervous tissue are a fundamental reason why so many neural operations are nonlinear. The built-in electrical resistance and capacitance of nervous tissue create electrical circuits that perform analog computations not much different in principle from those
in analog computers. For example, a neural circuit can perform a simple two-variable addition from two current sources in parallel. If the two pathways converge on a common target at the same time, the currents add algebraically and affect the target directly, or may do so indirectly via neurotransmitter intermediaries. In the nervous system, inherent capacitance can store (integrate) electrical voltage. Differentiation is accomplished automatically in a neural equivalent circuit in which a voltage waveform is applied to the equivalent of a capacitor linked to a resistance; the output taken across the resistance approximates the derivative of that waveform. In the nervous system, the derivative is another waveform that captures the rate of change of the original input. Calculus-like differentiation has recently been described as a causal agent of roundworm behavior by Shawn Lockery and colleagues at the University of Oregon (Hiroshi Suzuki et al. 2008). Two closely located chemosensory neurons direct the worm’s behavior to approach or withdraw from an odor source. One neuron acts like an “on” switch and the other like an “off” switch. Working in tandem, they tell the worm how to react to a chemical stimulus, either to seek out food or avoid poisons. The neurons do this by the calculus equivalent of differentiation, based on the rate of change of the strength of various taste chemicals. For example, when entering a salt gradient, increasing salt concentrations activate the “on” neuron, which keeps the worm in approach movement. If salt concentrations decrease, the “off” neuron makes the worm stop and move in another direction, having the effect of telling the worm it has gotten off course. The ON and OFF responses to preferred stimuli were transient in the face of a maintained concentration change. These responses therefore signal changes in salt concentration rather than its absolute level and thus provide the basis for computing the time derivative of concentration. This kind of processing is reminiscent of how a mosquito flies upstream into an odor plume to find its target. The servo-mechanism that keeps it on course probably also uses calculus-like differentiation, though this has not been looked for, as far as I know.
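A discrete version of the RC differentiator described above can be sketched in a few lines. The input “concentration” waveform and the RC time constant are illustrative, and the code is only an analogy for the neural computation, not a model of the worm’s neurons.

```python
import numpy as np

fs = 1000.0                               # samples per second
t = np.arange(0, 2, 1 / fs)
# A slowly rising then falling "concentration" signal (clipped half sine).
stimulus = np.clip(np.sin(2 * np.pi * 0.5 * t), 0, None)

# First-order high-pass (RC differentiator), discretized:
#   y[n] = alpha * (y[n-1] + x[n] - x[n-1]),  alpha = RC / (RC + dt)
rc, dt = 0.05, 1 / fs
alpha = rc / (rc + dt)
y = np.zeros_like(stimulus)
for n in range(1, len(stimulus)):
    y[n] = alpha * (y[n - 1] + stimulus[n] - stimulus[n - 1])

# The output is positive while the concentration rises and negative while it
# falls -- an ON/OFF style readout of the stimulus time derivative.
print("output while concentration rises  :", round(y[int(0.3 * fs)], 4))
print("output while concentration falls  :", round(y[int(0.7 * fs)], 4))
```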
Gating

Gating refers to the opening or closing of a neural pathway, usually by the action of inhibitory neurons. Gating is a built-in consequence of oscillating neural activity, in that the flow of impulse information in a pathway waxes and wanes in response to the peaks and valleys of the oscillation. Gating does not have to rely on oscillation, however. One-time events can close a gate that is normally open, and this idea is the basis for a popular theory by Melzack and Wall about how pain is regulated in the spinal cord (Melzack and Wall 1965). The basic idea is that stimuli that cause pain initially flow from the periphery into the back part of the spinal cord, as indicated by the narrow line in Fig. 4.13. From there, the pain-inducing information is sent in fiber tracts (not shown in the diagram) to the thalamus and the somatosensory cortex. The spinal nerves that contain pain fibers
Fig. 4.13 Diagram of the famous theory of Melzack and Wall for pain regulation in the spinal cord. Impulse activity in the touch/pressure paths (dark lines) activates inhibitory neurons (filled circle) that inhibit neurons in the pain pathway
also contain larger fibers (darker lines) that convey touch and pressure information, which also converge in the same dorsal part of the spinal cord. When touch/pressure inputs occur simultaneously with painful stimuli, inhibitory neurons shut down information flow along the pain pathways. These same inhibitory neurons may also be activated from descending fibers coming from the brain, thus providing a way for the brain to influence the perception of pain. This gating scheme is thought to be part of the explanation for why treatments such as massage, liniments, and acupuncture can alleviate pain. It is also why dentists twist your cheek before injecting anesthetic into your gum. The influences on the gate from descending paths from the brain can explain such things as how Hindu ascetics walk on nails or how wounded combat soldiers may not feel their pain until after the battle is over.
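The gate idea can be caricatured in a few lines: transmission along the pain pathway is reduced by inhibition driven both by concurrent touch/pressure input and by descending input from the brain. The weights below are invented for illustration and are not part of Melzack and Wall’s formulation.

```python
def pain_signal(pain_input, touch_input=0.0, descending_control=0.0,
                w_touch=0.7, w_descending=0.9):
    """Toy gate-control sketch: inhibitory interneurons driven by touch fibers
    and by descending fibers subtract from pain transmission. Weights arbitrary."""
    inhibition = w_touch * touch_input + w_descending * descending_control
    return max(pain_input - inhibition, 0.0)

print(pain_signal(1.0))                          # pain alone -> full transmission
print(pain_signal(1.0, touch_input=0.8))         # rubbing the area partly closes the gate
print(pain_signal(1.0, descending_control=1.2))  # strong descending control can shut the gate
```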
Oscillation

Oscillations in extracellular voltage fields can obviously result when impulses are generated in periodic bursts. A recent study examined spike- and voltage-field correlations by recording single spikes and field potentials from secondary somatosensory cortex of monkeys while delivering vibratory touch stimuli to the contralateral hand. The time courses of the high-gamma field voltages and the spike bursts were, on average, tightly correlated. But for any given spike, the co-variance could be weak (Ray et al. 2008).
Oscillation is a carrier of information in the sense that it is the way impulses are packaged. Many neurons in the brain deliver impulses in oscillatory bursts. There are two main ways that oscillatory activity arises in the brain. One way comes from a series of excitatory neurons arranged like Christmas tree light bulbs in series, wherein the last neuron in the chain has an output that feeds back to re-excite the first neuron in the chain. For example, in an interconnected series of neurons, neuron A excites neuron B, which excites neuron C, which excites neuron D, which in turn re-excites neuron A. Each synaptic junction imposes a delay of several milliseconds, so it may take tens of milliseconds or more to re-excite the first neuron in the chain. The time each cycle takes depends, of course, on the number of neurons and synaptic delays in the pathway. The time delay dictates the period of oscillation. If the total delay were 100 ms, the oscillation could appear as 10 waves per second in an EEG. A probably more common source of oscillation involves circuits with inhibitory neurons (Fig. 4.14). The oscillations arise because they are paced by the firing of inhibitory satellite neurons located “downstream” from a primary neuron that excites them. Such inhibitory neurons have axons that feed back the inhibitory influence onto the primary neuron that excited them. After a delay, the primary neuron can resume responding to its excitatory inputs from “upstream” neurons. Inhibitory neurons have been discovered throughout the spinal cord and brain. In addition to their role in pacing oscillations, inhibitory neurons are absolutely essential in preventing runaway excitation. In the cortex, for example, neurons are so
Fig. 4.14 Illustration of how inhibitory neurons can drive oscillation. When an excitatory neuron (open circle) is activated, part of its excitatory output is diverted to an inhibitory neuron (filled circle) that then suppresses the neuron that excited it. When the inhibition decays, the excitatory neuron can fire again. Such on-then-off activity typically occurs in large arrays of neurons that have similar architecture. This kind of circuit was first discovered in the spinal cord (the “Renshaw” circuit), but it occurs throughout the nervous system
intimately and richly interconnected that any excitatory input without the brake of inhibition would recruit all neurons into unstructured and sustained firing, as indeed happens in epilepsy, which is caused by a failure of inhibitory neurons.
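The feedback-inhibition circuit of Fig. 4.14 can be sketched as a toy two-unit rate model in which a steadily driven excitatory cell excites an inhibitory cell that suppresses it after a delay. The delays, gains, and time step below are illustrative assumptions, and real circuits are far more elaborate.

```python
import numpy as np

dt_ms = 1                     # one step = 1 ms (illustrative)
steps = 300
delay = 10                    # ms of synaptic/conduction delay in each leg of the loop
drive = 1.0                   # steady excitatory input from "upstream" neurons

E = np.zeros(steps)           # firing rate of the excitatory (principal) neuron
I = np.zeros(steps)           # firing rate of the inhibitory (satellite) neuron

for t in range(1, steps):
    e_past = E[t - delay] if t >= delay else 0.0
    i_past = I[t - delay] if t >= delay else 0.0
    E[t] = max(drive - 2.0 * i_past, 0.0)   # driven from upstream, suppressed by delayed inhibition
    I[t] = max(1.5 * e_past, 0.0)           # driven by the delayed output of the excitatory cell

# The delayed E -> I -> E loop makes the excitatory rate switch on and off with
# a period of about 40 ms here (roughly 25 Hz), set by the loop delays, instead
# of settling at a constant level.
print(np.round(E[100:140], 2))
```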
Types of Oscillations

Oscillations from the brain were first discovered by recording the electrical signal from electrodes on the scalp; that is, the electroencephalogram (EEG). Later, investigators discovered that they could see similar signals from electrodes implanted in the brain itself. The EEG is superficially similar in all vertebrate species. It looks like a squiggly line. The main difference is that in higher species, large slow waves can appear. Large waves are summated from the activity of neurons that are firing in near synchrony. Slow waves are usually larger than fast waves for two reasons: (1) tissue acts as a capacitive filter that attenuates the amplitude of faster waves, and (2) the longer time course of slow waves provides time to recruit more neurons, whose activity adds to the total integral of activity in the recorded area. When enough neurons oscillate synchronously, their extracellular voltages, called “field potentials,” become large enough to be detected.

Alpha Oscillations

Brain wave oscillations were first observed by Hans Berger shortly after the EEG machine was invented. He saw rhythmic oscillations coming from electrodes placed at the back of the head and called these “alpha” rhythms. The naming was prescient: Berger could not have known that there are many other rhythms in the EEG, because his primitive equipment could not display them. What is the role of alpha rhythm in thinking? To this day, there is still argument. We do know that a small subset of people do not generate alpha rhythms. Another subset generates alpha all the time. But the vast majority of people generate alpha most prominently when their eyes are closed and they are mentally relaxed. In such people, the alpha goes away when they try to solve a complex math problem. One might conclude, therefore, that alpha rhythms indicate that the brain is not doing much thinking. However, that might not be a fair assessment of those people who generate alpha all the time. Alpha rhythms arise at multiple sites in the visual cortex. The voltage fields spread in three dimensions, though nobody that I know of has paid any attention to the spread into the brain beneath the visual cortex. All the emphasis has been on alpha rhythms as they are detected by electrodes on the scalp. The amplitude of the scalp voltage fields is greatly attenuated because of the electrically insulating barrier of membranes, bone, and scalp interposed between the neuronal source of the currents and the surface electrodes. The same attenuation occurs with all electrical fields arising in the brain. The higher the frequency of oscillation, the smaller the voltage field at the scalp, primarily because tissue capacitance shunts high frequencies away from the electronic amplifiers.
Theta Oscillations

Had Berger recorded EEGs from animals, he probably would have discovered “theta” rhythms, officially defined as 4–7 EEG waves per second, but in reality more like 4–12 waves per second, depending on the species. Most of the research on theta rhythm has been done by recording from the hippocampus, because it has two large populations of neurons that generate theta. The hippocampus was so named because it is curled and reminded early anatomists of sea horses (whose genus is Hippocampus). It is a C-shaped structure, with the open side of the “C” facing toward the midline of the brain, surrounding the thalamus, which lies on both sides of the midline. Theta waves in animals have been associated with all manner of behaviors, particularly arousal, orienting, voluntary movements, and the dream stage of sleep. In the 1970s, and even today, a leading theory was that theta specifically correlates with voluntary movement. I reminded these advocates in the 1970s that theta can occur independently of voluntary movement and in fact has many correlates. I posited the theory that the reason theta rhythm has so many correlates is that it is driven by excitatory (or disinhibitory) influences from the brainstem reticular formation. I showed this in several ways: electrical stimulation of the brainstem core in curare-paralyzed animals elicits all of the components of the “Readiness Response,” which includes theta rhythm; theta rhythm occurs in “animal hypnosis” (a.k.a. the death feint), in which an animal’s brain is aroused yet the animal does not move; and theta also occurs during involuntary movements, such as during the early stages of struggling in ether anesthesia (Klemm 1972, 1976a, b). Though my explanation seems to me an obvious unification of otherwise conflicting ideas, many prominent neuroscientists cling to their pet theories even today, especially the idea of voluntary movement.

Gamma Oscillations

Many parts of the brain have neuronal populations that generate high-frequency oscillations called gamma waves. The frequency range is arbitrarily defined but can span from 25 to 100 or more waves per second. Because they are high frequency, capacitative shunting at scalp electrodes causes them to be very small, so small in fact that they were hard to detect with the slow-speed pen-and-ink recorders originally used in EEG machines. With the advent of digitization technologies and computers, researchers realized how common gamma activity is and have correlated it with a variety of cognitive functions. The extent of gamma activity is often used as an index of how hard the brain is working to solve a task.
What Do Oscillations Do?

Clearly, oscillating circuits must have primary effects, because of the propagating nature of the burst impulse firings that give rise to oscillating field potentials. Whether the oscillating field potentials also have electrostatic effects in their own right depends on how large the oscillating voltages are.
Many scientists think of EEG oscillations as “epiphenomena,” byproducts of underlying impulse activity that have no effects of their own. However, oscillations, especially large, slow ones, can cause electrostatic changes in the excitability of nearby neurons. Ross Adey (1988) and others spent years presenting evidence that oscillating field potentials directly influence target cells and fiber tracts that are nearby. To the extent that this is true, it means that such oscillations would drive corresponding oscillation in their targets. The problem is that extracellular fields attenuate rapidly from their source and could only influence the closest neurons or their fibers. But one place where this could be of major significance is in the outer layer of the cerebral cortex, made up almost exclusively of fiber terminals (see “Neocortex as the Origin of Consciousness” in Chap. 8). These fibers are associated with both inputs and outputs of the circuits of the neocortex, and, moreover, this is the location where EEG fields, as detected on the scalp, are the largest. Yet no one I know about has examined this outer layer of neocortex for possible electrostatic effects. This is the obvious place in the brain for researchers to look. The magnitude of the EEG field is crucial for determining whether there is any “electrostatic” effect on targets in the field. So, on the one hand, the EEG fields are largest in the area where they could have widespread influence. But on the other hand, the EEG frequencies most relevant to consciousness are the high-frequency, low-voltage gamma fields.

Because oscillatory extracellular field potentials are such a prominent feature of the EEG, we may be misled into thinking that these recordings are what is important. But the most clearly important aspect of oscillations comes from the underlying information carrier, nerve impulses, which, when firing in periodic bursts of closely spaced impulses, create the oscillating extracellular field potentials. The field potentials summate contributions from impulses as well as from the postsynaptic potentials associated with the impulses. Whether the influence of oscillation is direct or indirect, it is clear that the brain uses oscillation for information processing.

Some scientists, particularly Walter Freeman, view oscillations as the fundamental indicator of the brain as a “complex system,” because these oscillations are conveniently evaluated in the context of chaos theory. I discuss chaos theory in the last chapter as one of the theories for consciousness, so for now I will just say that brain oscillations reflect dynamic non-linear processes. These processes are described as a motion vector in multi-dimensional space, and the sequence of data points in that space is called a “trajectory.” Any given trajectory is influenced by input, the present state (also called the initial conditions) of the system, and the past experience of the system. The output of a complex system is hard to predict from the behavior of lower-level components, and this has led to the popular notion that higher brain functions, such as consciousness, are an “emergent property” that cannot be explained by knowing basic ideas of information processing in neurons. This notion is not much different from the idea that the whole is greater than the sum of its parts.
This widely held view among scientists is defeatist and nihilistic, for it says that there is no point in learning how neurons or even networks of neurons work, because we can never learn from such information how brains really work.
I don’t subscribe to such defeatism. It seems counter-intuitive to me to reject all that we have learned about nerves and circuits in the last 100 years. I am not alone in being suspicious of claims that “top-down” approaches like chaos theory can explain the brain. Whether by direct or indirect action, different-frequency oscillations do influence each other. For example, theta rhythm and gamma rhythms often occur together and they must be interacting. But how? György Buzsáki has observed that the power of gamma activity varies as a function of the theta cycle, being large during the valleys of theta and small during theta peaks. Other workers have shown cross-frequency phase synchrony between alpha and gamma oscillations during mental arithmetic tasks. In other words, a slower signal can power-modulate a faster signal. At least four primary functions of oscillation have been identified experimentally: (1) gating impulse flow through circuitry in a rhythmic manner, (2) biological clock functions, (3) creating windows of time (time chopping) that govern how many impulses flow in a given epoch, (4) stabilization of brain servo systems.
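Buzsáki’s observation — gamma power waxing and waning with the phase of theta — amounts to phase-amplitude coupling, which is easy to illustrate with a synthetic signal. Everything below (frequencies, modulation depth, the composite “LFP”) is invented for illustration, not data.

```python
import numpy as np

fs = 1000.0
t = np.arange(0, 5, 1 / fs)

theta = np.sin(2 * np.pi * 7 * t)                     # 7 Hz "theta"
# Gamma envelope tied to theta phase: large in the theta trough, small at the peak,
# as in the hippocampal observation described in the text.
gamma_amp = 0.5 * (1 - 0.8 * theta)
gamma = gamma_amp * np.sin(2 * np.pi * 60 * t)        # 60 Hz "gamma"
lfp = theta + gamma                                   # the composite signal one would record

# Crude check of the coupling: mean gamma envelope near theta peaks vs. troughs.
near_peak = gamma_amp[theta > 0.9]
near_trough = gamma_amp[theta < -0.9]
print("mean gamma envelope near theta peaks  :", near_peak.mean().round(2))
print("mean gamma envelope near theta troughs:", near_trough.mean().round(2))
```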
Oscillatory Gating

Oscillations have a peak and a valley, or in the case of the impulse discharge that gives rise to oscillation, a repeating cycle of impulse bursts and silence. This on-then-off-then-on nature of activity creates a time window that allows information to be spread around in the brain in message packets of impulses. The period of oscillation dictates the size of the packets. For example, slow oscillations allow propagation of larger packets, while the opposite is true of fast oscillations. The distances over which packets are spread may also be affected, because conduction delays in axons and neurochemical delays in synapses impose time constraints. Thus, fast oscillations may influence highly localized activity, compared to the processing over longer distances that could be enabled by slow oscillations. Because the brain operates with multiple oscillation frequencies, often concurrently, it can simultaneously process information in small and large packets, over short and long distances. These notions are reminiscent of earlier comments about impulse-interval coding of information and the notion of “byte” processing. This mode of operation should add to the brain’s capacity to conduct multiple kinds of processing at the same time.

Control of Timing

To create a train of rhythmic waves, impulses have to propagate through a circuit in clusters with interposed periods of relative silence. In addition, the circuits that sustain rhythmic activity may need to feed back onto themselves so that the activity reverberates. The cycle time is determined by the circuit’s number of synapses, each of which imposes a few milliseconds of delay. Obviously, this creates the basis for a biological clock. The brain has multiple biological clocks whose periods range from milliseconds to weeks and even months. Long-period clocks are not readily controlled by impulse oscillations in circuitry. They are typically regulated by outside forces such as seasonal changes in day length and length of light
exposure. Short-period clocks are another matter. Functions such as breathing, heart rate, body temperature, and cortisol secretion are paced by reverberating activity in circuits localized in the brainstem’s non-conscious mind. The best known bodily rhythms are daily, or circadian. How body functions are regulated in neural circuitry over a 24-h period is not at all clear, but the fact remains. Even when a person is placed in an artificial environment, such as a cave, where there are no external clues as to day and night, the body’s circadian rhythm will run on a 24-h period for many days; even when it starts to “free run,” there will still be a circadian rhythm that differs by only a few hours, albeit increasingly out of phase with the outside world’s day-night cues. Bodily functions such as eating and sleeping will continue on the new circadian rhythm. The out-of-phase drifting does not occur in a normal sensory environment, because light and darkness cues repeatedly reset the clock. When the body clock becomes greatly out of phase, it may take several days of cues to get it correctly set, as any international traveler can verify from jet-lag experiences. Individual cells, including non-neurons, can have self-regulating clocked functions in cellular activity and metabolism. One particular part of the brain, the suprachiasmatic nucleus of the hypothalamus, has clock neurons with widespread influences; these neurons are themselves largely governed by day length.

Time Chopping

Oscillations have a peak and a valley, or in the case of the impulse discharge that gives rise to oscillation, a repeating cycle of impulse bursts and silence. This on-then-off-then-on nature of activity creates a time window that can regulate how information spreads through the brain. Time chopping has been studied in the mechanisms that guinea pigs use to discriminate conspecific vocalizations (Hueretz et al. 2009). Single-unit activity in the thalamus and cortex exhibits a spike-timing code that underlies discrimination of the four common sound patterns. The timing pattern of impulse bursts, not the spike count, is the important information carrier that allows the guinea pig brain to discriminate the sounds of other guinea pigs from other sounds. Explicit experimental exploitation of these ideas is being pursued by linguists (Hickok and Poeppel 2007). Modern studies make it clear that language processing is not restricted to the Broca and Wernicke patches of neocortex in the left hemisphere, but rather is distributed and bilateral. Language lexical items operate on different time scales, with the smallest sound-processing window on the order of 20–50 ms. For whole syllables, the required time window is 150–300 ms, which is the average duration of syllables in most languages. Syllables appear to be the fundamental unit of language. David Poeppel at the University of Maryland has proposed that these time windows are generated by the time chopping that oscillations can produce (Luo and Poeppel 2007). Experimental alteration of these oscillations and their corresponding time windows alters linguistic performance. Data from fMRI studies indicate multi-time-resolution processing that is hemispherically asymmetrical, with the right hemisphere being more selective for long-term language integration.
Magnetoencephalographic studies suggest that lexical segments are sampled by time-chopping cortical oscillators in the gamma range, while syllable-level input is sampled by theta-range time chopping. For example, theta phase in auditory cortex correlates with specific lexical categories of sentences. Phase patterns reliably track and discriminate spoken-sentence types. Phase patterns for a given lexical category of sentence were consistent in the theta band, but not in other frequency bands. This phase-based discrimination was most evident in bilateral theta activity from the temporal lobes. Thus, theta oscillation appears to be a key time-chopping sampling mechanism for representing syllable-level language. In general, we should emphasize that time relationships are extremely important to mental functions. What is happening in any one area or circuit of the brain affects, and is affected by, what is happening in other parts of the brain with which it is functionally interacting. The timing of activity in two or more interacting areas may have a fixed lag or may “jitter” somewhat randomly. Activity may shift in and out of synchronization, and this may well be the cause of changes in thought as time progresses. The activity processes for which timing relations are important include the timing relationships of different spike trains, of field-potential oscillations from different brain areas, and even of different frequencies. There is also the key matter of the timing relations of spike trains to the field potentials with which they are associated.

Servo-System Control

A great deal of brain function involves homeostatic regulation, not too different in principle from human-engineered systems like thermostats. Among the more obvious such brain functions are control of respiration, heart rate, blood pressure, and regulation of hormone levels. Engineers know that servo systems work best if they are designed to oscillate around a set point. Apparently, the same principle applies to brain functions like those mentioned. What, then, would be the point of having oscillatory activity in neocortex, which is far removed from what we commonly think of as servo-system controls? This invites us to think about the possibility that higher cognitive functions may also employ certain servo-system features in the neocortex that have not been discovered (nor looked for).
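Why a servo system with feedback delay oscillates around its set point can be seen in a minimal numerical sketch. The quantities below are generic and illustrative; this is not a model of any particular physiological controller.

```python
# Toy servo: a controlled variable is pushed back toward a set point, but the
# corrective signal acts only on delayed information, so the system overshoots
# and rings around the set point rather than settling exactly on it.
set_point = 37.0          # regulated level (units arbitrary)
delay = 5                 # steps of feedback delay
gain = 0.25
value = 35.0
history = [value] * delay

for step in range(48):
    error_then = set_point - history[-delay]   # controller sees old information
    value += gain * error_then
    history.append(value)
    if step % 4 == 0:
        print(f"step {step:2d}: value = {value:.2f}")
# The printed values climb past 37, overshoot, fall back below, and oscillate
# around the set point with gradually shrinking swings.
```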
Rhythmic Change in Excitability

As impulses cycle rhythmically through a circuit, they obviously produce a rhythmic change in excitability of their targets. This effect may be excitatory or inhibitory, depending on whether IPSPs or EPSPs are produced on the target. This effect can be demonstrated in many ways and at various cognitive levels. For example, when the brain is engaged in demanding conscious-brain tasks, prominent gamma waves appear. It is crucial to know how such high-frequency oscillations contribute to mental processing. They are likely to provide a clue as to the mechanisms of consciousness.
Even lower-frequency brain waves (theta and alpha) seem to participate in cognitive processing. Studies in monkeys by Peter Lakatos and colleagues (Lakatos et al. 2008) revealed that these oscillations control the rhythmic shifting of excitability in local neuronal populations. Monkeys were trained to perform a selection task by making a manual response when an atypical stimulus appeared randomly in the midst of a rhythmic stream of auditory stimuli (beeps) that alternated with visual flashes. At the same time, local field potentials and their associated multiple-unit activity were recorded in the visual cortex. Not surprisingly, the electrical activity was greatest when the monkey was required to focus on visual stimuli and respond when an atypical visual stimulus was presented, as compared to when it was required to focus on the beep stimuli. The stimuli were presented in a delta-band rhythm (1.5 per second), and the neuronal response entrained to this rhythm when attention was required. It seems that when the brain detects a rhythm in a stimulus stream, attentiveness forces phase resetting and entrainment of neuronal oscillations that have the effect of changing response gain and amplifying responses to disparate events in that stream. In other words, the entrainment of brain rhythms to the rhythm of a stimulus serves as a mechanism for selective attention. Other workers have shown that such entrainment can also occur in other brain-wave frequency bands.

What do we make of overlapping oscillatory fields of different frequencies? Intriguing things happen when different-frequency oscillations occur in the same place at the same time. Depending on the frequency ratios and the phase, considerable amplitude modulation and waveform distortion will occur as a result of direct interference (Klemm 1969). The physiological implications of these physical properties of electromagnetic phenomena have not been explored. Each frequency component in the EEG may represent a particular aspect of thinking. It may prove useful to track the instantaneous mixture of frequencies and their shifting synchronies as thinking progresses. For example, when alpha rhythms during relaxation disappear and are replaced by higher frequencies, what happens to the nested higher-frequency signals? Modern computerized electronics and mathematical algorithms allow a researcher to filter a composite signal into its component frequencies, allowing separate investigation of the relative abundance of any given frequency band over any short period, say a succession of 1-s epochs. The time relationships of the various frequencies during a given thought task are surely interdependent and can probably be quantified by Markovian mathematics to provide a marker for different kinds of thought. Recall the previously mentioned report which showed that the phase relationships between multiple-unit activity and simultaneously occurring gamma activity co-varied with behavioral state and with cell type. While the significance of studies such as this is not abundantly clear, it seems likely that different neuronal populations encode brain processes in the frequency and phase of their firing patterns. In that case, one pyramidal cell group’s oscillatory firing encodes place information and the other reflects movement trajectories. Since both types of encoding are dynamically linked to real-time events, it seems likely that the timing of these synchronous firings differentially codes short-term memory of the events.
Cross-frequency Interactions

Different local networks can generate different frequencies. Presumably, this is the brain’s way to keep information coded in separate channels. But how then does the brain prevent “cross contamination” and interference among different frequency generators? Cortical columns, for example, have rich interconnections. It might be hard for them to sustain oscillations at different frequencies at the same time. Anita Roopun and colleagues (Roopun et al. 2008) argue that interference cannot occur if the phase relationships between two different frequencies are unstable, i.e., not stationary. They cite as an example co-existing cortical gamma and beta rhythms that survive without interference.

Synchronization

Multiple generators of oscillation may operate independently or may become time locked, that is, synchronized. When oscillations from different generators become synchronized, new consequences and capabilities emerge. Such emergence may include the onset of consciousness. When time-locking synchronization occurs, the participating generators influence each other and in some sense share their information in ways that must differ from when they are out of sync. Various experiments have suggested that synchronous oscillations assist conscious attention to stimuli, preparation for movement, and the maintenance and recall of representations in memory. Some computer neural-network simulations even show that certain heterogeneous cell populations can self-organize synchronization of non-periodic activity (Thivierge and Cisek 2008). In physical network systems, we know that networks can form randomly and evolve. The addition of a small number of connections can cause a large portion of the network to become linked (Achlioptas et al. 2009), as the sketch below illustrates. Because these effects are often widely distributed across multiple networks, I will defer further consideration to Chap. 6, on Global Interactions.
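The point about a few added connections suddenly linking much of a network can be seen even in the simplest random-graph setting. The sketch below adds random edges one at a time and tracks the largest connected cluster; note that Achlioptas et al. studied a modified, “explosive” edge-choice rule that makes the jump even sharper, whereas this is plain random growth with invented sizes.

```python
import random

def largest_cluster_growth(n_nodes=2000, n_edges=3000, seed=1):
    """Add random edges one at a time (union-find) and report how the largest
    connected cluster grows. Plain random-graph growth; illustrative only."""
    random.seed(seed)
    parent = list(range(n_nodes))
    size = [1] * n_nodes

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    for e in range(1, n_edges + 1):
        a, b = find(random.randrange(n_nodes)), find(random.randrange(n_nodes))
        if a != b:
            parent[a] = b
            size[b] += size[a]
        if e % 500 == 0:
            biggest = max(size[find(i)] for i in range(n_nodes))
            print(f"{e:5d} edges -> largest cluster spans {biggest / n_nodes:.0%} of nodes")

largest_cluster_growth()
# The largest cluster stays tiny for a while, then abruptly comes to span most
# of the network once the number of edges passes a critical range.
```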
References

Achlioptas, D., D’Souza, R. M., & Spencer, J. (2009). Explosive percolation in random networks. Science, 323, 1453–1455.
Adey, W. R. (1988). The cellular microenvironment and signaling through cell membranes. In M. E. O’Connor & R. H. Lovely (Eds.), Electromagnetic fields and neurobehavioral function, Vol. 27: Progress in clinical and biological research (pp. 81–106). New York: Alan R. Liss.
Barco, A., Bailey, C. H., & Kandel, E. R. (2006). Common molecular mechanisms in explicit and implicit memory. Journal of Neurochemistry, 97, 1520–1533.
Davie, J. T., Clark, B. A., & Häusser, M. (2008). The origin of the complex spike in cerebellar Purkinje cells. Journal of Neuroscience, 28(30), 7599–7609.
Destexhe, A., Contreras, D., & Steriade, M. (1998). Spatio-temporal analysis of local field potentials and unit discharges in cat cerebral cortex during natural wake and sleep states. Journal of Neuroscience, 19, 4595–4608.
Diaz, J., et al. (2007). Amplitude modulation patterns of local field potentials reveal asynchronous neuronal populations. Journal of Neuroscience, 27(34), 9238–9245.
DiLorenzo, P. M., Chen, J. Y., & Victor, J. D. (2009). Quality time: Representation of a multidimensional sensory domain through temporal coding. Journal of Neuroscience, 29(2), 9227–9238.
Foffani, G., Morales-Botello, M. L., & Aguilar, J. (2009). Spike timing, spike count, and temporal information for the discrimination of tactile stimuli in the rat ventrobasal complex. Journal of Neuroscience, 29(18), 5964–5973.
Hagmann, P., et al. (2008). Mapping the structural core of human cerebral cortex. PLoS Biology, 6(7), e159: 0001–0015.
Han, J.-H. (2007). Neuronal competition and selection during memory formation. Science, 316, 457–460.
Hickok, G., & Poeppel, D. (2007). The cortical organization of speech processing. Nature Reviews Neuroscience, 8, 393–402.
Hiroshi Suzuki, H., et al. (2008). Functional asymmetry in Caenorhabditis elegans taste neurons and its computational role in chemotaxis. Nature, 454, 114. doi:10.1038/nature06927.
Hueretz, C. P., Philibert, B., & Edeline, J.-M. (2009). A spike-timing code for discriminating conspecific vocalizations in the thalamocortical system of anesthetized and awake guinea pigs. Journal of Neuroscience, 29(2), 334–350.
Klemm, W. R. (1969). Animal electroencephalography. New York: Academic.
Klemm, W. R. (1972). Ascending and descending excitatory influences in the brain stem reticulum: A re-examination. Brain Research, 36, 444–452.
Klemm, W. R. (1976a). Physiological and behavioral significance of hippocampal rhythmic, slow activity (“theta rhythm”). Progress in Neurobiology, 6, 23–47.
Klemm, W. R. (1976b). Hippocampal EEG and information processing: A special role for theta rhythm. Progress in Neurobiology, 7, 197–214.
Klemm, W. R. (2004). Habenular and interpeduncularis nuclei: Shared components in multiple-function networks. Medical Science Monitor, 10(11), RA261–RA273.
Klemm, W. R., & Sherry, C. J. (1981a). Entropy measures of signal in the presence of noise: Evidence for “byte” versus “bit” processing in the nervous system. Experientia, 3, 55–58.
Klemm, W. R., & Sherry, C. J. (1981b). Serial ordering in spike trains: What’s it “trying to tell us?”. International Journal of Neuroscience, 14, 15–33.
Klemm, W. R., & Sherry, C. J. (1982). Do neurons process information by relative intervals in spike trains? Neuroscience and Biobehavioral Reviews, 6, 429–437.
Klemm, W. R., et al. (1980). Hemispheric lateralization and handedness correlation of human evoked “steady-state” responses to patterned visual stimuli. Physiological Psychology, 8(3), 409–416.
Klemm, W. R., et al. (1982). Differences among humans in steady-state evoked potentials: Evaluation of alpha activity, attentiveness and cognitive awareness of perceptual effectiveness. Neuropsychologia, 20(3), 317–325.
Koch, C. (2004). The quest for consciousness. Englewood: Roberts & Company.
Lakatos, P., et al. (2008). Entrainment of neuronal oscillations as a mechanism of attentional selection. Science, 320, 110–113.
Le Van Quyen, M., et al. (2008). Cell type-specific firing during ripple oscillations in the hippocampal formation of humans. Journal of Neuroscience, 28(24), 6104–6110.
Lestienne, R. (2001). Spike timing, synchronization and information processing on the sensory side of the central nervous system. Progress in Neurobiology, 65, 545–591.
Lopez-Fernandez, M. A., et al. (2007). Up-regulation of polysialylated neural cell adhesion molecule in the dorsal hippocampus after contextual fear conditioning is involved in long-term memory function. Journal of Neuroscience, 27(17), 4552–4561.
Luo, H., & Poeppel, D. (2007). Phase patterns of neuronal responses reliably discriminate speech in human auditory cortex. Neuron, 54, 1001–1010.
Masse, N. Y., & Cook, E. P. (2008). The effect of middle temporal spike phase on sensory encoding and correlates with behavior during a motion-detection task. Journal of Neuroscience, 28(6), 1343–1355.
Melzack, R., & Wall, P. D. (1965). Pain mechanisms: A new theory. Science, 150, 971–979.
Mikeska, J. A., & Klemm, W. R. (1975). EEG evaluation of humaneness of asphyxia and decapitation euthanasia of the laboratory rat. Laboratory Animal Care, 25, 175–179.
Mongillo, G., Barak, O., & Tsodyks, M. (2008). Synaptic theory of working memory. Science, 319, 1543–1546.
Mountcastle, V. B. (1997). The columnar organization of the neocortex. Brain, 120(4), 701–722.
Nakahama, H. (1977a). Dependency as a measure to estimate the order and the values of Markov process. Biological Cybernetics, 25, 209–226.
Nakahama, H. (1977b). Dependency representing Markov properties of spike trains recorded from central neurons. Tohoku Journal of Experimental Medicine, 122, 99–111.
Nakahama, H., et al. (1972a). Markov process of maintained impulse activity in central single neurons. Kybernetik, 11, 61–72.
Nakahama, H., et al. (1972b). Statistical inference of Markov process of neuronal impulse sequences. Kybernetik, 15, 47–64.
Ray, S., et al. (2008). Neural correlates of high gamma oscillations (60–200 Hz) in macaque local field potentials and their potential implications in electrocorticography. Journal of Neuroscience, 28, 11526–11536.
Roopun, A. K., et al. (2008). Temporal interactions between cortical rhythms. Frontiers in Neuroscience, 2, 145–154.
Rudolpher, S. M., & May, H. U. (1975). On Markov properties of inter-spike times in the cat optic tract. Biological Cybernetics, 19, 197–199.
Senior, T. J., et al. (2008). Gamma oscillatory firing reveals distinct populations of pyramidal cells in the CA1 region of the hippocampus. Journal of Neuroscience, 28(9), 2274–2286.
Sherry, C. J., & Klemm, W. R. (1980). Entropy correlations with drug-induced changes in specified patterns of nerve impulses: Evidence for “byte” processing in the nervous system. Progress in Neuro-Psychopharmacology, 4, 261–267.
Sherry, C. J., & Klemm, W. R. (1984). What is the meaningful measure of neuronal spike train activity? Journal of Neuroscience Methods, 10, 205–213.
Sherry, C. J., Marczynski, T. J., & Wolf, D. J. (1972). The interdependence series matrix: A method for determining the serial dependence of neuronal interspike intervals. International Journal of Neuroscience, 3, 35–42.
Sherry, C. J., Barrow, D. L., & Klemm, W. R. (1982). Serial dependencies and Markov properties of neuronal interspike intervals from rat cerebellum. Brain Research Bulletin, 8, 163–169.
Steriade, M., Amzica, F., & Contreras, D. (1996a). Synchronization of fast 30–40 Hz spontaneous cortical rhythms during brain activation. Journal of Neuroscience, 16(1), 392–417.
Steriade, M., Contreras, D., Amzica, F., & Timofeev, I. (1996b). Synchronization of fast (30–40 Hz) spontaneous oscillations in intrathalamic and thalamocortical networks. Journal of Neuroscience, 16(8), 2788–2808.
Steriade, M., Timofeev, I., Dürmüller, N., & Grenier, F. (1998). Dynamic properties of corticothalamic neurons and local cortical interneurons generating fast rhythmic (30–40 Hz) spike bursts. Journal of Neurophysiology, 79, 483–490.
Tero, A., et al. (2010). Rules for biologically inspired adaptive network design. Science, 327, 439–442.
Thivierge, J.-P., & Cisek, P. (2008). Nonperiodic synchronization in heterogeneous networks of spiking neurons. Journal of Neuroscience, 28(32), 7968–7978.
Werner-Reiss, U., & Groh, J. M. (2008). A rate code for sound azimuth in monkey auditory cortex: Implications for human neuroimaging studies. Journal of Neuroscience, 28(14), 3747–3758.
Whitlock, J. R., et al. (2006). Learning induces long-term potentiation in the hippocampus. Science, 313, 1093–1097.
5 Examples of Specific Ways of Thinking
This chapter summarizes specific examples of how the brain performs a wide range of tasks. Scientists don’t know everything there is to know about how these tasks are performed, but enough is known to contribute to an understanding of some general principles of how the brain thinks. More to the point, these examples will reveal certain common denominators that will guide our thinking in the discussion of Theories of Consciousness in Chap. 8.
Time Processing

When most scientists think about time processing, they think about bodily rhythms, as was briefly mentioned earlier in the consideration of what oscillations do. Many books have summarized the brain’s control over daily rhythms. I won’t spend much time on this topic, since it is so well known. The fundamental point is the existence of endogenous rhythms that can be entrained by external stimuli, the most important of which is the daily light-dark cycle. If people are put in caves without clocks and with constant lighting, for example, the daily rhythm of sleeping, eating, body temperature, hormone release, etc. continues for many days until it eventually becomes free running, but still remaining within a couple of hours of the normal rhythm. In mammals, pacing of the rhythm emerges from the suprachiasmatic nuclei, a component of the hypothalamus that receives entraining light stimuli from the eyes. These nuclei pass timing information in the form of nerve impulses to the pineal gland, which releases the hormone melatonin in the dark phase of the daily cycle to assist in maintaining the rhythm. Much less attention has been given to how the brain keeps track of time, whether elapsed, present, or predicted. Some brains are better at this than others. Some brains can even tell time during sleep. I know that many times I will wake up in the morning at the same time, plus or minus about 5 min. Many people wake up at approximately the same time every morning. Many times when I must set an alarm clock to catch an early flight, for example, I will wake up about 5 min before the alarm goes off. The traditional theory of how brains tell time is that there must be some kind of internal clock that generates a kind of count at periodic intervals. But the evidence
But the evidence for this idea is not compelling. Dean Buonomano at UCLA suggests a physical model that operates without using a clock (PhysOrg.com 2007). Here is his analogy for how it might work: "If you toss a pebble into a lake, the ripples of water produced by the pebble's impact act like a signature of the pebble's entry time. The farther the ripples travel, the more time has passed. He proposes that a similar process takes place in the brain that allows it to track time. Every time the brain processes a sensory event, such as a sound or flash of light, it triggers a cascade of reactions between brain cells and their connections. Each reaction leaves a signature that enables the brain-cell network to encode time." The UCLA team used a computer model to test this theory. They showed that an artificial neural network could tell time by changing over time in response to stimuli. Their simulations indicated that a specific event is encoded within the context of the events that precede it. Brains can distinguish the structure of time. They can move forward or backwards in time. They can even unscramble epochs that have been experimentally manipulated. How these things occur is not known. But neural impulse activity certainly underlies the capability. A recent study by Hasson et al. (2008) examined time structure with fMRI, which reflects the degree of impulse activity. The investigators evaluated time structure over a scale of seconds to minutes using silent movies with complex story lines. Movies were presented to subjects in their natural forward order and in backward order. Movies were also cut into segments of varying length that were randomly reordered to create time shuffling. Time reversal or shuffling is unnatural and presumably harder for the brain to process, and should therefore be reflected in greater fMRI activity. As expected, fMRI usually showed greater activity when subjects watched movies backward or scrambled. Yet another aspect of time processing is how the brain chops time into segments. Figure 5.1 illustrates the difference between how the human mind typically thinks about time and how the brain actually tracks it.
Fig. 5.1 The human mind typically thinks of time as a steady arrow moving from past through present to future. The brain, however, as a consequence of its oscillations at various frequencies, produces the effect of chopping time into a series of segments in which impulses occur in clusters
Given that circuits oscillate at different frequencies, time is tracked at different rates in different circuits. This has many consequences, but few scientists have thought about it in the context of time tracking. However, increasing attention is being given to the physiological consequences of such time chopping. As mentioned earlier, for example, some linguists now believe that language processing depends on time chopping. No doubt, so do many other cognitive functions.
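The "ripples in a lake" analogy described above can be made concrete with a minimal Python sketch (my own toy illustration of the state-dependent idea, not the UCLA model itself, and all numbers are invented): a stimulus perturbs a small random recurrent network, the network state keeps evolving afterward, and a simple nearest-state readout can then estimate how much time has elapsed since the stimulus from the current state alone.

```python
import numpy as np

# Toy "state-dependent network" timer (illustrative only; not Buonomano's actual model).
# A stimulus kicks a random recurrent network; because the state keeps changing,
# like spreading ripples, the state itself carries elapsed-time information.

rng = np.random.default_rng(0)
N = 200                                        # number of units
W = rng.normal(0, 1.0 / np.sqrt(N), (N, N))    # random recurrent weights

def run_network(steps, stimulus):
    """Return the network state at each time step after a stimulus kick."""
    x = np.tanh(stimulus)                      # the stimulus sets the initial state
    states = []
    for _ in range(steps):
        x = np.tanh(W @ x + 0.1 * x)           # leaky, nonlinear update
        states.append(x.copy())
    return np.array(states)

stimulus = rng.normal(0, 1, N)
states = run_network(steps=50, stimulus=stimulus)

# "Decoding" elapsed time: find which stored state a noisy probe state resembles most.
probe_time = 23
probe = states[probe_time] + rng.normal(0, 0.05, N)
similarity = states @ probe
print("true elapsed time:", probe_time, "decoded:", int(np.argmax(similarity)))
```

The point of the sketch is only that no explicit counter exists anywhere; elapsed time is implicit in how far the network state has "rippled" away from its starting configuration.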
Sound Localization
Because humans have two ears positioned on opposite sides of the head, most sounds from a point in space arrive at the two ears at slightly different times. Even though the difference is on the order of microseconds, relay neurons along the auditory pathway can code this time difference. Animals localize sound because certain neurons in the brainstem nucleus, the medial superior olive, are very sensitive to the microsecond time differences at which sounds arrive at each ear (Jeffress 1948). We now know that three clusters of neurons in the brainstem participate in this coding, and this local processing can occur at subconscious levels. Indeed, many of the studies of the coding mechanisms have been performed in anesthetized animals. The first group of neurons, in the cochlear nucleus, receives direct input from the auditory nerve. Some of these neurons send impulses to a group of coincidence-detection neurons in a cluster called the medial superior olive (MSO). The cochlear neurons that project to the MSO respond to low-frequency sounds (less than 1,000 cycles/s) and code the frequency by increasing spike discharge in proportion to increases in frequency, up to the point where the cells are no longer very sensitive to higher frequencies. Also, spikes are discharged in oscillating bursts, with the burst interval decreasing as the sound frequency increases. Thus, sound information is packaged as packets of impulses, and the time window for the packets is inversely proportional to sound frequency. Sensitivity to time differences is established by coincidence detection of excitatory inputs from both ears. Tuning the sensitivity of these neurons for time delays is accomplished by a systematic arrangement of input axons with different conduction times. The relative delay of binaural inputs adjusts the sensitivity in superior olive neurons. The result is essentially a place code for three-dimensional sound location in space. In 1948, Jeffress had suggested a model that has guided most subsequent research. The idea is that the anatomical pathway to a given MSO neuron is shorter from the ear on one side of the head than from the other ear. The time difference of sound arriving at the two ears is thereby registered in a topographically mapped way in MSO neurons. These anatomically based time delays are magnified by the fact that some inputs to the MSO on one side of the brainstem cross over to the MSO on the other side, and conduction to the other side is delayed accordingly. However, the length-of-pathway difference produces a time delay of less than 1 ms for a given MSO cell to receive impulse input. Thus, this may not be sufficient to account for all of the observed behavioral capacity of animals and humans to localize sound in space.
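The Jeffress scheme can be illustrated with a small Python sketch (a deliberately crude simplification of my own, not a biophysical model; the 300-microsecond time difference and the set of internal delays are invented numbers): each detector cell adds an internal delay to the input from one ear, and the cell whose internal delay exactly cancels the interaural time difference receives the most coincident input, yielding a place code for the sound's direction.

```python
import numpy as np

# Toy Jeffress-style delay-line model (illustration only).
# Each "MSO cell" adds an internal delay to the left-ear spike train; the cell
# whose delay cancels the interaural time difference (ITD) sees the most coincidences.

fs = 1_000_000                 # 1 MHz time grid (microsecond resolution)
itd_us = 300                   # sound reaches the left ear 300 microseconds earlier

t = np.arange(0, 20000)        # 20 ms of time bins (microseconds)
tone = (np.sin(2 * np.pi * 500 * t / fs) > 0.99).astype(float)   # brief pulse near each peak of a 500 Hz tone

left = tone                                   # left ear hears the sound first
right = np.roll(tone, itd_us)                 # right ear hears it itd_us later

candidate_delays = np.arange(0, 700, 50)      # internal delays tested by the array of cells
coincidence = []
for d in candidate_delays:
    left_delayed = np.roll(left, d)           # delay the LEFT input before it reaches this cell
    coincidence.append(np.sum(left_delayed * right))   # count coincident input

best = candidate_delays[int(np.argmax(coincidence))]
print(f"cell tuned to {best} us internal delay fires most -> estimated ITD ~ {best} us")
```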
In the most recent studies by Michael Pecka and colleagues (Pecka et al. 2008) in Munich, Germany, another supplementary mechanism has been described. The cochlear cells that convey input to the MSO also send impulse input to a network of inhibitory neurons in the brainstem’s trapezoid body, which in turn projects inhibitory input to the MSO. Recordings from MSO cells in anesthetized animals showed clear effects of microinjection of drugs that affect the inhibitory neurotransmitter, glycine. Reversible blockade of glycine transmission increased MSO firing rates, and shifted the sensitivity to inter-aural time differences. Augmenting glycine transmission had opposite effects on firing rate, but did not alter time difference sensitivity.
Locating Body Position in Space
Animals need to know where they are in space, and this is particularly important for species that forage for food. Localizing where the body is in space is largely accomplished by the hippocampus. We can assume that place coding can operate subconsciously, because there is no evidence that the hippocampus is a central player in consciousness. Such coding has been demonstrated in animals that do not have a cerebral cortex developed enough to sustain consciousness at the level of humans. Of course, in humans, the conscious mind does have access to the place-coding information of the hippocampus (which does have an output to neocortex via the entorhinal cortex). Some but not all neurons in the hippocampus code spatial location. These so-called "place" neurons fire impulses when an animal is in a certain location. Place cells are large pyramidal cells that get inputs from the entorhinal cortex, which in turn is connected to visual and bodily-sense parts of the neocortex that are vital for spatial navigation. There are also place neurons that fire selectively for certain sequences of movement (such as turn left, then right, then right again, etc.). Spatial field size in the rat is about 25 cm. The idea is akin to the visual-field idea for neurons in the retina, but here the field is the area of the environment in which a given hippocampal neuron fires. Collectively, the place cells form a virtual map representation of the environment. The discovery was made initially by John O'Keefe in 1971 at University College London, who recorded impulse activity from the large pyramidal cells in the hippocampus. A given neuron fired spontaneously when a rat moving about in an enclosure arrived in a certain place that was specifically mapped by that cell. The "place fields" in the rat hippocampus are cone shaped, with the highest firing rate indicating the center of the field, irrespective of the direction from which the animal entered the place field. The two-dimensional location is indicated by the cues in the environment. Not many cues are needed. Removal of cues does not degrade performance as long as a few cues remain.
If room cues are rotated together, or if the animal test chamber is moved to another room, the place cells re-map accordingly. If landmarks stay put, the place-cell mapping does not change for months. Numerous studies of place cells have also been conducted in so-called eight-arm radial mazes. Rats forage for food placed at the end of each arm, and they display a remarkable ability to remember which arms they have already visited and which arm they are currently in. Most recently, studies in rodents and even humans reveal that place cells exhibit conjunctive properties, for example, responding optimally to a particular combination of position and other features of the environment, such as odor or auditory signals (Barry and Doeller 2010). Evidently, during first exposure to an environment, a spatial map-like representation is formed in the hippocampus, and items and events are then encoded onto that map in their spatial context. This is an automatic association process and illustrates the importance of associational cues in memory formation and retrieval. As far as I know, such conjunctive functions of place cells have not been investigated in the context of memory formation and retrieval. Two recent studies of place cells (Wills et al. 2010; Langston et al. 2010) reveal interesting aspects of the relative roles of genetics and learning experience. Place cell wiring is under genetic control. Rudimentary spatial mapping ability is already present in rat pups as young as two and a half weeks, evidenced during their very first open-field exploration experience. But maturation and learning are also important. As pups mature, the mapping becomes more precise and stable.
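A toy Python sketch may help fix the idea of a place-field population code (an illustration of my own with made-up numbers, not a model from the studies cited above): each cell's firing rate is a peaked function of the distance from its own field center, and a simple activity-weighted average of the centers reads the animal's position back out of the population.

```python
import numpy as np

# Toy place-cell population (illustration only): each cell fires most strongly
# near its own field center, and a simple population read-out recovers position.

rng = np.random.default_rng(1)
arena = 1.0                                       # 1 m square arena
centers = rng.uniform(0, arena, size=(100, 2))    # field centers of 100 place cells
field_sigma = 0.12                                # roughly a 25 cm field

def firing_rates(position):
    """Peaked (here Gaussian) firing rate of every cell at this position."""
    d2 = np.sum((centers - position) ** 2, axis=1)
    return 20.0 * np.exp(-d2 / (2 * field_sigma ** 2))   # peak rate 20 spikes/s

def decode(rates):
    """Population estimate: activity-weighted average of the field centers."""
    return (rates[:, None] * centers).sum(axis=0) / rates.sum()

true_pos = np.array([0.63, 0.21])
rates = np.clip(firing_rates(true_pos) + rng.normal(0, 0.1, 100), 0, None)   # noisy rates
print("true position:", true_pos, " decoded (approximate):", np.round(decode(rates), 2))
```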
Relations to Phase of Hippocampal Theta Rhythm
When one records electrical field potentials from the hippocampus, the prominent theta rhythm of about 6–9 waves/s (up to 12/s in rodents) occurs under numerous behavioral conditions. In the early days of hippocampal research there was a great deal of controversy over its behavioral correlates, ranging from general arousal, to orienting, to bodily movement. I presented evidence that theta could even occur in immobile animals (Klemm 1971, 1972, 1976). Buzsáki has spent a career contributing to the theta rhythm field and was an early champion of the movement theories that have dominated hippocampal research. The more interesting findings in hippocampal theta research have involved "place cell" activity and its phase relationship to the theta rhythm. In general, total spike activity in the hippocampus is phase-locked to theta activity, but some neurons exhibit striking exceptions. In 1993, John O'Keefe and Michael Recce observed that place cell firing shifts systematically relative to the phase of the ongoing theta. Specifically, as a rat enters the place field of a given neuron, that neuron begins firing near the peak of theta, and the spikes can become delayed by as much as a full cycle as the rat moves through the place field. As the rat continues its movement, a new cell assembly emerges to code the new location.
Physiologists have borrowed the word that physicists use for a change in the orientation of a rotating body's axis: precession. In neuroscience, the word refers to a change in the phase relation between impulse firing and an ongoing oscillating waveform, particularly the hippocampal theta rhythm. The hippocampus has at least two main distinct generators for theta. When the rhythms are slightly out of phase, they can create interference patterns that will affect the timing of spike discharge. Buzsáki (2006) provides some provocative speculation on the mechanisms by which interference patterns can affect the timing of spike discharge. Firing rate becomes greatest at the trough of a theta wave when the animal is in the center of a place field. However, firing rate is not a reliable indicator of passing through a place field. Firing rate does not depend on speed of movement but rather on the size of the field and location within it. What impulse rate does code for remains unknown, but it must be something other than location, perhaps learned associations of a place with events in that place. Precession has also been observed in two-dimensional movements by Itskov and colleagues in the Buzsáki lab. They found that spikes on different phases of theta were best predicted from the immediate past or future locations of rats moving in an open field. Moreover, such precession was consistent across a population of place cells in the CA1 region of hippocampus, providing a mechanism for coherent representation of changing place location throughout a large number of neurons. While it is tempting to conclude, as these investigators did, that spike-burst phase locking to theta carries information about changes in location, I think it would be helpful to remember that theta comes ultimately from spike firing. This suggests to me that there are at least two classes of pyramidal cells: those that generate the field potentials of hippocampal theta, and another class whose timing relationship to the theta adjusts according to the history of locomotion in space. Why and how interneurons adjust the timing in response to locomotion trajectory is not known. Buzsáki (2005) regards theta activity as a "temporal organizer," a way of "chunking events and places together in a proper temporal and spatial context." This is consistent with this book's earlier comments about time chopping. My main objection to Buzsáki's thinking is the almost exclusive emphasis on locomotion. I and many others have shown that theta rhythms can correlate with situations and behaviors other than movement or location. In such cases, individual neurons may still be signaling spatial location, but not likely navigation, because an immobile animal isn't going anywhere. Theta rhythm may tie assemblies together in time and no doubt has functions other than space. The most obvious other function is memory formation, which can include episodes that have nothing to do with spatial location or movement. Although decades of study have established the hippocampus as essential for consolidation of explicit memories, relatively little attention has been given to the role of theta, or of frequency shifts within the theta band, during consolidation or during other processes associated with "readiness responses."
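The basic phenomenon is easy to caricature in a few lines of Python (a simplified, made-up illustration; it is not the interference-pattern mechanism discussed above): while the rat is inside a cell's place field, the cell fires once per theta cycle, and the firing phase slides earlier roughly in proportion to how much of the field has already been crossed.

```python
import numpy as np  # imported only for consistency with the other sketches

# Toy illustration of theta phase precession (not an interference model).
# A place cell fires one burst per theta cycle while the rat is inside its field,
# and the firing phase advances (precesses) as the rat moves through the field.

theta_freq = 8.0                        # theta at 8 cycles/s
speed = 0.25                            # rat running at 25 cm/s
field_start, field_end = 0.50, 0.75     # a 25 cm place field, in meters

for cycle in range(int(5 * theta_freq)):       # 5 s of running
    t = cycle / theta_freq                     # time of this theta cycle
    pos = speed * t                            # rat position
    if field_start <= pos <= field_end:
        frac = (pos - field_start) / (field_end - field_start)   # fraction of field crossed
        phase = 360.0 * (1.0 - frac)           # firing phase slides from ~360 deg to ~0 deg
        print(f"pos {pos:.2f} m  theta phase of spike ~ {phase:5.1f} deg")
```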
Spatial Scale-Sensitive Neurons
Lesion studies had indicated that destroying the ventral hippocampus does not disrupt spatial locating function. However, recent studies by Kjelstrup and colleagues (Kjelstrup 2008) in Norway show that there are place cells in the ventral hippocampus too, except that these operate on a larger scale of space. Ventral and dorsal parts of the hippocampus are contiguous and obviously must share their respective information about space. The ventral hippocampus gets its inputs from other areas of the limbic system (hypothalamus, septum, and amygdala). Lesions here disrupt autonomic and defensive responses. The existence of place cells in the ventral hippocampus had not been discovered sooner because traditional testing environments consisted of small cages or containers. But when animals were tested in large enclosures, such as the 18-m long linear track used by Kjelstrup, place cells were seen that coded over long distances. Scale increased almost linearly from dorsal hippocampal place cells (less than 1 m) to ventral place cells (up to 10 m). Corresponding spatial gradients have recently been reported in entorhinal cortex neurons that supply input to the various regions of hippocampus. It takes a rat longer to traverse 10 m than the time constants of most neuronal properties allow. So how can the coding be accomplished? Since walking rats generate prominent theta oscillations, phase relations between impulses and theta field potentials in the same area could account for the difference in spatial scale: running through a small spatial field would produce a relatively fixed phase relationship between spikes and the wave rhythm, whereas running across long distances would produce a larger and more variable change of phase. The idea is that a given place cell fires with a certain phase relationship to the ongoing theta oscillations and that the phase shifts are more or less variable depending on whether the cell is coding on a small or a large scale. The data of the Kjelstrup study are consistent with this interpretation. The explanation for behavioral differences between dorsal and ventral hippocampus includes the obvious fact that locating a hidden, submerged safe platform in a tank of water, for example, requires a swimming rat to use the high-resolution coding of the dorsal hippocampus, whereas coding the aversiveness of a foot shock in a large room needs only the low spatial resolution of the ventral hippocampus. Obviously, the scale parameters of place cells would vary with the size of the species and the range of its movement. Human place cells in the ventral hippocampus might code over a few kilometers of space, as opposed to a few meters.
Multiple Place Fields for Each Neuron
Two possible explanations could account for how place cells code spatial location. One is a dedicated-coding hypothesis, wherein the current activity of a given place cell is an independent estimate of location within the place field. In this view, the average of the location estimates by many place cells would represent the current location. This view requires place cells to have only one place field. Otherwise the coding would become corrupted.
Alternatively, place cells could have multiple fields and yet code correctly if the simultaneous activity of multiple place cells created a vector that indicates the spatial location, as long as there is a unique vector for each spatial location. As a test of these two possibilities, André Fenton and colleagues (Fenton et al. 2008) recorded hippocampal place cell activity under two conditions: the standard small test chamber and a chamber that was six times larger. In the large chamber, more neurons gave evidence of being place cells, and each had multiple and enlarged place fields. Most of what I have said thus far applies to one-dimensional movement along a linear track. When navigation in two dimensions occurs, as in an open field, an animal can enter a given place from multiple directions. Thus, the brain needs a way to construct a two-dimensional map of such space. This is achieved by the joint action of hippocampal place cells and so-called "grid cells" in the entorhinal cortex, a major supplier of input to the place cells.
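The contrast between the two hypotheses can be made concrete with a tiny Python example (an invented toy, not data from the Fenton study): when every cell has several fields, no single cell identifies the location, yet the combination of cells that are active together can still label every location uniquely.

```python
# Toy contrast between dedicated and combinatorial place coding (illustration only).
# Each cell fires in more than one location, so any single cell is ambiguous,
# but the combination of co-active cells still identifies the location uniquely.

# Which locations (A-E) activate each cell: every cell has multiple place fields.
fields = {
    "cell1": {"A", "C"},
    "cell2": {"A", "D"},
    "cell3": {"B", "C", "E"},
    "cell4": {"B", "D"},
}

def active_cells(location):
    """The set of cells firing at this location (the population 'vector')."""
    return frozenset(cell for cell, locs in fields.items() if location in locs)

codes = {loc: active_cells(loc) for loc in "ABCDE"}
for loc, code in codes.items():
    print(loc, sorted(code))

# Uniqueness check: no two locations share the same combination of active cells.
print("unique code for every location:", len(set(codes.values())) == len(codes))
```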
Grid Cells
Once place cells were discovered in the hippocampus, it took about two decades for investigators to ask whether space was mapped in brain areas "upstream" from the hippocampus, namely in the area that supplies input to the hippocampus, the entorhinal cortex (EC). Sure enough, there is a spatial map there too, only with differences in the way space is coded. In the EC, the firing fields form a lattice of equilateral triangles that spans the entire two-dimensional surface of the environment. EC neurons are activated whenever the animal's position coincides with any vertex of this imaginary grid of triangles overlaid on the surface of the enclosure. Thus, these EC neurons are called "grid cells." Grid-cell responses are innate; unlike hippocampal place cells, they do not require a few days of learning in a new environment. Grid cells do not specify location in space, because they fire any time the rat is near any of the vertices of the virtual lattice of spatially periodic triangles. What then do grid cells specify? Ila Fiete and colleagues (Fiete et al. 2008) believe that grid cells use a modulo arithmetic coding scheme to combinatorially represent and update estimates of the location of a moving animal. The mechanism is more like a global positioning system than a map. Unlike place cells, grid cells specify navigation over a range comparable to the natural foraging range, which in rats is on the order of 100 m to a kilometer. If grid cells coded position the same way as place cells, it would take on the order of 10¹⁶ neurons to cover the same dynamic range. The rat hippocampus contains only about 10⁶ neurons. Animal studies also indicate the presence of head-direction cells that fire when the head is facing a certain direction. About 10% of neurons in the entorhinal cortex function as "barrier cells" that selectively fire when an animal approaches a wall or another kind of barrier, such as a ledge from which the animal might fall (Solstad 2008).
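The flavor of the modulo argument can be shown with a residue-number toy in Python (my own illustration of the combinatorial principle, with invented grid spacings; it is not Fiete's actual model): each grid module reports position only modulo its own spatial period, yet the set of residues across a few modules pins position down over a range equal to the product of the periods, far larger than any single period.

```python
from math import lcm

# Toy illustration of modulo (residue) position coding, in the spirit of the
# grid-cell argument: each module knows position only modulo its own period,
# but together the residues identify position over a much larger range.

periods_cm = [31, 43, 59]                             # hypothetical grid spacings, pairwise coprime
print("unambiguous range:", lcm(*periods_cm), "cm")   # 31 * 43 * 59 = 78,647 cm

def encode(position_cm):
    """What each grid module 'reports': position modulo its own period."""
    return [position_cm % p for p in periods_cm]

def decode(residues):
    """Brute-force search for the one position consistent with every module."""
    for x in range(lcm(*periods_cm)):
        if all(x % p == r for p, r in zip(periods_cm, residues)):
            return x

pos = 12_345                                           # about 123 m from the start
print("residues:", encode(pos), "-> decoded:", decode(pos))
```

A handful of modules with modest periods thus covers hundreds of meters, which is the essence of why far fewer neurons are needed than a place-cell-style code would require.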
Face Recognition
Visual recognition of objects and even of specific faces occurs with astounding speed. In a study that used three different tasks to measure recognition of natural images at various exposure times, investigators observed that human subjects distinguished images faster and more accurately when the categories differed (such as birds vs. cars) than when images were in the same category (such as pigeons vs. another kind of bird) (Grill-Spector and Kanwisher 2005). At each exposure duration, some as brief as 33 ms, by the time subjects knew an image contained an object at all, they already knew its category. This suggests that the same neurons that detect objects are also the ones that recognize the category. Identifying the specific object took longer, suggesting that other circuits performed this function. Note that conscious realization was delayed from the time when identification was actually made. This has implications for the free-will experiments that are explored in Chap. 7. The nub of the issue is that one can make a conscious detection or decision, but significant time elapses before one realizes it. Objects are visually recognized by neurons in specific regions of the ventral visual pathway in the temporal cortical lobe. For example, macaque monkeys have six discrete, bilateral patches of face-selective cortex (Moeller et al. 2008). These can be identified by fMRI scans or electrical recording when monkeys are presented with pictures of other monkey faces. Neurons in these areas respond with increased impulse rates to the Gestalt form of faces and are much less responsive, or not responsive at all, to non-face objects. Electrical micro-stimulation of any one of four of the patches produced strong fMRI activation in a subset of the other patches. Thus, cells in a given patch seem to be strongly and specifically interconnected with network counterparts in other patches. Many of these connections are more than one synapse apart. Experiments outside of the face patches suggest that this organization recapitulates a similar arrangement for processing general shapes. Non-face objects evoked highly distributed fMRI response patterns in inferior temporal cortex outside the face-patch areas. Face recognition is a unique visual capability. Patients with certain cortical lesions have deficits in conscious face recognition. These deficits can be specific to faces and need not occur with recognition of objects. Much remains to be learned about how different cortical patches coordinate and interact to perceive faces and make specific differentiations among them. This may be done via combinatorial codes or synchrony of activity among the face patches. At one time, face recognition areas were thought to be lateralized to small zones in the right occipital and fusiform cortical areas. But a recent study that combined perceptual performance and fMRI images in normal subjects and face-recognition-impaired subjects suggests that the processing of faces is bilateral, requiring normal conjoint function of all the known face patches (Minnebusch et al. 2009).
The utility of this mechanism is not clear. It would seem that most objects activate multiple networks. While this seems like redundancy, other factors may operate. A given neuron may capture only a fraction of the image. For example, an activated face-patch neuron could signal, "I saw a face," whereas it takes all the neurons in all the face patches to discern whose face it was. Thus, the total information of a given visual stimulus is captured in multiple networks, and it takes some form of unified "binding" or combinatorial coding across all the networks to represent all the details of the stimulus. Of course, the face-recognition patches do not operate in isolation from non-face recognition parts of the brain. Face-patch neurons have intimate connections with the amygdala and other limbic areas that mediate reinforcement associations (Rolls 2008). Some face-selective neurons in monkey inferior temporal visual cortex have responses that are invariant with respect to the position, size, view, and spatial frequency of faces and objects. Which face or object is present in the stimulus field is encoded by a distributed representation in which different neurons carry largely independent information in their firing rates, with little information evident in the relative timing of firing of different neurons. This kind of ensemble encoding has the advantage of maximizing the information in the representation that is useful for discrimination between stimuli. Another main class of neurons, in the cortex of the superior temporal sulcus, encodes other aspects of faces such as facial expression, eye gaze, face view, and whether the head is moving. This second population of neurons projects its face representations to areas of the brain, such as the orbitofrontal cortex and amygdala, that mediate emotional and social functions.
Visual Motion Computation
The pre-processing of visual input is accomplished in the retina of the eye. You can think of the retina as a small, isolated neural network that is selectively activated by light being absorbed in receptor cells, with signals passed through the network and ultimately routed to output cells, called ganglion cells, that propagate impulses into the brain via the optic nerves. There are about a dozen different types of ganglion cells, each forming a complete array that covers the visual field and each reporting a specific computation on the raw visual image. By simultaneous intracellular and multi-electrode extracellular recordings of retinal cells and their ganglion cell outputs, Stephen Baccus and colleagues discovered that one type of retinal cell, the bipolar cell, detects object movement in the visual field. The output of many bipolar cells combines by non-linear spatial summation, and the pooled response is then reported by movement-sensitive ganglion cells with trains of impulses carried in the optic nerve (Baccus 2008). Another type of retinal cell, the amacrine cell, responds to movement of the whole visual field, and the output of those cells combines to project an inhibitory influence on the object-motion-sensitive ganglion cells. The apparent algorithm used to compare the motion of an object with that of its background involves matching the responses to the two.
Both object and background movement generate a sequence of impulses. If the timing of the impulses coincides, the ganglion cell remains silent. Otherwise, the ganglion cell fires.
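That comparison can be sketched as a simple timing test in Python (a schematic of my own with invented spike times, not Baccus's circuit model): when object- and background-driven spikes coincide, as they do during global image motion such as an eye movement, inhibition cancels the response; when the object moves differently from its background, the spike timings differ and the ganglion cell fires.

```python
# Schematic of the object-motion computation described above (toy version).
# The ganglion cell stays silent when object- and background-driven spikes
# coincide in time, and fires when their timing differs.

def passed_spikes(object_spike_times, background_spike_times, window_ms=5):
    """Return the object-driven spikes NOT cancelled by a coincident background spike."""
    surviving = []
    for t in object_spike_times:
        coincident = any(abs(t - b) <= window_ms for b in background_spike_times)
        if not coincident:            # inhibition from the amacrine pathway fails to cancel it
            surviving.append(t)
    return surviving

# Case 1: whole-scene motion (e.g., an eye movement) -> identical timing -> silence
global_motion = [12, 47, 83, 120]
print("global motion, spikes passed:", passed_spikes(global_motion, global_motion))

# Case 2: an object moving against its background -> different timing -> the cell fires
object_only = [20, 55, 90, 130]
background  = [12, 47, 83, 120]
print("object motion, spikes passed:", passed_spikes(object_only, background))
```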
Attaching Value to Actions
Here I extend the earlier discussion on how the brain processes positive reinforcement, that is, reward. To work effectively, the brain must accurately predict the consequences of the actions it commands (see the later discussion of Bayesian probability in Chap. 8). The decisions the brain makes about whether to issue commands for a given action must be based on what past experience has taught about whether a given action is likely to be a good thing. That is, value is often assigned, and the brain uses that value in the decision-making process. This value assignment is most obvious in conscious operations, as in deciding to buy brand X versus brand Y in the grocery store. But values can be assigned subconsciously too. As a brain learns which actions have a beneficial payoff and which ones don't, it refines its decision-making processes so that future decisions will be more appropriate. Such processes are influenced by reinforcement mechanisms in the brainstem. Negative reinforcement arises out of portions of the brainstem reticular formation, especially the central gray region surrounding the cerebral aqueduct. We know that because animals regard mild electrical stimulation here as uniquely aversive. This region also mediates the unpleasantness of pain. This brainstem area is part of a larger system that processes other kinds of negative reinforcement, such as conditions that generate dysphoria, depression, or related negative emotions. The circuitry for processing negative influences involves the brainstem's central gray region, the dorsal anterior cingulate cortex, the insula, somatosensory cortex, and certain areas of the thalamus. Positive reinforcement arises out of dopamine transmission in multiple locations in the brainstem and basal ganglia that are linked through a fiber tract, the "medial forebrain bundle," that courses through the lateral hypothalamus. The critical circuitry seems to be the ventral tegmental area, the ventral striatum, portions of the amygdala, and the ventromedial prefrontal cortex. Dopamine is not alone in mediating reward. The forebrain bundle also carries fibers that release norepinephrine and serotonin transmitters. Dopaminergic neurons in the ventral tegmental area of the brainstem have recently been shown to encode a reward-prediction error. Apparently, the coding is expressed simply in the firing rate of these dopaminergic neurons. Lau and Glimcher (2008) recorded from neurons of the caudate nucleus of monkeys that were engaged in a probabilistically rewarded eye movement task. At the beginning of the task, monkeys fixed their eyes on a point on a screen. Then a cue light appeared randomly at some point in the periphery and the original fixation point disappeared, signaling that the monkey was to move its eyes to the new target point. Rewards were delivered 30–50% of the time the eyes moved correctly on cue.
About one half of the neurons showed a peak firing-rate response after the movement had been completed, suggesting that they had not participated in causing the movement. Post-movement activity is interpreted to indicate evaluation of the movement and of any associated rewards. Firing rates of each neuron were examined to see if they encoded eye movement direction, whether a reward had been delivered, or both movement and reward. About one half of the neurons independently coded either movement or reward, but not both. Among the other half of neurons, which responded to both, there was a distinct bias toward greater firing for either movement or reward. Thus, there appear to be two separate information channels, one for movement toward the target and the other to track the history of reward. The movement-tracking neurons serve to tag actions that are eligible for reward, and the reward neurons tag the probability of reward. Together, the two information channels help the brain to associate rewards with previous actions. Tracking the history of reward is of course only one aspect of reward processing. Predicting whether a given action will yield positive reward is fundamental to goal-directed behavior. Recent studies from Amsterdam by Esther van Duuren and colleagues (Van Duuren et al. 2008) have examined population coding of reward magnitude in the orbitofrontal cortex of the rat. This part of the cortex is thought to help direct behavior through neural representations of predicted outcomes. A variety of studies have shown that firing rates of single neurons in this part of the cortex represent reward-predictive information. When van Duuren and colleagues recorded simultaneously from many single neurons, they assessed coding at a population level by two approaches: Bayesian decoding and template matching. Recordings were made as rats learned an olfactory discrimination task in which positive reinforcers of differing magnitude were associated with predictive odors. Ensemble activity was seen to code for reward magnitude when animals moved to the rewarded location, when they anticipated reward while remaining still, and when they actually received reward. Coding robustness increased with learning. The results suggested a redundant and distributed representation of the reward information. Yet many neurons in the same and adjacent regions made minimal or even negative contributions to coding for reward magnitude. This raises the question of how target cells that receive the output of the orbitofrontal cortex are able to sort through the confusion and "read out" the population code for reward magnitude. Targets of orbitofrontal cortex must have some circuit capability to read the population code in the face of noise and even negative representation from those neurons that are not coding for reward magnitude. This enigma probably applies to many kinds of information in the nervous system other than reward magnitude. To date, few studies have examined this possibility. Another relevant issue is how the brain weighs the relative impact of positively and negatively reinforcing input to make decisions and influence emotions. Few studies have examined both positive and negative valence in the same study.
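One of the two read-out schemes mentioned above, template matching, is easy to sketch in generic form (a toy of my own with invented numbers, not the analysis code of the cited study): store an average population firing-rate vector, a "template," for each reward magnitude, then assign a new trial to whichever template its population vector most resembles.

```python
import numpy as np

# Toy template-matching decoder of reward magnitude from ensemble firing rates
# (a generic sketch of the idea, not the analysis used in the cited study).

rng = np.random.default_rng(2)
n_cells = 30
# Hypothetical mean firing-rate vectors ("templates") for three reward magnitudes
templates = {m: rng.uniform(1, 20, n_cells) for m in ("small", "medium", "large")}

def simulate_trial(magnitude):
    """A noisy single-trial population response for a given reward magnitude."""
    return templates[magnitude] + rng.normal(0, 3, n_cells)

def decode(trial_rates):
    """Assign the trial to the template with the smallest Euclidean distance."""
    return min(templates, key=lambda m: np.linalg.norm(trial_rates - templates[m]))

correct = sum(decode(simulate_trial(m)) == m
              for m in ("small", "medium", "large") for _ in range(100))
print(f"decoded correctly on {correct}/300 simulated trials")
```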
Since the primate orbitofrontal cortex seems to hold the nerve impulse representation of the reward value of stimuli, Morrison and Salzman (2009) asked whether aversive information is represented in the same place, and whether the co-mingling of both inputs gives that part of the cortex the opportunity to weigh the relative strengths of the two opposing stimuli. Impulse activity in monkey orbitofrontal cortex was monitored during conditioning studies involving a large liquid reward, a small liquid reward, or an aversive air-puff.
Neurons in this area often responded to both rewarding and aversive stimuli and, after conditioning, generated learned impulse responses to both kinds of stimuli. Finally, the neural responses correlated with the monkeys' behavioral responses to the rewarding and aversive conditioned stimuli. Such results suggest that rewarding and aversive stimuli converge on the same circuits, which presumably combine both inputs simultaneously to make decisions and generate emotional responses. Recent evidence indicates that these fundamental motivational systems in the brain also become engaged in processing emotional responses to social stimuli (Lieberman and Eisenberger 2009). The study monitored human fMRI brain responses when subjects compared themselves to people they read about. When the target person's level of possessions and importance was superior to that of the reader, the reader developed strong envy and fMRI signs of neural activity in the negative-reward circuitry. When reading about an envied person who experienced misfortune, the readers generated strong schadenfreude (feeling good about the misfortune of a perceived superior) and increased activity in the positive-reward brain structures. In other words, the brain uses its reward system for processing social pleasures and its pain system for negative social stimuli, in addition to the function of these systems for physical pleasures and pains.
Common Denominators
To summarize, all of these examples of specific thinking operations feature major principles of how brains think:
• Nerve impulse patterns are the representation of sensory, thought, and movement-command events.
• A given stimulus may activate multiple networks.
• Information flows as nerve impulses in distributed networks and may constitute a population or combinatorial code.
• Many neurons are selective for what they code for.
• Information often concentrates in specific information channels.
• Timing relationships among coding neurons greatly modulate the processing.
• Oscillatory activity can govern how impulses are packaged and delivered.
• Impulse patterns in any given neuron can be affected by activity in other circuits and may even be an integral part of those circuits.
• Some sort of unified binding or combinatorial coding of activity in many networks is often needed to represent all the details of a stimulus or situation.
References
Baccus, S. A. (2008). A retinal circuit that computes object motion. The Journal of Neuroscience, 28(27), 6807–6817.
Barry, C., & Doeller, C. (2010). Conjunctive representations in the hippocampus: What and where? The Journal of Neuroscience, 30(3), 799–801.
Buzsáki, G. (2005). Theta rhythm of navigation: Link between path integration and landmark navigation, episodic and semantic memory. Hippocampus, 15, 827–840.
Buzsáki, G. (2006). Rhythms of the brain. Oxford: Oxford University Press.
Fenton, A. A., et al. (2008). Unmasking the CA1 ensemble place code by exposure to small and large environments: More place cells and multiple, irregularly arranged, and expanded place fields in the larger space. The Journal of Neuroscience, 28(44), 11250–11262.
Fiete, I. R., Burak, Y., & Brookings, T. (2008). What grid cells convey about rat location. The Journal of Neuroscience, 28(27), 6858–6871.
Grill-Spector, K., & Kanwisher, N. (2005). Visual recognition: As soon as you know it is there, you know what it is. Psychological Science, 16(2), 152–160.
Hasson, U., et al. (2008). A hierarchy of temporal receptive windows in human cortex. The Journal of Neuroscience, 28, 2539–2550.
Jeffress, L. A. (1948). A place theory of sound localization. Journal of Comparative and Physiological Psychology, 41, 35–39.
Kjelstrup, K. (2008). Finite scale of spatial representation in the hippocampus. Science, 321, 140–143.
Klemm, W. R. (1971). EEG and multiple-unit activity in limbic and motor systems during movement and immobility. Physiology & Behavior, 7, 337–343.
Klemm, W. R. (1972). Effects of electric stimulation of brain stem reticular formation on hippocampal theta rhythm and muscle activity in unanesthetized, cervical- and midbrain-transected rats. Brain Research, 41, 331–344.
Klemm, W. R. (1976). Physiological and behavioral significance of hippocampal rhythmic, slow activity ("theta rhythm"). Progress in Neurobiology, 6, 23–47.
Langston, R. F., et al. (2010). Development of the spatial representation system in the rat. Science, 328, 1576–1580.
Lau, B., & Glimcher, P. W. (2008). Action and outcome encoding in the primate caudate nucleus. The Journal of Neuroscience, 27(52), 14502–14514.
Lieberman, M. D., & Eisenberger, N. I. (2009). Pains and pleasures of social life. Science, 323, 890–891.
Minnebusch, D. A., et al. (2009). A bilateral occipitotemporal network mediates face perception. Behavioural Brain Research, 198, 179–185.
Moeller, S., Freiwald, W. A., & Tsao, D. Y. (2008). Patches with links: A unified system for processing faces in the macaque temporal lobe. Science, 320, 1355–1359.
Morrison, S. E., & Salzman, C. D. (2009). The convergence of information about rewarding and aversive stimuli in single neurons. The Journal of Neuroscience, 29(37), 11471–11483.
Pecka, M., et al. (2008). Interaural time difference processing in the mammalian medial superior olive: The role of glycinergic inhibition. The Journal of Neuroscience, 28(27), 6914–6925.
PhysOrg.com. http://www.physorg.com/news89469897.html. Accessed 31 Jan 2007.
Rolls, E. T. (2008). Face processing in different brain areas, and critical band masking. Journal of Neuropsychology, 2, 325–360. doi:10.1348/174866407X258900.
Solstad, T. (2008). Representation of geometric borders in the entorhinal cortex. Science, 322, 1865–1868.
Van Duuren, E., Lankelma, J., & Pennartz, C. M. A. (2008). Population coding of reward magnitude in the orbitofrontal cortex in the rat. The Journal of Neuroscience, 28(34), 8590–8603.
Wills, T. J., et al. (2010). Development of the hippocampal cognitive map in preweaning rats. Science, 328, 1573–1576.
6
Global Interactions
Memories
Imagine what we would all be without memories. It would be devastating to live without all of the recollections that shape us into individuals. Would any sense of identity even remain without memory? And if we lost memory of procedural tasks, how could we perform the tasks necessary for our survival, such as feeding ourselves or avoiding dangers? There are interesting documented instances in which an individual has lost either the ability to form memories or the ability to recall them. The most famous memory case is that of a man referred to as H. M. (Corkin 2002). He underwent a radical surgery to treat uncontrollable epilepsy that involved removal of sections of his temporal lobe, including the hippocampus, from both hemispheres. H. M. lost the ability to form new long-term memories, a condition known as anterograde amnesia. H. M.'s impairments were such that he literally did not remember activities that he had engaged in just a few minutes before. He retained the ability to form short-term and procedural memories, and he retained some of his memories from before his surgery, though he also suffered from retrograde amnesia and could not recall them all. His sense of self was more or less derived from remnant memories. Sense of self is explored in depth in Chap. 7, but here I want to emphasize that how one thinks of himself is learned. In H. M.'s case, he had no good way to realize how his personhood was changing as his life progressed. His view of himself was preserved in past memory of who he used to be. H. M.'s memory loss entailed loss of the ability to form declarative, explicit memories. During the many attempts that were made to teach H. M. the same game, he would always insist that he had never seen the game before and would have to relearn it. Nonetheless, H. M. took less and less time to re-learn the game each time it was played, providing evidence that some residual implicit memory remained to facilitate certain types of new learning.
There are only a few documented cases of memory disruptions on the same level as H. M.'s. For most of us, memory impairment means occasionally forgetting where we put our keys, or not being able to recall someone's name; the neural mechanisms of memory run smoothly except for a few minor glitches. Behavioral expression of memory reveals that there are different kinds of memory, each of which has an associated set of separate yet interacting neural systems and subsystems. The two most basic kinds of memories are declarative, operating consciously, and procedural, often operating subconsciously. Examples of declarative memories include memorization of concepts and facts, or of rules of procedure, or of the words and notes of a song, or any number of things that we can consciously recall. Because conscious effort is needed to access declarative memories, they are referred to as explicit memories. The other kind of memory deals with movement procedures. These are implicit and are accessed subconsciously. They involve cognitive or motor skills, which often develop over long-term practice. Examples include components of piano playing, touch typing, or playing basketball, as well as stereotyped behavioral responses (Fig. 6.1). Studies involving surgical lesions in animals and observations of naturally occurring disease in humans show that the important neural structures for explicit memory involve the medial temporal lobe, especially the hippocampus. Key structures in the temporal lobe, besides the hippocampus, include the anatomically related entorhinal, perirhinal, and parahippocampal cortices. Other important structures are the anterior thalamic nucleus and the mediodorsal thalamic nucleus. Presumably, these structures and their connections with the neocortex establish lasting explicit memories by binding together the multiple areas of neocortex that collectively subserve perception and short-term memory of whole events.
[Fig. 6.1 panel labels: Kinds of Learning & Memory; Sensory Perception; Contextual Analysis; Declarative (conscious); Motor Learning; Conditioning; Procedural (subconscious)]
Fig. 6.1 Diagram of some kinds of learning and their relationships. Note that conditioning can occur from sensory stimulation that does not necessarily involve conscious perception. Other procedural memories may evolve from repetition and rehearsal of declarative memory processes
Gradually, the neocortex acquires the capacity to support long-term memory without the continued involvement of medial temporal lobe and midline structures. David Olton and colleagues performed an interesting experiment that implicated the hippocampus in the functions of short-term memory (Olton and Samuelson 1976). It involved lesioning the hippocampus and then placing rats in a maze with multiple radial arms in which food was placed at the end of some of the arms. Rats with a lesioned hippocampus would go down the same arms over and over, and could not remember which arms they had already visited, apparently because they lacked the ability to hold a visual representation of where they had been in their working memory. Humans such as H. M. who have wounds or disease of the hippocampus or temporal parts of the brain have severe deficits in the formation of declarative memories, but relatively little impairment of procedural memories. Conversely, people with damage to the cerebellum or other motor-control parts of the brain will have severe limitations in acquiring procedural memories, while their declarative memories may remain relatively intact.
Coding for Memory
Few would dispute that the information that is coded for remembering begins in the form of impulse patterns in specific circuits. Often, these circuits are widely distributed throughout several regions of the brain and are not necessarily limited to primary processing areas such as auditory or visual cortex (recall, for example, the Hubel and Wiesel work on vision). Ultimately, if such firing patterns are sufficiently robust and prolonged, certain synapses are strengthened and certain genes are activated in those neurons to create the biochemical changes needed to preserve the experience via synaptic changes. What should perhaps interest us more here is the matter of how the memory gets retrieved. Are those same circuit impulse patterns resurrected during recall? Of special interest are the patterns during free recall, as opposed to recognition memory, in which the memory is recalled only when the original stimulus is repeated. The repeated stimulus itself could be expected to generate the same circuit impulse patterns as during the original learning. In free recall, however, the recall is intrinsic to the brain and has to be internally generated without much help from stimuli. The question is, can the brain, on its own, generate the same impulse patterns during recall, or does recall entail some new form of neural expression? One provocative recent study of the human hippocampus and adjacent areas showed that some of the original impulse patterns were reproduced during free recall (Gelbard-Sagiv et al. 2008). Many previous studies have shown that the hippocampus and surrounding structures in the medial temporal lobe respond to learning events in a highly specific way, to complex stimulus features, to stimulus categories, and to individual people or landmarks. The responses often outlast the stimulus, but usually not for longer than a second.
How does the brain produce sequential responses to successive episodes of stimuli? This study tested subjects who had intractable epilepsy and had electrodes implanted to help localize the source of the epilepsy; these electrodes were also used to record responses during learning and recall. Single neurons were recorded when subjects first viewed audio-video clips and again later when they freely recalled these images. Many of these neurons exhibited selective firing during the stimulus episodes, which could persist as long as 12 s after the end of viewing. For example, a given neuron might fire readily during one scene but not others. During free recall of the clip sequences, the investigators observed reactivation of the same hippocampal and entorhinal cortex firing patterns as had occurred during learning. In other words, it would appear that the code that was used to register the information in the first place was used again as the neural representation during recall. Firing patterns were not analyzed in terms of sequential intervals, but rather as the number of impulses discharged in successive time bins before, during, and after each given episode in the movie clips. Similar observations have been made in fMRI studies, which have shown that the distribution of brain activity patterns during learning can be reproduced during cued or free recall. As parsimonious as this may seem, the hippocampus is not normally regarded as a central player in the readout of already formed memories. However, in this particular case, the recall was performed within 1–5 min after the original stimulus. Obviously, this is not enough of a delay for permanent memory to have been formed, and so we cannot conclude that the same firing patterns in these particular brain areas would occur during recall if the video clips had been memorized on a permanent basis, that is, had been consolidated.
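The comparison underlying such a result can be sketched as a simple pattern-similarity test in Python (a generic illustration with simulated numbers, not the authors' analysis): bin each neuron's spike counts during viewing of each episode, do the same during free recall, and ask whether the recall pattern correlates best with the episode that was actually recalled.

```python
import numpy as np

# Toy reactivation test (generic sketch, not the analysis in the cited study):
# binned firing-rate vectors recorded during viewing are compared, by correlation,
# with the vector produced during free recall of one episode.

rng = np.random.default_rng(3)
n_neurons, n_episodes = 40, 5

# Hypothetical per-episode firing patterns during viewing (spike counts per neuron)
viewing = rng.poisson(5, size=(n_episodes, n_neurons)).astype(float)

# Free recall reactivates a noisy version of the original pattern
recalled_episode = 2
recall = viewing[recalled_episode] + rng.normal(0, 2, n_neurons)

def corr(a, b):
    return float(np.corrcoef(a, b)[0, 1])

scores = [corr(recall, viewing[e]) for e in range(n_episodes)]
print("correlation with each stored episode:", np.round(scores, 2))
print("best match is episode", int(np.argmax(scores)), "(true:", recalled_episode, ")")
```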
Consolidation
H. M.'s impairment involved a process known as consolidation, whereby newly formed memories are made permanent. Without consolidation, a new experience may be recalled for only a few seconds or minutes, just as with H. M. Biochemical processes are needed to convert short-term memories, which are held in a spatially distributed pattern of electrical activity, into a form that can outlast that particular pattern of electrical activity. Moreover, the biochemical representation must be reflected in a lasting change in gene expression. These processes are poorly understood, but they are a hot area of research (see Fig. 6.2 below). The hippocampus, though crucial for the creation of long-term declarative memories, apparently is not a memory storage site. Also, many studies have shown that forgetting is often a matter of faulty retrieval: the memories are in storage, but just not accessible in the absence of the right cues. Memory retrieval can occur both subconsciously and consciously, and relatively little is known about retrieval processes. A common experience that illustrates the consolidation idea is when a person dials a telephone number that has just been found in the telephone directory. The dialer remembers the phone number long enough to dial it, but typically does not retain memory of that number for more than a few seconds.
Fig. 6.2 Diagram of the memory consolidation process. Learning stimuli are initially held in the nervous system in a short-term memory form of CIPs, available for recall only for a short time after the experience. After a certain amount of time, the learned experience may consolidate into a permanent form that can be retrieved from memory stores many days, months, or even years after the initial experience
Sometimes, if you are not concentrating, you might not keep the number in your "working memory" long enough to get it dialed, and it has to be looked up again. In most cases, consolidation of the phone number does not occur. In experimental animals, consolidation processes are typically studied in learning situations that involve only one trial, so as to control for variables associated with repeated learning trials. For instance, if a rat is placed on an elevated, insulated platform, it will normally step off and move to the nearest wall. However, if the floor of the test chamber is electrified, and a strong-enough foot shock is received upon stepping down, a rat usually learns in that one-time experience never to step down from the platform. If removed from the chamber and placed on the same platform the next day, the rat will refuse to step down. However, whether or not this one-trial experience is remembered (consolidated) depends on allowing the rat time, on the order of a few minutes to an hour or so, for the brain to convert the learning experience into a long-term memory. If shortly after learning the rat is subjected to intense distractions, or to brain-disrupting treatments such as drugs or electroconvulsive shock, the learning experience is not likely to get consolidated. The key is how much time elapses between the initial learning and exposure to the disruption (drug, shock, etc.) (Fig. 6.3). Consolidation is also event dependent, in the sense that the intensity of the learning experience can increase the probability that long-term memory ensues rapidly. A strong foot shock is more likely to be remembered than a mild one. Memory consolidation is not quite as simple as it may seem, in part because testing for it is always confounded with other variables such as stimulus registration, recall processes, and internal drive states. Likewise, the central role of the hippocampus has some apparent exceptions.
Fig. 6.3 Illustration of experimental results that can be obtained when one tests the time dependence of consolidation in experimental animals exposed to one-trial learning situations. The degree of long-term retention (tested several days after initial experience) depends on how much “uninterrupted” time occurs between the initial experience and the imposition of some new state (such as a set of powerful new stimuli, injection of a sedative drug, or electroconvulsive shock)
Hippocampal damage does not impair consolidation of some tasks, typically procedural tasks. Recall what was said earlier about H. M.'s ability to form implicit memories, despite destruction of his hippocampus. In recent years, many lines of research indicate "off-line" consolidation processing during sleep of events experienced during the previous day (Korman et al. 2007; Yoo et al. 2007). Supporting the replay hypothesis are experiments involving recordings from 144 microelectrodes in four regions of the neocortex of primates. During rest or sleep, neural representations of an experience learned that day were replayed. Moreover, when one part of a recent memory was replayed in a given cortical location, the other parts of the memory representation were replayed in other parts of the cortex. Another study, in rats, had shown a similar replay of synchronized bursts of neuronal firing in both hippocampus and neocortex during non-dreaming sleep as occurred when the task was being learned during wakefulness. This finding supported an earlier observation that such replay rehearsal occurred shortly after learning during wakefulness. Thus, the off-line processing during sleep seems to be accomplished by coordinated reactivation of the representation of the original learning experience. Both slow-wave and dream sleep seem to contribute to this process, but slow-wave sleep, especially Stage 2, seems to be the most important. (Non-dream sleep is scored on a scale from Stage 4, the deepest oblivion, to Stage 1, nearly awake.) Notably, the amount of time spent in Stage 2 declines with age, which correlates with the decline in memory ability of many elderly people.
One explanation for "sleep learning" is that during sleep the brain may be replaying the neural representations of the learned experience, thus strengthening the synapses that participate in such representation. Another explanation is that sleep may enhance the signal-to-noise ratio for the memory by suppressing extraneous synaptic activity everywhere, thus allowing the stronger signals of recent events to get reinforced and remembered. Memory, even for a single event, does not seem to be confined to a specific brain area, but rather is distributed throughout many widely separated parts of the brain. So, is there some central "memory manager" that integrates the activity in all the distributed regions to produce a complete reconstruction? The only alternative theory for how a reconstructed whole can be produced from the parallel, distributed processes that encode memory is that certain high-frequency electrical rhythms link the various modules into coherent population processing that binds the various memory components, helping to achieve consolidation and likewise to trigger retrieval. Experiments do support the idea that "binding" of memory elements may involve synchronous oscillatory processes. Recent research (Osipova et al. 2006) has shown that oscillation occurs during both encoding and recall of declarative memory of pictorial stimuli by humans. Large amounts of gamma activity (60–90 waves/s) occurred over occipital areas. Theta activity (4–9 waves/s) increased over the right temporal hemisphere. A similar theta effect was noted in another study that used words as the stimuli rather than pictures. That study did not examine the issue of coherence, but my own study of EEG coherence (discussed elsewhere) revealed widespread coherence increases in multiple frequency bands when human subjects viewing ambiguous figures (as in vase-face illusions) suddenly recognized (remembered) the alternative image. A clear example of how synchronization affects brain function is found in intracranial EEG studies, which revealed that memorization of words was associated with synchronization of theta-frequency oscillations (4–7 waves/s) between the hippocampus and the entorhinal cortex (Fell 2003).
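The measurement behind such findings, coherence between two recording sites within a frequency band, can be sketched with standard signal-processing tools (a generic example on synthetic signals, not data or code from any of the studies cited above):

```python
import numpy as np
from scipy.signal import coherence

# Synthetic example of band-limited coherence between two recording sites
# (generic illustration; not data from the studies cited above).

fs = 250.0                                   # sampling rate, Hz
t = np.arange(0, 20, 1 / fs)                 # 20 s of signal
rng = np.random.default_rng(4)

shared_theta = np.sin(2 * np.pi * 6 * t)     # a 6 Hz rhythm common to both sites
site_a = shared_theta + rng.normal(0, 1.0, t.size)
site_b = shared_theta + rng.normal(0, 1.0, t.size)

f, coh = coherence(site_a, site_b, fs=fs, nperseg=512)
theta_bin = int(np.argmin(np.abs(f - 6.0)))
gamma_band = (f >= 30) & (f <= 90)
print(f"coherence at ~6 Hz: {coh[theta_bin]:.2f}")
print(f"mean coherence 30-90 Hz: {coh[gamma_band].mean():.2f}")
```

With a shared 6 Hz rhythm buried in independent noise at each site, the theta coherence comes out high while the gamma-band coherence stays near zero, which is the kind of band-specific coupling the studies above report.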
Location of Stored Memories Experiments in the 1920s by Karl Lashley showed that it was difficult to find where in the brain memory was located. When rats were taught a task, subsequent experimental damage to the neocortex had little effect on the recall until the total area of damage became extensive. That is, regional lesions did not impair the memory, no matter what region of the cortex was damaged. On the other hand, the early speech-center observations by Broca in 1861 and by Wernicke in 1874 suggested that processing of language memories was localized to specific zones in the left hemisphere. We now know that these “centers” are not only connected to each other but also are part of larger circuitry involving other brain areas. The localization is thus partly illusory: the memory appears localized only because its expression requires participation of these two specific cortical areas. Similar ideas probably also
apply to other “centers” in the brain, such as the cortical areas for processing sound, vision, bodily sensations, and motor control. Later and more sophisticated experiments used electrical recordings to localize areas of the brain where memory resided by finding its electrical “signature.” Electrical evoked responses associated with any given memory could be found in a wide variety of widely distributed areas, both cortical and subcortical. E. Roy John was a pioneer in this kind of research. Similar results have since been obtained by other topographical imaging techniques such as PET and MRI scans. This led to the obvious conclusion that memory is a parallel, distributed process—as John put it, a process in a population of neurons, not a thing in a place. We probably should conclude that in general memory is a distributed, parallel process in which certain portions of the distributed network can be more important than others. One well-documented example involves visual memories. As Hubel and Wiesel and their followers have shown, the visual cortex contains zones where some neurons respond to color, some to movement, some to shape, some to line orientation, and some to other features of an image. Neurons in these areas hold visual representations “on line” in working memory for a brief time after the stimulus is withdrawn. This activity is a neuronal correlate of working memory. The memories associated with such abstracted features of an image appear to reside not only in their initial registration sites in the visual cortex, but also elsewhere in the brain. For example, neurons in the prefrontal area of the neocortex, near its anterior pole, respond to different features of a visual stimulus. One prefrontal region seems to respond to spatial features of a visual stimulus, while neurons in another prefrontal region respond to specific features of a stimulus. In other words, some neurons respond to “what” an object was, while other neurons respond to “where” it was located (Wilson et al. 1993). We so often take memory for granted that sometimes we don’t realize how defining it is for our behaviors and thoughts. It is true that when we forget things, we appreciate our memory a lot more and wish we could improve it. Luckily, there are many brain-based mechanisms for improving memory, which I discuss in my book Thank You, Brain, For All You Remember and my blog at http://thankyoubrain.blogspot.com
Keeping Memories from Being Jumbled As we go through life, the brain accumulates and preserves many memories that are similar in general, yet different in detail. For example, you may have memories of different beach trips, to the Florida Gulf Coast, to the Virgin Islands, to Barbados. How does the brain keep such memories distinct and not jumbled? Obviously, it has to have a way of emphasizing the differences in detail, while preserving the commonalities. The method seems to rely on segregation of information, even in the earliest memory processing stages in the hippocampus. In rodents, individual neurons in
two regions of the hippocampus, the dentate gyrus and the CA3 region, selectively generate a pattern of impulses that encode distinct stimulus features in non-overlapping circuits (Bakker et al. 2008). For example, hippocampal place cells show unique discharge patterns to similar but different location cues in the CA3 region, but neurons in CA1 are insensitive to the same differences. This notion has been confirmed in the human hippocampus as well. Arnold Bakker and colleagues at several universities used high-resolution (1.5 mm) fMRI to show that activity responses to stimuli indicated non-overlapping segregation in the same zones of hippocampus as had been shown earlier in rodents. Subjects were brain scanned while seeing an image that was initially new to them, and that episode was repeated after being shown 30 different images. Activity was greater during initial presentation than during the second, which was expected because so many other studies have shown that the brain needs fewer resources to process information it has previously experienced. The important new observation came when experimenters varied the second stimulus presentation so that it was similar but not identical to the first. If subjects were fooled into thinking it was the same image as the first one, then the expected decrease in MRI response was noted. But if the difference in the second image was detected, the MRI response treated it as a new experience, with the same level of response as occurred with the first stimulus. In most of the hippocampus, however, the subtly different second stimulus was still treated as a familiar one, with MRI activity less than for the initial presentation; the novelty-like response was noted only in two hippocampal areas, the dentate gyrus and the CA3 region. The idea is that neurons in the dentate gyrus respond selectively to details of a visual image that they receive from the primary inputs from entorhinal cortex and project them into the CA3 region, whereupon they are subsequently formed into pattern-segregated lasting memories that are used to test subsequent related stimuli for pattern matching. If the CIPs generated by a subsequent stimulus do not match the one in memory, then the stimulus is treated as new and extra neuronal resources are recruited to create a memory of the new input. So pattern segregation in CIPs can explain how differences in similar sensory input can be distinguished and preserved in memory. It is not clear how commonalities between related stimuli are preserved. However, commonalities may be preserved in non-segregated regions of the hippocampus (CA1, subiculum, and the entorhinal and parahippocampal cortices) that do not segregate similar stimuli in their CIPs.
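To picture the pattern-matching step in the scheme just described, one can imagine comparing an incoming population activity pattern against a stored one and treating low overlap as “new.” The toy sketch below, in Python, is only an illustration of that logic, not the analysis of Bakker and colleagues; the similarity measure, the threshold, and the activity vectors are all assumptions:

    import numpy as np

    def similarity(stored, incoming):
        """Cosine similarity between two population activity vectors (1 = identical pattern)."""
        return float(np.dot(stored, incoming) /
                     (np.linalg.norm(stored) * np.linalg.norm(incoming)))

    def classify(stored, incoming, threshold=0.9):
        """Treat the incoming pattern as 'familiar' only if it closely matches the stored one."""
        return "familiar" if similarity(stored, incoming) >= threshold else "new"

    rng = np.random.default_rng(1)
    stored_pattern = rng.random(50)                      # activity of 50 hypothetical neurons

    repeat = stored_pattern + rng.normal(0, 0.02, 50)    # nearly identical re-presentation
    lure   = stored_pattern + rng.normal(0, 0.60, 50)    # similar but detectably different image

    print(classify(stored_pattern, repeat))   # familiar: repetition suppression expected
    print(classify(stored_pattern, lure))     # new: extra resources recruited, new memory formed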
Network Plasticity After a lifetime of forming memories, one has to wonder how we have enough neural network capacity to learn anything new. Indeed, some people believe that “old dogs can’t learn new tricks,” and learning ability often does diminish with age. But because many older people are adept at learning new things, it means that they must still have plenty of network capacity left. Neuroscientists call that “plasticity.”
Because the brain has billions of neurons (somewhere around 10¹¹), each with as many as several hundred connections, the number of possible configurations approaches infinity. Evidence abounds to indicate that many brain networks are “plastic”; that is, they can be re-configured to accommodate changed situations. A common experience for people who get new bifocal or trifocal glasses is that the brain has to learn how to adjust to the changed sensory experiences. Eventually, the “new way of looking at things” becomes second nature because visual cortex circuits have re-wired. A similar phenomenon occurs when people start using hearing aids. Plasticity changes occur in the synapses. Sensory input promotes synapse formation, as indicated by the density of dendritic spines. Dendritic spines (little outgrowths of dendrite membrane where synapses develop) are greatly reduced if the sensory pathways are blocked during embryogenesis or even during early postnatal life. Spines can also degenerate with aging. There is a critical period during development when the nervous system is maximally sensitive to environmental stimuli. The exact age at which this occurs, the stimuli to which it applies, and the degree of criticality, vary with the species. One unexpected finding is that apparently hard-wired topographical maps are not necessarily fixed. As evidence that they are modified by experience and neural input, it has been observed that maps differ in detail in different individuals. Even within an individual, maps can be reorganized by changed input (see below). The ability to form new neuronal connections and to modify existing ones declines with age in many people (it is harder to “teach an old dog new tricks”). The first and most conspicuous sign of aging is faulty consolidation of short-term memory. Studies of older animals reveal that they have little trouble performing old learned behavior but have more difficulty in consolidating new learning experiences. In humans, some elderly people can remember names of high school friends when they can’t remember what they had for breakfast. The neurophysiological reasons for these effects no doubt include the fact that the total number of neurons in the brain can decrease dramatically with age. Neurons die as aging progresses, and most are not replaced. However, remaining neurons can generate new axonal and dendritic processes, and thus create new synapses. Some of the classic studies of this phenomenon were performed by M. M. Merzenich (Merzenich et al. 1983, 1984; Merzenich 1983). He exploited the known sensory mapping in the sensory cortex. One can stimulate regions of the skin on the hand and digits of monkeys, while recording impulse response in the sensory cortex. What is seen is that the hand and digits are “mapped” on the cortex; i.e., certain regions of the cortex respond to and represent certain portions of the hand or digits. If one of the digits is amputated, the cortical area that normally responds to it obviously becomes unresponsive. But with time, this cortical area assumes new functions and begins to respond to another portion of the hand or digits. In other words, its circuitry has re-organized to create a new sensory receptive field. If one cuts a main sensory nerve from the hand, the median nerve, large areas of sensory cortex become “silent.” However, these silent areas immediately respond to dorsal digital areas supplied by the radial nerve, which were normally unresponsive; that is, these
inputs are “unmasked.” Over time, these cortical areas that are newly responsive to radial nerve input develop a complete topographical representation of dorsal skin surfaces. Ulnar nerve representations expanded their representation into the former median nerve representation zone of cortex. Within 22 days, almost all of the former median nerve field in the cortex was driven by new inputs. By 144 days, most of the cortex that was formerly responsive to median nerve input became responsive to ulnar and radial nerve input, at the expense of the large cortical expanse that was formerly unresponsive. More recent studies provide other examples of network flexibility. Cross-modal reorganization of cortex occurs in people who receive cochlear implants (Lepore et al. 2006). Brains of people who go blind can commit the unused networks in visual cortex to expand or enrich other sensory functions (Saenz et al. 2008). Blind people use parts of their visual cortex to process the touch sensations in reading by Braille. Similar recruitment of visual cortex networks has been shown for blind people who use prosthetic devices that convert visual images to sound signals. Some verbal memory functions can be performed by visual cortex in blind people. Some interesting effects have been reported in blind people who later recovered some sight after surgery. Magnetic imaging showed that the same area of visual cortex was used for both sight processing and the sound processing functions that had been used during blindness. In other words, both sound and sight co-existed in apparently shared circuits. Thus, we can extend what I said earlier about the vastness of neural network capacity. Not only are there enormous numbers of network possibilities, but potential possibilities can become realities as existing network architecture is expanded and reorganized.
Modularity One of the great paradoxes of modern neuroscience is that recent discoveries have caused a reconsideration of a doctrine once thought to be ridiculous. That doctrine was called “phrenology” (Simpson 2005), which in its least extreme form held that the brain had special modules for specific functions (Fig. 6.4). The extreme form of phrenology was the really stupid idea that personality traits could be determined by “reading” the bumps on the head. Developed by German physician Franz Joseph Gall around 1800, the discipline was very popular in the nineteenth century. The idea was supposed to be a refinement of Aristotle’s idea that anger could be localized to the liver. While nobody today believes that bumps on the head tell you much about brain function (unless the bump reflects a concussion wound), neuroscientists are resurrecting the idea of functional modules. Research has led us to base the concept of modularity on the fact that specific functions are subserved by specific neuronal systems (Fig. 6.5). Even when the neurons of a given system are not all in the same place, as is sometimes the case, they still function collectively as a module. These modules are partially autonomous, but since they are interconnected, they do influence and are influenced by other modules.
Fig. 6.4 This phrenological diagram was made in 1825 by Johann Spurzheim
Fig. 6.5 A contemporary view of brain modularity. Designated areas specialize in certain functions but do not operate independently. A, C prefrontal and frontal cortex, “higher” thought, executive functions, B includes Broca’s “speech center”, C, D motor cortex, E sensory cortex, F, G, H parietal cortex, I includes Wernicke’s “speech center”; J temporal cortex (hearing), K visual cortex, L cerebellum, M pons part of brainstem, N cranial nerve
Both anatomical and functional boundaries between and among modules are usually fuzzy, and some of the neurons in one module may at times be recruited to participate in the functions of another module. The simplest example of modularity is found in invertebrates, which often have repeated body segments, each with a cluster of neurons that controls the segment. Thus, each cluster of neurons and body segment is modular. This organizational principle is preserved in the vertebrate spinal cord, although the spinal segments and their corresponding body segments overlap, anatomically and functionally. The spinal cord does not look modular, but actually it is composed of successive segments of neuronal groups that give rise to nerves that supply a restricted portion of the body. In principle, humans have body segmentation similar to that of earthworms. This does not apply to the brain. Higher-order, but subconscious, operations are conducted by brainstem and clusters of subcortical neurons (basal ganglia and limbic system) that are grouped together at one end of the spinal cord. Some of the more obvious neuronal modules include circuits in the brainstem that control such functions as respiration, heart rate, blood pressure, vomiting, and orienting to stimuli. One module, the hypothalamus, largely controls neurohormones and nerves affecting viscera. Another module in the brainstem is a general activating system for the brain. The cerebellum exerts modular control over movement coordination. Sensation tends to be divided into modules for sound, sight, and other senses. But even higher nervous functions can have a degree of modularity. Highest-order, consciously operating, functions are conducted by cerebral cortex. Examples include the localized identification of object qualities, mentioned above, specialized centers such as those for speech, and topographically mapped sensory (Fig. 6.6) and motor cortex. Recent imaging studies have attempted to attribute even amorphous functions, such as morality, to modular function (Miller 2008). Morality arises from the interaction of emotion and logic. In this age of brain imaging, where virtually every cognitive function can be associated with one or more “hot spots” of activity in specific brain areas, it is only natural to think that moral judgments may likewise be generated from a specific place in the brain. If the admixture of emotion and logic is involved in morality, one might expect “hot spots” in the emotion-controlling part of the brain (i.e. limbic system) and in the logic processing part (i.e. prefrontal cortex). In a 2001 paper, Joshua Greene and colleagues showed that when people grapple with moral dilemmas, the parts of the brain that light up depend on whether emotion or impersonal logic is most involved (Greene et al. 2001). In highly charged emotional situations, a limbic area known as the medial frontal gyrus becomes especially active, whereas in more impersonal situations the dorsolateral prefrontal cortex becomes more active. Despite the simplistic nature of such data, we know that some people, especially lawyers, want to use the data to determine whether or not criminals had the moral capacity to tell right from wrong. Even if such a correlation were to be proved, it does not preclude the likelihood that the “abnormal” brain activity was actually created by the lifestyle and the long history of choices a criminal may have made that programmed the brain to respond in abnormal ways. This is one of many examples
Fig. 6.6 Topographically detailed representation of the modularity of the skin of anesthetized rats in the primary sensory cortex. Map was constructed by microelectrode recording from neurons in the sensory cortex, while simultaneously stimulating different regions of skin. There is disproportionate representation of the vibrissae (A–E, 1–8) and the skin of the paws (dorsal hindpaws dhp, dorsal forelimb dfl, palm P) and digits (d1–d5). Less well-represented areas include the trunk (T ), nose (N ), lips (UL, LL), lower jaw (LJ ). Zone UZ was unresponsive in anesthetized rats. SII secondary sensory cortex, which was unmapped (From Chapin and Lin (1984))
where neuroscience is being invoked to provide excuses for deviant or criminal behavior. I explore such misuse of neuroscience and the importance of personal responsibility in my book, Blame Game. How To Win It (Klemm 2008).
Module Interactions Despite the rather compelling case that can be made for brain modularity, we cannot over-emphasize the reality that brain modules interact. In the brain, as stated early on in the book, everything is connected to everything else, though sometimes the connections are very indirect, of third or higher order. Many of these interactions are mediated by the corpus callosum, a huge fiber bundle that connects the cortical areas of the two hemispheres. Within each hemisphere, multiple connections exist from one cortical area to another. The other major fiber system in the brain is the internal capsule, which connects each hemisphere’s cortex with multiple areas of the thalamus and brainstem. A smaller fiber bundle, the anterior commissure, connects olfactory structures, the amygdala, and the temporal lobes in the two hemispheres. There is a posterior commissure that connects a few brainstem nuclei. Finally, there is a bundle in each hemisphere known as the fornix, which connects the hippocampus and the hypothalamus on each side of the brain.
Not only does the anatomy provide a way for all these regions to interact, but numerous functional studies confirm that such interactions are routine occurrences. Take the striatum, for example (Dahlin 2008). Though traditionally thought of as a movement-coordinating system, notable for its role in Parkinson’s disease, the striatum has long been known to interact with emotion-controlling structures in the brain. The striatum is viewed as a critical part of the brain’s “limbic-motor” interface that links emotional and movement functions in the brain (Mogenson et al. 1980). Now, recent studies have even shown that the striatum is involved in transfer of learning in working memory tasks that require manipulation of information. Thus, it is clear that circuits overlap and share some connections that allow CIP interaction with diverse networks.
Cerebral Lateralization The original scientifically based notion of localization came from observing patients who had damage, either through trauma or strokes, to certain parts of the cortex that resulted in speech defects. It soon became clear that there were two “speech centers,” located in one hemisphere but not the other. Generally, these speech centers became known as Broca’s area and Wernicke’s area. Both are located on only one side of the brain, typically the left side of right-handed people (see Fig. 6.5). Broca’s area was discovered by Paul Broca in the 1860s, after he observed the brains of a number of aphasic patients and found that damage to the left frontal lobe, specifically the third frontal convolution, was a common factor in most of the cases. Symptoms in these patients manifested themselves as difficulties in speech production, despite apparent understanding of language. If patients were able to produce speech at all, it was generally slurred or malformed. These symptoms are now collectively referred to as Broca’s aphasia. A few years later, Carl Wernicke discovered another area of the brain that serves as a “speech center.” This center, now known as Wernicke’s area, is located on the superior temporal lobe surface, also in the left hemisphere of right-handed people. Symptoms resulting from damage to this area of the brain include loss of comprehension accompanied by normal speaking ability. However, despite being able to speak, patients with Wernicke’s aphasia can’t speak coherently. Their speech lacks a logical sense of direction and may include gibberish. They also can’t fully comprehend spoken or printed language. Both Broca and Wernicke asserted that their speech centers were located exclusively in the left hemisphere. We now know that counterpart speech centers exist in the opposite hemisphere, but they are normally suppressed. Children who suffer traumatic destruction to a speech center can have the function restored from the corresponding center in the other hemisphere—provided they are young enough at the time of damage to their “default” speech center. Such damage in adults is permanent. However, such damage does not eliminate the ability to understand language, which is preserved in the Wernicke area of the other undamaged hemisphere. Supporting this conclusion are experiments on normal people who have an anesthetic injected into the left carotid
artery, which supplies the left cerebral cortex, and thus anesthetizes both speech centers. The anesthetic abolishes the ability to speak, but the subjects can understand the speech they hear because the speech centers on the other side were not anesthetized. Speech center circuitry apparently is hard-wired at birth (Peña et al. 2003). Recent studies of oxygen consumption over the left hemisphere speech center area in 2-day-old babies indicate increased activity when the babies heard normal speech compared to when there was silence or when speech was played backward (Dehaene-Lambertz et al. 2002, 2006). Similar results have been observed with functional MRI techniques in 3-month-old infants, and there was a strong preference for the left hemisphere. Adult-like responses were noted in the Wernicke area of the left hemisphere when babies heard short sentences in their native language. Roger Sperry looked for and found profound cognitive differences in severe epileptics who had to have surgeons cut the connections between the two hemispheres in order to control the epilepsy. Superficially, these patients seemed to have perfectly normal cognitive function, and they were not aware of their deficits.
Split Brain Roger Sperry (1913–1994)
Imagine a simple test: a word is flashed in front of you, and you have to write it down. You do so, and are then asked to state what you just wrote. Roger Sperry administered this test to a few subjects, and yet they all failed! They were unable to state what they had written. These were people of presumably average intelligence, and yet they failed a basic task. So what caused this odd result?
Sperry’s test subjects were split-brain patients who had epilepsy so severe that they had to have their two hemispheres disconnected to restrict the spread of seizures. A huge fiber bundle, called the corpus callosum, connects the two hemispheres, and severe seizures are reduced when it is cut.
In such patients, Sperry found that when the words were presented to the right hemisphere (by restricting a screen projection to that portion of the retina that projects only to the right hemisphere), they could easily be written with the right-hemisphere-controlled left hand. But no verbal response could be made because the left hemisphere’s “speech centers” never saw the words or had any way to get the information from the right hemisphere. Sperry’s most famous research on “split brains” began in the 1950s. He began with cats and monkeys, and soon moved on to study human subjects who had undergone severance of their corpus callosum as a drastic solution to intractable epilepsy. Amazingly, these people appeared remarkably normal in daily functions. However, following systematic testing of such patients, Sperry reached a number of conclusions on the nature of their brain functioning. He noted corresponding changes in memory retrieval and hand-eye coordination, which often involved input from both hemispheres. Among the most important of these results to our understanding of brain function is the observation of hemispheric specialization: Sperry discovered that the left brain excels at logical, analytic, and linguistic tasks, whereas the right brain excels in spatial or Gestalt tasks. In 1981 Sperry was awarded the Nobel Prize for his split-brain research. Sperry also made a wide array of contributions to neuroscience. His first line of work in the early 1940s was on the motor system of rats. This led him to propose that brain circuitry is “hard wired.” This idea came from his earlier experiments on amphibians. Sperry developed the “chemo-affinity hypothesis” stating that chemical marks on projection and target cells dictate circuitry formation. Although this hypothesis has been altered over time, the general idea currently has widespread support and is especially prominent in our understanding of developmental neuroscience. Late in his career Sperry explored the idea of consciousness as an emergent property of the brain, stating that neural networks influence consciousness, and that feedback continuously generates new emergent states that constantly revise consciousness. How did Sperry get into this magnificent career? He stumbled on it. He began college as an English major! A freshman course in psychology appears to have been a great influence on him. He returned the favor by becoming one of the greatest influences on the field of psychology. Sources: Bogen, Joseph E. 1999. “Roger Wolcott Sperry.” The Nobel Foundation. 1982. The Nobel Prizes 1981. Ed. Wilhelm Odelberg. Stockholm. Vasiliadis, Maria. 2002. “Split-Brain Behavior.” Serendip. Bryn Mawr College. Voneida, Theodore J. 2008. “Roger Wolcott Sperry.” Biographical Memoirs. National Academy of Sciences.
Sperry demonstrated that each hemisphere in “split brain” patients learned independently of the other hemisphere, with no exchange of information between hemispheres. For example, the right hemisphere performs many functions that are not evident in a split brain because such a patient with speech centers in the left hemisphere cannot describe what the right hemisphere does if task information does not go to both hemispheres. Careful testing showed other signs of localization. If patients fixed their eyes on the middle of a line of text, for example, all of the text to the right would be comprehended, but the subject could not describe any of the text to the left. That is because visual stimuli from the left visual field are carried by optic nerve fibers to the right hemisphere, and stimuli from the right visual field are carried to the left hemisphere, where the language centers interpret what is read. In a split-brain patient, words to the left are projected to the right hemisphere, but it cannot describe what is in its “mind’s eye” because there are no functional language centers there. Another common way that Sperry tested his subjects involved fixing the eyes on the center of a translucent screen, behind which objects were placed either on the right or left. When shown objects on the left and asked what they saw, subjects said they saw nothing. Yet if asked to point to or pick up one of the objects on the left, the task was readily accomplished by the left hand, even when the subject verbally said there were no objects there. So the visual information on the left is processed by the right hemisphere, but the right side lacks the ability to verbalize what it sees or what it is making the left hand do. Such studies not only speak to hemispheric lateralization, but they also correct a common misperception that language is crucial to consciousness. Our conscious life is certainly enriched by language, but we can also think consciously about images, objects, sounds, and other stimuli without the use of language. A related line of support comes from vivid dreams in which language is not used. Sperry’s experiments showed that the right hemisphere of the brain is responsible for many of these nonlinguistic types of consciousness. His findings also revealed that the right hemisphere influences a number of imaginative or spatial tasks, including “reading faces, fitting designs into larger matrices, judging whole circle size from a small arc, discrimination and recall of nondescript shapes, making mental spatial transformations, discriminating musical chords, sorting block sizes and shapes into categories, perceiving wholes from a collection of parts, and the intuitive perception and apprehension of geometrical principles.” Typically, the linguistic, analytical, mathematical, and logical functions of the brain are localized to the left hemisphere.
surprising that best evoked responses occurred over the speech-dominant hemisphere. Also surprising was that subjects differed greatly in the magnitude of the lateralized response. In another similar study, which measured responses to red, green, or blue stimuli, one of our subjects showed larger evoked responses over the left hemisphere involving all three colors, with as much as a seven-fold greater response to blue than to the other colors (Klemm et al. 1983). As an aside, with both black-and-white and color stimuli, subjects could not predict their brain performance, including which color yielded the largest responses. Individualized color preferences were noted, even though the luminance of each color was adjusted to be the same. Yet subjects were not consciously aware of their preferred color, and their subjective belief about preferred color did not usually match the evoked response data. This suggests that such processing, though manifest in the cortex, occurs subconsciously. Unfortunately, the implications of these studies have not been pursued. Why is a visual spatial stimulus lateralized in the speech hemisphere? Why are some people vastly more lateralized than others? What other cognitive functions are affected by this kind of lateralization? Is the degree of lateralization under genetic control or learned? If learned, what kinds of experiences produce the lateralization? What are we to make of the differences in ability to track increasing counter-phase frequency? Why are subjects not consciously aware of differences in lateralization or frequency-handling capability or color preferences? Functional indicators of lateralization based on functional MRI or the EEG are related to functional microanatomical differences. For example, Einstein’s brain had the highest ratio of glial (support) cells to neurons in the left inferior parietal lobe. He may have been born with this asymmetry, but there is no way to know. We know that experience can change brain structure. Modern studies using an anatomical MRI technique indicate experience-based changes in brain structure resulting from spatial navigation, intense studying, or implicit tasks including learning language skills, musical training, and juggling. At least one study, by R. Ilg and colleagues, combined functional MRI and anatomical MRI in the same subjects before and after learning (Ilg et al. 2008). Subjects in the experimental group practiced a mirror-reading task for 15 min every day for 2 weeks. Behavioral measures showed improvement in performance after the first week. The MRI tests were performed before and at 2 weeks after mirror-reading practice began. Significant training-related changes in blood oxygenation were seen in the right dorsal occipital cortex and left thalamus. A significant decrease appeared in the right superior parietal cortex, perhaps because learning made it easier for this part of the brain to perform its role in the mirror-reading task. At the same time, anatomical MRI revealed significant increases in gray matter in the right dorsal occipital cortex, which was the same area of peak increase in blood oxygen. This result is also entirely consistent with many studies that have shown learning enhances synaptic structure, as documented by electron microscope imaging. Just as physical exercise strengthens muscle, mental exercise strengthens brain. The mirror reading training indicates that as learning the skill progresses, participating brain
structures change their metabolic demands, which induce anatomical changes that in turn support the improved task performance. The lateralized differences in function are not surprising. Many studies have shown that performing a wide variety of tasks is accomplished by asymmetric hyperactivity in one or more brain areas. No one knows the mechanism by which the brain recruits localized areas on one side of the brain, but the fact is indisputable.
Combinatorics Many neurons in a given area of the brain, especially in the cortex, seem redundant because they have similar selectivity, such as responsiveness to stimuli. It is only natural, then, to expect the brain to average spike activity from many neurons to extract the essential message from the population. Such averaging reduces noise. But that may not be the way the brain works for everything. Information may not be reducible to average impulse firing rates. “The whole is greater than the sum of its parts.” So many people say this about human mind that the expression is trite. So trite, in fact, that the new cliché is to say that mind is an “emergent property” of the brain. What does one do with statements like these? Though intended to explain everything, do they really explain anything? Maybe the place to start is at a simpler, less vague level. Traditionally, researchers have recorded spike trains from individual neurons and assumed that neurons coordinate with each other to encode and process information. What they must coordinate is their differing CIPs. It is possible to record such trains simultaneously from many implanted microwires, but historically that was seldom done because of technical limitations. Today, simultaneous recording of multiple single units is performed, but the trick is in knowing which neurons are in the same circuit. Even so, it should be useful to identify correlated responses to specific inputs. However, looking for synchronous processes is not always reliable. Correlation coefficients of spike counts in large time bins can vanish despite the presence of input correlations (Tchumatchenko et al. 2010). Obviously, one part of the problem is knowing in advance what an appropriate time bin should be. Second, the coefficients are typically calculated pair-wise, which is not a robust way to examine a network. Finally, there is no recognition of the potential meaning of inter-spike interval patterns. Many studies have indicated that certain functions use a population code, wherein the over-all activity of many neurons is needed to represent the phenomenon. This can be done in a variety of ways, such as averaging the firing of a population or constructing a multivariate distribution of the neuronal responses. Scientists have rarely tested the idea that information might be coded by some kind of combination process of impulse patterns from different neurons in a specific circuit. This is a combinatorial problem for which there are few analytical tools used
in neuroscience research. Recall the earlier example of an exception where modulo arithmetic was used to study grid cells in the entorhinal cortex. An argument for combinatorial coding has been made for gene expression where traits depend on many genes (Kobayashi et al. 2000). And combinatorial coding most certainly accounts for the “bouquet” effect of tastes and odors (see below). However, how such coding actually works is not at all clear. In the case of brain function, perhaps a good place to start is with certain sensations. As mentioned in the section on labeled-line theory, sometimes a series of sensory inputs combines to create a certain unique sensation that no one stimulus alone produces. The ultimate effect of the stimulus is not an average of the impulses across the population of receptors but rather a combinatorial effect that preserves the individual pattern of spikes and silent periods. Averaging could readily destroy enormous amounts of potentially useful information that could only be preserved in combinatorial coding. Evidence that combinatorial codes exist in cortical circuitry was apparently first reported by Reich et al. in 2001. They had sampled up to six visual cortex neurons simultaneously (Reich et al. 2001). However, they could not know if these neurons were confined to the same cortical column circuit. Combinatoric mechanisms have been reported in the medial temporal area of visual cortex. Osborne and colleagues found that patterns of spikes and silence across a population of nominally redundant neurons can carry more than twice as much information about visual motion as the population spike count, even when the neurons responded independently to their sensory inputs (Osborne et al. 2008). The basis of their approach was to record spike trains from a set of neurons and count at every successive instant the presence or absence of a spike. For example, in a set of five neurons, only neurons two and four might show a spike, which would be characterized as a binary “word” of 01010. The information content of such coding, expressed as calculated entropy, increased almost linearly with the number of neurons examined. Specific patterns of spiking and silence exist in cortex, suggesting synergy and the likelihood of combinatorial coding in cortex. Combinatorics also seems to be a fruitful way to identify the genesis of oscillation and synchrony. Recently, Benjamin Staude and colleagues pointed out the need to study correlated activity among multiple neurons (Staude et al. 2010). Traditionally, such tests have been limited to pairwise comparisons. This group developed correlation method approximations that seem to be useful for multiple spike trains lasting up to 100 s, even if the variability changes during an epoch (i.e., the data are not statistically stationary). The approach is based on firing rate, and does not account for when in time spikes occur.
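The binary “word” idea lends itself to a small worked example. The sketch below, written in Python as an illustration rather than a reproduction of the Osborne analysis, bins simultaneously recorded spike trains into words of 1s and 0s and estimates the entropy of the word distribution; with independently firing simulated neurons, the information per word grows roughly linearly with the number of neurons, as described above. The firing rates and bin counts are assumptions:

    from collections import Counter
    import numpy as np

    def word_entropy(spike_matrix):
        """Entropy (bits) of the distribution of population 'words'.
        spike_matrix: (n_neurons, n_bins) array of 1s (spike) and 0s (silence);
        each time bin contributes one word such as (0, 1, 0, 1, 0)."""
        words = Counter(tuple(col) for col in spike_matrix.T)
        total = sum(words.values())
        probs = np.array([c / total for c in words.values()])
        return float(-(probs * np.log2(probs)).sum())

    rng = np.random.default_rng(2)
    n_bins = 5000
    for n_neurons in (1, 2, 4, 6):
        # independent neurons, each spiking in about 20% of the short time bins (assumed rate)
        spikes = (rng.random((n_neurons, n_bins)) < 0.2).astype(int)
        print(n_neurons, "neurons:", round(word_entropy(spikes), 2), "bits per word")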
Fig. 6.7 Illustration of combinatorial pattern coding for chemical sensations (such as smell or taste). A distinct odor or taste sensation, for example, may be elicited by a mixture of the same chemicals in a specific range of concentrations
For example, the retina of the eye has detectors for only three colors, yet we perceive an array of 64 or more shades of color. For chemical senses, each receptor is often specialized to respond preferentially to one chemical, although it may also be able to detect high concentrations of other chemicals. A given sensation can be generated by mixtures of different concentrations of different chemicals (Fig. 6.7). Thus, the tongue has four preferentially-tuned taste receptor types, but even non-gourmets can discriminate a wide range of taste sensations. This is the “cocktail or bouquet effect”: no single ingredient confers the taste that is ultimately perceived. Odor perception operates on the same principle, resulting in the “bouquet effect.” In all of the above cases, it is the combination pattern that confers the perceived sensation, rather than any one component receptor input. By the way, it may not be an accident that the examples cited above are sensations that can be consciously perceived. It may be that combinatorics is not only accomplished by conscious mind, but also helps to create conscious mind. That is, consciousness itself may be a property of unique combinatorial codes in widespread circuits. We shall come back to this idea in the last chapter. Much of the evidence supporting the idea of combinatorics came originally from studies of olfaction (Johnson et al. 1998; Peterson et al. 2001; Zou and Buck 2006). Somewhat recently, olfactory receptor genes were discovered. Different receptor genes are expressed in different olfactory neurons, and rarely are two receptor types expressed in the same neuron. Since any given region of the olfactory bulb receives input only from neurons expressing one type of receptor, there must be a mechanism that includes combinatorial pattern response of the various receptor types in the perception of complex odors. Other studies on olfaction show that binary odor mixtures stimulate olfactory neurons that aren’t stimulated by
Fig. 6.8 Illustration of the principle of combinatorial pattern coding as it can apply to time-distributed information processing in a population of neurons. This illustration of activity patterns along four pathways is just one of many ways to illustrate the point that the totality of activity in multiple neural pathways during a given epoch can constitute a combinatorial pattern code that is not reflected in the activity of any one pathway. Shown are two sets of activity patterns (horizontal blocks are periods of impulse discharge, and intervals between blocks are silent periods. Activity can be defined as action potential firing, neurotransmitter release, or receptor-binding phenomena.) The analytical dilemma is in knowing how to collapse the four patterns into one that characterizes each state. One untried way is to sample each train of activity as “on” or “off” and treat the data with matrix algebra
individual components alone. Such neurons appear to be dependent on some kind of combinatoric code for perception. Studies of firing patterns in the retina have shown significant differences between the patterns of groups of neurons and those of individual elements of the group. The groups of cells are able to convey more information with more efficiency. To extend these points to explaining mind, combinatorics is a way to get higher-than-expected mental capability. For example, combinatorics can be applied to the information processing that occurs in neuronal networks. Patterns from activity in multiple neurons or pathways combine to yield a distinct combination pattern that can contribute to a specific function (Fig. 6.8). Specific functions, in turn, each have their own particular pattern sets. This example assumes that the patterns (four in this case) reside or at least originate in separate pathways or channels. How could they get combined? Consider the possibility that the four pathways have shared circuit elements; that is, the circuits overlap and thus provide a way to combine their informational content. Another possibility for achieving processing of a combinatorial code might be oscillation and synchrony. The peaks and valleys of an oscillation can open and close gates to let the code pass through circuitry, and synchrony could allow the spike trains from individual neurons to flow through circuitry at the same time so that the information they contain does not get “smeared” by delays in propagation of the trains. There is a specialty area in mathematics devoted to combinatorics, but neuroscientists are generally unaware of this math. Below I illustrate a simple example of just one way to quantify combinatoric information in spike trains from multiple simultaneously firing neurons (Fig. 6.9).
Fig. 6.9 Simple way to capture combinatoric information from multiple simultaneously firing neurons. A computerized moving time window can be used to identify which of the various spike trains contain a spike within that epoch. Presence of a spike can be scored as a 1 and absence as a 0. The combinations of 1s and 0s during each successive move of the window can be used to identify any existing biologically relevant combinatorial code (1010000001, 0001101010, 0000010100, etc.). Note also the possibility of performing comparable analysis on sequentially ordered relative intervals (i.e., in a given time window, which neurons show a +, –, or 0 interval)
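A minimal sketch of the windowing scheme shown in Fig. 6.9 might look like the following Python code; the spike times, the 2-ms window, and the step size are hypothetical, and a real analysis would also have to confront the choice of window width and the statistical stationarity of the trains:

    import numpy as np

    def words_from_spike_times(spike_times, t_start, t_stop, window=0.002, step=0.002):
        """Slide a time window over several simultaneously recorded spike trains and
        score each train as 1 (at least one spike in the window) or 0 (silent).

        spike_times: list of arrays, one array of spike times (in seconds) per neuron.
        Returns one binary word per window position, e.g. '0001101010'."""
        words = []
        t = t_start
        while t + window <= t_stop:
            word = "".join(
                "1" if np.any((train >= t) & (train < t + window)) else "0"
                for train in spike_times
            )
            words.append(word)
            t += step
        return words

    # Ten hypothetical neurons recorded for 100 ms around a stimulus
    rng = np.random.default_rng(3)
    trains = [np.sort(rng.uniform(0.0, 0.1, rng.integers(3, 12))) for _ in range(10)]
    for w in words_from_spike_times(trains, 0.0, 0.01)[:5]:   # the first five 2-ms windows
        print(w)

The resulting sequence of words can then be tabulated, exactly as in the entropy sketch above, to ask whether particular combinations recur reliably with particular stimuli.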
Firing information in a whole circuit at any one instant can carry so much information that decoding it becomes demanding. For that reason, modulo arithmetic might provide a useful approach for combinatorial analysis. At this point, the application of these ideas to thinking can shift from impulse activity in individual neurons in specific circuits to the combined effect of these impulses on the over-all electrical fields that the impulses generate in the fluids outside of neurons. In any given zone of extracellular space, the field potentials reflect summation of all electrical currents, impulses and postsynaptic voltages, particularly the latter because their longer duration promotes summation. This brings us to the subject of electroencephalography (EEG).
The Electroencephalogram: Its Rise and Fall, and Recent Rise In 1929, a German psychiatrist named Hans Berger discovered that feeble electric currents could be detected from electrodes placed on the head. Since the signals were displayed by ink-filled pens that moved proportionally with the voltage, the result was basically a graph that automatically plotted voltage as a function of time. It was called an electroencephalogram (EEG, for short). Actually, the amplifiers that Berger used registered the voltages (microvolts) that were associated with those currents. This discovery quickly led to refinements that made it possible to use
the EEG for clinical medicine, in the diagnosis of EEG abnormalities that could indicate diseases such as tumors, latent epilepsy, and strokes. Apart from its obvious medical applications, the EEG attracted the attention of brain scientists, and indeed it was the focus of much of my own early work in neuroscience (Klemm 1969a). But after a few decades, the EEG fell out of favor with scientists, including me, because it did not seem to have much explanatory power. True, it did reveal predictable changes in amplitude and frequencies with various physiological states, such as daydreaming with eyes shut, sleeping, dreaming, and hard mental effort. But studies typically revealed only correlations. Another problem is that the EEG contains a mixture of frequencies, all compounded (smushed) together, making it hard to know what it all means. What needs to be understood is the relationship between EEG frequencies and the CIPs that are the ultimate field potential generators. I, and many others, got out of EEG research too soon. Today, powerful new analytical techniques enable us to discern EEG phenomena that have great theoretical significance. Before that is addressed, I need to say a few words about phase relationships in EEG from different electrodes. Depending on the particular mental state and part of the brain recorded from, the brain can generate oscillating waveforms in a wide variety of frequencies. These waveforms may come and go or change their frequency. When multiple areas generate oscillations at the same frequency, phase relations among various oscillators probably help to control the ongoing development of thought. Phase relationships are measured pair-wise. Typically this is done between signals of the same frequency coming from two places, such as theta rhythm coming from two different places in the hippocampus. But in recent years research has started on measuring phase relationships between two signals of differing frequency. In all cases, the issue of interest is whether the signals are phase-locked or jitter with respect to each other (see Fig. 6.10). While many scientists have come to regard the EEG as an “epiphenomenon,” a mere hodge-podge of voltages coming from a variety of generators in the brain’s cortex, the EEG is now gaining new attention and respect with advances in electronics and computers. Now, we no longer print out the signal with pens that dispense ink on paper (which prevents detection of higher frequencies that we now know are present and highly significant). Digitized signals allow a variety of analyses, such as rather precise localization of where a signal is coming from, mapping the spatial distribution of voltages at successive points in time, a precise analysis of the various frequency components and the phase relationships coming from multiple tissue generators, the calculation of fractal dimension, and chaos-theory analysis of the trajectory of successive voltages. In addition, by placing electrodes deep within the brain of experimental animals, and sometimes even of people, we can study EEG-like signals (extracellular voltages, “field potentials”) in specific neural pathways. What is especially useful is to use implanted electrodes to record simultaneously the field potentials and the spike trains from multiple neurons in the same area. In the late 1960s, I was an early advocate of recording spike trains from multiple
Fig. 6.10 Illustration of various phase relations among simultaneous oscillations
neurons (Klemm 1969b), but many colleagues rejected this approach because the trains were mixed and you couldn’t tell which spikes came from which neuron. This technique never caught on then because at the time it became possible to use microelectrode techniques that could isolate impulse activity from a single neuron. The technique of recording multiple-unit activity (MUA) is accomplished by having tips of implanted recording electrodes sufficiently exposed so that spike trains from several-to-many neurons can be recorded simultaneously. The same electrode detects slower EEG-like field potentials from the same area and immediately adjacent tissue. This technique provided useful insights about brain function some 40 years ago (Klemm 1970, 1971). But it wasn’t until the late 1980s that Gray and Singer in Germany showed the utility of comparing simultaneous MUA and field potentials from the same area. These investigators rightly reasoned that in a place like the cortex, many nearby neurons operate as a cluster, driven by inputs that affect multiple neurons. They made the further advance of paying attention to conditions that induce synchrony of MUA from multiple cortical regions. In cats, visual-stimulus elements were reported to drive local MUA into high-frequency (near 50 Hz) oscillation. Certain multiple oscillating populations in different regions became phase locked, which supposedly was the means by which sensory binding was achieved. They reported that the cat visual cortex exhibits high-frequency oscillating field potentials (40–60 waves/s). Within a functional column, the individual cells fired spikes coherently. This led to the idea that it is oscillatory coherence that binds together the apparently fragmentary representations of an image (Singer 1993a, b).
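The kind of spike-field comparison described above can be sketched with standard tools: band-pass the field potential to the gamma range, take its instantaneous phase with a Hilbert transform, read off the phase at each spike time, and summarize the locking with the length of the mean resultant vector (near 1 when spikes hug one phase, near 0 when they are indifferent to phase). Everything in the Python sketch below is simulated, and the parameters are assumptions rather than data from the studies cited:

    import numpy as np
    from scipy.signal import butter, sosfiltfilt, hilbert

    fs = 1000.0                                  # samples per second (assumed)
    t = np.arange(0, 10, 1 / fs)
    rng = np.random.default_rng(4)

    # Simulated field potential: a 50-Hz (gamma) rhythm buried in noise
    lfp = np.sin(2 * np.pi * 50 * t) + rng.normal(0, 1.0, t.size)

    # Simulated multi-unit spikes that prefer one phase of the gamma cycle
    spike_prob = 0.02 * (1 + np.cos(2 * np.pi * 50 * t))
    spike_idx = np.flatnonzero(rng.random(t.size) < spike_prob)

    # Band-pass the field potential to the gamma band and take its instantaneous phase
    sos = butter(4, [40, 60], btype="band", fs=fs, output="sos")
    gamma_phase = np.angle(hilbert(sosfiltfilt(sos, lfp)))

    # Phase-locking value: length of the mean resultant vector of the phases at spike times
    plv = float(np.abs(np.mean(np.exp(1j * gamma_phase[spike_idx]))))
    print("spikes:", spike_idx.size, "phase-locking value:", round(plv, 2))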
Oscillation and Synchrony Wolf Singer (1943–)
In 1943, in the midst of World War II, one of the most prominent neuroscientists of contemporary times, Wolf Singer, was born in Munich, Germany. Currently the recipient of over 35 prestigious awards, Dr. Singer has been immersed in the study of neuroscience since receiving his M.D. from the University of Munich in 1968. Following postdoctoral training in psychophysics and animal behavior at the University of Sussex in the early 1970s, Dr. Singer was hired at the Max Planck Institute for Psychiatry in 1972 and became a professor the following year. In 1973 he took the position of lecturer at the Technical University in Munich, where he rose to the rank of Professor of Physiology in 1980. The majority of his career, since 1982, has been spent as the Director at the Max Planck Institute for Brain Research in Frankfurt.
Dr. Singer’s studies have largely focused on explaining organization of the cerebral cortex, and more recently, the binding mechanisms that unite sensory information into the representations that are perceived. Namely, he has sought to implicate synchronization as this binding mechanism, and his recent paper on the matter, Binding by Synchrony, has been the source of much attention and inspiration. Over 200 scientific papers have been authored by Dr. Singer, as well as three books and a number of essays. Additionally, Dr. Singer is a member of many well-known and well-respected societies, academies, commissions, advisory boards, and editorial boards. Sources: “Prof. Dr. Wolf Singer.” 2008. Max Planck Institute for Brain Research. “Wolf Singer Biography.” 2008. Cajal on Consciousness.
Today, many investigators pursue this lead. For example, workers in Tübingen, Germany, recently reported intriguing results from simultaneous MUA and field potential recording in the alert monkey visual cortex during presentation of natural movies (Whittingstall and Logothetis 2009). Using single stimulus trials, they observed that EEG power in the gamma band (30–100 Hz) and phase in the delta band (2–4 Hz) significantly correlate with the MUA response. Specifically, the MUA response was strongest only when increases in EEG gamma power occurred during the negative-going phase of the delta wave, thus revealing a frequency-band coupling mechanism that can be exploited to infer population spiking activity. This finding may open up a new dimension in the use and interpretation of EEG in normal and pathological conditions.
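A minimal sketch of the frequency-band coupling just described: estimate the instantaneous phase of the delta band and the instantaneous power of the gamma band from the same signal, then compare gamma power on the falling (negative-going) half of the delta cycle with the rising half. The simulated signal, bands, and filter settings in the Python sketch below are assumptions for illustration only:

    import numpy as np
    from scipy.signal import butter, sosfiltfilt, hilbert

    def band(x, lo, hi, fs, order=4):
        """Zero-phase band-pass filter between lo and hi Hz."""
        sos = butter(order, [lo, hi], btype="band", fs=fs, output="sos")
        return sosfiltfilt(sos, x)

    fs = 500.0
    t = np.arange(0, 60, 1 / fs)
    rng = np.random.default_rng(5)

    # Simulated signal: a 3-Hz delta rhythm whose falling (negative-going) half carries extra 60-Hz gamma
    falling = np.cos(2 * np.pi * 3 * t) < 0                  # where the delta wave is decreasing
    eeg = (np.sin(2 * np.pi * 3 * t)
           + (0.2 + 0.4 * falling) * np.sin(2 * np.pi * 60 * t)
           + rng.normal(0, 0.5, t.size))

    delta_phase = np.angle(hilbert(band(eeg, 2, 4, fs)))         # instantaneous delta phase
    gamma_power = np.abs(hilbert(band(eeg, 30, 100, fs))) ** 2   # instantaneous gamma power

    # For a sine-like delta rhythm, the wave is decreasing roughly where its Hilbert phase lies in (0, pi)
    neg_going = (delta_phase > 0) & (delta_phase < np.pi)
    print("mean gamma power, delta falling:", round(gamma_power[neg_going].mean(), 3))
    print("mean gamma power, delta rising: ", round(gamma_power[~neg_going].mean(), 3))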
The Importance of Oscillation and Synchrony

Oscillation is a way of packaging impulses. Impulses can propagate through a circuit in clusters, with flow varying according to the peaks and valleys of the oscillating voltage field. There is ample indication that the frequency of oscillation can affect information processing, and high-frequency gamma activity has attracted the most attention. High-frequency oscillation could allow more impulses to flow in a given period than slow-frequency oscillation.

The role of oscillation in thinking is gradually being discovered. Among the many recent findings is the discovery (Haenschel et al. 2009) that oscillation of cortical field potentials is critical for working memory, which is the aspect of memory that I think is a key component of orderly thought. The point is emphasized by the study of schizophrenics, who have conspicuous deficits in working memory and orderly thought. Working memory actually has three phases: initially encoding the information, maintaining it, and then using it. Schizophrenics, compared to controls, had severe encoding deficits, and at the same time showed severe reductions in alpha, theta, and beta oscillations. During the memory-maintenance phase, schizophrenics were not much different from normals, but in the latter stages gamma activity was suppressed. In the retrieval phase, both theta and gamma activity were reduced in schizophrenics. Thus it seems reasonable to suggest that the impaired thinking of schizophrenics arises from the deficiencies of their cortical circuits in generating oscillation across a wide frequency band. Effective therapy, if there is any, therefore needs to be aimed at addressing the oscillation deficits.

In normal humans, "hard conscious thinking" is typically associated with increased neuronal firing in the cortex and with high-frequency oscillations of field potentials. Presumably, the rapid activity makes good task performance possible. How? I suspect that high-frequency oscillation facilitates throughput of impulses, enabling the dense flow of information that is needed in complex processing. The amount of gamma activity is often used as an index of how hard the brain is working. For example, the power (magnitude) of gamma activity in humans increases linearly with the number of nonsense syllables an experimental subject is required to hold in working memory.
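Gamma power of the kind used as a workload index can be estimated from a power spectral density of the EEG. The following is a minimal sketch using Welch's method; the 30–100 Hz band limits and the synthetic signal are assumptions made for illustration.

```python
# Sketch: estimating gamma-band (30-100 Hz) power from a stretch of EEG with
# Welch's method, the kind of quantity used above as an index of mental workload.
# The band limits and the synthetic signal are illustrative assumptions.
import numpy as np
from scipy.signal import welch

fs = 500
t = np.arange(0, 30, 1 / fs)
rng = np.random.default_rng(2)
eeg = np.sin(2 * np.pi * 40 * t) + rng.standard_normal(t.size)   # stand-in for recorded EEG

f, psd = welch(eeg, fs=fs, nperseg=2 * fs)            # spectral density, 0.5 Hz resolution
in_band = (f >= 30) & (f <= 100)
gamma_power = np.sum(psd[in_band]) * (f[1] - f[0])    # integrate the PSD over the gamma band
print(f"gamma-band power: {gamma_power:.3f}")
```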
In both monkeys and humans, impulse activity and gamma-band oscillation increase in regions of the parietal cortex when there is conscious intent to perform a movement, such as planning an eye movement or a reach for an object. In humans, different gamma frequencies seem to code selectively for planning eye movements versus reaching movements (Van der Werf 2010). Since these are willed actions under the conditions of study, it seems clear that the intended movements are caused by the circuit throughput of nerve impulses that creates the ultimate muscle contractions. It is tempting to consider this evidence for free will, because these cortical areas are upstream from the motor cortex neurons that code, for example, the reaching movements. The idea of free will is not universally accepted by scientists, and in the next chapter I will critique research in this area.

Gamma oscillation in the cerebral cortex arises from local cortical circuits that oscillate because they contain fast-switching inhibitory neurons. Inhibition in the cortex arises largely from release of the neurotransmitter GABA. When GABA acts on its target neurons in cortical circuits, impulse flow in the circuit is temporarily interrupted, followed by recovery as the released GABA is removed (by re-uptake, enzyme destruction, and diffusion and dilution). Many GABA-releasing neurons have extensive arborizations of membranous processes that enable them to exert inhibitory pacing of huge neuronal networks. A research team in France recently demonstrated such "hub neurons" in brain slices of hippocampus, using analysis of multi-neuron calcium activity that helps to mediate impulse activity in neurons (Bonifazi et al. 2009). Perturbation of a single hub neuron could influence activity throughout an entire network. Collectively, such hub neurons are thought to orchestrate widespread network synchronization.

The functional significance of oscillation is thought to be its influence on the throughput of impulses within a circuit, and that in turn should affect thinking. As an example, Richard Edden and colleagues in the U.K. monitored GABA concentrations and field-potential activity in the visual cortex of human volunteers who were asked to perform visual discrimination tasks involving detection of small changes in the vertical rotation of a pattern of alternating light and grey stripes (Edden et al. 2009). The precision of discrimination correlated strongly with GABA levels and with gamma frequency. That is, the variation among subjects in discrimination capability was accounted for by their differences in GABA content and gamma frequency.
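The basic idea that an excitatory population reciprocally coupled to a fast inhibitory population settles into rhythmic firing can be illustrated with a two-population rate model of the Wilson-Cowan type. The sketch below is only a toy model: the connection weights and sigmoid parameters are the classic textbook limit-cycle values, and the millisecond time constants are my own assumption to place the rhythm at a fast time scale; none of this is taken from the studies cited above.

```python
# Sketch: a two-population excitatory-inhibitory rate model (Wilson-Cowan form),
# illustrating how recurrent inhibition can pace a circuit into rhythmic firing.
# Weights and sigmoid parameters are the classic textbook limit-cycle values;
# the time constants are assumptions.
import numpy as np

def S(x, a, theta):                       # Wilson-Cowan sigmoid, shifted so S(0) = 0
    return 1 / (1 + np.exp(-a * (x - theta))) - 1 / (1 + np.exp(a * theta))

dt, T = 0.1, 500.0                        # ms
n = int(T / dt)
tau_e, tau_i = 5.0, 10.0                  # membrane time constants (ms), assumed
wee, wei, wie, wii, P = 16.0, 12.0, 15.0, 3.0, 1.25

E = np.zeros(n)
I = np.zeros(n)
for k in range(n - 1):
    E[k + 1] = E[k] + dt / tau_e * (-E[k] + (1 - E[k]) * S(wee * E[k] - wei * I[k] + P, 1.3, 4.0))
    I[k + 1] = I[k] + dt / tau_i * (-I[k] + (1 - I[k]) * S(wie * E[k] - wii * I[k], 2.0, 3.7))

spec = np.abs(np.fft.rfft(E - E.mean()))                 # dominant rhythm of the E population
freqs = np.fft.rfftfreq(n, d=dt / 1000.0)                # convert ms steps to Hz
print(f"dominant frequency ~ {freqs[spec.argmax()]:.1f} Hz")
```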
Synchrony

Like twinkling lightning bugs at night, isolated neurons can suddenly synchronize, seemingly at random. A model to explain such random rhythms has recently been published by Jean-Philippe Thivierge and Paul Cisek (Thivierge and Cisek 2008). They propose that random synchronization results from positive excitatory feedback originating from recurrent connections between the cells. A few dominant cells become active just before the others do, and when enough cells in the group become active, a threshold is reached at which all cells in the group are recruited into synchronous activity. The benefit of such a mechanism is that it would help account for why the brain can respond so flexibly. It is not "stuck" in any pre-set mode of operation.
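The firefly-like emergence of collective rhythm among weakly coupled oscillators is often illustrated with the Kuramoto model: once the coupling strength exceeds a threshold relative to the spread of natural frequencies, the group locks together. The sketch below is a generic illustration of that threshold behavior, not the Thivierge-Cisek circuit model itself, and all parameter values are arbitrary choices.

```python
# Sketch: spontaneous synchronization of weakly coupled oscillators (the Kuramoto
# model). A generic firefly-style illustration of how synchrony can emerge once
# coupling crosses a threshold; parameter values are arbitrary.
import numpy as np

rng = np.random.default_rng(3)
N, dt, steps = 200, 0.01, 3000
omega = rng.normal(2 * np.pi * 10, 2 * np.pi * 0.5, N)   # natural frequencies near 10 Hz
theta = rng.uniform(0, 2 * np.pi, N)                     # random starting phases
K = 8.0                                                  # coupling strength

for _ in range(steps):
    z = np.exp(1j * theta).mean()                        # complex order parameter
    r, psi = np.abs(z), np.angle(z)
    theta += dt * (omega + K * r * np.sin(psi - theta))  # mean-field Kuramoto update

r_final = np.abs(np.exp(1j * theta).mean())
print(f"order parameter r = {r_final:.2f}")              # near 1 means the group has synchronized
```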
Oscillations in multiple, linked circuits may interact by becoming time locked to each other. Synchronization refers to the degree to which CIPs or their associated field potentials are time-locked to each other. Oscillations from different circuits that have interconnections have an obvious need for synchronization. Synchronization of different oscillators is like crickets chirping in unison. A group of neurons oscillating in one set of circuits can resonate with circuit activity elsewhere. The underlying CIPs in multiple, linked circuits can likewise be time locked. Such circuits are not firing independently of each other; rather, the whole point seems to be to expedite the sharing of impulse influence among the various linked circuits.

There are at least three ways that synchronization of oscillation has been studied:
1. Synchronization of oscillation present in two different locations in the brain,
2. Cross-frequency synchronization, where there is time locking between two different frequencies, and
3. Synchronization between impulses and the field potentials in the same area.

I discussed this third type in the earlier discussion of spike precession in the hippocampus that occurs as an animal moves its body through space. Let us recall how CIPs and field potentials in the same region relate to each other. Even if largely created by summated postsynaptic potentials, field potentials are ultimately driven by impulse activity. Even though a recording electrode picks up both kinds of signal, the spikes that are seen are not necessarily the ones that are causing the field potential with which they are associated. Field potentials spread over rather wide expanses of neural tissue, and many neurons may be caught up in an electrical field that they did not participate in creating or sustaining. Their impulses could just be "passing through," not helping to create the oscillation. Very recent studies suggest that in the cortex, the degree of synchrony between impulses and associated field potentials may be affected by stimulus intensity (Vinck et al. 2010). Investigators recorded simultaneous impulses and gamma field potentials in the visual cortex of monkeys during visual stimulation. Within a given gamma cycle, neurons fired earlier in the cycle when stimulus intensity increased. This, of course, could be a way of coding stimulus intensity, but also of influencing the impact of the impulses on their targets.

CIPs of different circuits can synchronize as a result of simultaneous sensory experience. For example, if you see lightning and hear it at more or less the same time, your brain develops time-locked CIPs for both the sight and the sound. The lightning's visual stimulus may activate a map in the visual cortex, and the re-entrant propagation could set up oscillation. At the same time, similar processes could be occurring in the sound processing going on in the temporal cortex. The two maps are connected by fiber tracts linking visual and auditory cortex, and both circuits could be oscillating coherently, which would serve to bind flash and sound. Circuits that support this kind of re-entrant interaction have a definite cycle time that varies with the number of synapses involved in projecting impulses from one zone into another and then back again. Moreover, this kind of connectivity provides
an anatomical substrate for repeated cycling, that is, oscillatory CIPs with the same fixed period. For the 50 or 60 cycles/s brain waves that seem to be involved when the brain is "working hard" consciously, this would take a recurrent pathway of about 10 neurons, assuming that each one adds a synaptic delay of about 2 ms. How consciousness emerges from such CIPs remains unknown. These ideas do raise the possibility that the size of active circuits diminishes as the brain shifts from idle to "hard thinking" mode. I say this because during drowsiness or sleep the brain waves become dominated by large, slow waves on the order of 1–2/s, which would have to be generated by the long cycle time of CIPs involving several hundred neurons in large, presumably widely distributed circuits.

One of the intriguing facts about synchronization of oscillations in the beta to gamma range is that it often occurs with zero or near-zero phase lag. How can that be, especially in circuits that are widely dispersed? In their review of synchronous oscillations, Uhlhaas and colleagues argue that the brain has no known single center or area where information converges that could serve as a supraordinate coordinating center for all oscillations (Uhlhaas et al. 2009). That is not quite true, for as mentioned earlier, the brainstem reticulum does indeed receive convergent input from all senses except olfaction. But interest in reticular formation research has long since waned from its heyday in the 1950s, and its possible role in triggering high-frequency oscillations and their synchrony is not being investigated. Yet it seems clear that the reticulum is a cause of cortical activation and mental arousal, which are in turn associated with the appearance of high-frequency, synchronous oscillations. Perhaps the minimal phase lag arises from the global nature of reticular-formation control of consciousness. Assuming consciousness arises from simultaneous disinhibition of broad expanses of neocortex, the resulting high-frequency coherence may not be all that surprising. It is long past time for neuroscientists to study concurrent CIP and field-potential activity in the reticulum and in the cortex. Such research needs to be done in animals, because sticking electrodes into the brainstem of humans can damage vital functions of non-conscious mind.

Near-zero lag phenomena, in general, can be driven from any cortical or subcortical area that supplies a common drive pacing the oscillation in all circuits to which it projects. The Uhlhaas group describes situations where zero-lag synchrony can emerge as a network property of circuits that operate via recurrent inhibition. Network topology is also a factor. Another possibility for zero-lag coupling is the case where different circuits are coupled by gap-junction communication, which is direct electrical-current communication that does not arise from neurochemical transmission at synapses. Near-zero phase-lag synchrony is common and involves consistent phase differences of fractions of an oscillatory cycle. In heterogeneous networks, portions of the network may have resonant frequencies that have to be dynamically adjusted to conform to the more global dynamics. Such adjustments can be expressed as slight oscillatory phase shifts. Quantifying phase relationships is an area of great current interest, and many new ideas are under development.
One approach is to find some sort of global coherence measure as a way to index ongoing changes during a continuum of all sorts of
mental states. For example, a researcher might track global synchrony as a person goes to sleep, shifts through the various stages of slow-wave sleep to REM, and then returns to wakefulness. Such measures include techniques being used by Cimenser and colleagues (Cimenser et al. 2010) and more esoteric methods being developed by Scott Kelso's group (Kelso and Tognoli 2010). Both approaches assume consciousness is a system-level process and emphasize field-potential power spectral coherence in ways that go beyond the limitations of pair-wise correlation measures to provide single measures of global activity that can be tracked at successive points in time. However, explanatory power is lost. It is not obvious from such global measures what different parts of the brain are doing to create each given mental state. The measures are not much more than a neuronal correlate, although I submit that they could be very significant correlates.
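One simple global measure, not necessarily the specific methods of the groups just cited, is the mean pairwise phase-locking value across all channel pairs in one frequency band, which could then be tracked in sliding windows across sleep and waking. A minimal sketch follows; the alpha band and the synthetic multichannel data are assumptions.

```python
# Sketch: a simple global synchrony index, the mean pairwise phase-locking value
# across all channel pairs in one frequency band. A generic illustration, not
# the specific published measures; band and data are assumptions.
import numpy as np
from itertools import combinations
from scipy.signal import butter, sosfiltfilt, hilbert

fs, n_ch = 250, 8
t = np.arange(0, 20, 1 / fs)
rng = np.random.default_rng(4)
common = np.sin(2 * np.pi * 10 * t)                          # shared 10 Hz rhythm
eeg = common + 0.8 * rng.standard_normal((n_ch, t.size))     # eight noisy channels

sos = butter(4, [8, 12], btype="bandpass", fs=fs, output="sos")   # alpha band, assumed
phase = np.angle(hilbert(sosfiltfilt(sos, eeg, axis=1), axis=1))

plvs = [np.abs(np.mean(np.exp(1j * (phase[i] - phase[j]))))
        for i, j in combinations(range(n_ch), 2)]
print(f"global synchrony index: {np.mean(plvs):.2f}")        # 0 = no locking, 1 = perfect locking
```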
Function of Synchrony

What is the functional consequence of coherent firing? Coherence might improve the brain's efficiency, in that many neurons can be recruited into other circuitry. Spike synchrony in converging circuits enhances information transfer and accelerates processing. A recent study of thalamic neurons in the cat revealed that the visual cortex spike pattern was determined by the temporal pattern as well as the rate of synchronous inputs from the thalamus (Wang et al. 2010). Many investigators have implicated synchrony as a mechanism for selective attention, especially high-frequency synchronization in the gamma range. This perspective expands the role of consciousness from being just a veto mechanism to one involving expanded perceptual awareness, judgment, wisdom, insight, and, yes, free will.

Phase relationships can govern the relative timing of neuronal spike discharges, thus promoting a temporal information code. This does not mean that all neurons located in a region of synchronized oscillatory field potentials will themselves be time-locked to the oscillation. Such neurons may "belong" to other circuits, that is, be captured by the activity in the circuits to which they belong. Some investigators are unduly perplexed by finding single neurons in an area that seem to operate independently of the prevailing oscillatory voltage field.

The importance of phase relations to spike timing has already been expressed in the earlier discussion of phase precession of "theta" field-potential oscillations and the associated "place" neurons in the hippocampus. These oscillations are not time locked to a stimulus but rather are generated by internal mechanisms. The neurons discharge during particular phase shifts in a context-dependent way. In the cortex, theta oscillation co-exists with beta and gamma oscillations, and the interaction and phase shifting of these frequencies no doubt have a major impact on higher brain function.

Incidentally, the memory-consolidation functions of the hippocampus have recently been attributed to synchronous oscillations in the gamma region of 30–100 Hz (Jutras et al. 2009). Investigators recorded neuronal activity in the hippocampus
while macaque monkeys learned a visual recognition memory task. During the encoding phase of this task, hippocampal neurons displayed gamma-band synchronization. Additionally, enhanced gamma-band synchronization during encoding predicted greater subsequent recognition memory performance. Perhaps the mechanism lies in the possibility that synchronization facilitates the synaptic changes necessary for successful memory encoding. Such synchronization is not surprising, given that the learning of such tasks is driven by sensory input.

We do need to consider how CIPs and their associated field potentials become time-locked under circumstances when thoughts are not directly triggered by proximate sensory stimuli. That is, how is coherence driven when one is thinking about something remembered, or about some creative new thought that is only tangentially related to remembered experience? And how does the brain shift between operating in large circuits with many neurons and smaller circuits with fewer neurons? The disinhibition explanation of arousal discussed earlier could account for this.

In the earlier discussion of oscillation in Chap. 4, I mentioned that oscillation produces the effect of chopping time into repeated short segments ranging from a half second or more to 10 ms or less. Two things need to be emphasized about that in this chapter. First, the oscillation generated by any given neuronal source generator may be quite localized to a pool of interneurons acting on a nearby population of output neurons. But these output neurons may propagate oscillatory output in parallel to widely distributed targets. The second point is that oscillations can have varying phase relationships to external events and to oscillations in other circuits.

The term coherence, as neuroscientists use it, usually means simultaneous or time-locked activity in two or more areas of the brain. Coherence is a statistical measure of phase consistency between two time series. Historically, coherence has been studied by making pair-wise correlations between EEG frequencies from multiple scalp or intracerebral locations. Fourier mathematical methods for examining frequency spectra provide a classical way to study EEG coherence, because phase relationships at specified frequencies can be identified (Morgan et al. 2008). Our own study provided comparable (and I think more correct) analysis using wavelet-packet coherence. I am not aware of this coherence method being used by other neuroscientists.
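The classical Fourier-based, pair-wise version of this measure can be computed from Welch cross-spectra, as in the minimal sketch below. This is a generic illustration of spectral coherence between two channels, not the wavelet-packet method just mentioned, and the synthetic signals are assumptions.

```python
# Sketch: classical Fourier-based coherence between two EEG channels, computed
# from Welch cross-spectra. A generic illustration of pair-wise spectral
# coherence; the synthetic signals are assumptions.
import numpy as np
from scipy.signal import coherence

fs = 250
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(5)
shared = np.sin(2 * np.pi * 10 * t)                    # rhythm common to both channels
ch1 = shared + rng.standard_normal(t.size)
ch2 = shared + rng.standard_normal(t.size)

f, Cxy = coherence(ch1, ch2, fs=fs, nperseg=2 * fs)    # magnitude-squared coherence by frequency
alpha = (f >= 8) & (f <= 12)
print(f"mean alpha-band coherence: {Cxy[alpha].mean():.2f}")   # high near the shared 10 Hz rhythm
```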
Coherent activity in multiple brain areas can also be detected by the other popular method of brain-function analysis, fMRI, though not on a precise time scale. Although this imaging allows investigators to see brain areas light up at more or less the same time, the time resolution does not allow precise calculation of phase relationships among regions. While technological enhancements of the MRI technique can be expected, they are never likely to have the millisecond-level time resolution that is always available in electrical recordings. However, newer modifications of the technique are promising. One recent study of humans in a relaxed, resting state found coherent activity in thalamic and visual cortex areas that participate in generating the alpha rhythm (Zou et al. 2009). The whole thalamus showed negative fMRI correlations with the visual cortex and positive correlations with its contralateral counterpart in the eyes-closed condition, and these correlations were significantly decreased in the eyes-open condition. Such events are consistent with previous findings from electroencephalography, in which alpha is abolished and replaced by faster activity during the eyes-open resting state. Previous imaging studies have revealed resting-state correlations that involved motor cortex, visual cortex, and language areas of cortex. Other studies have shown concurrent metabolic increases in some parts of distributed networks at the same time that other parts of the network showed decreases. Task load can produce a dramatic effect on metabolic relationships among brain areas.

One function of coherent oscillation, particularly in the gamma band, seems to be "sensory binding," the process whereby sensory input from multiple sources in parallel pathways is bound together to reconstruct the original stimulus. For example, recall the earlier mention of the work of David Hubel and Torsten Wiesel showing that a visual scene is "deconstructed": the image is broken down into short line segments by sensory neurons in the retina and at various junctures along the pathway to the visual cortex, and a given neuron captures only one of these segments. Each neuron carries only a small piece of the information in the scene, in the form of short lines and edges. Eventually all the pieces have to be put together (i.e., bound) in the visual cortex, and this process seems to be accomplished by gamma activity. Gamma oscillation of isolated neurons readily synchronizes, because large excitatory pyramidal cells in the cortex are reciprocally connected to inhibitory neurons whose large arbors of fibers can interact with more than one neuron. The time locking of synchronized neurons provides a way for each neuron to share its piece of the total image and thus achieve what is called sensory binding. Synchronization can arise from the connections in network architecture, which often contains strategically placed inhibitory neurons that can pace a repeated firing pattern. Outside stimuli can also serve as pacemakers.

One of the most remarkable things about neuronal oscillation is that a given neuron can participate in multiple-frequency oscillations. Different regions of a neuron's dendrites, cell body, and axon have different distributions of ion channels, and these can open and close on different time scales. One advantage of oscillatory synchronization is that it is energetically efficient (remember, the brain consumes about 20% of all the oxygen used by the whole body). If neurons fire synchronously, it takes fewer of them to produce a given network effect than if they fired asynchronously. The default state of the brain is slow oscillation, which is most prominent during deep sleep. I say default because the slow oscillation self-organizes and occurs spontaneously without an external driver. The "rest" that sleep provides may well come from the lowered metabolic demands enabled by slow oscillations.

A binding problem is inherent in brain function. Because so much of the brain's information is fragmented and distributed in parallel pathways, when that information has to be "read out," some mechanism has to exist to bind all the elements together. The leading candidate mechanism is oscillation, particularly coherent oscillation. Unfortunately, relatively few investigators use frequency-coherence analysis methods, and the arsenal of statistical correlation techniques for multiple
simultaneous correlations remains to be adequately developed. We are generally limited to comparing data from one pair of electrodes at a time.

Not everyone accepts the idea that high-frequency coherence mediates the binding of fragmented sensory elements, such as the bars and edges in a visual scene (Shadlen and Movshon 1999). The criticism is that the evidence that synchrony promotes binding is indirect and incomplete at best. It is true, as the critics argue, that synchrony is only the signature of sensory (and presumably cognitive) binding. No explanation is yet available for how synchrony actually achieves binding. Yet even the critics conclude that synchrony must be important, and it might be especially important to the issue of consciousness. To be dismissive of oscillatory synchronization is a kind of physiological nihilism and is not warranted by the huge number of phenomena with which it has been associated. Some studies of coherence of the magnetic fields generated by EEG currents, using so-called magnetoencephalography (MEG), show a great deal of coupling across brain areas during both alert wakefulness and dream sleep. However, in dream sleep the "outside world" is blocked off, and the gamma rhythm, which is so easily reset during wakefulness, is resistant to outside input during dreaming. Earlier studies had established the dogma that brain electrical activity is the same during wakefulness and dreaming, but now we have compelling evidence that coherence analysis can reveal important distinctions that often go undetected.
EEG Coherence and Consciousness

Oscillation of brain electrical activity occurs in both subconscious and conscious states. Only recently has research been devoted to the possibility that phase and coherence variables might account for the differences between the two states. As explained earlier, oscillation has many consequences. When several or more interacting circuits oscillate, their phase relationships can entrain the respective activities when the oscillations become synchronous. Presumably, the information (nerve impulses) in the interacting circuits is shared as packets that transfer during a favorable half cycle of the waves. At a global level, oscillatory phase shifting might underlie the genesis of consciousness.

How can synchronous oscillation promote conscious thought? Oscillation packages the delivery of circuit throughput to other circuits (think packet processing). Time locking of different oscillations facilitates the sharing of throughput among the participating circuits. Such sharing augments the richness of information communication and processing, presumably enabling the capability to think in an expanded mode in which information is not only processed but processed in reference to an expanded context of self-awareness. In other words, a brain operating coherently may acquire an additional capacity, the capacity to create a frame of reference, the sense of self.

It is common to regard oscillation, especially its synchronization, as intimately related to levels of consciousness. Susan Pockett and colleagues (Pockett et al. 2009) found that long-range phase synchrony is present most of the time when the
subjects were conscious. As an aside, their paper provides an excellent illustration of the technical issues involved in EEG synchronization research. Their methods facilitated inspection of synchrony across the whole EEG spectrum, and their results (and those from my lab and others) suggest that consciousness may involve not only gamma-frequency synchrony, but the whole frequency spectrum generated by brain oscillators. That study also revealed that the development of synchrony at the scalp appears to be relatively slow, on the order of 100 ms. One implication is for studies of free will, which have generally used the unlikely assumption that conscious choices and decisions are near-instantaneous (see Chap. 7's discussion of free-will debates). The other implication is for theories of consciousness. The slow speed of development indicates that synchronization is more likely to involve chemical synapses than gap junctions, electric fields, or quantum non-locality (see Chap. 8's discussion of "quantum consciousness").

It is important to study the topography of synchrony in different states of consciousness, such as alert wakefulness and sleep. In one such study, EEG coherence was strengthened between fronto-temporal cortical regions within a broad frequency range during slow-wave sleep (SWS), but to the detriment of coherence between temporal and parieto-occipital areas, suggesting underlying compensatory mechanisms between temporal and other cortical regions. In both cases, coherence built up progressively across the night, although no changes were observed within each SWS period. No electrophysiological changes were found in rapid eye movement sleep (dream sleep) (Cantero et al. 2002). In another human study (Cantero et al. 2003), there was no evidence of theta-band coherence between the hippocampus and the cerebral cortex in any brain state. However, at sites within the hippocampus and cortex, theta activity was coherent during various stages of alert wakefulness and dream sleep, but not ordinary sleep. Clearly, we don't have enough information to understand the relationship between EEG coherence and states of consciousness.

Another, more recent, illustration of the importance of oscillation phase comes from the report by Cavanagh and colleagues (Cavanagh et al. 2009). They were studying the so-called action-monitoring network that involves the medial and lateral prefrontal cortex (mPFC and lPFC) in humans. This circuitry has been implicated by others in monitoring behavior for errors and controlling corrective action. They extracted theta-band oscillatory phase and power data from the EEG during a behavioral task. When a behavioral trial produced an error, theta power diminished in the mPFC immediately before the error and increased afterwards. Also, during an error trial the phase synchrony between the two regions increased greatly, suggesting that this might be the way the monitoring system informs itself that an error has been made and that correction is needed.

Another example of the role of synchrony in cognition has been observed with changes in attention level. When monkeys and humans make a conscious decision to pay attention to visual stimuli, brain evoked responses and gamma EEG synchrony become enhanced in the ventral visual cortex pathways. At issue is the role of other brain areas, particularly those such as the prefrontal and parietal cortex
which could be sources of top-down intent to focus visual attention. Electrical stimulation of neurons in the frontal eye field part of the prefrontal cortex enhances responses in area V4 of the ventral visual pathways. And when monkeys were rewarded for focusing attention on a visual stimulus, neuronal firing increased in both the prefrontal and visual cortical areas (Gregoriou et al. 2009). The increased activity appeared first in the prefrontal areas, suggesting that these neurons were driving the circuit coupling that mediated attentiveness. Under these conditions, the field potentials became synchronized in the gamma band between the two cortical areas, while low-frequency activity became unsynchronized. During attentiveness, the correlation between spikes and field potentials within each area also increased. Similar synchronization shifts during attention were seen at another frequency tested (22 Hz), but not at the lower frequency of 5 Hz.

Thinking ability, as manifested by IQ for example, appears to depend on coherence. Across a wide range of ages, EEG coherence seems to correlate with intelligence (Thatcher et al. 2005). Grouping subjects by high IQ (Wechsler scale, above 120) and low IQ (below 90) revealed that the EEG differences between the groups were most readily distinguished by EEG phase and coherence in the frequency bands tested (1–30 Hz). I would expect the effect to be even more marked in the gamma band. The statistically significant effects involved both negative and positive coherences, at both within- and between-hemisphere locations. Amplitude-independent measures such as coherence were more strongly correlated with IQ in this multivariate analysis than were the power measures of the EEG. This indicates that the network properties of shared information and coupling, as reflected by EEG coherence, are the most predictive of IQ. Thatcher concludes that general intelligence is positively correlated with faster processing times in frontal connections, as reflected by shorter phase delays. Simultaneously, intelligence is positively related to increased differentiation in widespread local networks or local assemblies of cells, as reflected by reduced EEG coherence and longer EEG phase delays, especially in local posterior and temporal lobe relations. Since IQ scores are always obtained under conditions of consciousness, this provides yet another indication of the relationship between coherence and consciousness.

A refreshing way of looking at synchrony is provided by a recent study that examined the phase relation of action-potential oscillations relative to subthreshold membrane-potential oscillations (Nadasdy 2009). A three-layer neural-network computer simulation showed that such phase relations can preserve the spatial and temporal content of a neuron's input. A fundamental premise is that the timing of oscillatory spike initiation depends on the phase of the membrane potentials (Llinas et al. 1991). Additionally, numerous investigators have shown that membrane potentials can synchronize with various phase gradients in local field potentials across large populations of neurons, with attendant synchrony of nerve impulses.

Another computer simulation study provides a model for how synchrony can provide an efficient learning scheme for detection and later recognition of CIPs (Masquelier et al. 2009). The modeling was inspired by growing experimental evidence that neural networks use CIPs and not just firing rates and that field potential phase
participates in the encoding and processing of information. The approach used impulse patterns across an array of simulated spiking neurons at successive increments of time. The results showed that a single neuron could robustly detect a pattern of input pulses automatically encoded in the phases of a subset of its inputs. Oscillations greatly facilitated "learning" of specific simulated CIPs. Comparison to conventional rate-based codes revealed that the interval patterns and phase synchrony were much superior for learning and high-speed decoding. One problem with the findings is the authors' interpretation that oscillation partially formats the input spike timing, which fails to account for the fact that oscillations are created by oscillatory spike firing. It is sort of a chicken-and-egg problem. But from both experimental and theoretical perspectives, it seems that the brain thinks with CIPs and their associated oscillatory electrical activity.

Another way to make this point is to reflect on the fact that many coherences occur between areas of scalp too far apart for the electrical field in one area to have any direct electrostatic effect on the other. Under many conditions, coherent activity can be detected at the most distant of scalp sites, such as the forehead and the occiput. The effect must be mediated by the underlying impulse activity, which is presumably synchronous whenever the associated field potentials are coherent.

Womelsdorf and colleagues analyzed multiple-unit activity (MUA) and field potentials simultaneously from four to eight electrodes while cats and monkeys were visually stimulated with gratings. Gamma coherence between MUA at two locations, or between MUA and the local field potential, determined the strength of mutual influences. They observed that the mutual influence of neuronal groups depends on the phase relation between rhythmic activities within the groups. Phase relations that promoted group interaction preceded the effect, suggesting a causal role for phase (Womelsdorf et al. 2007).

A practical example of how synchronization of two networks can facilitate activity comes from a study by Yuri Saalmann and colleagues on selective attention in monkeys and the facilitating role of activity in the parietal cortex on responsiveness of the visual pathway to visual stimuli (Saalmann et al. 2007). They noted that when monkeys selectively attended to a location in visual space, the neural activity in the two areas became synchronized, with activity in the parietal cortex leading that in the visual pathway. This suggests that the parietal cortex can mediate attentiveness by coherently controlling responsivity in the visual pathway.

The conscious state of humans is characterized by gamma-band activity that is highly synchronized throughout a wide span of cortical tissue. Llinas and Ribary used magnetoencephalography to compare gamma activity in the behavioral states of alert wakefulness, slow-wave sleep, and dream sleep (Llinas and Ribary 1993). They found widespread gamma synchrony during wakefulness and dream sleep, but not during slow-wave sleep. They also noted that an auditory stimulus could reset the oscillation phase during wakefulness, but not during either stage of sleep. The activity was filtered around 40 Hz, so it is possible that other frequencies, especially higher ones, are also synchronized during wakefulness and dreaming. Several studies have explored this matter further.
A recent study out of the Singer lab reported that gamma coherence can reveal different neural correlates of consciously
perceived stimuli and subliminal stimuli. Thus, this design can elucidate the difference between conscious and subconscious mind. This group tested the effect of conscious language operations on gamma coherence. They recorded EEG responses related to the processing of words that were visible (and consciously perceived) and words that were invisible (subliminally detected). Both perceived and non-perceived words caused a similar increase of local (gamma) oscillations in the EEG, but only perceived words induced a transient long-distance synchrony across widely separated regions (Melloni et al. 2007). After this transient synchronization, the electrographic signatures of conscious and subconscious processes continued to diverge. Only words reported as perceived induced (1) enhanced theta oscillations over frontal regions during the maintenance interval, (2) an increase of the P300 component of the event-related potential, and (3) an increase in power and phase synchrony of gamma oscillations before the anticipated presentation of the test word.

In an experiment designed to test directly the relation between neuronal synchrony and conscious perception, subjects looked at words flashed on a screen for 33 ms. Words were masked by varying luminance so that the words were registered consciously or subconsciously. The shifts in phase synchrony within and between hemispheres that distinguished consciously perceived words from unseen ones were transient. But the amplitude and spatial patterns of gamma activity were the same in both conditions. Synchronization may provide the mechanism by which the brain dynamically selects subsets of neuronal activity for representation at a conscious level. That is, conscious awareness of certain representations may emerge when synchronization occurs in those circuits containing the respective nerve-impulse representation. The degree of synchronization, and perhaps its topography, could determine whether a given representation is processed subconsciously or consciously. This also implies that a given network domain can support both local (subconscious) and global (conscious) representations, depending on its degree of oscillatory phase locking with other network domains (Buzsaki 2007). Keep this in mind when, in the free-will section of Chap. 7, I discuss how conscious and subconscious mind probably interact in generating intentions, decisions, and choices.

A corollary of the relationship between frequency coherence and consciousness is that mental abnormalities might be created by a diminished capacity for phase locking. Specific research supports this conclusion, at least for schizophrenia and autism. Another example is that the progression of mild cognitive impairment to Alzheimer's disease (AD) in the elderly can be predicted by coherence changes (Rossini et al. 2006). In several frequency bands, fronto-parietal coherence increased in the right hemisphere of patients who deteriorated into AD during the 14-month observation period, while coherence decreased in this group at the fronto-parietal midline. EEG alpha-band coherence decreases in Alzheimer's disease and in multi-infarct dementia (Comi et al. 1998). In these conditions, delta-band coherence increases, which probably reflects a disruption of the capacity for the high-frequency coherences that support higher-level thought.

An interesting corollary comes from a Chinese study of patients with mild cognitive impairment (Jiang and Zhejiang 2006). At mental rest, there was no difference
between normals and the impaired group, but when performing a working memory task, inter- and intrahemispheric coherences in several frequency bands increased in the impaired group. This suggests a compensatory mechanism whereby the impaired group has to "work harder" to perform the task. Coherence was unfortunately not studied in another clinical report, which found that the amount of gamma activity was enhanced in autistic boys (Orekhova 2007). Though the authors suggested that excess gamma might cause autism, my simplistic explanation is that the autistic brain generates more gamma because it has to work harder than a normal brain.
Phase Shifting During Movements

Recall the earlier discussion of phase precession of theta as animals move through a place field. I won't repeat this material here, but I do want to point out that theta phase relations can enable information coding. Is phase precession used by other parts of the brain for other kinds of coding? The pyramidal cells that exhibit phase precession project extensively outside of the hippocampus. Does precession occur with other rhythms, such as the slow delta waves associated with sleep, or even perhaps with the high-speed gamma rhythms associated with complex neural processing? Could phase precession be involved in synaptic plasticity, learning, and memory?

Various possible mechanisms for phase precession have been proposed (Maurer and McNaughton 2007). One postulate is that precession is caused by interference between the oscillations of individual place cells and the theta rhythm, which operate at slightly different frequencies. According to this model, the higher-frequency oscillations of the place cells outpace the slower theta rhythm, causing a progressive shift in phase relations. Other forms of oscillatory interference also offer potential explanations of phase precession. For example, another model proposes that precession arises from interference between dendritic and somatic oscillations of a place cell, which alternate between being in and out of phase depending on a rat's location in the place field. Though the somatic oscillations referred to in this mechanism are subject to a few different inputs, they are also notably theta-regulated, so in some ways this second oscillatory model appears to be a more specific version of the first. This interfering-oscillation concept is probably the most prominent model for describing the emergence of precession, but other ideas have also been developed.

One potential mechanism depends on the reinforcement of synaptic network connections between assemblies of place cells that occurs upon continued activation of certain neural assemblies. Once these connections are formed, an assembly will receive two types of input: one from an external stimulus, and the other from its reciprocal connections. Assemblies that receive the most external input are activated first, and they in turn activate the assemblies with which they have become reciprocally connected, so that, in a sense, the rat "looks ahead" beyond the particular assembly activated by its current location.
In this mechanism, activity from external input would be present early in the theta cycle, whereas internally generated activity appears later in the cycle. As stated, however, in experiments impulse activity is initially seen only late in the theta cycle and then advances forward, raising the question of why activity is not seen following the initial external input when a rat enters a cell's place field. This appears to be due to inhibition of the activity generated by external input. Thus, the impulse activity seen comes from internally generated input, and it gradually advances with respect to the theta cycle because the rat is "looking ahead" with each internal input.

The phase of theta in various parts of the hippocampus also seems to govern certain behavioral functions. Though hippocampal theta in animals seems to be generally coherent, subtle phase shifts occur in conjunction with specific behavioral tasks. Simultaneous monitoring of field potentials from 96 implanted probes during performance of various behavioral tasks revealed task-specific changes in theta spectral power, coherence, and phase in various layers of the hippocampus (Montgomery et al. 2009).

Research on precession is ongoing. The experiments described in the paper that introduced the concept were performed in 1993 (O'Keefe and Recce 1993), and though many experiments have been performed since, many unknowns about this phenomenon remain to be explored and described.
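The dual-oscillator interference idea described above can be demonstrated numerically: a cell oscillating slightly faster than the field theta rhythm "fires" at progressively earlier theta phases on successive cycles. The sketch below is only a toy demonstration; the 8 and 8.5 Hz values are illustrative assumptions.

```python
# Sketch: the dual-oscillator interference account of phase precession. A cell
# oscillating slightly faster than the field theta rhythm fires at progressively
# earlier theta phases on successive cycles. The frequencies are assumptions.
import numpy as np

fs = 1000
t = np.arange(0, 3, 1 / fs)
f_theta, f_cell = 8.0, 8.5                    # field theta vs. the cell's intrinsic oscillation

theta_phase = (2 * np.pi * f_theta * t) % (2 * np.pi)
cell_phase = (2 * np.pi * f_cell * t) % (2 * np.pi)

spike_idx = np.where(np.diff(cell_phase) < 0)[0]   # one "spike" per cell cycle (at its phase wrap)
spike_theta = np.degrees(theta_phase[spike_idx])

print(np.round(spike_theta, 1))    # the theta phase of firing advances (decreases) each cycle
```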
Phase Shifting During Thought

Singer initially dealt with sensory binding in animals. But there is a case to be made that many conscious thought processes require binding of disparate elements. I call this cognitive binding, which necessarily includes sensory binding but extends it to the interpretation of sensory input and the development of appropriate responses. Many theorists implicate synchronization as the mechanism for enabling cognitive binding. That is, when oscillatory circuits become time locked, the activity in the various distributed circuits becomes more organized and interdependent. Such binding is probably most important in the more complex thinking processes of the brain.

Synchronization seems to be a factor in binding elements of memory during memory consolidation. For example, during dream sleep, when the conscious mind is very much awake and in its most creative mode, many studies have revealed the brain to be consolidating memories of the day's events. In György Buzsáki's lab, researchers simultaneously recorded from 96 implanted silicon probes in the hippocampus (Montgomery et al. 2008). Synchrony of the theta and gamma bands of field-potential activity, and of their underlying impulse activity, was much greater among certain parts of the hippocampus during dream sleep than during regular alert wakefulness, while the opposite was true in other regions of the hippocampus. The assumption is that field-potential coherence reflects the degree of coordination and binding among the various regions, which in turn reflects the formation of memories. The differing coherence patterns between wakefulness and dream sleep suggest, when coupled with other research, a unique role for dream sleep
that presumably includes the formation of long-lasting memories. In both ordinary wakefulness and dreaming, the point is that coherence in specific frequency bands is of central importance.

However, the correlation of electrical-activity synchrony with various kinds of cognition does not prove that synchrony enables the thought process. But it seems likely. At the University of Bonn, Jürgen Fell and colleagues performed human studies that support a causal role for synchrony (Fell et al. 2001). In epileptic patients with electrodes implanted in preparation for neurosurgery, they monitored nerve impulses in parts of the cortex while subjects tried to recall previously memorized words. When a word that was later recalled was presented, there was a burst of impulse synchrony. This did not occur with words that could not be recalled later. In other words, synchrony seemed to be a necessary condition for recalling words. Similar support for synchrony's causal role comes from a study of monkey neurons that respond to component features of a face (nose, mouth, eyes, etc.). These neurons had to temporarily synchronize before the monkey could indicate it had perceived a face (Hirabayashi and Miyashita 2005).

In an intriguing human experiment performed in 1999 by a team led by Francisco Varela, humans were asked to look at ambiguous black-and-white images that look like faces when viewed upright, but are meaningless when viewed upside down (Rodriguez et al. 1999). EEG recordings from 30 electrodes showed the onset of high-frequency activity (30–80 Hz) when an image was presented, but synchrony occurred only when a face was perceived, not when the image was a meaningless shape. Again, we see that a higher cognitive function seems to require synchronization.

My lab has demonstrated synchrony changes associated with cognitive binding in humans, using wavelet analysis of the EEG (Klemm et al. 2000). When subjects look at ambiguous figures (such as the vase/face illusion or any of the other nine figures we tested), the conscious mind must recognize that the same line segments can be interpreted two ways (Fig. 6.11). Is it a vase or a face? More than just sensory binding is at play here, because there is only one pattern of sensory stimulus. But the existence of an alternate and hidden image requires conscious analysis to achieve "cognitive binding." Ambiguous figures have special, but generally unappreciated, value in cognitive research because the physical stimulus is identical for two different percepts. Only one percept can occur at a time; the other is not seen consciously, but may well be seen subconsciously. Most people have a default percept for one of the alternative interpretations of the image. Only after some thought is the second percept consciously realized. Such realization requires recall of memory that differs for the two percepts. Eye-tracking studies have not been done, but it is likely that which percept is realized depends on which part of the image one focuses on.
Fig. 6.11 Example ambiguous visual stimuli. Each can be consciously perceived in one of two ways. At any given instant, one percept is perceived consciously, while the other may be detected only subconsciously by the brain. In all cases, the actual physical stimulation is identical, regardless of what is perceived

For 171 combinations of data from 19 electrodes, obtained from 17 subjects and 10 ambiguous figures, we calculated the difference in correlation between the response to first seeing an ambiguous figure and the period immediately before the instant when the alternative percept for that figure became consciously realized (cognitively bound). Across all subjects and images, the median time
required to realize the alternative percept ranged from 2.8 s to 41 s. Numerous statistically significant correlation differences occurred in all frequency bands tested with ambiguous-figure stimulation, but not in two kinds of control data. Most of the statistically significant correlation changes were not between adjacent sites but between sites relatively distant, both ipsilateral and contralateral (Fig. 6.12). The data showed that at the instant of a cognitive binding event, there is distinct synchronization at multiple cortical areas, not only in the gamma frequency band, but at several lower frequencies as well. Presumably what is being bound are the line segments stored in memory that are derived from the mental image of the alternative figure. That mental image is recalled from memory by the cognitive process of identifying which of the line segments serve as the most robust cues for realization of the alternative percept. We saw that such synchronization of oscillation includes all the other frequency ranges of EEG, including alpha and theta. Nobody knows what to make of this observation, but similar findings of multiple-frequency synchronization in the same mental task have been reported by a few others. It may be that different aspects of the task are processed by different frequencies, or more precisely, by the different periods of the underlying neuronal firings. What kinds of thinking are involved during ambiguous figure analysis? We don’t know, but a few suggestions seem reasonable. First, the default percept would seem to reflect recognition memory. You see a certain configuration of line segments that matches something that you recognize in your memory. Quite different operations must be in play to achieve the alternate percept. Although that percept also resides
in memory, it is not as readily recalled as the default percept. In the vase/face illusion, for example, the brain has to solve a problem that ultimately requires recall of memory of what faces can look like in profile. The problem is a "search and find" kind of operation in which various segments of the image are tested against multiple possible templates of stored memories. Eventually, a match is usually made, leading not only to finding the right match but also to the conscious "ah ha" realization. Since this process is accompanied by increased oscillatory coherence (in multiple frequency bands), the increased coherence either causes or is the consequence of the thought processes. Moreover, the coherence could be related either to the cognitive template matching or to its conscious realization, or both. The correlation changes in more than one frequency band suggest that different coherent frequencies may mediate different components of line drawings.
Fig. 6.12 Topographic summary of EEG coherence increases when subjects suddenly realized the alternative image in an ambiguous figure. Each square represents an electrode placed in the standard EEG montage, with the small circles depicting the locations of all electrodes and whether or not each electrode detected synchrony with the signal from any of the other electrodes. Connecting lines indicate scalp electrode locations that showed increased coherence at the moment of conscious realization. Note that coherence was seldom seen with nearest-neighbor electrodes but rather with more distant sites, both on the same and the opposite side of the head. Data are pooled across all replicates, multiple images, and all subjects in the 25–50 Hz band. Other data (not shown) revealed similarly widespread coherence increases in bands as low as alpha and as high as 62.5 Hz (From Klemm et al. 2000)
Conscious realization of hidden percepts is cue dependent. This was confirmed by debriefing the subjects after the experiment and also by the fact that experimenter-provided cues (such as "focus on the chin") were needed whenever subjects had problems perceiving the hidden image. Thus, the observed EEG synchrony most certainly reflected a cue-dependent memory-retrieval process as well as the aforementioned aspects of cognition.
Cross-Frequency Coherence

Circuits that oscillate at different frequencies can have reciprocal relationships, both directly and indirectly via their overlapping field potentials. When different networks oscillate at different frequencies, the degree of phase locking can greatly affect how well these networks communicate and share information. Phase locking should regulate how well networks in various parts of the brain share, collate, and integrate information in real time, with millisecond-range precision. Thus, phase relationships between different frequencies should be important, but few researchers examine this possibility. Generally, researchers test for coherence at different locations, not coherence of different frequencies at the same electrode site.

However, some current work is focused on cross-frequency coherence, particularly between theta and gamma activities in the subcortical structure that mediates memory consolidation, the hippocampus. In rats, for example, the hippocampus generates not only 4–7 Hz theta activity, but also two bands of gamma frequency, one fast (~65–140 Hz) and one slow (~25–50 Hz). Measuring the cross-frequency coherence of these gamma bands reveals that the two frequency groups are coupled differently to theta (Colgin 2009). The amplitude of slow gamma was greatest during the early part of a theta wave, while fast gamma peaked near the trough of theta. In addition, fast gamma in the hippocampus was synchronized with fast gamma in the entorhinal cortex, an area that supplies input to the hippocampus. For purposes of understanding consciousness, more of this kind of study needs to be done in the cerebral cortex, especially in comparing various states of consciousness. But hippocampal research dominates at the moment, primarily because theta activity is so prominent there and because labs such as that of György Buzsáki have provided an extensive knowledge base on what goes on in hippocampal circuitry. I noticed several papers at the 2009 and 2010 Society for Neuroscience meetings indicating that several groups are starting to pay more attention to cross-frequency coherence. I believe this area of research will attract increasing attention.

Cross-frequency coherences are bound to be important in conscious thinking processes, but few studies have examined them (Sauseng and Klimesch 2008). For example, gamma activity (40+ waves/s) can become phase locked to theta oscillations (Canolty et al. 2006; Demiralp et al. 2007; Mormann et al. 2005), though the
consequences are not fully understood. As networks of different sizes are expected to oscillate at different frequencies (larger networks may generate slower frequencies), interactions of these networks can be monitored as cross-frequency phase synchronization in the EEG.

Cross-frequency phase synchrony has also been observed by a group in Finland, headed by J. Matias Palva, using magnetoencephalography (Palva et al. 2005). They observed in humans that robust cortical cross-frequency phase synchrony occurs among oscillations ranging from 3 to 80 Hz. Continuous mental arithmetic tasks demanding the retention and summation of items in working memory enhanced the cross-frequency phase synchrony among alpha, beta, and gamma oscillations. These tasks also enhanced the "classical" within-frequency synchrony in these frequency bands, but the spatial patterns of these within-frequency synchronies were distinct and separate from the patterns of cross-frequency phase synchrony. Increased task load enhanced phase synchrony most prominently between alpha and gamma oscillations. The enhancement of cross-frequency phase synchrony among functionally and spatially distinct networks, whether during mental arithmetic tasks or during ambiguous-figure processing, supports the idea that spectral integration is a mechanism for cognitive binding and perhaps consciousness itself.

Andrew and Alexander Fingelkurts have published a review of data suggesting that consciousness may depend on large-scale cortical network synchronization in multiple frequency bands (and not just 40 Hz) (Fingelkurts and Fingelkurts 2002). Moreover, I suggest that different coherent frequencies may mediate different components of the total cognitive-binding process. Of course, we cannot rule out the possibility that increased coherence among multiple sites is influenced by turning off additional independent signal sources that would otherwise disrupt time-locking of multiple oscillators. Supporting research has also been published by Eckhorn's group, with observations that the mechanism might apply to other sensory modalities and that the relevant oscillations extend into other frequency ranges (Eckhorn et al. 1998).

Distant sensorimotor sites in human neocortex can exhibit EEG cross-frequency synchronization during a simple thumb movement task (Darvas et al. 2009). The investigators noted that a low-frequency rhythm (10–13 waves/s) combined with a 77–82 waves/s rhythm in a ventral region of the premotor cortex to generate a third rhythm, at the sum of these two frequencies, in a distant motor area of cortex. The analysis employed a bi-phase-locking method that allowed detection of coupling between different frequencies. There is evidence that, at least in the hippocampus, theta and gamma oscillations work together to represent an ordered sequence of items. This coding scheme has been noted during theta phase precession (Lisman and Buzsáki 2008).
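To make the idea of cross-frequency coupling concrete, here is a minimal Python sketch of one common way to quantify it: band-pass filter the signal into a slow (theta) and a fast (gamma) band, extract theta phase and gamma amplitude with the Hilbert transform, and compute a mean-vector modulation index. This is a generic illustration, not the specific method of any study cited above; the band edges, sampling rate, and filter settings are assumptions.

import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def bandpass(sig, fs, low, high, order=4):
    # Zero-phase Butterworth band-pass filter.
    sos = butter(order, [low, high], btype="band", fs=fs, output="sos")
    return sosfiltfilt(sos, sig)

def phase_amplitude_coupling(sig, fs, slow=(4, 7), fast=(30, 80)):
    # Mean-vector modulation index: how strongly the amplitude of the fast
    # band is organized by the phase of the slow band (0 = no coupling).
    slow_phase = np.angle(hilbert(bandpass(sig, fs, *slow)))
    fast_amp = np.abs(hilbert(bandpass(sig, fs, *fast)))
    return np.abs(np.mean(fast_amp * np.exp(1j * slow_phase))) / fast_amp.mean()

# Synthetic example: gamma bursts riding on the peaks of a theta wave
fs = 500
t = np.arange(0, 20, 1 / fs)
theta = np.sin(2 * np.pi * 6 * t)
gamma = (1 + theta) * np.sin(2 * np.pi * 50 * t)   # gamma amplitude follows theta phase
eeg = theta + 0.3 * gamma + 0.2 * np.random.randn(t.size)
print("modulation index:", phase_amplitude_coupling(eeg, fs))

A signal whose fast rhythm is independent of the slow phase would give a modulation index near zero under this measure.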
Ultraslow Oscillations

My first, and only, encounter with ultraslow oscillations came when my graduate student and I conducted a study on the effects of decapitation on the brain to examine the humaneness of this common method of animal sacrifice in research laboratories
(Klemm 1987). Laboratory rats were surgically implanted with EEG electrodes, paralyzed with curare and artificially respired, and then sacrificed by guillotine. Intense activation, as reflected in high frequencies, occurred immediately after decapitation and lasted about 13 s before all activity terminated. Also occurring immediately was a massive direct-current shift, which could not be studied because the amplifiers were saturated for many seconds.

Most investigators never see ultraslow activity because the electronic equipment they use is set to reject such frequencies. We set our amplifiers to allow such signals to pass, and we were able to see ultraslow changes (0.01–0.1 Hz). If one is very careful to avoid motion artifacts (a decapitated head does not move, though there are spasms of facial muscles) and sets amplifier filters accordingly, it is possible to discern physiologically meaningful signals at very low frequencies. And there is good reason to look for such activity. Human performance in controlled experiments often fluctuates on time scales of more than 10 s, yet almost no one looks for ultraslow activity in the brain waves. Ultraslow oscillations also occur during sleep, but their significance is not known. They also occur in humans after voluntary hyperventilation and in association with epilepsy (which is reminiscent of what happens in decapitation, where there is an initial massive high-frequency discharge).

A recent study by Simo Monto and colleagues in Finland has examined the association of ultraslow activity with human psychophysiological functions that exhibit long-term fluctuations (Monto et al. 2008). The behavioral task was to indicate detection of a mild electrical stimulation of the index finger. The stimulus, delivered throughout long periods lasting many minutes, was constant in strength relative to detection threshold and delivered within a relatively tight random range of 1.5–4.5/s. Subjects indicated detection by twitching the thumb. Subjects' ability to detect was not constant. Detections ("hits") occurred over a few consecutive seconds, followed by segments of misses. Over a range of 10–100 s, clustered periods of hits and misses correlated with ultraslow activity in the EEG. That is, the probability of a hit was greatest during the rising phase of an EEG ultraslow wave, and misses were most likely during the falling phase. Phase, not amplitude, of an ultraslow wave was the reliable correlate.

Of special interest was the analysis of correlations between ultraslow activity and the higher EEG frequencies that are nested within it. Again, the correlation was with phase, not amplitude, of the ultraslow activity. But the phase of the ultraslow activity affected the amplitudes of faster activity, which increased on the rising phase of ultraslow activity and decreased on the falling phase. Other studies, by Mantini and colleagues (Mantini et al. 2007), showed covariation between functional MRI activity and standard frequency-band EEG activity. Since Monto's group showed correlations between ultraslow EEG phase and the magnitude of higher EEG frequencies, this leads to the idea that changes in metabolic neural activity are the cause of ultraslow EEG changes. It seems likely that all nested EEG frequencies owe their origin to fluctuations in metabolic activity, which in turn is caused by fluctuations in impulse discharge. The significance of ultraslow EEG activity for the execution of cognitive tasks remains to be discovered.
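The logic of this kind of phase analysis can be sketched in a few lines of Python: isolate the ultraslow component, take its instantaneous phase from the Hilbert transform, and compare detection performance across phase bins. This is only an illustration of the approach, not a reproduction of Monto and colleagues' analysis; in practice, recovering 0.01–0.1 Hz activity requires DC-coupled amplifiers, very long recordings, and careful filter design.

import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def ultraslow_phase(eeg, fs, band=(0.01, 0.1), order=2):
    # Instantaneous phase of the ultraslow component of a long, DC-coupled record.
    sos = butter(order, band, btype="band", fs=fs, output="sos")
    return np.angle(hilbert(sosfiltfilt(sos, eeg)))

def hit_rate_by_phase(phase_at_stimulus, detected, n_bins=8):
    # Proportion of stimuli detected within each bin of ultraslow phase.
    edges = np.linspace(-np.pi, np.pi, n_bins + 1)
    idx = np.digitize(phase_at_stimulus, edges) - 1
    return np.array([detected[idx == k].mean() if np.any(idx == k) else np.nan
                     for k in range(n_bins)])

A systematic difference in hit rate between rising-phase and falling-phase bins would be the kind of dependence the Monto study reported.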
References Bakker, A., et al. (2008). Pattern separation in the human hippocampal CA3 and dentate gyrus. Science, 319, 1640–1642. Bonifazi, P., et al. (2009). GABAergic hub neurons orchestrate synchrony in developing hippocampal neurons. Science, 326, 1419–1424. Brenner, N., et al. (2000). Synergy in a neural code. Neural Computation, 12, 1531–1532. Buzsáki, G. (2007). The structure of consciousness. Nature, 446, 267. Canolty, R. T., et al. (2006). High gamma power is phase-locked to theta oscillation in human neocortex. Science, 313, 1626–1628. Cantero, J. L., et al. (2002). Effects of prolonged waking-auditory stimulation on electroencephalogram synchronization and cortical coherence during subsequent slow-wave sleep. The Journal of Neuroscience, 22(11), 4702–4708. Cantero, J. L., et al. (2003). Sleep-dependent theta oscillation in the human hippocampus and neocortex. The Journal of Neuroscience, 23(34), 10897–10903. Cavanagh, J. F., Cohen, M. X., & Allen, J. J. B. (2009). Prelude to and resolution of an error: EEG phase synchrony reveals cognitive control dynamics during action monitoring. The Journal of Neuroscience, 29(1), 98–105. Chapin, J. K., & Lin, C. S. (1984). Mapping the body representation in the SI cortex of anesthetized and awake rats. The Journal of Comparative Neurology, 229, 199–213. Cimenser, A. et al. (2010). Collective dynamics of high density EEG reveals a single dominant mode of activity during general anesthesia-induced unconsciousness. Society Neuroscience. Abstract 343.4. Colgin, L. L. (2009). Frequency of gamma oscillations routes flow of information in the hippocampus. Nature. 462, 353–357. doi: 10.1038/Nature08573. Comi, G., et al. (1998). EEG coherence in Alzheimer and multi-infarct dementia. Archives of Gerontology and Geriatrics, 26, 91–98. Corkin, S. (2002). What’s new with the amnesic patient H. M.? Nature, 3, 153–160. Dahlin, E. (2008). Transfer of learning after updating training mediated by the striatum. Science, 320, 1510–1512. Darvas, F., et al. (2009). Nonlinear phase-phase cross-frequency coupling mediates communication between distant sites in human neocortex. The Journal of Neuroscience, 29(2), 426–435. Dehaene-Lambertz, G., et al. (2002). Functional neuroimaging of speech perception in infants. Science, 298, 2013–2015. Dehaene-Lambertz, G., et al. (2006). Functional organization of perisylvian activation during presentation of sentences in preverbal infants. Proceedings of the National Academy of Sciences of the United States of America, 103(38), 14240–14245. Demiralp, T., et al. (2007). Gamma amplitudes are coupled to theta phase in human EEG during visual perception. International Journal of Psychophysiology, 64, 24–30. Eckhorn, R., et al. (1998). Coherent oscillations: A mechanism of feature linking in the visual cortex? Multiple electrode and correlation analysis in the cat. Biological Cybernetics, 60, 121–130. Edden, R. A. E., et al. (2009). Orientation discrimination performance is predicted by GABA concentration and gamma oscillation frequency in human primary visual cortex. The Journal of Neuroscience, 29(50), 15721–15726. Fell, J. (2003). Rhinal-hippocampal theta coherence during declarative memory formation: Interaction with gamma synchronization? The European Journal of Neuroscience, 17, 1082–1088. Fell, J., et al. (2001). Human memory formation is accompanied by rhinal − hippocampal coupling and decoupling. Nature Neuroscience, 4, 1259–1264. Fingelkurts, A., & Fingelkurts, A. (2001). 
Operational architectonics of the human brain biopotential field: Towards solving the mind-brain problem. Brain and Mind, 2, 261–296. Gelbard-Sagiv, H., et al. (2008). Internally generated reactivation of single neurons in human hippocampus during free recall. Science, 322, 96–101.
Greene, J. D., et al. (2001). An fMRI investigation of emotional engagement in moral Judgment. Science, 293, 2105–2108. Gregoriou, G. G., et al. (2009). High-frequency, long-range coupling between prefrontal and visual cortex during attention. Science, 324, 1207–1209. Haenschel, C., et al. (2009). Cortical oscillatory activity is critical for working memory as revealed by deficits in early-onset schizophrenia. The Journal of Neuroscience, 29(30), 9481–9489. Hirabayashi, T., & Miyashita, Y. (2005). Dynamically modulated spike correlation in monkey inferior temporal cortex depending on the feature configuration within a whole object. The Journal of Neuroscience, 25, 10299–10307. Ilg, R., et al. (2008). Gray matter increase induced by practice correlates with task-specific activation: A combined functional and morphometric magnetic resonance resonance imaging study. The Journal of Neuroscience, 28, 4210–4215. Jiang, Z. Y., & Zhejiang, L. L. (2006). Inter- and intra-hemispheric EEG coherence in patients with mild cognitive impairment at rest and during working memory task. Journal of Zhejiang University. Science, 7(5), 357–364. Johnson, B. A., et al. (1998). Spatial coding of odorant features in the glomerular layer of the rat olfactory bulb. The Journal of Comparative Neurology, 393, 457–471. Jutras, M., Fries, P., & Buffalo, E. A. (2009). Gamma-band synchronization in the Macaque hippocampus and memory formation. The Journal of Neuroscience. 29(40), 12521–12531. doi: 10.1523/JNEUROSCI.0640-09. Kelso, J. A. S., & Tognoli, E. (2010). Spatio-temporal metastability: Design for a brain. Society Neuroscience. Abstract 343.17. Klemm, W. R. (1969a). Animal electroencephalography. New York: Academic. Klemm, W. R. (1969b). Mechanisms of the immobility reflex (“animal hypnosis”) II. EEG and multiple-unit correlates in the brain stem. Communications in Behavioral Biology, 3, 43–52. Klemm, W. R. (1970). Correlation of hippocampal theta rhythm, muscle activity, and brain stem reticular formation activity. Communications in Behavioral Biology, 5, 147–151. Klemm, W. R. (1971). EEG and multiple-unit activity in limbic and motor systems during movement and immobility. Physiology & Behavior, 7, 337–343. Klemm, W. R. (1987). Resolving the controversy over humaneness of decapitation sacrifice of laboratory animals. Laboratory Animal Science, 37, 148–151. Klemm, W. R. (2008). Blame game. How to win it. Bryan: Benecton. Klemm, W. R., et al. (1980). Hemispheric lateralization and handedness correlation of human evoked “steady-state” responses to patterned visual stimuli. Physiological Psychology, 8(3), 409–416. Klemm, W. R., Goodson, R. A., & Allen, R. G. (1983). Contrast effects of the three primary colors on human visual evoked potentials. Electroencephalography and Clinical Neurophysiology, 55, 557–566. Klemm, W. R., Li, T. H., & Hernandez, J. L. (2000). Coherent EEG indicators of cognitive binding during ambiguous figure tasks. Consciousness and Cognition, 9, 66–85. Kobayashi, A., et al. (2000). A combinatorial code for gene expression generated by transcription factor Bach2 and MAZR (MAZ-related factors) through the BTB/POZ domain. Molecular and Cellular Biology, 20(5), 1733–1746. Korman, M., et al. (2007). Daytime sleep condenses the time course of motor memory consolidation. Nature Neuroscience, 10(9), 1206–1213. Lepore, F., et al. (2006). Cross-modal reorganization and speech perception in cochlear implant users. Brain, 129, 3376–3383. Lisman, J., & Buzsáki, G. (2008). 
A neural coding scheme formed by the combined function of gamma and theta oscillation. Schizophrenia Bulletin, 34(5), 974–980. Llinas, R., & Ribary, U. (1993). Coherent 40-Hz oscillation characterizes dream state in humans. Proceedings of the National Academy of Sciences of the United States of America, 90(5), 2078–2081. Llinas, R. R., Grace, A. A., & Yarom, Y. (1991). In vitro neurons in mammalian cortical layer 4 exhibit intrinsic oscillatory activity in the 10- to 50-Hz frequency range. Proceedings of the National Academy of Sciences of the United States of America, 88, 897–901.
Mantini, D., et al. (2007). Electrophysiological signatures of resting state networks in the human brain. Proceedings of the National Academy of Sciences of the United States of America, 104, 13170–13175. Masquelier, T., et al. (2009). Oscillations, phase-of-firing coding, and spike timing-dependent plasticity: An efficient learning scheme. The Journal of Neuroscience, 29(43), 13484–13493. Maurer, A. P., & McNaughton, B. L. (2007). Network and intrinsic cellular mechanisms underlying theta phase precession of hippocampal neurons. Trends in Neurosciences, 30(7), 325–333. Melloni, L., et al. (2007). Synchronization of neural activity across cortical areas correlates with conscious perception. The Journal of Neuroscience, 27(11), 2858–2865. Merzenich, M. M. (1983). Topographic reorganization of somatosensory cortical areas 3b and 1 in adult monkeys following restricted deafferentation. Neuroscience, 8, 33–55. Merzenich, M. M., et al. (1983). Progression of change following median nerve section in the cortical representation of the hand in areas 3b and 1 in adult owl and squirrel monkeys. Neuroscience, 10, 639–665. Merzenich, M. M., et al. (1984). Somatosensory cortical map changes following digit amputation in adult monkeys. The Journal of Comparative Neurology, 224, 591–605. Miller, G. (2008). The roots of morality. Science, 320, 734–737. Mogenson, G., et al. (1980). From motivation to action: Functional interface between the limbic system and the motor system. Progress in Neurobiology, 14(2–3), 69–97. Montgomery, S. M., Sirota, A., & Buzsáki, G. (2008). Theta and gamma coordination of hippocampal networks during waking and rapid eye movement sleep. The Journal of Neuroscience, 28(6), 6731–6741. Montgomery, S. M., Betancur, M., & Buzsáki, G. (2009). Behavior-dependent coordination of multiple theta dipoles in the hippocampus. The Journal of Neuroscience, 29(5), 1381–1394. Monto, S., et al. (2008). Very slow EEG fluctuations predict the dynamics of stimulus detection and oscillation amplitudes in humans. The Journal of Neuroscience, 28(33), 8268–8272. Morgan, V. L., Gore, J. D., & Szaflarski, J. P. (2008). Temporal clustering analysis: what does it tell us about the resting state of the brain? Medical Science Monitor, 14(7), CR345–CR352. Mormann, F., et al. (2005). Phase/amplitude reset and theta-gamma interaction in human medial temporal lobe during a continuous word recognition memory task. Hippocampus, 15, 890–900. Nadasdy, Z. (2009). Information encoding and reconstruction from the phase of action potentials. Frontiers in Systems Neuroscience. 3:6 doi: 10.3389/neuro.06.006.2009. O’Keefe, J., & Reece, M. L. (1993). Phase relationship between hippocampal place units and the EEG theta rhythm. Hippocampus, 3(3), 317–330. Olton, D. S., & Samuelson, R. J. (1976). Remembrance of places passed: Spatial memory in rats. Journal of Experimental Psychology, 2, 97–116. Orekhova, E. V. (2007). Excess of high frequency electroencephalogram oscillations in boys with autism. Biological Psychiatry, 62(9), 1022–1029. Osborne, L. C., et al. (2008). The neural basis for combinatorial coding in a cortical population response. Journal of Neuroscience Research, 28(50), 13522–13531. Osipova, D., et al. (2006). Theta and gamma oscillations predict encoding and retrieval of declarative memory. The Journal of Neuroscience, 26(28), 7523–7531. Palva, J. M., Palva, S., & Kaila, K. (2005). Phase synchrony among neuronal oscillations in the human cortex. The Journal of Neuroscience, 25(15), 3962–3972. Peña, M., et al. 
(2003). Sounds and silence: An optical topography study of language recognition at birth. Proceedings of the National Academy of Sciences of the United States of America, 100(20), 11702–11705. Peterson, R. S., et al. (2001). Population coding of stimulus location in rat somatosensory cortex. Neuron, 32, 503–514. Pockett, S., et al. (2009). EEG synchrony during a perceptual-cognitive task: Widespread phase synchrony at all frequencies. Clinical Neurophysiology, 120, 695–708. Reich, D. S., Mechler, F., & Victor, J. D. (2001). Independent and redundant information in nearby cortical neurons. Science, 294, 2566–2568.
Rodriguez, E., et al. (1999). Perception’s shadow: Long-distance synchronization of human brain activity. Nature, 397, 430–433. Rossini, P. M., et al. (2006). Conversion from mild cognitive impairment to Alzheimer’s disease is predicted by sources and coherence of brain electroencephalography rhythms. Neuroscience, 143, 793–803. Saalman, Y. B. (2007). Neural mechanisms of visual attention: How top-down feedback highlights relevant locations. Science, 316, 1612–1615. Saenz, M., et al. (2008). Visual motion area MT+/V5 responds to auditory motion in human sightrecovery subjects. The Journal of Neuroscience, 28(20), 5141–5148. Sauseng, P., & Klimesch, W. (2008). What does phase information of oscillatory brain activity tell us about cognitive processes? Neuroscience and Biobehavioral Reviews, 32(5), 1001–1013. Schneidman, E., et al. (2006). Synergy from silence in a combinatorial neural code. QuantitativeBiology. arxiv.q-bio.NC( )13July. Shadlen, M. N., & Movshon, J. A. (1999). Synchrony unbound: A critical evaluation of the temporal binding hypothesis. Neuron, 24, 67–77. Simpson, D. (2005). Phrenology and the neurosciences: Contributions of F. J. Gall and J. G. Spurzheim. ANZ Journal of Surgery, 75(6), 475. Singer, W. (1993a). Neuronal representation, assemblies and temporal coherence. Progress in Brain Research, 95, 461–474. Singer, W. (1993b). Synchronization of cortical activity and its putative role in information processing and learning. Annual Review of Physiology, 55, 349–374. Staude, B., Grün, S., & Rotter, S. (2010). Higher-order correlations in non-stationary parallel spike trains: statistical modeling and inference. Frontiers in Computational Neuroscience. 4(16), 1–17. doi: 10.3389/fncom.2010.00016. Tchumatchenko, T., et al. (2010). Signatures of synchrony in pairwise count correlations. Frontiers in Computational Neuroscience. doi: 10.3389/neuro.10.001.2010. Thatcher, R. W., et al. (2005). EEG and intelligence: Relations between EEG coherence, EEG phase delay and power. Clinical Neurophysiology, 116, 2129–2141. Thivierg, J.-P., & Cisek, P. (2008). Nonperiodic synchronization in heterogeneous networks of spiking neurons. The Journal of Neuroscience, 28(32), 7968–7978. Uhlhaas, P. J., et al. (2009). Neural synchrony in cortical networks: History, concept and current status. Frontiers in Integrative Neuroscience. doi: 10.3389/neuro.07017.2009. Van der Werf, J. (2010). Neuronal synchronization in human posterior parietal cortex during reach planning. The Journal of Neuroscience, 30(4), 1402–1412. Vinck, M., et al. (2010). Gamma-phase shifting in awake monkey visual cortex. The Journal of Neuroscience, 30(4), 1250–1257. Wang, H.-P., et al. (2010). Synchrony of thalamocortical inputs maximizes cortical reliability. Science, 328, 106–109. Whittingstall, K., & Logothetis, N. K. (2009). Frequency-band coupling in surface EEG reflects spiking activity in monkey visual cortex. Neuron. 64(2), 281–289. doi: 10.1016/ j.neuron.2009.08.016. Wilson, F. A. W., et al. (1993). Dissociation of object and spatial processing domains in primate prefrontal cortex. Science, 250, 1955–1958. Womelsdorf, T., et al. (2007). Modulation of neuronal interactions through neuronal synchronization. Science, 316, 1609–1612. Yoo, S., et al. (2007). A deficit in the ability to form new human memories without sleep. Nature Neuroscience, 10, 385–392. Zou, Z. A., & Buck, L. (2006). Combinatorial effects of odorant mixes in olfactory cortex. Science, 311, 1477–1480. Zou, Q., et al. (2009). 
Functional connectivity between the thalamus and visual cortex under eyes closed and eyes open conditions: A resting-state fMRI study. Human Brain Mapping. doi: 10.1002/hbm.20728.
7
On the Nature of Consciousness
The “Holy Grail” of neuroscience is to figure out how the human brain makes itself consciously aware of the feelings, ideas, intentions, decisions and plans that are being generated in its circuits. This has historically been treated as a kind of metaphysical, philosophical, or religious question, and dozens of books have been written about it. One of the philosopher giants in this area, Daniel Dennett (2005) says he has 78 books in his own personal library on consciousness published before February 2004. I have a couple of shelves of such books myself. Philosophy is not particularly satisfying to biologists. Philosophy deals with analogies, metaphors, and allusions (even illusions and delusions!). Biologists believe there has to be a biological explanation for conscious thought. Where do we start to look for such explanation? We start by defining consciousness. That is not a trivial task. Consciousness can be defined many ways. Certainly, consciousness includes being aware of the sensory world. But in that sense, the brain of even relatively primitive animals produces behavior that indicates that the organism’s nervous system is “aware” of the environment. Perhaps one useful way to think of consciousness is that the brain is “aware that it is aware.” In the absence of a lot of philosophical discourse, one operational consideration of conscious awareness seems to be that it emerges from an extra set of neural processes that continues throughout a stimulus condition—and outlasts it! In humans at least, there is self-awareness of awareness. We know that we know. What we sense, know, and intend to do is accomplished in the context of selfhood. The brain operates along an apparent continuum of states ranging from alert wakefulness to sleep to coma, though the transitions among these states may not be smoothly graded, but rather discontinuous. Many people wake up suddenly in the morning, as if a light switch has been flipped. The physiological signs of dreaming appear abruptly in the middle of sleep. Such sudden switched-on states suggest a non-linear process wherein some kind of threshold has to be reached before the state appears. What might such a threshold be based on? Perhaps it is a change in the combinatorial code of CIPs or the degree or extent of oscillatory coherence at certain frequencies. I explore these possibilities
later in the Chap. 8 explanation of various theories of consciousness mechanisms. To date, no experimenter has tested questions involving what happens during various transition states along the consciousness continuum. My lab’s demonstration of coherence changes during the transition between unconscious and conscious realization of ambiguous – figure stimuli is a first encouraging step for pursuing such research. One way to think about consciousness is to view it from different perspectives. The perspective I have been discussing so far is that of the self. We have a sense of self and engage in introspection about our self’s dealings in the world. Another view comes from the impression that others have a sense of themselves, and we can infer that they think in ways similar to ours. This is the so-called “Theory of Mind,” which holds that every person can attribute to others mental capacities for beliefs, intents, desires, and introspection that are similar to one’s own. A third perspective is to think about how the brain uses consciousness. I will argue in this chapter and the next that the brain constructs consciousness out of its own substance and uses it as a tool for more effective functioning. To be conscious of sensations is to have mental representations of something happening outside of the brain as well as some kind of representation inside of the brain that makes the brain aware of its sensory representations. For humans, sensations occur in association with one or more of the senses (sight, sound, smell, taste, temperature, touch/pressure). You cannot have consciousness without some kind of sensory awareness, although such awareness may be indirectly generated by the brain as memories that simulate physical stimuli. The same idea applies to conscious thoughts, which typically are “heard” as voices in the head or “seen” as mental images. We can speak of the “mind’s eye, mind’s ear, mind’s nose, etc.” Language greatly enriches consciousness, but is not essential for it. So you might now ask, “What about internally generated thoughts that are not stimulus bound?” But they are stimulus bound, if not to the original stimulus, to the memory of old stimuli. We can greatly extend the link to stimulus or memory of stimulus by thinking about related things or plans for future actions. The brain can do this because it has innumerable functional circuits that can be engaged to process the original information in a variety of ways and other contexts. Inevitably, such engagement of circuitry is accomplished via electrochemistry and molecular biology. An important distinction has to be made between the receiving of sensory information (“sensation”) in the brain and in perceiving it. Seeing and perceiving are distinctly different, and this point can be extended to sensory phenomena in general. Perception also occurs in at least two contexts: the context of self and that of the stimulus. Perception always has an element of interpretation or emotional response. This is nowhere more apparent than in Gestalt phenomena. Conscious perceptions can be distorted by ambiguous stimuli or certain drugs or by the limitations of resolution and sensitivity of the sensory receptor cells. With certain ambiguous figures, for example, conscious mind actually “fills in the gaps” in actual sensory data by imagining sensory elements that are not even there. The “you” that you know has been constructed by your brain from your lifetime of experiences and thought. 
Your brain-based identity was constructed (actually programmed) and memorized mostly by life experiences and conscious thoughts.
The brain constructs a most intimate sense of self, and recalls it from memory after recovering from sleep. One’s sense of self can never be experienced by others. The book of Proverbs (14:10) makes the point vividly: “The heart knows its own bitterness, and no stranger shares its joy.” We may empathize with others, but in no way can we experience their life as they do.
The Value of Consciousness

Is there some biological advantage to having such an exclusive personal representation of self? I claim that such personal consciousness facilitates two main functions: (1) providing a personally relevant high level of thought that can integrate past, present, and future, and (2) programming the subconscious mind's way of viewing and responding to the external world. Let us consider each in turn.

What makes conscious thought high level? First, one clear thing that conscious thought can do is intervene and veto subconscious decisions and their execution. Further, consciousness enables higher thought that operates on explicit and personal working memory. Representations of sensations and ideas are held in working memory and provide a necessary support for orderly conscious thinking. It seems likely that working memory operates both subconsciously and consciously, but priming experiments show that subconscious working memory is short lived. Conscious working memory can be sustained indefinitely and, of course, after sufficient rehearsal, the memory can become permanent. My ideas of just how working memory operates in conscious thought are elaborated later in the section "How I Think We Think When Conscious." But for now, we only need to consider that conscious working memory can enhance the quality of thought because the thought is sustained explicitly. Moreover, since memory consolidation is a function of how long representations are rehearsed in working memory, consciousness is a major contributor to the human capacity to learn. Consciousness can be deployed to facilitate learning in systematic ways. Consciousness is the brain's main teacher.

This brings us to point number two. Conscious mind directs the programming of subconscious operations. We consciously decide what to read, what to listen to, what movies and television to watch. We can consciously decide what behaviors to engage in and which to avoid. We can consciously determine what we will think about and not think about. There is also the matter of consciously generating intentions, decisions, choices, and plans—the essence of free will, covered later in this chapter.

Conscious mind is the brain's way of getting a second opinion. Conscious mind provides a second level of analysis and judgment. Consciousness is like having an editor for your manuscript for living. It doesn't matter whether conscious mind creates the initial drafts of our manuscripts or if they come from the subconscious. What does matter is that consciousness can review what we have been thinking and make revisions. Thus, we can see why humans are so well adapted for life in a complex and difficult world. We have an extra mind that is missing or incompletely developed in lower animals. Consciousness is a quality-control mind, one that can improve the
quality and effectiveness of our thinking and behavior. We use conscious mind, particularly its power of language, to program our brain via the analyses, decisions, and choices we make. Conscious mind accelerates our learning and development of competence. Conscious mind fine-tunes our belief and value systems. Awareness of one's thoughts gives the brain a better chance to know what its own processes are doing. Such awareness provides for refereeing and editing functions. Both the efficiency and the effectiveness of thinking are enhanced when such awareness can be done as a by-product of what is already going on subconsciously.

When you wake up in the morning and fix your breakfast, you not only know you are fixing breakfast and fixing it for yourself, but you know you know that. You know who you are and the role that you are playing in this breakfast-fixing scenario. In the process, you make conscious decisions about what to fix and how to prepare it. Your conscious mind plans a desired sequence of events: perk the coffee, take pills with orange juice, get out the right dishes and utensils, etc. Your conscious mind can veto decisions. For example, you may choose to skip the eggs, because you have learned that too much cholesterol is bad for you.

Consciousness helps us pay attention to things that are important and gives us more power to learn from or respond appropriately to such things. I would argue that consciousness is a major reason people have such robust capacity to learn. Consider how difficult it would be to learn a language or play a musical instrument or even to touch type if you had to do it subconsciously. Consider language: I could subconsciously learn the conjugation of a verb by much-repeated Pavlovian association. But through conscious thought, I can memorize it in a few seconds. There is, of course, subconscious language, preserved in memory and demonstrable by word-priming experiments. But such language capability is not fully operational until it resides in conscious mind. Subconscious mind might generate the will to learn a specific thing, but it lacks the explicit information on how to do it until after it has been learned.

Consciousness, especially when it recruits language, gives us more effective interpretative and analytical capacity. Many of us think through issues and problems with conscious use of silent-language self-talk. Other people use imagery for creative problem solving—Einstein was a good case in point, as for example his visions of riding on a beam of light or of a train moving past a station. In all such cases, the common denominator is consciousness. Consciousness enhances reason. Animals can make what seem to be logical decisions, but they cannot sustain a long reasoning process that involves multiple steps. I suspect it is because they do not have the level of consciousness that allows them to shuttle onto and off of their working memory "scratch pad" (see below). Despite all these powers of consciousness, the problem is that human consciousness is not always up to the tasks imposed by the complexities of life.

Surely, I am too stupid to be human.
Proverbs 30:2
Consciousness is the hallmark of being human. It gives us the ability to be aware of many things. We recognize issues, problems, opportunities—involving ourselves
and others. But we are too stupid to cope completely, to think as effectively as we ought. Limited though it is, the power of conscious reasoning sets humans apart from all other life forms. We not only think, we have the too-often-unused capacity to think logically. Logical thought is greatly facilitated by good language skills, because language is the medium that carries much of our thinking in explicit ways. Not everybody has good language skills. And even those who do find that language alone will not guarantee good thinking. Deductions, intentions, and decisions can be made subconsciously, but consciousness allows us to make explicit the underlying premises and propositions. Inductive thought likewise begins in the subconscious, but consciousness again allows us to make explicit the number of particulars or specific instances and alternatives from which we can create a generalized conclusion. Of course, language is not the only venue for conscious expression—consider art or music, for example. We consciously conceive and interpret images and sounds. We can also be aware of our emotions and the varied input of our senses even when it is difficult to explain these things in words. Creative thinking may often spring forth out of subconscious processes. Ideas that emerge in a dream are sometimes used as an example, but remember that dreaming is in itself a special kind of consciousness. The dreaming brain makes its thoughts explicit and thus accessible for subsequent conscious thinking operations. More commonly, creative thinking is promoted by conscious processes whereby we structure our mental environment to facilitate creative ideas. We may decide to go on vacation or change the nature of our work, for example. As a result, fresh ideas often emerge. Or we may brainstorm, singly or in a group, where we consciously identify numerous relevant possibilities, and the integration of these thoughts often triggers new ideas. Consciously searching for all reasonable alternatives greatly expands the range of possibilities that can contribute to creative thought. Creativity is promoted when we consciously decide we want new ideas and are open to “thinking outside of the box.” Conversely, creative thinking becomes obstructed when we have conditioned our subconscious minds to conformity. Much of such conformity occurs consciously. Knuckling under to peer pressure is a common example. Many people decide not to be original thinkers because they fear being labeled an “egghead,” or “radical,” or “out of step,” or “not one of us.” Here is one of many examples where consciousness may betray our best interests. Another thing that consciousness does is to allow us to comprehend the world and ourselves more specifically and more accurately. Psychotherapy is based on the premise that consciousness helps us to know ourselves. In more normal people, introspection occurs without the need for much outside assistance. The consequences of not knowing ourselves are enormous. If we do not know and face our fears, for example, we have little chance of conquering them. If we do not recognize bad habits, how will we correct them? If we do not know when our thinking is irrational, how can we learn to think more logically? If we do not know when our emotions are inappropriate, how can we learn to deal appropriately with disturbing
events of life? If we do not know what kind of person we are and how we ought to improve, how can we grow? If we don’t know we are making excuses, how can we stop it and address the real problems?
Sense of Identity

To illustrate self awareness, consider animals looking in a mirror (Swartz 2003). Most species, if they will look in a mirror at all, think that what they see is another animal. Higher animals, like dogs, soon realize what they see is not another real dog (to a dog, another dog is something that smells like a dog). Animals looking in a mirror do not seem to make much mental connection with themselves. Try that with your dog or cat. In the case of primates, you can get them to look in a mirror, and the more advanced species even seem to recognize themselves.

A famous experiment was reported in 1970 by my old rival in another line of research, Gordon Gallup. Chimps were allowed to play with a mirror for 10 days. Then he put a colored mark on their forehead while they were anesthetized. They paid no attention to the mark until they were given a mirror again. Then they would use the mirror to guide their hands as they touched the mark on their foreheads. Gordon interpreted this to indicate that the chimps were recognizing themselves in the mirror. Since that time, such self recognition has been observed in all great-ape species, but not in monkeys. Gordon tried to extend this observation to suggest that apes are like humans in the sense that we both have a "Theory of Mind," that is, knowing how my mind works allows me to have a theory about how your mind works. Developmental psychologists conclude that this consciousness capacity emerges in children at about age 4, about 2 years after they develop mirror self awareness. Many psychologists reject Gallup's notion that apes have Theory of Mind capability, but they do accept that mirror self recognition is a necessary first step in consciousness development.

Although dogs don't recognize themselves in a mirror, they do recognize their names and clearly seem to know that when their name is called, it refers to them. Dogs can even plan, as indicated by the hiding of bones. My dog, Zoe, will even bring a bone she has hidden onto the porch in early evening on her way into the house so she will have it available later when she is put out for the night. I don't want to take time here to debate animal sentience. But that possibility does give credence to a basic premise: namely, that there is a continuum of consciousness ability in higher mammals which is paralleled by evolutionary development of the brain. It follows that the larger brain of humans would support the highest level of consciousness capability. A sense of self is necessary, but not sufficient, for full consciousness. A dog, for example, clearly has a sense of self, but that does not mean it engages in introspection about what it means to be a dog or its own dog self.

Self-recognition experiments have been performed in children. Daniel Povinelli and Giambrone (2001) at the University of Louisiana at Lafayette videotaped young children playing a game during which an experimenter secretly put a large
sticker in their hair. When shown the videotape a few minutes later, children saw themselves with the sticker, but only children older than about three reached up to their own hair to remove the sticker, demonstrating that the self they saw in the video was the same self at the present moment.

A few humans have an unfortunate disease in which they cannot recognize themselves. This disease is known as Capgras syndrome, named after the French psychiatrist who first recognized it. Looking in the mirror, a victim named John might shave around his mustache. The man in the mirror shaves around the mustache. But John does not make the connection. That is not John in the mirror. If he shaves off the mustache, he notices that the guy in the mirror did also, without recognizing that he, John, is the guy in the mirror. Capgras patients have a disconnect. What they see in the mirror is not interpreted as a representation of their own persona, but rather as that of someone else. This may originate in a deep-seated subconscious psychosis in which the patient may have severely negative emotions about themselves. Sometimes, anti-psychotic drugs can make this delusion go away. Sometimes cognitive therapy works, wherein patients are systematically instructed in small steps to learn the causes of their misinterpretation. Note that here again is the indication of the teaching function of consciousness.

Normal people have conscious representations of the real world, including physical and abstract representations of their own persona, inside their own brain. Each of us has an "image" of self. The self recognizes neural representations that are really memories of what we have learned about the world and ourselves. All of this points to the fundamentals of self-identity. Our awareness of ourselves is in a constant state of flux. We transform who we think we are by how we think of ourselves. The enlightened brain nurtures this self image, using consciousness to monitor and adjust the brain's engagement with the world in the most adaptive ways. Further, how we think of ourselves continuously programs the brain, transforming who we really are, which in turn can change the way we think of ourselves and behave.

Consciousness gives us the capacity to know ourselves. Personality growth depends on recognition of personal weakness in need of change. This was the main point of my Blame Game book. We can know when we feel defensive, or angry, or overly critical, or judgmental, or upset—and even when we try to make excuses. We can also recognize the situational causes or triggers of undesirable attitudes and behavior. Further, we can realize that avoiding the situational causes or triggers will not always work, that real change must come from within. Conscious intent and planning will guide us in making those changes from within.

Another way to think about consciousness comes from the recent discovery of "mirror neurons." In 1996, two reports described the existence of neurons in premotor cortex of monkeys that fire not only when the monkey makes a goal-directed act but also when the monkey sees the same act performed by others. A common explanation is that these "mirror neurons" reflect an understanding of the movements. We do not know, however, if this understanding is conscious or subconscious.
This discovery helps to explain how we can facilitate learning just by watching somebody else perform an action. This is perhaps most obvious with development of athletic skills. Some sociologists say that this discovery shows how humans are socially entangled, tying people together in sympathy, empathy, and cooperative action. Some scholars extend the idea to quantum mechanical entanglement, where activity in one brain influences activity in another brain through physical quantum field entanglement. Of course this connection is only metaphorical, because the mechanisms have little in common (see Chap. 8's discussion of quantum consciousness).

One follow-up study of mirror neurons (Caggiano et al. 2009) examined the possibility that they might encode other related acts, such as behaviors that might occur subsequently. For example, the consequences of a given act might differ depending on where the act takes place. Mirror neurons might encode differently depending on whether the act takes place within the individual's personal space, where access is more likely, or whether the act takes place at a distance and the observer is less likely to participate. This would test the "understanding" hypothesis, because understanding of the movements involved should be the same irrespective of separation distance. The researchers recorded mirror neurons when an object was within the monkey's personal space (within arm's reach) and when it was some 28 cm or more away. Premotor neurons were activated both by execution of the required movement and by observation of a human making the same movements. But 53% of the mirror neurons fired selectively, depending on where the object was located relative to the monkey's personal space. These space-sensitive neurons were about equally divided between those that were preferentially responsive when the object was in the monkey's personal space and those that responded when the experimenter manipulated the object some 28 cm or more from the monkey. Thus, it would seem that mirror neurons not only encode the understanding of the movements but that a subset of them encode the spatial context of the act. The investigators concluded that these space-sensitive neurons are important for evaluating subsequent interacting behaviors. That is, the only way a monkey can interact when the object is outside its personal space is to plan movements to get closer to the object. If the object is in the personal space, then competition for the object might be anticipated. These neurons might be trying to answer the question of "How might I interact with the experimenter?"

I have an alternative explanation that I think has profound implications. Space-sensitive neurons could be coding for the object's relevance to the monkey's sense of self. Those mirror neurons that fire selectively when the object is within personal space may be part of a larger circuit that contains a representation of a sense of self. Close objects are viewed as a component of the sense of self (e.g. "this object is mine"). Mirror neurons that are selective for objects outside of personal space suggest that the object might belong to others ("not mine, at least not yet"). A vivid way to illustrate the difference is the contrast between the possessiveness a tethered dog shows for a bone placed at its feet and one placed beyond its reach.
What about transitions between mental states, as when a sleeping brain wakes up? When consciousness emerges, it does so in the full flush of self-identity. When we wake up in the morning, it is then that we again realize we exist. A prerequisite for thinking, it seems to me, is for a brain to have the capacity for recognizing its embodied self, as distinct from its environment. By no means does this kind of "thinking" have to be a conscious endeavor. Take, for example, a simple knee-jerk reflex. This simple, two-neuron system "knows," in a most primitive sense, what is "out there" and what is "in here." This two-neuron chain registers sensory information from "out there" and delivers a response "in here" to the appropriate muscle fibers. What then of the "higher" levels of thought which become possible in complex circuits involving millions of neurons? Is such thought nothing more than electrochemistry and molecular biology? Now we have entered the domain of the mind-body problem.

What I want to focus on is that the mind was genetically programmed and epigenetically sculpted to provide the circuitry necessary to distinguish self from non-self and to generate "thought" about the meaning of, and appropriate responses to, what is "out there." The process of carving out a sense of self by the brain begins in the embryo and culminates with post-natal maturation in the conscious awareness of thoughts. The sense of self emerges as a system function, with "The System" being equal to brain + mind. The function of The System is primarily to "think" about how self can appropriately engage the environment with its own internal thoughts about that environment. You might want to suggest that emotions or abstract thought don't fit this model. But even here, I would say that thoughts about how we feel or nonmaterial thoughts are really part of the way that The System tries to respond to and control the environment.
Maps in the Brain

Consciousness ultimately begins with sensory perception, and here is perhaps the place to begin in trying to understand consciousness. There is still no universally accepted explanation for conscious perception, and indeed the explanation may have several elements. One appealing working hypothesis from Gerald Edelman (1989, 1992) is that consciousness involves a perceptual categorization of sensory inputs via large ensembles of topographically mapped circuitry (Fig. 7.1). Such "locally mapped" responses may be extended by interactivity with other mapped circuits, leading to a more "global" mapping that allows concept categorization. Expression of globally mapped concepts yields language, behavior, and consciousness. In this schema, the basic operational units of the nervous system are not single neurons but rather groups of strongly interconnected neurons. Membership in any given functional grouping is determined by synaptic connection strengths, which are subject to change. As distinct from the large neuronal groups that make up a map, these smaller groups are the units of "selection." That is, specific patterns of sensory input "select" a subset of neurons and their interconnections to represent
the categorization of that input. Maps are subject to a degree of reorganization in response to a dynamic competitive process of neuronal group selection. These neuronal groups exhibit redundancy in that different circuits can have similar functions or that a given neuronal group can participate in multiple functions, depending on the current state of intrinsic and extrinsic connections. This kind of group action is unfortunately referred to as "degeneracy," when in fact it is an advanced operational mode that enables higher levels of cognition.

Fig. 7.1 Schematic representation of selected neurons in two interacting topographical maps. Stimuli that feed into map 1 activate specific neurons (open circles) that project outputs (solid lines) to specific neurons in map 2. A similar process operates with stimuli that feed into map 2 (black circles). In both maps, the neurons that receive input from the other map deliver feedback ("re-entry") into the original map (dotted lines). This arrangement could create oscillatory processes. Also, if stimuli are repeated, the synapses involved in that particular pathway can be strengthened (heavy lines), creating a basis for memory of the stimuli (Redrawn from Edelman 1992)

Mapping phenomena seem to have a straightforward relationship to sensory and motor processing. But for certain "higher" functions such as emotions and memory, it is not clear what, if any, role is played by mapping. Emotions are clearly controlled by a complex, multiply-interconnected set of structures in the so-called limbic system. Point-to-point mapping in many parts of this system is not evident. Another example is memory. Numerous experiments suggest that memory "is not a thing in a place, but rather a process in a population." The explanation must reside in the fact that so much of the brain circuitry is highly interconnected. Mapping among circuits must certainly be involved in conscious operations, but it is not clear to me that mapping is necessary or sufficient for consciousness. The idea of re-entrant feedback could be crucial, and I see no reason why the basic idea of multiple, extensive reciprocal connections cannot occur in unmapped regions of the brain. In fact, we know they do, as I illustrated earlier with the discussion of the habenula pathways.
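As a purely illustrative toy model, and not Edelman's own formulation, the reentrant-mapping idea can be sketched in Python as two small "maps" that excite each other through forward and feedback connections, with Hebbian strengthening of the synapses that carry repeated, correlated activity. All sizes, learning rates, and thresholds below are arbitrary assumptions.

import numpy as np

rng = np.random.default_rng(0)
N = 20                               # neurons per map (arbitrary)
w12 = rng.uniform(0, 0.1, (N, N))    # map 1 -> map 2 connection strengths
w21 = rng.uniform(0, 0.1, (N, N))    # map 2 -> map 1 ("re-entrant") strengths

def present(stimulus1, stimulus2, steps=5, lr=0.05):
    # Let the two maps interact for a few re-entrant cycles, then
    # strengthen synapses between co-active neurons (Hebbian rule).
    global w12, w21
    a1, a2 = stimulus1.astype(float), stimulus2.astype(float)
    for _ in range(steps):
        a2 = np.tanh(stimulus2 + w12 @ a1)   # map 2 driven by its input plus map 1
        a1 = np.tanh(stimulus1 + w21 @ a2)   # re-entry: map 2 feeds back to map 1
    w12 += lr * np.outer(a2, a1)             # strengthen co-active forward synapses
    w21 += lr * np.outer(a1, a2)             # strengthen co-active feedback synapses
    return a1, a2

# Repeating the same paired stimuli "selects" and strengthens one subset of synapses
s1 = (rng.random(N) < 0.2).astype(float)
s2 = (rng.random(N) < 0.2).astype(float)
for _ in range(10):
    present(s1, s2)

After repeated presentations, the connections linking the co-activated subsets dominate, which is the sense in which a stimulus "selects" its neuronal groups in this schema.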
The Binding Problem

Information in the brain flows in multiple, parallel pathways. Moreover, the information is often fragmented, with some information carried in spike trains of certain neurons while other information is carried by other neurons. This is best illustrated by the classical discoveries that Hubel and Wiesel made in their studies of vision that were mentioned in the first chapter. They found that visual stimuli are broken up into small pieces of lines and edges, in the sense that a given visual cortex neuron only responds to a small piece of the image with a specific orientation or other property. Other neurons simultaneously register only their particular piece of the image. This "deconstruction" of an image is registered by millions of neurons scattered throughout the visual cortex. Hubel and Wiesel's various discoveries remain a wellspring of information on the processes of the visual cortex. Perhaps more importantly, however, their discoveries have provoked new rounds of research as scientists eagerly seek to learn more of the intricacies of sensation and perception.

Their findings require us to ask how the brain puts all these lines and edges back together again to reconstruct the original stimulus. Whether done in conscious or subconscious mind, the brain has a binding problem. How does it bind all these informational pieces back together again? Reciprocal interaction among regions of mapped information should certainly facilitate binding. The maps themselves are a binding mechanism. What functional processes can facilitate binding in mapped ensembles? One inevitable characteristic of mapped ensembles with re-entrant connections is the capacity for oscillation. And oscillatory behavior in interconnected oscillation-generating ensembles can shift phase relations. The sharing of information in each oscillating ensemble will certainly depend on whether the oscillations are in or out of phase.

I discussed earlier the work of Wolf Singer (2007) and his colleagues (Singer and Engel 2001) in Germany, who implicated synchronous oscillation of spatially segregated neural networks as the basis for understanding this binding problem. While conducting experiments on the receptive fields of kittens, in which the kittens were presented with a visual stimulus moved across their visual field, Singer began to notice that spatially segregated neural networks activated in a particular trial would enter a state of synchronized oscillation in the gamma range, around 40 Hz. This was the key evidence that led Singer to his hypothesis.

How precisely could simple synchrony explain binding of visual-image fragments? Synchrony increases effective activity by routing and modulating the activity of target neural populations. This is accomplished because of the inherent depolarization/hyperpolarization cycle involved with oscillation. For instance, consider two spatially distributed neural populations. When an oscillating population targets a neural population that is oscillating at the same frequency and in phase with it, their oscillatory cycles will match, and since the target neurons will be receiving input during their depolarization phase, they will be easily excited. Thus, populations in the same oscillatory phase have their activity reinforced and increased, whereas target populations that are out of phase will be less likely to be excited.
The uniting of spatially distributed neural networks may be considered the first step toward binding. The second step is centered on the selection of such networks for further processing. Singer postulates that this selection is made as a result of the increased activity that is observed in synchronous networks, which apparently serves as some sort of signal for processing. How binding is accomplished still resides in the realm of informed speculation. Synchronization may not be the only mechanism.
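To make the phase argument concrete, here is a minimal numerical sketch in Python (my own illustration, not drawn from Singer’s studies) in which a target population is treated as excitable only during the depolarized half of its roughly 40 Hz cycle. Under that simplifying multiplicative gating assumption, input from an in-phase source produces about three times the mean effective drive of input from an anti-phase source; all signal shapes and numbers are illustrative only.

import numpy as np

rate_hz = 40.0                      # gamma-band oscillation frequency (about 40 Hz)
t = np.arange(0.0, 1.0, 0.001)      # one second of activity at 1 ms resolution

# Source and target population activity, each swinging between 0 and 1.
source = 0.5 * (1 + np.sin(2 * np.pi * rate_hz * t))
target_in_phase = 0.5 * (1 + np.sin(2 * np.pi * rate_hz * t))            # same phase as source
target_anti_phase = 0.5 * (1 + np.sin(2 * np.pi * rate_hz * t + np.pi))  # opposite phase

# Input "counts" only when the target is in its depolarized (excitable) half-cycle,
# approximated here by multiplying source activity by target excitability.
drive_in = np.mean(source * target_in_phase)
drive_anti = np.mean(source * target_anti_phase)
print(f"mean effective drive, in phase:   {drive_in:.3f}")    # about 0.375
print(f"mean effective drive, anti-phase: {drive_anti:.3f}")  # about 0.125

The threefold difference in mean drive is, of course, only a cartoon of the routing effect described above, but it shows why matched phases make one population an effective partner of another while mismatched phases leave it largely unheard.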
How I Think We Think When Conscious

Thinking requires the mind to operate in small iterative steps: recognize what is being thought about, make an adjustment, recognize the changed thought, and so on. This would mean that conscious thinking operates as a succession of small time frames, played out like a movie in which the contents of a given frame can be changed as it goes along. What is in our consciousness at any given moment can be something remembered or something currently being experienced. In either case, we hold these representations as short snapshots, in a narrow window of time lasting only a few seconds, flitting from frame to frame in the stream of consciousness. Analytical thinking or problem solving can occur when we string together these snapshots in a coherent and systematic way. Such operations may occur subconsciously, but they should be more robust in the thinking that occurs in consciousness.

I submit that thinking is parceled out in chopped-up time that seems seamless because the transitions between thought segments are so rapid, like still frames in a movie. The speed at which we think consciously can be thought of as a frame rate. Frame rate can be determined by the time-chopping that results from oscillation. Just as a movie is made of a succession of still frames, rapid playing of the frames creates the impression of a steady continuum. Just as “working memory” has a limited capacity, our “working consciousness” frames are likewise built upon limited spans of time.

One implication of this operational mode for conscious thought is that for consciousness to do anything constructive, such as reaching a complex decision or solving a problem, conscious processes themselves must guide the process to keep it on track. In making a decision, for example, I must consciously orchestrate the process whereby each snapshot of alternative choices and their consequences is viewed, organized, and evaluated, and a final choice is made. Just as formation of memory can be interfered with by disruptions, our stream of consciousness can likewise be disrupted by the interjection of incompatible frames of consciousness. Limited working-consciousness span leads to what we call mind wandering or loss of focus.
Working Memory Biology

Conscious thinking spans past, present, and future. Could you think consciously without working memory? I doubt it. Working memory (Shrager et al. 2008) is the
capacity to hold a limited amount of information (such as a telephone number) in conscious awareness long enough to make use of that information (as in actually dialing the number). A helpful metaphor might be the cellophane writing board you may have played with as a child: after writing on the cellophane, lifting the transparency erases what was written and provides a blank sheet for writing something new. Working memory is a form of short-term memory and may or may not get converted (“consolidated”) into longer-term memory.

At the simplest level, working memory can be illustrated by what happens when you look up a phone number in a telephone directory. Those numbers are represented by certain CIPs of impulses, probably in multiple circuits, and the memory is accessible as long as the relevant circuits maintain that real-time representation. To do this, the circuits must reverberate. Anything that perturbs the reverberation can disrupt the representation and thus the working memory of the numbers. That is why working memory is so vulnerable to distractions. We know that interposing distracting or new stimuli or thoughts can interfere with and disrupt an on-going consolidation process, in addition to changing what we are currently thinking about. Some people call this the interference theory of forgetting. The usual interpretation is that a given working memory is represented by real-time distributed CIPs, and if these reverberate long enough, the involved circuits will be facilitated long-term and the memory engram is thus “laid down” in a way that enables later retrieval. Interfering stimuli and thoughts would obviously create a different CIP representation, and if that happened before consolidation, the original experience never gets facilitated and formed into long-term memory.

This consolidation process for declarative memories relies heavily on the function of the hippocampus. More recent evidence indicates that consolidation also involves the medial prefrontal cortex. Most interestingly, neuronal impulse activity becomes selective during consolidation in this part of the cortex. Such CIPs are sustained during the interval between two paired stimuli but reduced during the interval between two unpaired stimuli. These new CIPs develop over several weeks after learning, even without continued training. In short, the memory is consolidated in terms of CIPs (Takehara-Nishiuchi and McNaughton 2008). All such CIPs and interactions may reflect a huge learning component. Firing patterns elicit other firings that have been made possible by past experiences. The CIPs of all new experiences, whether manifest in consciousness or not, must play out in the presence of activity patterns that are elicited from stored representations. In the process, the brain can not only modify representations of what is occurring in the present but can also change the stored past representations.

We need to develop a theory of conscious thought that includes working memory, because conscious thought is not possible without it. When we are thinking consciously, our thoughts are held in the “real time” of working memory. This fact presents two theoretical issues: (1) what is the biological basis for working memory? and (2) how does the brain make itself aware of what is going on in the circuits that support working memory? I believe all kinds of thought progress by shuttling information on to and off of the working memory “scratch pad.”
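As a toy illustration of the interference account of forgetting described above, the following Python sketch (my own simplification, not taken from any cited study) treats a working-memory item as consolidating only if its reverberation survives, uninterrupted, for some arbitrary consolidation window; an interfering item that arrives sooner replaces the current representation, and the original item never reaches long-term memory.

CONSOLIDATION_WINDOW = 10.0   # arbitrary units of uninterrupted reverberation required

def consolidated_items(events):
    """events: list of (time, item) pairs; each event starts a new reverberating representation."""
    long_term = []
    for i, (start, item) in enumerate(events):
        # The current representation survives only until the next event interrupts it.
        end = events[i + 1][0] if i + 1 < len(events) else float("inf")
        if end - start >= CONSOLIDATION_WINDOW:
            long_term.append(item)   # reverberated long enough to be consolidated
    return long_term

# A looked-up phone number is interrupted after 3 time units by a distraction,
# so only the distraction (which then reverberates undisturbed) is consolidated.
print(consolidated_items([(0.0, "phone number"), (3.0, "doorbell rings")]))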
Fig. 7.2 A model for how working memory participates in the conscious thinking process. Elements of thinking are numbered in sequence from 1 to 5. Successive elements of thought, which may come from current stimuli, memory stores, or sources generated internally from elsewhere in the brain, are successively routed via the working memory “scratch pad” into the “thought engine” circuits that accomplish the analysis and decisions involved in thought. Think of the pad as a frame in a movie or video in which thought elements stream through the pad. Similar processes could also operate subconsciously
Working memory during thinking can be viewed as a place holder for a succession of the elements of thinking. If you are consciously thinking about solving a math problem, for example, each step is successively brought into working memory and used as input for a “thought engine” that uses the succession of inputs from working memory to solve the problem (Fig. 7.2).

The low information-carrying capacity of working memory is well known. This has consequences for both memory consolidation and thinking ability (IQ correlates with working memory capacity) (Jaeggi et al. 2008). Though formal training methods can increase working memory capacity, the conventional wisdom is that most people can hold only seven items in working memory (that is why telephone numbers are seven digits long). However, the digits in a telephone number are not strictly random and independent, and a more modern view is that most people can hold only about four random and independent items in working memory (Cowan 2005). The thought elements, the working memory, and the “thought engine” all operate via CIPs that are neural representations of the respective information. What I have diagrammed above is a process for conscious thought.
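A minimal computational sketch of this scratch-pad arrangement (a hypothetical Python illustration, not the author’s own model) treats working memory as a small buffer, limited here to roughly four items following Cowan’s estimate cited above, that feeds whatever it currently holds into a “thought engine” function; older elements simply fall off the pad as new ones arrive.

from collections import deque

SCRATCH_PAD_CAPACITY = 4   # roughly four independent items (Cowan 2005)

def think(elements, thought_engine):
    """Route successive thought elements through the scratch pad into the engine."""
    pad = deque(maxlen=SCRATCH_PAD_CAPACITY)       # the oldest item drops off when full
    results = []
    for element in elements:                       # elements arrive from stimuli or memory
        pad.append(element)                        # place the next element on the pad
        results.append(thought_engine(list(pad)))  # the engine sees only what is on the pad
    return results

# Example "thought engine": a running sum of whatever digits are currently on the pad.
print(think([3, 1, 4, 1, 5, 9], thought_engine=sum))   # [3, 4, 8, 9, 11, 19]

Once more than four elements have streamed through, the earliest ones no longer influence the result, which is one way to picture why a long chain of reasoning must be consciously managed and its earlier steps deliberately re-fed onto the pad.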
Is there a subconscious “scratch pad”? It does seem reasonable to suspect that the same process of feeding small chunks of information sequentially into a “thought engine” would also be an effective way for “thinking” to occur subconsciously. One possibility is that the same processes operate subconsciously except that what is on the scratch pad and running through the thought engine is not accessible to consciousness and cannot as readily process a stream of frames through past, present, and future. Another possibility is that there is no subconscious scratch pad and the flow of thought elements streams seamlessly from the information source to the thought engine. Such operation might make subconscious thought more efficient, maybe faster (but less subject to “editing”). The scratch pad, however, is crucial for conscious thought. The scratch pad is a virtual way-station where the information on it is frozen momentarily, thus facilitating reflection and decisions about how the information might be altered, vetoed, or used in alternative ways. Most importantly, consciousness allows the mind to see what is on the scratch pad and to integrate it with what was previously on the pad and with what is about to be put on the pad. Such review and planning are hallmarks of conscious thinking.

We do know that thinking and use of working memory occur in our sleep, most certainly during dreaming. Also, sleep helps to consolidate the short-term memory of a day’s events into long-lasting form. Abundant research in the last few years has shown that memories of a day’s events are being processed when you sleep, that is, when only your subconscious mind is operative. While the conscious mind sleeps (gets a rest?), the subconscious mind stays on the job. During sleep, our subconscious mind is allowed to work without all the interferences that our conscious mind picks up during the day.

In one study (Yoo et al. 2007), Matthew Walker and colleagues at Harvard paid students to stay awake all of one night and then try to learn 30 words the next day. The students were subsequently allowed to catch up on the lost sleep and were then tested on the words. Compared to a control group that was not sleep deprived prior to the learning session, sleep-deprived students remembered 40% fewer words. Sleep-deprived students also tended to remember positively-connoted words far less accurately than negatively-connoted words. This study implies the presence of “proactive interference,” wherein sleep loss before learning a task affects memory consolidation. Additional studies, such as one involving image recall, support the idea of “proactive interference” and have even shown that brain areas involved in memory, such as the hippocampus, are more active in subjects who get a normal night’s sleep.

Another recent sleep study has linked sleep with superior consolidation of motor skills (Robertson et al. 2005). Using the premise that memory of motor learning develops “off-line,” without practice, after a learning session, this study taught two groups of subjects a simple motor skill; one group was taught in the morning, and the other during the evening. A learning disruption in the form of magnetic stimulation to the motor cortex was delivered to each group of subjects. The results indicated that the memory interference could not be compensated for by off-line learning during the daytime but was compensated for in the night-time group. This observation leads to the assumption that during the daytime, numerous memory-disruptive influences interfere with off-line consolidation of material that we learn in the morning, but with night-time learning, there are far fewer disruptive sensory and cognitive influences, because we are asleep.

The advantages offered by having fewer disruptive influences during sleep have also been confirmed in a study conducted in the brain imaging lab of Thomas Pollmacher at the Max Planck Institute in Munich, Germany (Czisch et al. 2002),
where sound stimuli were presented to sleep-deprived patients during non-dreaming sleep. The results indicated a suppression of activity in the auditory pathways that the researchers were attempting to stimulate. However, the results also indicated suppression of activity in visual cortex, suggesting that sleep protects the brain from the arousing effects of external stimulation not only in the primary targeted sensory cortex but also in other interconnected brain regions, such as visual cortex. It is in blocking out such interference effects that sleep helps facilitate consolidation. This study also prompted the researchers to conclude that consolidation of memory occurs over many hours (at least in sleep-deprived subjects), rather than over the course of just a few hours.
Sleep vs. Consciousness

Dreaming occurs during sleep, and it seems self-evident that dreaming is a form of consciousness. Later, I will present my own theory for why we dream. Here, I want to consider what regular (non-dreaming) sleep might tell us about consciousness.

First, by definition, consciousness disappears when we go to sleep. Whatever thinking we do during sleep, such as processing whatever sensory input the sleeping brain gets (sounds, touch sensations from bedding, etc.) and the consolidating of memories, must be occurring via the subconscious mind. This is another way of saying that the subconscious mind thinks, but not in a way that makes us aware of the thoughts. Second, it is generally true that the sleeping brain is not processing much new information, because it isn’t getting much. There is no conscious mind accessible to the sleeping brain to help inform it. In turn, this suggests a function for conscious mind. Namely, it is a primary source of instruction and programming for subconscious mind.

I also think sleep shows us something else: that subconscious mind operates under two conditions, one in sleep, where subconscious mind operates in stand-alone mode, and the other when consciousness is present as a concurrent state where the “two minds” can interact. Some (Pockett, personal communication) would say that the interaction between subconscious and conscious is largely one-way. That is, subconscious mind informs consciousness of some of what subconscious mind is thinking (see the following section on Free Will Debates). But I propose a two-way alternative in which conscious mind provides a thinking resource not available in subconscious mind and thus is a major source of programming and “enlightenment” for the subconscious.

Our thinking about sleep and wakefulness is being pushed in new directions by studies in which human epileptics with electrodes implanted in the brain have their brain electrical activity compared during sleep versus wakefulness. Of particular value are the studies that simultaneously monitored field potentials and impulse activity (Dalal et al. 2010). These studies reveal the prominence of large slow waves during non-dreaming sleep and confirm Steriade’s original observations that there are also small gamma waves riding on top of the large slow waves. What is new are the observations of
several studies that in SWS both impulse activity and gamma waves are phase-locked to the slow potentials. It is not clear whether the phase locking of impulses to slow waves occurs because the impulses are driving the slow waves or because slow waves impose propagation constraints on impulse traffic.

What do the observations tell us about sleep and wakefulness? To me, it suggests that gamma-generating processes and slow-wave generating processes may be mutually antagonistic. Slow-wave processes of sleep dominate electrically because the field potentials are so large. Another way of saying this may be that the brain constantly tries to achieve wakefulness, as manifest by gamma activity, but may be blocked from doing so if SWS processes are initiated. Another point: the brain is thinking all the time, asleep or awake. We know it thinks during non-dream sleep because certain memories are consolidated then. It may well be the co-existence of gamma processes that enables the consolidation, in spite of the presence of slow waves. In fact, slow-wave processes may serve a useful function if they prevent the distractions and interferences with memory consolidation that are inevitable during wakefulness. The brain insists on sleeping, and this suggests that slow-wave processes satisfy whatever it is the sleeping brain needs.

The traditional explanation is that we sleep to rest the brain. But nobody has documented what kind of rest the brain is getting in sleep, nor how a predominance of slow waves could provide it. Before the discovery of concurrent gamma activity during SWS, it was assumed that neurons were less active in sleep. Maybe it is only a few select neurons that need periodic rest. Neurons that fire less, for whatever reason, make fewer metabolic demands and are given more time to regenerate their energy reserves, to actively transport atoms against gradients, and to rejuvenate synaptic metabolic processes. Maybe it is the glial cells that need metabolic “rest.”

What sort of “rest” or benefit comes from phase locking of gamma and slow-wave generating processes? Nobody knows and, of course, nobody asked until just recently, when the phenomenon was discovered. We can speculate that phase-locking is a temporal pacing mechanism for the underlying neuronal processes. Maybe the phase-locking itself is a rest process, for it provides oscillating periods of lowered activity, even when the total amount of activity may be unchanged.
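A minimal Python sketch of the kind of phase-locking described above (purely illustrative numbers, not data from the cited studies): gamma activity is allowed only on the depolarizing half of a roughly 1 Hz slow wave, so gamma bursts ride on top of the slow oscillation and gamma power concentrates in one phase.

import numpy as np

t = np.arange(0.0, 4.0, 0.001)                 # 4 s of simulated slow-wave sleep, 1 ms steps
slow = np.sin(2 * np.pi * 1.0 * t)             # ~1 Hz slow-wave component
gamma_envelope = np.clip(slow, 0.0, None)      # gamma permitted only on the positive phase
gamma = 0.3 * gamma_envelope * np.sin(2 * np.pi * 40.0 * t)   # phase-locked ~40 Hz bursts
field_potential = slow + gamma                 # roughly what a gross electrode would record

# Gamma power is concentrated in the depolarizing half of the slow wave.
power_pos = np.mean(gamma[slow > 0] ** 2)
power_neg = np.mean(gamma[slow < 0] ** 2)
print(f"gamma power, depolarizing phase: {power_pos:.4f}; opposite phase: {power_neg:.4f}")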
A Humpty-Dumpty Theory for Why We Dream

Dreaming is a nightly experience for everyone. Those few who claim they never dream are wrong. They just don’t remember their dreams. Brain function underlying dreaming was first discovered in animals. So that is where we should start this consideration.

Consider the following scenario: You’ve finished dinner, had dessert, and fed the dog. As you flop into the soft recliner to relax and watch a little TV, you notice that your dog, having no entertainment outlet, takes her customary nap. Before too many commercials pass, you hear claws scratching on the floor and muffled barking bordering on whimpering. The dog is also paddling her feet, as if trying to run. If you look closely, you will see darting of the eyes, called rapid
eye movements (REM). You surmise she is dreaming, and if you had electrodes on her head to record brain waves you would know for sure, because the brain would be dominated by beta and gamma activity. You ask yourself, “I wonder what she is dreaming about? I bet she is chasing that pesky squirrel that drives her crazy every day.” By day, the squirrel always escapes. By night, your dog gets to try again. “Why do dogs dream?” you wonder. “Well, it is entertainment,” you think, adding, “that dog is so lazy, I bet she sleeps because it is an easy way to entertain herself. She has learned that going to sleep always leads to some unexpected adventure.”

I don’t know if such observations caused the discovery of the bodily signs of dreaming in humans, but I bet so. University of Chicago researcher Nathaniel Kleitman and his students Eugene Aserinsky and William Dement monitored people as they slept and reported in 1953 that sleeping humans exhibit periods of REM, muscle twitches, and brain waves that looked identical to those seen during alert wakefulness. Later studies confirmed that if you awaken sleeping humans during REM, they invariably will report that they had just been dreaming. After the discovery of these signs in humans, William Dement observed that these signs occurred periodically during the sleep of cats. One can see the REM and muscle twitches in sleeping cats and dogs, but it wasn’t until Dement recorded brain waves that we knew for certain that animals, at least higher animals, must also dream. Of course, you can’t prove that dogs and cats dream, but it is reasonable conjecture.

But why do certain animals and all humans dream? And why so much, about 2 h of it every night for humans, broken up into multiple segments interrupted by regular sleep? There must be a more fundamental explanation for us and all the other animals that have the ability to dream (only mammals show signs of fully developed dreaming). Why do humans dream? Four common answers come to mind: (1) to help normalize stress and emotions, (2) to consolidate memories, (3) to get ready for the next day of consciousness, and (4) the leading theory that random neural activity causes distorted thinking, which is manifest as fabricated stories.

Historically, all dream theories have arisen from the perspective of the theorist. Ancient shaman priests and religious prophets thought of dreams as God’s way of communicating with us. Of course, these folks didn’t know that many animals also dream. If they had, they might have concluded that God talks to animals too. Any good theory of dreaming must accommodate two basic facts: (1) signs of dreaming occur only in advanced species (more recently evolved reptiles and mammals, with the most robust dreaming occurring in humans), and (2) in every mammalian species the incidence of dreaming is far greater in babies than in adults, and dreaming also decreases with age.

Among the early ideas that have been mentioned is the possibility that dreaming acts as an escape valve for letting off the “psychic steam” that accumulates from emotional stresses during the day. Psychiatrists think of dreaming as the brain’s release of subconscious thinking into dream consciousness. Thus dream analysis can be a key part of psychotherapy.
Dream content does have symbolic meaning, as Freud showed, but that is a consequence of dreaming, not its cause. Also, modern studies of young people show that about seven out of eight dreams are unpleasant, with about 40% of them being downright frightening. People become anxious, irritable, and sometimes borderline psychotic if they are repeatedly deprived of dreaming. When finally allowed to dream without interference, such people spend an unusual amount of time in the dream stage before eventually returning to normal. But this just shows that the brain benefits from dreaming, not that the need causes dreaming.

Another theory, currently in great vogue, is that dreaming is needed to help consolidate memories of the day’s experiences. Experimental disruption of dreams does interfere with long-term memory formation. Again, however, memory consolidation is a consequence of dreaming, not a cause. Another problem is that such off-line processing and memory consolidation also occur during Stage IV sleep. I will try to explain the discrepancy shortly. I suggest the consolidation theory has it backwards. We don’t dream to consolidate the day’s memories. Nor do I think dreaming is a by-product of memory processing. Dreaming releases the processing of recent memories. Consolidation is a consequence of the brain being activated and, in the absence of external stimulation, resuming its processing of the day’s experiences, because that is the information most readily at hand.

I first thought over two decades ago that dreams were nature’s way of getting the brain ready for the next day’s activity. Dreaming was thus seen as a “dress rehearsal” for the next day, a notion that is an integral part of my idea that the brainstem enables an ARAS-mediated “readiness response” to stimuli to assure that appropriate behavior and cognition occur (Klemm 1990). Early in my research career as a physiologist (I discovered the dream stage of sleep in a primitive mammal, the armadillo, and co-discovered it in ruminants that were once regarded as capable only of dozing and cud chewing), I concluded that dreaming was the brain’s way of getting itself ready for shifting from sleep to the wakefulness needed for the forthcoming day’s activity. This “dress rehearsal” idea still appeals to me. But it does not completely specify why the dress rehearsal is generated.

In my more recent years I have developed an interest in consciousness and cognition. That orientation leads me to a new way of looking at dreaming, one that conveniently is compatible with the initial view that dreams help get us ready for a day of conscious activity. It is also compatible with other theories of dreaming. And best of all, it fits with my idea of how the brain creates consciousness (see Chap. 8). Dreaming could help the brain “remember” and rehearse how to reactivate readiness when the time comes in the morning. A related explanation for young, immature brains is that dreaming provides a source of endogenous stimulation that is useful in promoting brain maturation and the capability for consciousness. Back then, I looked at the issue from the perspective of an animal physiologist. A similar explanation for dreaming was proposed by Bob Vertes (1986). Each of us is a physiologically oriented neuroscientist, and in fact we have collaborated in creating the book referenced earlier on brainstem function.
Hobson and McCarley (1977) advanced the “activation synthesis theory,” which posits that dreaming occurs because cortical neurons fire randomly, prompting an attempt to construct a story line that makes sense of the random impulse firing patterns of sleep. However, I am not aware of any statistical tests that confirm random firing during dreaming, and, moreover, many neurons have distinct firing patterns during dreaming. The story-line notion, however, is relevant. Human wakefulness consists of episodic experiences, which to the brain probably seem like a series of adventures and stories. The dreaming brain probably does try to reconstruct the processes it has learned during wakefulness that characterize what conscious life is all about. The result is abortive dream stories that simulate real conscious life. The stories are often incomplete and irrational because they are abortive attempts at consciousness and lack the corrective “reality check” that occurs with wakeful experiences. Notably, dreams engage our sense of self, either as an observer or as a participant. Dreams bring the sense of self into being after its banishment in deep sleep. The key point is that the fabrication of a story line is a consequence, not a cause, of dreaming.

All our prior theorizing misses the obvious point about why people dream. The answer as to why we dream is simple. We dream because we are in REM sleep, which promotes the non-linear dynamical processes that support the conscious sense of self. So the important question should be this: Why do we have REM sleep, and why does this state occur only in phylogenetically advanced animals?

In the early morning of January 22, 2010, I found the key to an explanation. And I did it in my sleep. More specifically, a new idea came to me as I awoke from one of my own dreams. I asked myself, “Why is it that in the morning, I awaken right at the end of a dream?” This caused me to think about what happens when a human goes to sleep. Early in the night, the brain tumbles into what is known as Stage IV sleep, a deep abyss of sleep that is as far removed from consciousness as a normal, un-drugged brain can get. If you have ever been to a sleep lab to check for sleep apnea, as I have, the technician will tell you that in the first hour after going to sleep, your brain shuts down so profoundly that you may even stop breathing. You would actually die if there weren’t reflex mechanisms in the brainstem’s “non-conscious brain” that force you to breathe when blood oxygen gets dangerously low. Stage IV occupies almost half of the first hour of sleep and has a high incidence during the next 2 h. Most of the first 3 h not spent in Stage IV are spent in Stages II and III sleep, which are also stages of mental oblivion (Webb and Agnew 1971).

Whatever brain circuitry and patterns of nerve impulses were needed to construct and sustain consciousness during the daytime are obliterated when one falls asleep into that Stage IV pit. To wake up without any outside stimulus, the brain has to figure out how to get back what it lost. How does the brain get itself out of this abyss? A sleeping brain has no easy way to activate the brainstem’s ARAS to wake it up. External stimulation is normally required to do that. Typically, we try to avoid external stimulation so we can go to sleep.
If there is no stimulus to cause awakening, such as an alarm clock (or the need to urinate), the problem for the brain is how to recover from Stage IV’s demolition of all the processes it used to create consciousness. It is as if consciousness were the fairy-tale Humpty Dumpty egg sitting on the highest wall of brain function, and Stage IV shoved it off and smashed it. Who can put it together again? “All the king’s horses and all the king’s men” can’t do it. To put it another way, the sleeping brain, like a computer, has to be booted up, and I think that REM is a crucial part of the process. Unlike booting up MS Windows, slow as it is, re-booting the brain takes all night. It is as if REM sleep and non-dream sleep are in competition, and as a night’s sleep progresses, REM sleep progressively wins out.

This notion seems to be confirmed by common experience. Ever toss and turn at night because your conscious mind is too busy thinking of ideas, problems, or strong emotions? Well, dreaming produces a similar effect during unperturbed sleep episodes. Dream content is consciously perceived, and this generates robust thinking processes that compete with and eventually overwhelm the fog of unconsciousness. This is not to say that the brain says to itself, “I am in deep sleep, I must find a way to wake myself up.” Such a teleological explanation is not necessary. It may be that the default mode of brain operation is wakefulness. After all, it is wakefulness that assures a human will eat, drink, and do the necessary things to stay alive. In this view, sleep may be something the brain is required to experience periodically to sustain the capacity for normal wakefulness. The common assumption is that the brain needs rest, but what that rest entails is not at all understood. We don’t have to understand what “brain rest” is to speculate that REM may just be the brain’s way of breaking the constraints of sleep and “re-booting” to get back to its default mode of operation.

Another reason for linking dreams with the consciousness of wakefulness is that both states are egocentric. Dreams almost always involve a person’s sense of self, either as a participant or a witness in a simulated world, just as the waking self is engaged with the real world. Dreams thus are a brain’s way to nurture its ego, to continually “remind itself” just who the brain has created. Likewise, one could think of dreaming as the brain’s way to practice how to sustain its capacity for generating the sense of self. Also, during dreaming the brain is adding to its storehouse of information about its self. That sense is so important to individual survival that the brain works day and night to nurture it.

It is no trivial matter for a comatose brain to recover from a Stage IV episode that demolishes the required processes for consciousness like scrambled eggs. Putting “Humpty Dumpty back together again” must be quite a challenge. The progressive increase in REM as the night wears on may serve to help the brain remember and rehearse how to implement the processes needed to lift the brain out of the pit and get it ready for the coming day’s conscious activities. The “pit” of Stage IV is so deep it shuts down even basic breathing reflexes in people with sleep apnea. By the time REM begins, the cortical firing patterns have been disrupted and made random by the stages of deep sleep. When REM is triggered, the brain is trying to re-construct the normal CIPs of self-aware consciousness and has not yet succeeded, resulting in
the bizarre thoughts of dreams. Rather than being dominated by random impulse activity, the brain is methodically trying to re-construct the combinatorial coding and temporal coherences of neocortical circuitry needed to facilitate awakening into full consciousness competency. This re-construction probably includes adjusting the set-point circuit dynamics of the sleep-wakefulness servo-system to make consciousness easier to generate and sustain, so that it is fortified against drowsing off to sleep during boring parts of the day.

As I explained in Chap. 2, the human brain does have the basic neural “machinery” for generating consciousness. Scientists have known for over 60 years that consciousness is triggered from a huge cluster of neurons, the ascending reticular activating system (ARAS), in the central core of the brainstem. These cells are directly activated by all kinds of sensory input (except odors), and when activated they in turn activate all regions of the cerebral cortex to produce the associated consciousness. The focus of most thinking in this area has been on how this brainstem system responds to sensory input and creates a cascade of ascending excitatory influences that eventually trigger the cortex into wakefulness and consciousness. Here I am focusing on what must happen in sleep, where consciousness disappears because there is no excitation coming from the outside world.

The problem during deep sleep is compounded by the loss of excitatory drive to the ARAS coming as feedback from the cortex. Cortex and ARAS have a reciprocal relationship. The ARAS triggers cortical activation, and an activated cortex supplies feedback to help keep the ARAS active. In the awake state, these effects are augmented by the steady stream of sensory input to the brainstem reticular formation. But in non-REM sleep, cortex neurons that supply feedback input to the ARAS slow their impulse firing drastically, thus removing excitatory drive to the ARAS, which in addition is not getting any stimulus from outside the body. If a sleeping brain is to become conscious without external stimuli, it must have a self-generating mechanism, and that mechanism probably requires the cortex to activate the ARAS, just the reverse of what happens during wakefulness. Reticular neurons in the midbrain may be critical in this activation, because they receive converging excitatory input not only from reticular neurons in the medulla and collaterals from sensory fibers, but also from cerebral cortex, the preoptic area, and several nuclei in the thalamus.

I submit that in the sleeping brain the mechanism for activating the ARAS, and in turn the cortex, is provided by the mechanisms that trigger REM sleep. That mechanism must include the capacity to increase activity in certain neuron clusters in the pons region of the reticular formation. These neurons not only produce the signs of dreaming, such as rapid eye movements and intermittent suppression of bodily movement, but they probably also cooperate with adjacent reticular neurons to turn on the ARAS, which in turn launches REM and the consciousness of dreams. These pontine neurons increase their firing during dreaming and cease it with the transition to wakefulness. Thus the main difference between wakefulness and dreaming is that these special pontine neurons are active only during dreaming,
while most of the other reticular formation neurons are active in both dreaming and wakefulness. The main difference in the nature of the consciousness of the two states may be that in wakefulness the brain has access to, and can respond to, real-world stimuli. In any case, it would seem easier for the brain to switch from dream consciousness to wakefulness than from Stage IV sleep to wakefulness.

Brains have to know how to generate the right impulse patterns throughout vast neural networks involving hundreds of millions of neurons. They have to know what oscillatory frequencies to use and where, and how to synchronize them. In Stage IV sleep, all this has to be done from a zero baseline. It takes all night of repeated REM for the brain to do this on its own if it is unperturbed by external stimuli. For the brain to have a way to wake itself up, the process apparently has to occur in stages, moving from Stage IV to lighter stages of sleep, and then to REM, which at first occurs in short abortive episodes. As the night progresses, the brain gradually emerges from Stage IV into lighter stages of unconsciousness, and then the sleep is periodically interrupted by REM episodes that become more frequent and longer in the early morning. During Stage IV, consolidation of memories is going on, and perhaps some regenerative biochemical “rest” for neurotransmitter systems. This may be another reason a sleeping brain has to struggle to wake up. It is busy working on forming memories, and it has to do so under the handicap of minimum functional capacity.

Surprisingly, this brings us to the issue of learning. Is it not possible that a brain has to learn how to arouse itself from sleep? Think about babies. They sleep most of the time, and they spend far more time in REM than adults do. Yet they have little to dream about and hardly enough life experiences to need to ventilate psychological steam in dreams. For babies, the brain is still getting used to the experience of consciousness. Generating consciousness can be thought of as a skill that the immature brain has to learn how to accomplish. After all, because of physical limitations, a baby has very limited opportunity to hone its consciousness-generating skills through conscious interaction with the outside world. The inner-world conscious engagement provided by REM may be nature’s way for the brain to master one of its most important capabilities: triggering consciousness in the absence of external stimulation.

Adults have less REM as they get older. Supposedly, the adult brain has become pretty adept at generating consciousness. In the process, the brain must consult its memory stores to revisit what it has learned through years of experience in producing consciousness from a state of sleep. Basically, REM is a way for the brain to avoid “getting stuck” in the Stage IV pit. My new view is that dreaming helps the brain re-construct the self-image neural representation after that capability has been wiped out in Stage IV and the other deep stages of sleep. REM is what allows the brain to do this in self-start mode, without external stimulus. Without REM and in the absence of external stimulus, the brain might stay in a Stage IV coma continuously until starvation and dehydration caused death.
In the Stage IV pit, the representation of self is abolished. It has to be reconstructed from memory. REM could be the brain’s way of re-inventing the consciousness wheel each day. We can be consciously aware of our dreams, but the fragmentary and often incoherent nature of dreams could be indicating different stages of the reconstruction process. In generating a dream, the brain is grabbing bits and pieces of the consciousness-generating process, producing a dream consciousness that is incomplete and not fully controlled. Nonetheless, these early attempts may help the brain recover the capacity for creating consciousness. These attempts may have to reach a certain threshold, and may even be probabilistic, involving trial and error. Thus it might take a series of dream episodes throughout the night for the brain to put Humpty Dumpty together again.
A Summarizing Metaphor

One way to think about all this is to think of being awake as being in a brightly lit room. When you want to go to sleep, you flip off the light switch and become enmeshed in pitch black. Now, after a while, you want to wake up, but you are disoriented and don’t know where the light switch is. So you light a birthday candle, but it does not produce enough light to see the switch. You learned, even as a baby, that if you light enough candles you can find the wall switch. So you anchor the candle to keep it burning and move to another part of the dark room and light a candle there. That does not help enough either, so you anchor that candle and move to another area. Eventually, you have enough candles lit that you can see where the wall switch is. The rest just requires a flick of the wrist.
Fitting Known Phenomena into the New Explanation

How do we reconcile this new view with the obvious fact that arousing stimuli during sleep, as with an alarm clock, can wake you up without the need for preceding REM? First, if you are in Stage IV sleep when the alarm goes off, you are not easily aroused, because the depth of sleep in that stage is quite profound. Second, even when awakened, most people are still groggy and not at their peak level of conscious function. There are different kinds and levels of consciousness that do not all emerge instantaneously at the moment of awakening. In other words, even though you can awaken without preceding dreams, you do not awaken with full cognitive powers. Internally generated arousal from sleep cannot be the same as when an external stimulus, such as an alarm clock, induces awakening. External stimuli awaken us because their collateral inputs into the brainstem reticular formation induce a cascade of arousing influences that spread into the reticular nucleus of the thalamus and from there into virtually all regions of the cortex. Lacking such a mechanism in
sleep, internally generated awakening must necessarily involve tentative fits and starts that find expression via REM. In spontaneous awakening from sleep, the brain has to find a way to engage these arousal processes, which may not be all that trivial in the absence of external stimuli.

Can the Humpty Dumpty (HD) theory explain why we resume a normal REM pattern after being awakened in the middle of the night? When you are awakened in the middle of the night, for whatever reason, your brain still needs more sleep, which is why you can get back to sleep. The fact that you also resume a REM pattern indicates that the restorative effect of REM has not yet been accomplished. It may be that Stages II and III sleep also cause so much disruption to the circuit dynamics needed for wakefulness that REM still contributes its consciousness-preparedness function. REM sleep may also be adjusting the sleep-wakefulness servo system’s “set point” of brain circuit dynamics so that upon awakening consciousness is easier to achieve and sustain and is fortified against drowsing off to sleep during boring parts of the day. Part of this preparation may well be a final re-touch on consolidated memories to store them in a way that optimizes recall access by conscious working memory.

The HD theory beats other theories when it comes to explaining many things about sleep and dreaming. For example, as explained earlier, it can explain why babies spend about 50% of their sleep time with the physiological signs of REM, compared to only about 20% for adults. Not only does total sleep time decrease with age, but the percentage of sleep time spent in REM also decreases. For whatever reason, older brains don’t need as much of whatever benefit REM confers. Older brains may not need so much REM to put “Humpty Dumpty” together again, because they have had many more years of experience learning how to do it.

How do we explain why only mammals and a few advanced reptiles have REM? Here, the explanation is that primitive species lack the neocortex piece of the ARAS neural machinery to produce consciousness in the first place. Thus, there is no opportunity to have a “rehearsal” process for the next day’s activities. Moreover, brain-wave studies suggest that the brains of these animals don’t have the mechanisms for the Stage IV kind of sleep either. Stage IV and REM sleep apparently co-evolved, reaching an apex in humans. Stage IV may be a needed recovery process in species that can generate full consciousness, and REM could be a needed recovery process for Stage IV.

What does incomplete REM in lower mammals indicate? Their REM is short, and they don’t have much REM during a night’s sleep. Because they do not have fully developed wakeful consciousness, their brains do not have far to go to restore themselves from Stage IV sleep. But the fact that they have REM at all suggests that they have some degree of consciousness during their wakefulness. They may be sentient beings, though at lower levels than we humans. Anybody who has had close relationships with animals such as dogs, cats, or horses already appreciates this possibility.
How do we explain why the most developed REM capability occurs in humans? A species with the highest level of consciousness has farther for its brain to go to lift brain function out of the Stage IV pit. It may also be true, though to my knowledge it has never been tested, that Stage IV is deeper and more profound in humans than in lower mammals. More REM may be required because it is harder to generate a higher level of consciousness. The quasi-conscious content of dreams may actually help to drive the REM process. That is, dreams may be integral to the grabbing of “bits and pieces” of the consciousness-making process that the brain is performing to help wake itself up.

How do we explain research results that show memory consolidation to occur in both Stage IV and REM? One possibility is that the nature of the memory determines whether consolidation is augmented preferentially during Stage IV or during REM. Certain kinds of memories may be preferentially fortified during the respective stages of sleep. Presumably, the memory that benefits from dreaming is more relevant to the explicit, episodic processes that are expressed in the context of the human ego and sense of self. This possibility may actually be testable. Would there be selective memory disruptions if experimenters selectively deprived people of Stage IV sleep? Actually, that is hard to do, because brains fight to get their Stage IV experience. I would also predict that under such conditions there would be less dreaming (and a more tired and irritable consciousness). But it is easy to disrupt REM selectively, and thus possible to structure experiments to find out just which kinds of memories are less able to be consolidated as a result.

Why doesn’t the brain try to recover from Stage IV with just one long REM episode instead of a series of choppy short ones? I suggest that early attempts to recover in a given night’s sleep cannot accomplish a long, sustained REM. If they could, you probably would wake up and have a much shorter night’s sleep. The repeated episodes of short REM probably reflect that it is not easy for the brain to fight its way out of stupor. Also, a brain that has not yet gotten its quota of non-REM sleep may keep insisting on blocking sustained REM.

Two phenomena are not so easy to explain in Humpty Dumpty terms: awakening from anesthesia and narcolepsy. But both can be accommodated. First, neither phenomenon is normal brain function. It may be too much to expect any theory of normal function to explain all abnormal function. In the case of anesthesia, the problem is explaining how we seem to emerge from anesthesia so suddenly. At least in my experience of recovery from anesthesia, consciousness has just “popped up” with no previous dreaming (that I know of). First, the impression that consciousness just pops up may be an illusion. The anesthetized brain must struggle mightily to overcome the drug’s depression of the network representation of self-awareness. We know that struggle goes on, and it even gets expressed in physical thrashing about. That is why orderlies strap surgical patients to a gurney and “hide” them in a recovery room while the brain is fighting the body to wake up. There is also a lot of poking and prodding stimulation during recovery from vital-sign checking, clean-up, and bandaging. Also, dreams and hallucinations may well be occurring during recovery from anesthesia but can’t be remembered because the drug prevents consolidation of the
memories. There are people who say they don’t dream in normal sleep, but they are wrong. Experiments in which monitored sleepers are awakened and queried during the physiological signs of dreaming reveal unequivocally that such people are dreaming; they just don’t remember unless queried at the time. Skill at remembering dreams can be developed through training. The other thing about anesthesia, of course, is that it is not the normal unconsciousness of sleep. There is no equivalent to Stage IV sleep. The drug suppresses neural activity, particularly in the brainstem reticular formation and cerebral cortex. Also, blood flow to the cortex is reduced. These constraints on consciousness of course go away as soon as drug levels fall below their effective threshold.

Narcolepsy is the other problem for the HD theory. Narcolepsy is a disease, one in which the brain has sudden attacks of REM. How can this happen without preceding Stage IV sleep? First of all, in narcolepsy, the brain goes from one state of consciousness to another. The point is that normal consciousness and dreaming are not so different that it would be difficult for the brain to transition from one state to the other.

Finally, we need to explain the fact that people do wake up on their own after a night of being deprived of dreams by a sleep-research experimenter. Of course, the only way such experimental subjects can be deprived of dreams is for the experimenter to jostle them into wakefulness every time the physiological signs indicate that a dream has started. All that external stimulation must have a cumulative arousal effect that is likely to be much greater than that of any alarm clock. We do know that when people are prevented from having their REM, their conscious state becomes highly disturbed, even to the point of hallucinations. This observation reinforces the notion that dreams are central to the process of re-generating a cognitively competent conscious state as it emerges from the disruption of sleep. Why does the brain attempt to compensate for lost REM when experimenters selectively deprive people of REM? It may be that the brain needs such “practice.” The brain intuitively “knows” that REM is an important physiological tool it must nurture. It has learned how important REM is.

In short, I contend that the brain mechanisms that trigger REM and incidentally cause dreaming are necessary for producing normal consciousness after Stage IV sleep. It is REM that puts Humpty Dumpty back together again.

The question arises whether brain function in humans with disorders of consciousness, such as schizophrenia, can help in understanding consciousness. This comes to mind because a leading theory of schizophrenia is that dreams intrude into conscious awareness. But I doubt that schizophrenia can tell us anything about the genesis or nature of consciousness itself, because schizophrenics are conscious; they just have disordered mental content. What we do know is that drug-free schizophrenics tend to have insomnia but otherwise have sleep patterns, including REM, similar to those of normal people (Lauer et al. 1997).

To conclude, let us recognize that there are incidental benefits of REM and dreaming besides the need for Stage IV recovery. Just getting ready for the next day’s conscious activity has its value, even if there were no need to overcome the neural disruption of Stage IV sleep. There is also benefit in consolidating certain memories that perhaps cannot be accomplished as well in other stages of sleep.
Then there is the likelihood that dreams can have reward properties. I truly believe that my dog sleeps so much because she is bored and looks forward to the adventures, such as chasing deer and catching critters, that she has learned she can experience in her dreams. Dreams are nature’s way of enhancing the life experience. This also relates to a basic propensity of the brain: stimulus seeking. Advanced animals have an evolved brain that feeds on stimulation. REM helps to address that need.

As satisfying as all this explanation is, to me at least, I still haven’t answered my original question of why I wake up right at the end of a dream. Review of my own dream content over many decades of dreaming convinces me that it is not the exciting, frightening, or other attention-grabbing aspects of dreaming that wake me up. On many occasions, I have awakened from boring and insignificant dreams when they occurred during my normal wake-up time. More likely, I wake up because my brain has reached the final threshold for completing the construction of consciousness. All my brain needs to do is lift the remaining obstacles to receiving external input and switch off the few neurons that are controlling the unique features of REM, such as eye movements and movement suppression.

Taken together, these considerations convince me that the HD theory is the only one that can explain all the key characteristics of REM sleep: why it occurs mostly in primates, why it occurs mostly in the young, why it decreases in the elderly, why memory consolidation occurs in both non-REM and REM sleep, why REM is short and choppy, why REM increases toward the end of a night’s sleep, and why we typically awaken at the end of a dream. And what about my dog’s dreaming? I hope she catches that damn squirrel.
Compulsions

People do many things they would consciously prefer not to do. They may eat too much when they want to lose weight. They may sit around watching TV when they know they should be out exercising. They may have bad personality traits (anger, introversion, detachment, and so on) that they want to change, but can’t. Such compulsions are magnified in any sort of addiction (gambling, pornography, drugs, etc.). Compulsions can be overcome through force of will, but that is often insufficient. Common experience with such lack of will power, in ourselves and in others, has led many people to think that subconscious mind drives our behavior. In this view, conscious mind cannot exert control but can only be aware of what is going on. In short, humans are zombies, in whom compulsions are learned and play out like computer programs. Many scholars concede that the human mind can generate intentions, choices, and decisions, but hold that it is not capable of free will. This view was originally espoused by some of the most prominent scientists in history. Early proponents of zombiism included Thomas Huxley, Charles Darwin, and Albert Einstein. In our time, the “movement” received a big push from the advent of behaviorism, originated by John Watson and enshrined through the research of B. F. Skinner
shortly after World War II. Behaviorism was a philosophy that regarded human behavior as a black box of environmentally conditioned actions. Conscious thought was not considered appropriate to think about or study. Fortunately, a significant segment of the scholarly community has rebelled against behaviorism and created in its wake the new field of cognitive neuroscience. In this field, study of consciousness is not taboo. It is necessary.

Addictions are often cited as proof that humans don’t have free will. Addictions develop over time, and thus constitute a learning process for the brain. The addicted brain has been re-programmed. Clearly, at least with regard to the object of addiction, there is no free will. But there once was, and there can be again when a person wills to break the addiction. Former cigarette smokers believe they deserve credit for a free-will choice to quit. I do, knowing first-hand what it takes to quit the terrible habit.

In many human societies, alcoholism is a major social problem, and much research has been devoted to preventing and curing the condition. Many brain functions are involved, especially the brain’s reward system, which becomes altered so that the normal needs for positive reinforcement cannot be met without access to the object of addiction (alcohol, in this case). One recent study (Kasanetz et al. 2010) of cocaine addiction in rats revealed a shift from controlled drug seeking to uncontrolled seeking as the addiction progressed over time. This shift was associated with a change in synaptic biochemistry. Rats were trained to self-administer cocaine via an intravenous cannula. After a few weeks, many of those rats became addicted and compulsively self-administered cocaine. Electrical stimulation of certain neurons in a drug-sensitive region of the brain normally produces responses that change over time; that is, “learning” about the stimulation occurs at the synaptic level. But in the cocaine-addicted rats, this normal response did not occur. The brain had been physically and chemically re-programmed.

As a child, I once asked my Uncle Bob why he did not drink alcohol, because everybody else in my family drank in moderation. Uncle Bob would not drink at all, and explained to me that he was afraid of becoming an alcoholic. Maybe there were alcoholics in his family tree; I don’t know. The point is, he exerted a free-will choice not to drink while he still could. I don’t want to get into an argument over whether alcoholism and other addictions are diseases. They are diseases in the sense that the brain has become maladaptively re-programmed by the object of addiction. But there was a time in the history of every person’s addiction when a free choice was available to preclude compulsive behavior.
Free Will
The foregoing comments presume that humans can freely generate intentions, choices, and decisions—that is, have free will. Free will can be defined in various ways. “Will” is operationally defined here by such synonyms as intent,
choice, or decision, and it can be accomplished consciously or subconsciously. “Free” implies a conscious causation in which an intent, choice, or decision is made among alternatives that are more or less possible to accomplish and are not constrained by either external or internal imperatives for the embodied brain.
There are two kinds of people in the world. One kind is the “captain of his own ship” who believes he controls events in life through willful choices and acts. The other kind, call them fatalists, believe their lives are at the mercy of genetics, childhood experiences, brain chemistry, or fate. This second group includes a growing body of scientists, many of whom are acknowledged as scholars of the first rank, who acknowledge consciousness as a distinct mental state, yet conclude that free will is an illusion, a trick played on us by the brain. This view dates back for hundreds of years, but in our time the debate has intensified, in large part because of what I think is research that does not test what it purports to test and that has generated results that have been misinterpreted. Also in the fatalist group are certain religious groups, such as Predestination-oriented Presbyterians, who believe that God knows ahead of time what you are going to do; therefore you have no free will because you can’t change your destiny.
Just believing in free will has consequences. Consciousness certainly enables the belief in free will, and maybe the belief itself is an act of free will. Several formal studies show that belief in free will matters. One study showed that level of free-will belief affected performance of students taking a test for monetary reward (Vohs and Schooler 2008). Students inclined to disbelief in free will were more likely to cheat on the test. Another study in which level of belief in free will was manipulated revealed that inducing people to be more doubtful about free will made them more aggressive and decreased their helpful, pro-social behaviors (Baumeister et al. 2009). More recently, Tyler Stillman and colleagues (2010) demonstrated that belief in free will affects job performance. In a first experiment with undergraduates (mostly women), they found that free-will belief affected how students rated their expected success in future jobs. Scores on a free-will belief survey correlated strongly with ratings of expected career performance. Other factors were also correlated, such as the students’ scores on tests for conscientiousness and agreeableness. The other factors tested, extraversion, emotional stability, and SAT scores, were unrelated to estimates of success. However, belief in free will was independent of the other factors, predicting expected career success above and beyond the other two positive factors. The second experiment evaluated people already in the workforce and compared free-will beliefs with actual job performance, as measured by supervisor ratings. Level of belief in free will strongly correlated with overall job performance. No other tested independent variables were relevant: these included life satisfaction, work ethic, and personal energy. Of course, correlations do not necessarily indicate that one thing causes the other. But common sense suggests that if you believe you can change your life for the better, you should be more likely to do the things necessary to create success. Disbelief in free will should have the opposite effect. The greatest value of
consciousness is the capacity for free will, because consciously directed intentions, choices, decisions, and plans give people power over themselves and their environment. There are obvious political implications of these points that are considered later in the section on Personal Responsibility.
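To make concrete what “above and beyond” means in correlational analyses of this kind, the short Python sketch below runs an ordinary least-squares regression on synthetic data and reports how much additional variance a free-will-belief score explains once personality scores are already in the model. The variable names, sample size, and effect sizes are invented for illustration; they are not the Stillman data or analysis.

# Illustration only (synthetic data, invented effect sizes): how a free-will-belief
# score could predict performance ratings "above and beyond" two personality factors,
# using ordinary least-squares regression and incremental R^2.
import numpy as np

rng = np.random.default_rng(0)
n = 200                                       # hypothetical number of participants
conscientiousness = rng.normal(0.0, 1.0, n)
agreeableness = rng.normal(0.0, 1.0, n)
free_will_belief = rng.normal(0.0, 1.0, n)
# Assume, for illustration, that rated performance depends on all three plus noise.
performance = (0.3 * conscientiousness + 0.2 * agreeableness
               + 0.4 * free_will_belief + rng.normal(0.0, 1.0, n))

def r_squared(predictors, outcome):
    """Proportion of variance in `outcome` explained by `predictors` (plus an intercept)."""
    X = np.column_stack([np.ones(len(outcome))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)
    residuals = outcome - X @ beta
    return 1.0 - residuals.var() / outcome.var()

r2_personality = r_squared([conscientiousness, agreeableness], performance)
r2_full = r_squared([conscientiousness, agreeableness, free_will_belief], performance)
print(f"R^2, personality factors only:  {r2_personality:.3f}")
print(f"R^2, plus free-will belief:     {r2_full:.3f}")
print(f"Incremental variance explained: {r2_full - r2_personality:.3f}")

The incremental R-squared is the quantity that the phrase “above and beyond” refers to; in the published studies it was of course estimated from survey scores and supervisor ratings rather than simulated numbers.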
Free Will Debates
Most non-academics tend to take as a given that people have intentions and can freely make decisions and choices when there are alternatives and an absence of external constraints. To “regular people,” free will is self-evident. But there is a growing body of scientists who conclude that free will is an illusion, a trick played on us by the brain. They view humans as zombies who do only what their subconscious mind tells them to do. Humans are considered self-aware but incapable of generating a freely willed action. Although, as I mentioned, this view dates back for hundreds of years, in our time the debate has intensified, in large part because of what I think is a misinterpretation of neuroscience.
The Zombie Argument
Zombians arrive at their counter-intuitive conclusion from research that does seem to challenge the traditional common-sense view of free will. This research has been interpreted to indicate that all intentions are generated subconsciously. Zombians assert that consciousness can only mediate awareness of intentions; it can’t cause anything. Think about the implications of such ideology. If consciousness can’t cause anything, what good is it? What good does it do to be aware of our pains and pleasures if there is nothing we can do about them? Maybe the zombians will at least concede that consciousness can inform the subconscious mind so that the subconscious mind can use the information in its generation of willed behavior. There is abundant evidence, anecdotal and experimental, that suggests that the subconscious mind makes us do things that we really know we should not do and that run counter to our conscious will (“the devil made me do it”). But it seems patently absurd to extend such observations to a conclusion that we have no free will or that free will is inevitably overwhelmed by subconscious demons. Yet, in the interests of completeness, let us consider the evidence against free will.1
1 Much of the material in this section was taken, with permission, from the author’s article, “Free will debates: simple experiments are not so simple” (Klemm 2010).
People with brain injuries provided the first arguments against free will. For example, people with injuries that caused amnesia were studied by the British psychologists Elizabeth Warrington and Lawrence Weiskrantz (1968). They showed
a series of words to the amnesics, who could not remember the words. Then the patients were shown the first three letters of each word and asked to complete the letters to make a word, any word. Amazingly, they consistently conjured a word that was exactly the same as the one they had just seen and forgotten. In other words, the words had been memorized in the subconscious mind but not in the conscious mind. But this seems to indicate a problem that consciousness has with memory recall. What has this got to do with intentions?
The zombian argument may have begun catching on with the book by Julian Jaynes (1976), The Origin of Consciousness in the Breakdown of the Bicameral Mind. Jaynes gave many logical arguments that consciousness is not necessary for thinking and that most human mental work is done subconsciously, only becoming realized consciously after the fact. Jaynes concluded that consciousness is used only to prepare for thought and to perceive and analyze the end result of thinking. Subsequent theorists argue that decisions and intentions are made subconsciously, but the egotistic conscious mind lays claim to them as its own (Fig. 7.3). This position holds that the brain is a zombie automaton that creates its own rules and makes sure that we live by them. There is no “I” in charge. The brain is in charge of itself.
Zombian theorists argue that human personality and behavior are predetermined and predictable, controlled by genetics and by how the subconscious mind has been programmed by the social and physical environment. There is little recognition that conscious mind can program the subconscious, as in learning to play the piano for example. Zombians cite the existence of compulsions and addictions as examples where conscious awareness fails to control the brain. The conscious mind knows when we
Fig. 7.3 The concept of free will as an illusion. Subconscious mind is said to create behavior and belatedly inform the conscious mind of what has already been done. T.H. Huxley, a dominant force in nineteenth-century science, called the conscious will to act a mere “symbol” of the processes that generate action (From Klemm 2010)
have bad behaviors but can’t do anything about it. Our excuse is that we are addicted, have a brain disorder, or have been programmed by bad events beyond our control. The same kind of logic is used to explain character or personality flaws. How convenient! We say, for example, “He can’t help it. That’s just the way he is.” Or “She really doesn’t mean to be that way.” Or “I can’t believe he did that. He is such a good boy.” Zombiism is the mother of all excuses.
A more formal philosophical argument is provided by Henrik Walter (2001). He says our standard theory of mind is a mere convenience that satisfies our expectations about what we do. Walter says that criminals cannot be held responsible for their crimes. They may be self-aware of what they did, but they could not stop themselves. He emphasizes that the conscious mind is only partly aware of the choices made by the subconscious. Conscious mind can only “look in” on what the real mind is doing and perhaps veto some choices made subconsciously. A more liberal elaboration is that free will operates “to ensure the continuity of subjective experience across actions which are — of necessity — executed automatically” (Jeannerod 2009). A complete defense of the zombian school of thought is in the book by Daniel Wegner (2002). Leading thinkers, such as the philosopher Patricia Churchland (2002) and the neuroscientist Michael Gazzaniga (1998), recognize the nihilistic nature of the zombian conclusion but are resigned to a position of “it must be so.” The most recent book (Pockett et al. 2009) on this matter perpetuates the zombian argument at least for many short-term intentions and asserts that the question remains open for all other intentions. I will argue that this view is based on flimsy evidence and specious arguments.
Philosophers seem to polarize around two points of view: (1) people generally lack free will but sometimes may have it (compatibilism) or (2) human thoughts are beyond personal control and incompatible with free will (incompatibilist, or as I term it here, “zombian”). Zombians argue that no empirical tests have been devised to prove that free will exists and, moreover, that experiments prove their view that free will is an illusion. Some kind of reconciliation seems needed, and this is what gives urgency to the compromise of compatibilism. Most contemporary philosophers seem to hold the compatibilist view, namely that human beliefs and actions arise from a subconscious zombie-like mind, but it is wrong to assert that humans do not have any free will.
In modern times, the free-will conundrum has been exacerbated by neuroscientific evidence that seems to support the nihilistic belief that people are not responsible for their beliefs and actions. The accumulation of evidence began with the simple experiment performed and elaborated in the 1980s by the University of California scientist Benjamin Libet. Thus, this analysis will focus on the prototypical Libet experiment and those of others that followed in order to comprehend their strengths and weaknesses concerning the issue of free will. Hopefully, I can provide some comfort to those who feel intimidated by sophistry into believing that their data support determinism at the expense of free will.
A New Critique of Zombian Research
I have published a formal critique in a peer-reviewed journal (Klemm 2010). The critique begins with two main lines of research that provided the scientific underpinnings for modern zombianism. One is the paradigm developed by Wegner in the 1990s in which subjects were asked to move a cursor randomly around a computer screen and stop the cursor every 30 s or so over an object depicted on the screen. After each stop, the subjects rated their intentionality in terms of how sure they were that they made a conscious decision to stop the cursor or that the experimenter had done the manipulation behind the scenes. It turns out that subjects were quite bad in making such estimations. Even though they had actually caused all of the stops, they judged correctly that they had done so only 56% of the time. Wegner developed a later approach by having subjects view other people’s gloved hands located in the position where their own hands would be. As the gloved hands performed actions, subjects were asked to rate the extent to which they had controlled the movements. Again, subjects performed poorly in such estimates. I reject the zombian conclusion of Warrington and Weiskrantz because their experiment measures memory recall more than conscious intent. And I also reject Wegner’s conclusions because his experimental designs seem to test awareness more than intent. My objection to the design is that one cannot conclude unequivocally that the intent is either conscious or subconscious, and that the major uncontrolled variable is the level of reliability of the subjects’ awareness of their conscious intent. As we shall see later in the analysis of Ben Libet’s experiments, awareness of conscious intent is quite problematic. Tim Bayne (2009) has written a more exhaustive criticism based on the extreme complexity of the experience of conscious will.
The Libet Experiments
Libet (1985) monitored a “voluntary” finger movement while at the same time recording brain waves from the scalp overlying the part of the cortex that issues movement commands to the fingers. Participants were asked to make a spontaneous finger movement, at a time of their choice, while watching an electronic spot moving around a clock face. Subjects were to note the time on the clock at the instant that they decided to move the finger. When subjects consciously decided to make a movement, they reported the time of the decision from the modified clock. As expected, subjects thought that they had decided to move about a half second before actual movement, which is consistent with the idea that they willed the movement to occur. But the startling finding was that a major change in neural activity in motor cortex was observed about 350 ms before the subjects claimed that they willed the command to move. One interpretation of such a result, especially for scientists who are conditioned to believe that every action has an antecedent cause, is that the decision is made subconsciously and that conscious awareness is not part of the cause. Accepting that premise, one is forced to conclude that one does not “will” such movement. The brain just subconsciously decides to move the finger
and lets the conscious mind know what it has decided. The disturbing corollary is that one does not freely “choose” to do anything. The brain is just driven by external and internal forces to direct behavior, and one’s consciousness is only around to know about it.
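The timeline arithmetic behind the zombian reading of these data is simple enough to lay out explicitly. The sketch below uses the approximate values cited in this chapter (not Libet’s raw data) and, as a second step, shows how a systematic reporting bias of the size discussed later in the chapter (about 70 ms; Joordens et al. 2002) would shift the inferred decision time. The specific numbers are illustrative only.

# Illustrative timeline arithmetic for a Libet-type trial, using the approximate
# values cited in this chapter rather than raw data. Times are in milliseconds,
# relative to the actual finger movement at t = 0 (negative = earlier).
movement = 0
reported_decision = -500                   # subjects report willing the movement ~0.5 s before it
readiness_onset = reported_decision - 350  # cortical change ~350 ms before the reported decision

print(f"Actual movement:           {movement} ms")
print(f"Readiness-potential onset: {readiness_onset} ms")
print(f"Reported decision (W):     {reported_decision} ms")
print(f"Gap cited in the zombian argument: {reported_decision - readiness_onset} ms")

# If subjects systematically report events ~70 ms later than they occur (a bias of
# the size reported by Joordens et al. 2002), the inferred decision time shifts
# earlier and the gap shrinks accordingly.
reporting_bias = 70
corrected_decision = reported_decision - reporting_bias
print(f"Bias-corrected decision time: {corrected_decision} ms")
print(f"Corrected gap: {corrected_decision - readiness_onset} ms")

Nothing in this arithmetic settles whether the readiness potential reflects a subconscious decision or merely preparatory activity; it only makes the assumed time relations explicit.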
Follow-up Studies
In a similarly designed follow-up to the Libet experiment by Lau and colleagues (2004) at Oxford, human brain scans were made as subjects were asked to report when they first felt the urge or intention to move. The images showed three small cortical regions of activation when the subjects attended to the urge to move prior to the actual movement itself. The Lau studies showed that subjects reported an intention to move about 1/4 s before the actual movement, which is consistent with Libet’s results. Conscious intention was associated with increased neural activity in areas other than the motor cortex. These activations could well occur before the motor cortex is activated, but the imaging method used does not have the time resolution to answer this question. But even these preliminary results show that restricting analyses to the motor cortex is not sufficient. This is reinforced by the findings of Sukhvinder Obhi and Patrick Haggard (2004), who found that awareness of conscious intent correlates more specifically with a motor cortex potential over the side of the head opposite to the hand making the movement (hand movements are initiated from the opposite cerebral hemisphere).
A follow-up study by the Lau group (2006) did examine more closely the timing judgment issue. Specifically, they examined Libet’s finding that subjects misestimated the onset of movement, thinking it occurred about 50 ms before it actually did. The Lau group reasoned that there must be someplace in the brain that signals the judgment that movement has occurred and that across subjects the magnitude of the brain activity correlate would positively correspond to the accuracy of the time estimate. Alternatively, enhanced electrical activity might contribute to the time-estimate error, in which case the correlation would be negative. They also re-examined their earlier fMRI data to see if the same principle applies for judgment of the onset of intentions. What they found confirmed many earlier studies that indicated that the brain makes errors in time estimation. When participants were required to estimate the time onset of their movements (instead of their intentions), the activity in the cingulate motor area was enhanced. Moreover, across subjects the level of cingulate activity was positively correlated with time-estimate accuracy. That is, the greater the cingulate activity, the earlier subjects estimated the time of movement. This is consistent with their earlier data on fMRI changes in the pre-supplemental motor area and time estimates of onset of intention. In both cases, time estimation could not be relied upon as accurate. As with the original Libet experiments, experimenters relied on self-report of the decision to move, which no doubt has limited time resolution. These experiments
suffer the same limitation as those by Libet in that one must presuppose that the decision to move as well as the conscious realization are instantaneous. Both assumptions are wrong, as I will document later.
The recent studies by Chun Soon and colleagues (2008) at the Max Planck Institute in Leipzig used brain imaging in a design that was akin to Libet’s. They monitored oxygen consumption in the same area as did Libet, the supplemental motor area of cortex (SMA). However, they reasoned that the SMA is active in the late stages of a movement decision, and that other brain areas might be involved in movement planning at earlier times. They also used a more sensitive way to establish when the awareness of decision occurred. Finally, Libet provided only one behavioral option, and the anticipatory electrical change might reflect nonspecific preparatory activation.
What Soon and colleagues found was astonishing. Two regions in the frontal and parietal cortex exhibited a decision-predictive change a full 7–10 s before conscious awareness of the decision. The areas of motor cortex that actually issue movement commands showed slightly increased activity in the second or so immediately prior to the instant of decision, and much more pronounced activity after the decision. Activity in brain areas directly involved in issuing movement commands (SMA and motor cortex) increased greatly after the decision. The increased activity in the other areas prior to awareness can be interpreted in more than one way, though the authors were wedded to just one interpretation. Most people, especially the lay press, assume that these other areas are processing the decision to move and thus indicate absence of free will because their increased activity occurred before subjects thought they willed a movement. The authors were careful in wording their conclusion; namely, that the frontal and parietal cortical areas “influenced” the decision making up to 10 s before the conscious decision to press one of the two buttons was realized. They viewed this early, pre-conscious activity as preparatory and also as a specific predictor of which button was to be pressed. To me, this conclusion conveniently ducks the issue of whether the increased antecedent activity reflected conscious or subconscious decision making. I think the more obvious interpretation is that frontal and cingulate cortex were processing the “rules of the game” and creating the free-will intent to move and were doing so concurrently with some activity in the SMA prior to increased motor cortex activity.
A replication of the Soon study used electrical recordings (Christophel and Haynes 2009). Not surprisingly, electrical changes from multiple scalp electrode locations occurred several seconds before subjects indicated a conscious decision to move. These results were interpreted as indicating that such antecedent activity reflected subconscious decision making. The interpretative flaw remains: decision making is assumed to be subconscious. But where is the actual evidence for that? All such data really prove is that there is antecedent neural activity. In designs like this, the subject knows as soon as one trial is over that another is beginning. Moreover, the subject consciously chooses to make a movement and the brain no doubt is planning to make such a movement long before a “go” signal is delivered via the conscious decision-making process. So, the pre-movement
increased brain activity could actually reflect conscious processing in working memory of the “rules of the game” and the will to obey those rules. All through a trial, before conscious decisions are made, the brain is consciously processing in working memory at least five different things: (1) “I will make a button-press movement,” (2) “I will make the press either on the right or the left,” (3) “I will notice the letters on the screen and hold them in working memory,” (4) “I will issue a go decision voluntarily,” and (5) “I will remember which letter was present on the screen when the go command is issued.” Under these cognitive conditions, it is unrealistic to expect any single electrophysiological marker of when a decision is made. Yet, even so, this study actually supported a non-zombian interpretation. Activity increased in nonmotor, consciousness-mediating areas before the movement. In other words, the “go” decision was only one part of the consciously willed process.
At the time of this writing, the most recent study was that of Michel Desmurget and colleagues (2009) in France, who took a different approach. First, they distinguished between two processes, the will to make movements and the awareness of such willed action. This led them to consider the parietal cortex as a possible site that brings intentions into conscious awareness. Second, they used direct electrical stimulation rather than recording. The subjects were awake humans with electrodes placed in the brain to help locate tumors; the tumors were not in the stimulated sites. Stimulating the right inferior parietal regions triggered a strong intention to move the contralateral hand, arm, or foot, whereas stimulating the left inferior parietal region produced an intention to make the movements of speaking. When stimulation strength was increased, subjects believed they had actually made such movements, even though monitoring of the relevant muscles showed no signs of muscle activation. This result does not fit the zombian theory, for there was a clear sign of willed intent even when no movement occurred. This paper cites earlier work by Fried and colleagues who showed that low-intensity electrical stimulation of the supplemental motor cortex in humans caused an urge to move. Stronger stimulation caused actual movement. The lay press has commonly claimed this is proof of free will. I don’t go that far, because the data just show that the parietal cortex enables people to be aware of their intent, not whether that intent was first generated consciously. There is also the problem that the really crucial point was not tested. Namely, can subjects distinguish between a stimulus-induced feeling of intent and an internally generated actual intent? There is also the likelihood that brain activity differs between test conditions in which a subject randomly and spontaneously wills a movement and conditions in which such a movement is planned. On the other hand, this work is a refreshing departure from Libet-type experiments. Because the focus is on stimulation, the limitations are of a different order. The authors did note the earlier research on the cognition of intention and the zombian theory. But they were careful not to endorse (nor criticize) the zombian theory. Instead, they made the limited interpretation that the will to move precedes movements and even intended movements that do not occur. As with all such studies, the investigators only considered a subset of all the brain areas that are known to be involved in willed actions.
For example, there were no
electrical stimuli delivered to prefrontal cortex areas that are known to be involved in the generation of intent. Just because realization of intent arises out of the parietal cortex does not mean that the intent was generated there. Even so, wherever intent is generated, it clearly must precede the realization of intent, and these studies clearly showed that realization of intent can occur without movement.
Twelve Interpretive Issues
Free-will studies have been plagued by unwarranted simplistic assumptions and circular reasoning. The assumptions are convenient but not persuasive for the zombian argument. I think that zombians commit at least twelve major fallacies of logic or acceptance of insufficient data in interpreting experiments of these kinds. Points of interpretation that are typically not considered include:
1. Increased neural activity has ambiguous meaning. Increased neural activity in a given brain area may not be limited to just one function.
2. Decisions are not instantaneous. What we consciously think could well be spread out over time. The process can be on-going but our realization captures the process only as a snapshot in time that suffices to label the decision but not the process. Moreover, in experiments like this the subject continuously wills to perform the task and to do so within the rules of the experimental paradigm. The only thing at issue is when to act. Even the decision of when to act is not instantaneous. Even if not verbalized with silent self-talk, the subject has to monitor time and think consciously about what is an appropriate time to act. “Has too much time elapsed since the last act? Should I use a set pace of responding or use a semi-random pattern? How often do I change my decision to act now and defer it?” In a more complex situation, decision-making is even more obviously an on-going process. We weigh the evidence. We lean one way, then the other. Finally, the preponderance of evidence and the weights we assign to it lead to a conscious decision. The decision itself may have been instantaneous but its process was dominated by free-will choices spread out over days, months, or even years.
3. Conscious realization of intent is not instantaneous. Conscious realizations in general take time. Libet himself in a 1973 paper was the first to show that conscious realization can take at least 500 ms. In human subjects who were electrically stimulated in the somatosensory cortex, the stimulus had to be delivered for 500 ms or longer before they realized the sensation. In experiments of this type, two things have to be done at more or less the same time, neither of which can be assumed to be instantaneous. In addition to deciding when to move and realizing a willed decision has occurred, the subject also has to think consciously about the clock indicator for the decision. To do this, the brain must be consciously aware of the time indicator and integrate the
movement into that awareness. Does the subject think about the clock in the context of “I am about to move and must make sure I note the time?” Or does the subject force a spontaneous movement and then switch attention, after significant delay, to note the time? Both the decision and the time recognition need external validation. How long does it take for proprioceptive or visual feedback to confirm the act has occurred and that the clock really showed an X fraction of a second? With images presented in sequence, for example, it takes up to about 100 ms to accomplish the correct conscious recognition of an event (Grill-Spector and Kanwisher 2005). In other words, subjects need this time after seeing an object to process in consciousness what it was and what category of objects it belongs to. At all time lags, accuracy was the same for detecting that something was seen and for determining its category, but was substantially less for identifying what the object was. On average, 65 more milliseconds were necessary for identification of what the object was than for its categorization, even when accuracy in the categorization and identification tasks was matched. Using visual images to test the time for conscious recognition of an event is especially useful evidence, because vision is an exceptionally high-speed process in the brain, very likely to be much faster than the conscious processes needed in a Libet experiment where one must decide to move, determine what to do and what body part to use to do it, and be consciously aware that these events have occurred. In other words, you can make a conscious decision to act, but it may take you several hundred milliseconds to realize what you have done. Figure 7.4 illustrates what I think happens in free-will choice and decision-making.
4. Decision-making and decision-realization are likely separate processes. This could impose delays because realization is accomplished via numerous synapses in widely distributed circuits, whereas the movement command can be executed via as few as two or three synapses. It is also possible that conscious realization processes are not complete until they are confirmed by feedback from seeing and feeling that the movement has actually occurred. Realization captures the process as a snapshot in time, but the antecedent process of realization goes unrecorded.
5. Decision-making is not the only process going on. Actually, there could be four conscious processes going on prior to movement commands in the Libet-designed experiment. In such experiments, the brain must say to itself the equivalent of:
1. “I know the rules of this game and agree to play by them.”
2. “I intend to move soon (and withhold movement in the meanwhile).”
3. “I realize and confirm that I have issued the order to move.”
4. “I notice and report the time I issued the order to move.”
Decision-making is a process, not solely an event (Fig. 7.5). The same principle applies to subconscious decisions, but I make the point here because zombians seem to overlook the role of multi-step processes in conscious decision making.
Fig. 7.4 Explanation for the delay between when a conscious process occurs and when the conscious mind realizes that such a process has occurred. The idea is that embodied brain generates a conscious mind. (1) Brain then supplies conscious mind with information that should inform the conscious process. (2) Conscious mind reflects on this information, considers alternative choices, and makes a decision. (3) Conscious mind then issues the commands to embodied brain to implement the choice. (4) After a processing delay, brain performs the action and informs conscious mind of its choice (and consequences of that choice)
[Fig. 7.5 diagram labels: global operations feedback; drives (satiety, energy level, procreation); memory; perception (associative value, stimulus properties, valence, remembered utility, context, rule-base feedback); expected utility; decision; action]
Fig. 7.5 Constellation of processes that participate in making a decision (From Klemm 2010)
Most theorists tend to ignore the full dimensions of these conscious processes, focusing only on step three as the single important incident. Let us recapitulate what must be happening during a conscious decision to make a movement. External stimuli or even internally generated signals would generate a conscious decision to perform a given act. These signals must activate memory banks as a check on the appropriateness of the movement in the context of what has been learned about making such a movement. The reward system has to be activated to assign value to the making of such a movement, weighing the expected immediate utility against the longer-term value. The emotional networks of the limbic system have to be activated to see what level of passion, if any, is appropriate to the movement. Movement control networks have to be activated in order to plot a trajectory and to evaluate the correctness of the anticipated movement. There are “pre-motor” areas of cortex that are engaged in the planning for the movements that are to be executed. The single brain area monitored by Libet certainly should not be the temporal benchmark for deciding the time relations between conscious decision and engagement of motor control processes. A properly designed experiment would monitor other areas of the brain, preferably multiple areas at the same time, with monitoring protocols that could serve as a better indicator of when a conscious decision was made. Even the free-will critic Daniel Wegner (2002) concluded that “The experience of will may be manufactured by the interconnected operation of multiple brain systems, and these do not seem to be the same as the systems that yield action.”
6. Not all intentions are for simple movements. There is also the issue of the kinds of movement we wish to associate with conscious intent. In speech movements, for example, we have all experienced high-speed conversation, clearly controlled by conscious intent to express thoughts, both spontaneous and in response to what is said by others. Consider all the thoughts one has to hold in conscious working memory to conduct intelligent conversation. We think consciously about what is in working memory as we use it.
7. Not all willed intentions are formed in acts of decision. Especially in the case of habits, decisions are made long before the execution of an act. As Mele (2009) points out, an intention to do something can arise without being actively formed from a decision process. Not only are some habits originally formed consciously, but the choice to deploy a habit may be made consciously and certainly, as Libet suggested, be vetoed consciously.
8. Conscious decisions can be temporally uncoupled from the action. I may decide this morning, for example, to be more thoughtful toward my spouse. Opportunity to do that may not arise for hours, as for example, when I come home from work that evening. When the opportunity arises that evening to be thoughtful, do I have to re-make the decision? No, it had already been made hours ago. So, when I do nice things that evening, the behavior flows from a decision made hours earlier, not from a fresh decision process at that moment. One could argue (but not test) that the evening’s behavior was generated subconsciously, but it could not have been driven by the process of making a conscious decision, because that had already been done.
9. Failure to monitor other non-motor brain areas. While free-will experiments have used high-resolution time monitoring of movement activity, there is no corresponding high-resolution electrophysiological indicator of conscious decision making and reporting. Surely, there must be electrical indicators of conscious decision making somewhere in the brain. More recent investigators have indeed documented increased brain activity prior to the increased motor cortex activity, and these include areas not normally associated with movement. Nobody knows where in the brain the conscious self is, probably for the same reason that we can’t find where many memories are stored. The phenomena are not things in a place, but processes in a population; i.e., patterns of neural activity distributed across a large population of neurons. Also, the part of the cortex that was monitored, the motor cortex, only began its increased activity before the self-reported intent to move. Few analysts admit how little we really know about what is signaled by this “readiness potential.” In the early ramp-up of the electrical signal, the change could signal that a movement command was about to be issued or that there was intention to move. That intention could have been generated elsewhere, in areas of the brain that were not being monitored. Maybe the processing of intention triggers the ramp-up at the same time as the processes that were signaling the awareness of the intention. This ramp-up seems inseparable from the general instructions in the experiment in which subjects knew they were supposed to generate an urge to move the finger sometime during the rotation of the clock hand. Subjects already had a fixed intention to move the finger long before the start signal of the “urge” to move. Of necessity, they were holding these instructions in conscious working memory. So in that sense, subjects had already consciously planned and willed to move long in advance of selecting between the options of move or don’t move.
10. Inappropriate reliance on awareness of actions and time estimation accuracy. In self-reported awareness of a conscious decision, the issue is whether the intention occurred prior to action or if the awareness was reconstructed after the action occurred. It only takes mention of a few studies to make the case that humans are not precise in their awareness of time compared with actual time on a fraction-of-a-second scale. One study, for example, showed that subjects made major errors in time estimation when instructed to keep visual displays on a screen for a fixed time (Ono and Kawahara 2005). Moreover, the accuracy was affected by prior priming experience with the images. A review of a variety of reports shows that time estimation accuracy is affected by experimental conditions, such as stimulus modality, degree of attentiveness to time, and level of arousal. The reviewers’ own experiments showed that time estimates were affected by prior expectations about visual stimuli (Ulrich et al. 2006). Stanley Klein (2002) re-plotted Libet’s original data and found that observers had great uncertainty about the relative timing of events. He also pointed out that the Libet design required responses that were difficult to judge. Several experiments document that it takes time to process visual information consciously. In an experiment originated by Dr. Nijhawan, subjects assessed the
timing of an object passing a flashbulb. The timing was exact: the bulb flashed precisely as the object passed. But subjects perceived that the object had moved past the bulb before it flashed (Nijhawan and Kirschfeld 2003). This suggests that the brain projects a moving event a split second into the future, seemingly working on old information. Apparently, the brain needs time to consciously register what the eye sees. In the context of a Libet-type experiment, realizing the location of a clock hand occurs later than the actual time of decision. Various investigators have raised questions about the accuracy of time awareness under conditions specifically relevant to Libet-type experiments. For example, when potential biases in this task are directly assessed by asking subjects to make subjective timing decisions about a stimulus, subjects consistently tended to report events as happening about 70 ms later than they had actually occurred (Joordens et al. 2002). Another recent study of time awareness accuracy used the control condition of the Libet method, and required subjects to judge the time of occurrence of a stimulus relative to a clock indicator of time (Danquah et al. 2008). Response accuracy varied systematically with the sensory modality of the stimulus and with the speed of the clock. If time estimates of externally observable events are inaccurate, the researchers suggest, time estimates of endogenous events may also be inaccurate. Awareness of time is only one indicator of how well humans are aware of their actions, and it can be argued that humans have awareness limitations that go beyond time awareness. For example, a just-published paper reports that awareness of our actions depends on a combination of factors involving what we intend to do and what we actually did. One interesting experiment required subjects to reach consciously for a target that jumped unpredictably on some trials (Sarrazin et al. 2008). Subjects were to express their expectation of a target shift, point at the target as fast as possible, and reproduce the spatial path of the movement they had just made. The last step of reproducing the trajectory was taken as an index of the awareness of the previous action. The accuracy of reproducing the trajectory was measured in terms of the degree of movement undershoot or overshoot. On trials where subjects thought there would be a target shift, the overshoot was greater and the undershoot less than on trials with lower expectancy. Thus, conscious expectancy affected the awareness of what had taken place. Time-awareness accuracy is confounded by the likelihood that the whole process of decision making and monitoring has many elements that combine subconscious and conscious processes. Of all these processes, Libet observed only that the “action” stage had started before subjects thought they had issued a command to move. The time scale used in Libet-like studies is too short to adequately capture all conscious processes. In the Libet study, the actual movement did not occur until after subjects thought they had decided to move, which allows for the possibility that the processes above could have participated in a conscious will to move. Some portion of these processes occur at a subconscious level that could have primed the
motor cortex to start a readiness ramp-up of activity to await final confirmation from conscious decision making. And how do we explain other kinds of decisions that are so rapid that long preparation periods are not possible? For example, one news story on free-will research began this way: “You might think you just decided to read this story on a passing whim—but your brain actually decided to do it up to 10 s ago, a new study claims.” The problem here is that I made that decision to read in a split second, because I had just clicked a hyperlink to take me to the Web page where the story was posted. My brain could not have made a decision much in advance, because my brain did not know such a site existed more than a few milliseconds earlier. It is still possible, of course, that this rapid decision-making occurred in my subconscious before I realized I made it. But in my conscious mind, I certainly considered whether following a hyperlink was likely to be worth my time, and I could have rejected whatever decision was fed to my consciousness from subconscious processes.
11. Unwarranted extrapolation to all mental life. Just because subconscious choices are made prior to conscious awareness in one task does not prove that all mental life is governed this way. How can this kind of methodology possibly be appropriate to test for free will in such conscious cognition choices as deciding on an optimal plan, a correct problem solution, what to conclude, the appropriate interaction with others, which words to use in conversation, or what attitudes and emotions to embrace? Complex tasks are probably performed in different ways than simple ones. It may be that the reflex-like button-press response is so simple that the subconscious mind performs it and has no need to assign or recruit assistance from conscious mind in making the decision. All of the experiments used to support the zombian conclusion are of the same basic and quite limited type. But there are different forms of intentions and any given form may not be as simple as it seems. Many neural processes are going on, even in the simplest designs, that are not taken into account, as illustrated in Fig. 7.6. This scheme more correctly describes, I think, what the brain must be doing to make the simple finger movements in the Libet-type experiment. This scheme should make clear why the measurements in such experiments cannot possibly be an accurate reflection of all that is going on. More specifically, there is no way to show that the ramp-up in motor cortex activity occurred before a long sequence of operations involving intent generation, conscious working memory of the “rules of the game,” the instant of intent realization, the realization of the time of intent, and the linguistic preparation for declaring the information. Some of these processes, such as ongoing working memory of the “rules of the game,” are clearly present before the ramp-up of motor cortex activity. A series of processes occurs in parallel over time. Rehearsal of the “rules of the game” occurs continually. This is the context in which everything else occurs. One process involves first the decision to make a movement at some point. This is followed by consciously informing oneself that now is the time for
[Fig. 7.6 diagram, “Processes in Libet-type experiments,” arranged along a time axis: rehearse “rules of game”; decide WHAT to do; decide HOW to do it; decide WHEN to act (instantaneous or “smeared” over time?); prepare to act (increase muscle tone); act; realize action occurred; estimate time of decision (time-awareness issues); report when decision was made]
Fig. 7.6 All of the processes, except for the two with shadowed backgrounds, are clearly performed consciously. Note that they are intermixed in time and cannot be interpreted unambiguously (From Klemm 2010)
a movement to be made (“what to do”) and also to choose the correct hand to activate the actual motion (“body part to use”). Then, after significant delay, the conscious mind realizes that these decisions are now complete and readies itself for action. This is followed by the activation of motor cortex to prepare for and execute the movement. The brain has to decide to split or divert attention from the movement commands to noting the time. Time of decision has to be estimated and consciously realized for subsequent reporting. In parallel, a set of processes is triggered, first involving integration of the command to move and to do so with the right hand. This is followed by the activation of motor cortex to prepare for movement and finally initiate the movement. The most salient point is that many of these cognitive processes have to be held in conscious working memory in order to perform the expected task. These working-memory tasks are smeared out across time and there may not be any single electrophysiological signature of their occurrence.
12. Conflicting data or interpretations are ignored. Recall the data of Soon’s group, which showed increased activity in two regions of the frontal and parietal
cortex a full 7–10 s before conscious awareness. This was considered evidence of subconscious motor preparation. There is no basis for believing it takes 10 s for subconscious mind to prepare motor pathways for a button-press movement. Why do zombians assume this predictive change reflects motor preparation instead of the processing of free will and other cognitive functions associated with the “rules of the game?” These areas of the brain normally have conscious functions and not movement functions. Is this not bias? Zombian bias may even keep investigators from looking for evidence crucial to the argument; namely, neural representation of intention. Yet, there is enough evidence to indicate there are neural representations of intention, as for example in the Desmurget study (2009). A slow time scale allows for conscious awareness of intent, development of plans and “on-the-fly” adjustments. Consciousness allows us to think in the future, to anticipate what we need to do to get what we want and to plan accordingly. Such intentional planning has a neural representation and can even be detected experimentally in animals.
In one such study, Sam Musallam and colleagues (2004) eavesdropped on neurons in a planning area of monkey brain. They put electrodes in an area of cortex that was known to be required for planning, but not actually making, arm movements to reach a target. The planning area in monkeys is a small patch of cortex just above the ears. Monkeys were trained to “think about” a cue presented on a computer screen that told them to plan a movement toward an icon that had just flashed in one of up to eight locations on the screen. Each location was associated with a certain firing pattern in the planning neurons. Here is a clear case where the will to do something was established long before any action occurred. While monkeys thought about the required movement, computer analysis of the firing patterns of these neurons could predict what the monkey was intending to do—tantamount to reading the monkey’s mind. The researchers knew that it was intention that was represented, not actual movement or even planning for movement, because the monkeys were trained to get a reward only when they withheld actual movement but nonetheless made the correct planning, as indicated by their neural firing patterns. Whether or not the monkeys were consciously aware of what was going on is another question. But it is clear that these animals have a mind that contains neural representations for decision processes, and these neurons are active prior to motion or even in the absence of movement.
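To illustrate the kind of analysis being described, here is a minimal sketch of decoding an intended target from planning-neuron firing rates. Everything in it is synthetic and simplified: the firing rates, the number of neurons, and the nearest-mean classifier are stand-ins for illustration, not the recording methods or statistics that Musallam and colleagues actually used.

# A minimal, synthetic sketch of "reading intention" from planning-neuron firing
# rates: each intended target location is assumed to evoke a characteristic firing
# pattern, and a simple nearest-mean decoder predicts the target from that pattern.
import numpy as np

rng = np.random.default_rng(1)
n_targets, n_neurons, trials_per_target = 8, 40, 30

# Assumed mean firing rates (Hz) for each target location across the neuron population.
target_patterns = rng.uniform(5, 25, size=(n_targets, n_neurons))

def simulate_trials(n_per_target):
    """Poisson-noisy firing-rate vectors for every target, with target labels."""
    rates, labels = [], []
    for t in range(n_targets):
        rates.append(rng.poisson(target_patterns[t], size=(n_per_target, n_neurons)))
        labels.append(np.full(n_per_target, t))
    return np.vstack(rates).astype(float), np.concatenate(labels)

train_X, train_y = simulate_trials(trials_per_target)
test_X, test_y = simulate_trials(trials_per_target)

# "Training": the mean firing pattern observed for each intended target.
class_means = np.array([train_X[train_y == t].mean(axis=0) for t in range(n_targets)])

# "Decoding": assign each new delay-period pattern to the nearest class mean.
dists = np.linalg.norm(test_X[:, None, :] - class_means[None, :, :], axis=2)
predicted = dists.argmin(axis=1)

accuracy = (predicted == test_y).mean()
print(f"Decoding accuracy: {accuracy:.2%} (chance = {1 / n_targets:.2%})")

Accuracy well above the 12.5% chance level in a simulation like this mirrors the qualitative point of the study: a representation of intention can be read out from neural activity before, or even without, any movement.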
Proposal for Next Generation of Experiments
If free will exists, then there should be some neural correlates when such will is being exercised. No one knows what those correlates are, mainly because they haven’t been looked for. Almost certainly, free will emerges from a distributed process in neocortex. One might monitor multiple neuronal activities within appropriate cortical columns. For example, if the willed task involves vision, multiple columns in visual cortex should be monitored. Perhaps changes in
impulse onset/offset, firing rate, change in firing rate, or sequential interval patterns will be seen in certain neurons. Perhaps there will be changes in oscillatory frequencies of field potentials or in coherences with oscillations elsewhere or with other frequencies. I suggest that there might be a global electrical marker for conscious decision making: synchronization of brain-wave oscillations at multiple locations. Degree of synchronization can be frequency-specific, involving shifts in coherence among various brain areas and even oscillators of different frequency.
As reviewed in Chap. 4 (Klemm et al. 2000), my colleagues and I noticed that when subjects made a conscious decision about which mental images were present in an ambiguous figure, there was significantly increased synchronization in specific frequency bands across widely distributed scalp locations. We also found, much to our surprise, that synchronization occurred in multiple frequency bands, a finding that has also been reported by others (Makeig et al. 1998). The obvious interpretation is that these synchronization changes are correlates of conscious perception. They are also correlates of willed choices; that is, we decide whether we are seeing a vase or a face. By the way, with such ambiguous-figure images, subjects can, through force of will, choose which percept to hold in working memory. In fact, for many such images, many subjects have to exert considerable will power to perceive an alternative image because their default percept is so strong. Since oscillatory synchronization is so tightly associated with this process, this may be the clue that free will is enabled by synchronization of certain oscillations. An experiment could readily check for changes in coherence patterns when one freely wills to hold the difficult percept in consciousness as compared with patterns during the default percept. The experiment might benefit from including a time indicator, of the Libet or Soon type, for when subjects realized they wanted to force perception of the difficult alternative image. If synchronization changes indicative of intent occur after the indication of conscious intent, it might support the zombian hypothesis. However, we would still face many of the faulty assumptions mentioned earlier (intent processes are smeared in time, extra time is needed for realization of intent vs. generation of intent, etc.). The emphasis in analysis should be on correlating electrical activity with mental state, not on precise timing of which came first.
A step in the right experimental direction is the experiment reported by Daeyeol Lee (2004) at the University of Rochester. He monitored the level of coherent oscillations in electrical activity in the supplemental motor cortex of monkeys in a task in which they made a predictable series of hand movements as they integrated sensory signals with expected reward. Movement performance was influenced by both the position of movement and the location of the rewarded target, but only the expected reward affected the degree of synchronization. I don’t claim that monkeys perceive these things consciously, but synchrony of neuronal activity clearly seems to be a marker of something different from the amount of activity.
To summarize, I think this critique shows enough weaknesses in the zombian theory to warrant a new generation of experiments aimed at testing the possibility that there is neural representation of free will.
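As a sketch of how the proposed ambiguous-figure comparison might be analyzed, the following code computes magnitude-squared coherence between two simulated scalp channels during a “default percept” epoch and a “willed percept” epoch. The signals, the 8–12 Hz band, and the assumption that the willed condition carries a stronger shared oscillatory source are illustrative choices, not recorded data or an established result.

# Sketch only: band-specific coherence between two simulated "scalp channels" for a
# weakly coupled ("default percept") and a strongly coupled ("willed percept") epoch.
import numpy as np
from scipy.signal import butter, filtfilt, coherence

fs = 250.0                          # sampling rate (Hz), assumed
t = np.arange(0, 20, 1 / fs)        # 20-s epochs
rng = np.random.default_rng(2)
b, a = butter(4, [8, 12], btype="bandpass", fs=fs)   # alpha-band filter

def epoch(coupling):
    """Two channels sharing an 8-12 Hz source to a degree set by `coupling`."""
    shared = filtfilt(b, a, rng.normal(0, 1, t.size))
    shared /= shared.std()                            # unit-variance band-limited source
    ch1 = coupling * shared + rng.normal(0, 1, t.size)
    ch2 = coupling * shared + rng.normal(0, 1, t.size)
    return ch1, ch2

default_ch1, default_ch2 = epoch(coupling=0.2)   # weak coupling assumed for default percept
willed_ch1, willed_ch2 = epoch(coupling=1.0)     # strong coupling assumed for willed percept

f, coh_default = coherence(default_ch1, default_ch2, fs=fs, nperseg=1024)
_, coh_willed = coherence(willed_ch1, willed_ch2, fs=fs, nperseg=1024)

band = (f >= 8) & (f <= 12)
print(f"Mean 8-12 Hz coherence, default percept: {coh_default[band].mean():.2f}")
print(f"Mean 8-12 Hz coherence, willed percept:  {coh_willed[band].mean():.2f}")

In a real experiment the same coherence computation would be applied to recorded EEG epochs sorted by the subject’s report of which percept was being held, across several frequency bands and electrode pairs.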
Common-Experience Examples of Free Will
Numerous common-sense examples could be constructed to illustrate complex situations wherein conscious intent can occur. The examples I give are all based on presumed conscious free will to make certain movements that are much more demanding than a button press. Here is one example: you are driving a car in heavy traffic and another car runs a red light, pulling into your path. You can realize the full nature of the emergency and intend to turn the steering wheel appropriately and move your foot off of the accelerator and onto the brake pedal long before you can make such movements. You may not be able to avoid the accident that you consciously intended to avoid. The analysis of the emergency, the intent to make certain movements, and the motor execution are all completed in a fraction of a second. And we need to take into account the fact that a conscious decision can be made but not realized for up to a half second. How likely then is it that all this was figured out subconsciously, then conscious awareness was engaged, and then conscious awareness was realized in that same instant? How can the responses be generated subconsciously when the subconscious has not been preprogrammed for such movements? From beginning to end of the episode, conscious intent processes are clearly operative.
Here is another example that football fans can relate to: In almost every game there is at least one play where a pass receiver drops the ball because he was consciously thinking not only about catching the ball but also about the defensive backs he heard thundering toward him and about the moves he would make after the catch. All this was going on in conscious mind long before the brain issued the movement commands needed to catch the ball. You might argue that the preparation to move was triggered before all the conscious realizations about the pass-receiving context, but that can’t be measured. As in the car accident case above, there is no way the subconscious is preprogrammed to make all the right movements, given all the variables involved and the uniqueness of every pass-catching challenge. In any case, it seems clear that conscious thought and decisions were being made well before complex motor commands were issued and then adjusted in the last few milliseconds to the ball’s trajectory and speed to accomplish the desired movements. True, intent to move might be preceded by subconscious preparations and rudimentary alternative sets of muscle commands that could be considered for movement. But it is hard to argue that conscious thought about how and when to move is preceded solely by subconscious processes. Conscious planning, by common-sense definition at least, often precedes action. Scientists will point out that common sense can be wrong. But so can scientific dogma.
If subconscious mind does everything, and conscious mind is merely a reporter that may intervene on occasion, we have a problem in explaining the decisions and conclusions we make in:
• Attitudes and beliefs we choose to adopt as a result of introspection.
• Conclusions we choose to make from literature, poetry, art, or music.
• Deciding what words to use in rapid conversation.
• Choices we make about time (past, present, and future).
• Intentions we use in early-stage learning, such as riding a bicycle or touch typing.
• Deciding what to believe in politics, religion, etc.
• Decisions to take or avoid responsibility.
• Choices that emanate from conscious analysis.
• Choices made in developing plans for the future.
• Feedback adjustments to ideas, attitudes, emotions, and behavior.
The subconscious mind surely participates in all of these human cognitive activities, but to presume that all of these activities are governed only by subconscious mind is an assault on human reason. Only a few scientific studies of free will have been performed, and each has involved only decisions to make simple movements that one already knows how to do. These studies have seriously flawed assumptions and interpretations. Also, each of these studies is contaminated by the requirement of prerequisite processing needed to hold in conscious working memory the rules of the experimental game. In other words, I think that scientists who argue against free will have jumped to conclusions—hardly a judicious scientific stance. Until science provides evidence (as opposed to speculation cloaked in pseudo-scientific garb), it is scientifically irresponsible and dogmatic to insist there is no such thing as free will. It seems to me that such scientists are left with arguing from authority, as indicated by their citing Darwin and Einstein as no-free-will allies (Sommers 2007). Zombians reject these common-sense arguments. Yet, I have not seen anyone make the following point, which I believe to be irrefutable: in learning a new skill, such as playing the piano, there is no way the subconscious mind can control movements in the beginning, because it has no way of knowing what to do. Only the conscious mind can choose which keys to press because only it knows what should be done. If that is not free will, what is?
My way to think of the relationship of mind to free will is illustrated in Fig. 7.7. The perspective I want to emphasize is that everyday intentions, choices, and decisions can arise through a combination of subconscious and conscious actions. Which mode of operation prevails, zombian or free-will, depends on the nature of the situation and that of the individualized brain. The idea is that for simple, well-learned, or habitual tasks, the subconscious mind issues the intent, choice, or decision and informs the conscious mind of what has been done, which in turn may or may not do anything about it. For complex or novel tasks, the conscious mind does the processing and informs (programs) the subconscious mind, which likewise may or may not do anything about it (Fig. 7.8).
Personal Responsibility
The free-will issue is more than an arcane scholarly argument. Positions become politicized. In a zombian world, people are more likely to be victims and less able
Fig. 7.7 Emergence of free will from brain operations—a traditional view. When conscious mind is active, at least some of its operations can generate free will. Note that conscious mind is shown as the "tip of an iceberg," beneath which lie more basic neural processes (white arrow and dotted line) (From Klemm 2010)
Fig. 7.8 Genesis of an intent, choice, or decision can occur separately in the subconscious or conscious mind or through their joint action (From Klemm 2010)
to change maladaptive attitudes and behaviors. Thus, society and government must help people do what they cannot do for themselves. Zombians don’t seem to ask this question: is our decision to help fellow zombians likewise a zombian choice? If we have no free will, then there is not much we can do to improve ourselves or our plight in life. Or even if there are things that can be done to change us and our
situations, the approach will surely have to be different if we can't initiate the change by force of our free will. The government or schools or some other outside force must program our subconscious. That, of course, is a driving force behind moves to increase the size and power of government. If there is no "I" in charge, then there is no reason to demand or expect personal responsibility. All manner of bad brains and bad behavior can be excused. If we believe there is no free will, we cannot justify our criminal justice system. If people cannot make choices freely, and if all their decisions emanate from subconscious processes, then how can we hold them responsible for unacceptable morals or behavior? All crime should be tolerated or at least excused, because the criminal could not help it. The zombie committed the crime. Lack of free will would mean that we should reform the criminal justice system so that no criminal would be jailed or punished. If we have no free will, it is inhumane to punish criminals or even terrorists. Indeed, the only justification for locking anybody up for misdeeds would be to protect society from further crime or terrorism. Capital punishment has to be banned, as indeed it is in many parts of the world. In the minds of many, criminals are victims. To believe in the absence of free will creates an intolerable social nihilism.
Many defense lawyers increasingly use neuroscience inappropriately to convince jurors that the defendant was not responsible for the evil deeds. They even have a name for this kind of defense: "diminished capacity." Indeed, to them the whole notion of evil is inappropriate. Lawyers are adept at stressing mitigating circumstances where criminal behavior was caused, they say, by a terrible upbringing, poverty, social discrimination, or brain injury. To be sure, most murderers have been found to have a standard profile that includes childhood abuse and some kind of neurological or psychiatric disorder (Gazzaniga 1998, 201p). But many non-murderers have a similar profile. How can lack of free will explain such a difference? The reality is that most people have brains that can learn social norms and choose socially appropriate behavior. Ignoring those norms is a choice.
A most disturbing book, written by Laurence Tancredi (2005), uncritically accepts the Libet-type research reports. He endorses the zombian view and argues that human morals are "hard-wired," with the "wiring" created by genetics and molded by uncontrollable forces in life experiences. Tancredi is a lawyer and practicing psychiatrist. Not surprisingly, the poster boy for his arguments was a psychopathic serial killer, Ricky Green, who was abused as a child and had relatives with serious mental problems. Thus, Tancredi stresses that bad genes and bad treatment as a child made Green become a "biologically driven" murderer. Yet, as the case history unfolds, it becomes clear that Green was not insane. He was fully aware of his childhood past, and was fully aware of, even remorseful over, his murders. He was also aware that his out-of-control episodes were triggered by the combination of sex and alcohol. So, it was clear, even to Green, that his crimes could have been prevented by avoiding alcohol. He apparently was not an alcoholic who had no control over drinking. Even if we give the benefit of the doubt to the conclusion that Green could not control himself, it is a stretch to argue that the uncontrollability of psychopaths applies to
everybody else. One would have to argue that normal people are only normal because they got good genes and had a childhood in which their mental health was not damaged. Interestingly, Tancredi acknowledges the brain is changeable if skilled therapists provide structured rehabilitation for dysfunctional thinking. But the general tenor of the argument is that the individual is powerless to produce such changes. They have to come from others. Because dysfunctional people are victims who can't help themselves, it is the duty of psychiatrists and government to mold the brains of people so they overcome bad genes and whatever bad experiences life has thrust upon them. People, zombies that they supposedly are, do not have the power to nurture their brain. Thus, government must create a cultural and educational environment in which humans are molded to conform to some pre-defined state of normality. Does this remind you of Aldous Huxley's Brave New World?
Also not considered by Tancredi and his crowd is that dysfunctional people might have become that way through their own freely determined bad choices along their life's journey. Arguing that the brain is modifiable by experience, as he and many others do, is a two-edged sword. While one edge slashes the idea that a person can't change his brain, the other edge slashes the idea that people can't be changed by the influence of others. A major function of consciousness, as I have argued, is to program the brain, which inevitably causes lasting changes in its structure and functions. If consciousness provides capability for freely chosen intentions, choices, and decisions, then people are responsible for how those powers are deployed.
Tancredi acknowledges that many people have bad genes and very traumatic childhoods, yet overcome them. Sexually abused children do not necessarily become sexual predators as adults and may, in fact, become crusaders to protect children from abuse. But they get no credit for a freely chosen decision to live a wholesome and constructive life. Their virtue is attributed to necessity, not to anything they voluntarily chose to do. How then do we account for the effect of schools and religious teachings? Do we conclude that it is the inner zombie that decides which ideas and beliefs to accept and which to reject? If so, why do some zombie brains accept the teachings and others reject them? How can anyone seriously contend that people have no conscious intentions, choices, or decisions—that we are driven only by impulses and desires? How can anyone contend that all our impressions, beliefs, value systems, and preferences are not molded by conscious choice? How can anyone seriously argue a person is not responsible for criminal and evil behavior? Yet, this view of human life is endorsed by many highly educated lawyers, mental health professionals, and scientists. Oh, and don't forget liberal politicians. These people demean human life by regarding us all as victims or undeserving beneficiaries of good fortune. Success can be demeaned because we don't deserve it. Failure can be excused because we can't help it. Though all of us have subconscious devils, arising primarily from our animal-based biology, that does not mean that we are excused when the devils take over. How do we explain how so many people overcome their subconscious devils,
whether those are compulsions, addictions, or maladaptive lifestyles, and even go on to change their attitudes and belief systems? At a minimum, conscious mind has the power of veto. I further contend that conscious mind is responsible for programming the subconscious, for better or for worse. This power is exerted in many ways, ranging from what we decide to read and watch on television to the kinds of people we associate with and the kinds of behaviors we indulge in. Responsibility is not only a social construct, it is also learned by the brain. And the brain has the power to make choices that are not easy. A terrible childhood, for example, need not condemn one to an immoral or underachieving life. Abraham Lincoln and Thomas Huxley are conspicuous examples of people who consciously chose to rise above their environment. Sigmund Freud was a cocaine addict. George Patton hallucinated. Meriwether Lewis, of the Lewis and Clark expedition, was a manic depressive. True, some brains have abnormalities that make it more difficult to learn social norms or to obey them. Yet, many brain abnormalities are created by the lifestyle, thought, and behavioral choices that a person freely makes. You will probably mess up your brain by snorting cocaine or smoking pot, but that behavior is something you chose to do. You may program your brain badly by associating with the wrong people, but again, that is a choice, not a necessity.
Let us leave aside for the moment the fact that a world in which no one can be held accountable is impractical, socially maladaptive, and even intolerable. After all, these conclusions are not valid scientific arguments against the zombian position. But hopefully this analysis of neuroscience research on free will shows that the zombian view is not only nihilistic, but scientifically unjustified. To be personally responsible means to be in at least partial control of our attitudes, thoughts, and behavior and to be able to change them. To exert this control, we must have free will. In a free-will world, people can choose to extricate themselves from misfortune.
The Purpose of Free Will
Consciousness as the Brain's Planner
One of the hallmarks of conscious thinking is the intention that deals with future planning. The logical flow of steps in systematic conscious planning results from a series of step-by-step intentions. While each step in the plan is constrained by logic and thus perhaps not freely willed, many plans have branch points where alternative steps can be freely chosen.
Consciousness Facilitates Learning
I take conscious intent one step further and view consciousness as a special adaptation that makes us the superior learners that we are. Consciousness guides development of intent, focus of attention, plans, creation of belief systems, suppression of unwanted distractions, and integration of information across past, present, and future epochs. In short, consciousness amplifies the programming of the brain, from sources both internal and external to the brain.
Consciousness makes our thinking explicit. Conscious mind not only vetoes decisions, but also generates intentions. By making analysis and decisions explicit in the consciousness, we can more effectively write the programs of the brain that change who we are, what we think, and what we do—now and in the future. Free will gives us power over the subconscious and over the future. We can set goals and make plans to achieve those goals. Consciousness gives us more power to make good decisions. Free will allows us to analyze the pros and cons of multiple alternatives and explicitly choose among them. Consciousness allows us to monitor the consequences of decisions and use free will to make necessary adjustments.
Many of the zombians like to point to automatic behaviors, such as driving, riding a bicycle, and the like, as good examples of the absence of conscious control. What they don't seem to recognize is that it was conscious thought that caused these automatic behaviors to be learned well enough to become automated. Remember when you first learned to ride a bicycle? It was hardly a subconscious process. Even when decisions are made subconsciously, the conscious mind can veto decisions that it concludes, through conscious rational processes, are inappropriate. Now the question is, how is the second-order decision made? Vetoed acts are decided in the consciousness. Are such vetoes also constructed subconsciously, with the conscious mind merely thinking it appropriated the veto as its own? This may be hard to determine, because in many real-world situations, veto decisions are made in fractions of a second, often with no chance for prolonged subconscious planning and analysis. Studies of the Libet kind bear no obvious relevance to real-world situations where people are aware of their own stream of multiple decisions made at high speed, such as driving on a crowded freeway or playing any of a number of high-speed sports, such as tennis, racquetball, or basketball. Even if many such decisions are made subconsciously, we know that many choices are made "on the fly," and it is hard to believe that no choice is made consciously. Although past experiences and subconscious programming adjust the probabilities that certain choices will be made, we are still free to make a choice.
No matter how hard scientists and philosophers try to convince us that free will is illusory, all humans have an inescapable sense that they are free to make "right" choices, even in the face of high odds against it. Where does this sense of conscience come from? Where do we get our "illusion" that we chose to conduct our behavior in line with our sense of right and wrong, of wise and foolish? To the extent that consciousness is operative, we all are in theory "masters of our own fate, captains of our own ship." Of course, that cannot be entirely true, because consciousness is always being pushed and shoved around by what the subconscious mind wants to do. But then what the subconscious does is affected by its programming, much of which is the result of conscious choice. Not all of us are equal in ability to be aware of or control our subconscious urgings. Some people are more autonomous and self-actualized than others. Some people are servile and submissive, more comfortable in choosing to blindly accept the views of gurus, teachers, or political leaders. It is not clear how much of this inequality is due to genetics and how much to learning, but certainly we can learn
such aspects of control over subconscious mind as how to control impulses, how to make wise decisions and how to plan carefully. It does not matter whether you believe that conscious mind is an external “observer” in the head, as Rene Descartes did, or whether you think of consciousness as emerging from widely distributed coherent neuronal impulse activity in the cortex, as I do. The practically important point is that this consciousness can change the processes that generate it. This is a profound property that affects how our individual personalities and attitudes change over time and even affects the evolution of whole human cultures.
Program the Subconscious
Even if or when we were not responsible for a given act at the time of the act, we are responsible for the original creation of the demon that caused the act. This point is argued persuasively by Robert Kane (2004), who also quotes Aristotle as having said, "If a man is responsible for wicked acts that flow from his character, he must at some time in the past have been responsible for forming the wicked character from which these acts flow." The mind does not self-assemble as a stereotyped reflex response to experience. The mind is assembled by the intervention of conscious thought in the response to experience.
Comedian Tim Allen, in his semi-serious book I'm Not Really Here, challenges the zombian philosophy this way: "Eastern mystics say that there's no one home inside us, no 'I' even asking the question. The 'I,' they claim, is an illusion. … So tell me smart guys: who's doing the looking?"
To believe in free will is to take responsibility for one's actions and to hold others accountable for theirs. Moreover, taking responsibility leads to more success in life, however one wishes to define "success." Taking responsibility empowers people, enabling them to cope and overcome. Whatever the nature of that conscious "ghost in the machine," it has the power to tell the brain what to think and do. Conscious brain can do more than just veto. Conscious brain is our primary agent of change. If we are ever to move up out of our comfort zone, it will be freely willed intent of conscious brain that makes us do it. Conscious brain makes choices in the full light of awareness. Subconscious brain makes choices too, but the conscious mind is not aware of what the subconscious choices are until their consequences become evident.
Having hopefully made the case that the conscious brain makes choices freely, I am especially obliged to explain why so many people make stupid and irrational choices. Read any day's newspaper and you will find examples of how people make bad decisions. They often choose to believe in ideas that are demonstrably wrong or irrational, even to the point of giving their lives. Indeed, whole cultures may have belief systems that are logically incompatible with the belief systems of other cultures. Clearly, somebody gets it wrong. I have no simple answer to this question. The explanation for bad choices involves a host of variables involving biology, personal experience, and, of course, conscious choice. Any particular brain can evolve any particular mind, depending on what the brain has learned and the choices that brain makes. Societies can also
make conscious choices and use peer pressure or actual force to achieve compliance with those choices. One thing is clear: our choices make us what we are. Few have put it any better than the fictional headmaster, Albus Dumbledore, of Harry Potter's school of wizardry and witchcraft:
It's our choices, Harry, that show what we truly are, far more than our abilities.
What we consciously believe can program our subconscious. In his recent book, The Biology of Belief, Bruce Lipton (2005) argues that beliefs control our biology. His point is somewhat overstated, but the core idea is valid. The mind does exert enormous control over the body. In medicine, this is evident in the placebo effect, wherein subjects in clinical drug trials who are given a fake drug, a placebo, show improvement simply because they believe that they are getting the drug. Placebo effects are especially noticeable in the treatment of asthma, Parkinson's disease, and depression. Of course, if the drug is any good, the treatment group shows a bigger effect. One cannot explain away the placebo effect by saying that the improvement is "all in their mind." It is more than that. The mind's belief has actually changed the bodily function. Many experiments document less mysteriously how the mind influences our heart rate, our blood pressure, our release of hormones, our immune system, and many other functions. The opposite of the placebo effect, the "nocebo" effect, also occurs and is perhaps more common. Here the idea is that negative thoughts actually promote or aggravate medical problems.
The point is, for our purposes, that what we believe is a matter of choice. We can choose what we believe. We can choose to embrace positive thoughts or negative thoughts. We choose whether to see the glass as half full or half empty. We can choose religious beliefs that lift the human spirit or beliefs focused on killing those who do not share our religion.
One conspicuous feature of free will is the sense of fairness. The Golden Rule of doing unto others as we wish them to do unto us is a fairness doctrine. Attitudes and actions in support of fairness are hard to explain as the result of a mechanistic subconscious operation, because the analysis of what is fair occurs via conscious reflection. We choose how to classify things as fair or unfair and make them part of the "program" of our character. Sense of fairness is linked to willingness to cooperate or to fight. Conscious sense of fairness must have been present in the earliest human social groups, which were small and allowed easy tracking of who cooperated with whom.
Lipton likens the subconscious to running the mind on autopilot, whereas conscious mind provides manual control. The most obvious example of how the manual control of conscious thinking serves to re-program the subconscious is how psychologists treat phobias. We may, for example, be driven by fears of snakes, or heights, or of failure, or even of success. Careful examination of these phobias in the light of conscious and rational analysis can re-program the subconscious "autopilot" so that we no longer have such fears. More generally, we can say, as Lipton does, that "the biggest impediments to realizing the successes of which we dream are the limitations programmed into the subconscious."
Beliefs that are embraced in the subconscious are automated and rise to the surface rapidly and without analysis. This is the mechanism of bias and prejudice. Such beliefs often serve us poorly. The conscious mind has the option of considered evaluation of beliefs, and can choose which beliefs will serve us well and which are counterproductive. Thus, we rely on the manual control of conscious mind to superimpose its will and belief systems on our thinking. Ultimately, the belief systems that become embraced by the conscious mind can serve to re-program our subconscious. This even occurs at the social level. In our own time, witness how attitudes about racial segregation have changed: most people came to realize that segregation was unfair and adjusted their attitudes accordingly.
No Will, No Way
What if willed positive thought is not applied? If people believe that they have no control over their lives, why would they even try to make things happen? People who don't believe they are in control make excuses and attribute their situation to fate, bad luck, or powerful forces or people beyond their control. My Blame Game book was designed to show such people that their belief is a form of learned helplessness, and that it is mistaken.
People can be pushed and buffeted around by events and other people without exerting their free will. Many people may have become habituated to a state of will-lessness. The extreme form of this condition, called learned helplessness, has been well documented in both animal and human studies. Elephants, for example, can be trained to stay tethered by staking their restraint chain so that escape is impossible. Then, later, you can stake them out with a simple wooden stake that they could easily pull out, but they don't even try. In people, the situations where learned helplessness develops are much more complex. Usually, however, these are situations in which a person concludes that there is little that he or she can do. Trying harder or using a different approach is perceived in advance to be doomed to failure. The doubt breeds powerlessness. Even though the doubt may be irrational, it still has the power to immobilize us. Commonly, we are not only unable to escape our fate, but we feel victimized, blaming others and making excuses.
The learned element of such helplessness is key to its creation and to its cure. Past failure creates the state. The cure is conscious reasoning that unmasks the irrationality of learned helplessness. With free will, we can will ourselves into action and choose to pursue alternative goals, strategies, and tactics. We do not have to be chained to the stakes of the past. In his book, The Power Principle, Blaine Lee (1998, 363p) begins with the premise that learned helplessness is at the root of lack of personal power—and that this is a matter of choice. When you choose to be powerless, you exhibit behaviors such as ignoring, disregarding, procrastinating, neglecting, and being indifferent. The consequences lead to living with the status quo, uncertainty, anxiety, fantasies, diminished capacity, and helplessness. Lee's remedies begin with rational assessment of just how helpless and powerless you really are. The state may be real, but typically it is because you have boxed yourself in and not considered looking for a way out. Like the elephant chained to a wooden peg, you have come to accept your state as the norm, when in fact there is no rational reason why that has to be so.
There are alternatives, and these revolve around freely willed development of personal efficacy. This begins with a sense of agency, namely, that we are the agents of change. In Albert Bandura's (1997, 604p) book, Self-efficacy: The Exercise of Control, cultural evolution is seen as moving from a collective sense of depending on the gods to a realization that people have the free-will capacity to shape their own destiny. Early humans relied on conciliation rituals to the gods to improve their lot in life. Personal efficacy is seen by psychologists as a state of control over one's life. People clearly differ in efficacy capability, but they do have choice of action, and many of them are very effective at choosing wisely and at following through to accomplish their choice. It is hard to see how all this can be accomplished without a substantial degree of conscious free will.
To move from helplessness to personal power, one must consciously decide that the discomfort and inconvenience of being helpless are greater than the difficulty in making a change. Deciding to change requires only a simple cost-benefit analysis and choosing accordingly. Once a decision to change is made, the change process may need such assistance as letting go of old ways, learning about alternatives, getting help from others, seizing opportunities as they arise—and, yes, faith and courage. All of these can be free choices too. The exercise is worth it. Of course, self-efficacy does not result directly from the exercise of free will. Efficacy develops with skill acquisition and coping achievement, and this in turn empowers people. With personal power, you no longer have to make excuses. The old habits can die away, to be replaced by better habits created out of a free will to change.
We are what we repeatedly do. Excellence, then, is not an act, but a habit. —Aristotle
References
Bandura, A. (1997). Self-efficacy. The exercise of control. New York: W. H. Freeman & Co. Baumeister, R. F., Masicampo, E. J., & DeWall, C. N. (2009). Prosocial benefits of feeling free: Disbelief in free will increases aggression and reduces helpfulness. Personality and Social Psychology Bulletin, 35, 260–268. Bayne, T. (2009). Phenomenology and the feeling of doing: Wegner on the conscious will. In S. Pockett, W. P. Banks, & S. Gallagher (Eds.), Does consciousness cause behavior? (pp. 169–185). Cambridge: MIT Press. Caggiano, V., et al. (2009). Mirror neurons differentially encode the peripersonal and extrapersonal space of monkeys. Science, 324, 403–406. Christophel, T., & Haynes, J. D. (2009). Single trial time-frequency decoding of early choice related EEG signals. Further evidence for non-conscious determinants of "free" decisions. Program No. 194.19 Neuroscience Meeting Planner. Chicago: Society for Neuroscience [Online]. Churchland, P. S. (2002). Self-representation in nervous systems. Science, 296, 308–310. Cowan, N. (2005). Working memory capacity. New York: Psychology/Taylor & Francis. Czisch, M., et al. (2002). Altered processing of acoustic stimuli during sleep: Reduced auditory activation and visual deactivation detected by a combined fMRI/EEG study. Neuroimage, 16(1), 251–258. Dalal, S. S., et al. (2010). Intrinsic coupling between gamma oscillations, neuronal discharges, and slow cortical oscillations during human slow-wave sleep. The Journal of Neuroscience, 30(4), 14285–14287.
Danquah, A. N., et al. (2008). Biases in the subjective timing of perceptual events: Libet et al. (1983) revisited. Consciousness and Cognition, 17(3), 616–627. Dennett, D. C. (2005). Sweet dreams. Cambridge: MIT Press. Desmurget, M., et al. (2009). Movement intention after parietal cortex stimulation in humans. Science, 324, 811–813. Edelman, G. M. (1989). The remembered present: A biological theory of consciousness. New York: Basic Books. Edelman, G. M. (1992). Bright air, brilliant fire. On the matter of the mind. New York: Basic Books. Gazzaniga, M. S. (1998). The mind's past. Berkeley: University of California Press. Grill-Spector, K., & Kanwisher, N. (2005). Visual recognition. As soon as you know it is there, you know what it is. Psychological Science, 16(2), 152–160. Hobson, J. A., & McCarley, R. (1977). The brain as a dream state generator: An activation-synthesis hypothesis of the dream process. The American Journal of Psychiatry, 134, 1335–1348. Jaeggi, S. M., et al. (2008). Improving fluid intelligence with training on working memory. Proceedings of the National Academy of Sciences of the United States of America, 105(19), 6829–6833. www.pnas.org/cgi/doi/10.1073/pnas.0801268105. Jaynes, J. (1976). The origin of consciousness in the breakdown of the bicameral mind. New York: Mariner Books/Houghton Mifflin. Jeannerod, M. (2009). Consciousness of action as an embodied consciousness. In S. Pockett, W. P. Banks, & S. Gallagher (Eds.), Does consciousness cause behavior? (pp. 25–38). Cambridge: MIT Press. Joordens, S., et al. (2002). When timing the mind one should also mind the timing: Biases in the measurement of voluntary actions. Consciousness and Cognition, 11, 231–240. Kane, R. (2004). Agency, responsibility, and indeterminism: Reflections on libertarian theories of free will. In J. Campbell, M. O'Rourke, & D. Shier (Eds.), Freedom and determinism (pp. 70–88). Cambridge: MIT Press. Kasanetz, F., et al. (2010). Transition to addiction is associated with a persistent impairment in synaptic plasticity. Science, 328, 1709–1712. Klein, S. (2002). Libet's temporal anomalies: A reassessment of the data. Consciousness and Cognition, 11(2), 198–214. Klemm, W. R. (1990). The behavioral readiness response. In W. R. Klemm & R. P. Vertes (Eds.), Brainstem mechanisms of behavior. New York: John Wiley & Sons. Klemm, W. R. (2010). Free will debates: Simple experiments are not so simple. Advances in Cognitive Psychology, 6, 47–65. Klemm, W. R., Li, T. H., & Hernandez, J. L. (2000). Coherent EEG indicators of cognitive binding during ambiguous figure tasks. Consciousness and Cognition, 9, 66–85. Lau, H. C., et al. (2004). Attention to intention. Science, 303, 1208–1210. Lau, H. C., Rogers, R. D., & Passingham, R. E. (2006). On measuring the perceived onsets of spontaneous actions. The Journal of Neuroscience, 26(27), 7265–7271. Lauer, C., et al. (1997). A polysomnographic study on drug-naive patients. Neuropsychopharmacology, 16, 51–60. Lee, B. (1998). The power principle. New York: Simon and Schuster. Lee, D. (2004). Behavioral context and coherent oscillations in the supplementary motor area. The Journal of Neuroscience, 24(18), 4453–4459. Libet, B., & Commentators (1985). Non-conscious cerebral initiative and the role of conscious will in voluntary action. Behavioral and Brain Sciences, 8, 529–566. Lipton, B. (2005). The biology of belief. Santa Barbara: Mountain of Love/Elite Books. Makeig, S., Jung, T.-P., & Sejnowski, T. J. (1998).
Multiple coherent oscillatory components of the human electroencephalogram (EEG) differentially modulated by cognitive events. Society for Neuroscience Abstracts, 24(1), 507. Mele, A. R. (2009). Free will: Theories, analysis, and data. In S. Pockett, W. P. Banks, & S. Gallagher (Eds.), Does consciousness cause behavior? (pp. 187–205). Cambridge: MIT Press. Musallam, S., et al. (2004). Cognitive control signals for neural prosthetics. Science, 305, 258–262. Nijhawan, R., & Kirschfeld, K. (2003). Analogous mechanisms compensate for neural delays in the sensory and the motor pathways. Current Biology, 13(9), 749–753.
Obhi, S. S., & Haggard, P. (2004). Free will and free won’t. American Scientist, 92, 358–365. Ono, F., & Kawahara, J.-I. (2005). The effect of non-conscious priming on temporal production. Consciousness and Cognition, 14(5), 474–482. Pockett, S., Banks, W. P., & Gallagher, S. (Eds.). (2009). Does consciousness cause behavior? Cambridge: MIT Press. Povinelli, D. J., & Giambrone, S. (2001). Reasoning about beliefs: A human specialization? Child Development, 72, 691–695. Robertson, E. M., Press, D. Z., & Pascual-Leone, A. (2005). Off-line learning and the primary motor cortex. The Journal of Neuroscience, 25(27), 6372–6378. Sarrazin, J.-C., et al. (2008). How do we know what we are doing? Time, intention and awareness of action. Consciousness and Cognition, 17(3), 602–615. Shrager, Y., et al. (2008). Working memory and the organization of brain systems. The Journal of Neuroscience, 28(18), 4818–4822. Singer, W. (2007). Binding by synchrony. Scholarpedia, 2(12), 1657. Singer, W., & Engel, K. (2001). Temporal binding and the neural correlates of awareness. Trends in Cognitive Sciences, 5(1), 16–25. Sommers, T. (2007). The illusion of freedom evolves. In D. Ross et al. (Eds.), Distributed cognition and the will (p. 73). Cambridge: MIT Press. Soon, C. S., et al. (2008). Non-conscious determinants of free decisions in the human brain. Nature Neuroscience, 11, 543–545. doi:10.1038/nn.2112. Stillman, T. F., et al. (2010). Personal philosophy and personal achievement: Belief in free will predicts better job performance. Social Psychological and Personality Science, 1, 43. doi:10.1177/1948550609351600. Swartz, K. B. (2003). Self-reflection, a review of the book. The face in the mirror: The search for the origins of consciousness, by Julian Keenan with Gordon Gallup (278 pp). Ecco/Harper Collins. In American Scientist, Vol. 91, pp. 574–575. Takehara-Nishiuchi, K., & McNaughton, B. L. (2008). Spontaneous changes of neocortical code for associative memory during consolidation. Science, 322, 960–963. Tancredi, L. (2005). Hardwired behavior. What neuroscience reveals about morality. New York: Cambridge University Press. Ulrich, R., Nitschke, J., & Rammsayer, T. (2006). Perceived duration of expected and unexpected stimuli. Psychological Research, 70, 77–87. Vertes, R. P. (1986). A life-sustaining function for REM sleep: A theory. Neuroscience and Biobehavioral Reviews, 10, 371–376. Vohs, K. D., & Schooler, J. (2008). The value of believing in free will: Encouraging a belief in determinism increases cheating. Psychological Science, 19, 49–54. Walter, H. (2001). Neurophilosophy of free will. From libertarian illusions to a concept of natural autonomy. Cambridge: MIT Press. Warrington, E., & Weiskrantz, L. (1968). A study of learning and retention in amnesic patients. Neuropsychologia, 6(3), 283–291. Webb, W. B., & Agnew, H. W., Jr. (1971). Stage 4 sleep: Influence of time course variables. Science, 174, 1354–1356. Wegner, D. M. (2002). The illusion of conscious will. Cambridge: MIT Press. Yoo, S., et al. (2007). A deficit in the ability to form new human memories without sleep. Nature Neuroscience, 10, 385–392.
8
Theories of Consciousness
The contemporary "fad" among neuroscientists is to look for esoteric mechanisms that could support the near-universal belief that consciousness must be some kind of "emergent property" of brain. That is, it is widely assumed that consciousness cannot be explained from the first principles of neurobiology. I will present four theories that I think offer the best potential for explaining conscious mind: Bayesian Probability, Chaos, Quantum Mechanics (QM), and Circuit Impulse Patterns (CIPs). Bayesian Probability and Chaos Theory treat the brain as a "black box" and purport to describe what the brain does more than show how the brain does it. QM and CIP theories get at the nuts and bolts of how the brain might produce consciousness. Quantum mechanics may prove to be the ultimate explanation, but this theory contains many counterintuitive and bizarre notions that we may never completely understand, even in its home domain of particle physics. Consciousness may well be inexplicable, but the fourth theory, based on CIPs, at least has the virtue of being mostly intuitive and, more importantly, of being clearly based on what we have learned about the brain over the last 100 years.
Bayesian Probability
At the single neuron level, we can consider the question of how a neuron responds to a specific kind of stimulus. The encoding takes the form of a series of spikes that can be quantified in terms of their number and when they occur. However, neurons are stochastic; that is, their spike firing will not be exactly the same if the same stimulus is repeated. So, the encoding has to be characterized by a probability distribution of spike patterns over a given time. The code then is captured in a population of all possible responses to a given stimulus or input. In the context of circuit impulse patterns (CIPs), this means that neurons not only represent the world with CIPs but those CIPs may also represent a family of probabilities. Brain networks operating on Bayesian principles compute with those
probabilities and combine them with reinforcement estimates, so that learning results for given circumstances. Formally, Bayes' rule is as follows:

P(A | BC) = P(A | C) P(B | AC) / P(B | C)

where P = probability, A = one of several explanations for a new observation, B = the new observation, and C = a summary of all prior assumptions and experience. Bayes' rule tells how the learning system should update its beliefs or frame of reference as it receives a new observation. Before making the observation B, the learning system knows only C, but afterwards it knows BC, that is, it knows "B and C." Bayes' rule then tells how the learning system should adapt P(A | C) into P(A | BC) in response to the observation. In order for Bayes' rule to be useful, the explanation A needs to be such that together with the prior assumptions and experience C it fixes the probability P(B | AC). P(A | C) can be thought of as the baseline probability or frame of reference, whereas P(A | BC) is the probability after adjustment for the new information. At that point, P(A | BC) for one observation then becomes the P(A | C) for the next observation.
The brain does have to perform many functions that could benefit from Bayesian mechanisms. For example, how do we compare the risks of one-time events? How do we make decisions to optimize our gains and minimize our costs? Eric Baum (2004) points out that the value of Bayesian mechanisms is that they allow our brain to estimate the likelihood of various outcomes and the desirability of those outcomes. Bayesian probability is conditional, based on immediately pre-existing conditions. For example, the likelihood of impulse discharge by the neurons in the hypothalamus that signal when you are hungry will vary depending on whether or not you just ate. However, we should not conclude that all neurons exhibit Bayesian coding. But it seems clear that many CIPs could represent probability families, which in turn could be established by prior experience and learning. Such processes would also add significant weight to the conclusion that CIPs are central to how the brain thinks.
To decode a spike train, a second-order neuron needs to detect the probability distribution of an incoming spike train and associate it with a given kind of stimulus. These probabilities depend on the nature of the stimulus and the nature of the receiving neuron. Bayesian statistics are a common way to evaluate such conditional probabilities. Bayesian models have demonstrated the feasibility of coding at a population level. Such models are also effective in showing how a variety of sensory cues can be integrated and processed with prior knowledge. Also, decision making and its speed and accuracy seem amenable to Bayesian analysis (Doya et al. 2007). Probability estimates for any given situation are adjusted by experience and learning.
The idea is an application of the mathematical principle of Bayesian probability theory. Bayesian probability is different from the kind of probabilities we are more familiar with, which are based on frequency or incidence. Bayesian probabilities are determined by updating existing information to account for new information. As an example most of us can relate to, Bayesian
approaches have been applied to filter spam e-mail. A Bayesian e-mail filter uses a reference set of e-mails to define what is originally believed to be spam. After the reference has been defined, the filter then uses the characteristics in the reference to define new messages as either spam or legitimate e-mail. New e-mail messages act as new information, and if mistakes in the definitions of spam and legitimate e-mail are identified by the user, this new information updates the information in the original reference set of e-mails with the hope that future definitions are more accurate.
The original idea for Bayesian population coding in the nervous system came in the 1980s from Geoffrey Hinton of the University of Toronto and Terry Sejnowski, then at Johns Hopkins, who posited that the brain makes decisions based on probability estimates, evaluating a range of possibilities and assigning likelihood estimates. Observations of neuronal firing patterns indicate that at least some neurons operate with Bayesian principles. Some parietal cortex neurons, for example, modulate their firing rate relative to the probability of a saccade ending within the neuron's receptive field (Platt and Glimcher 1999). As another example, dopaminergic neurons code reward probability (Schultz et al. 2000). Bayesian neurons also can represent how probabilities change over time (Janssen and Shadlen 2005). Population code computing, such as sensorimotor transformations, is an area of active investigation. Mathematical models show that network computing can act on one set of noisy population codes and generate another set as output (Latham and Pouget 2007).
An approach to study Bayesian ideas that seems to be gaining followers is to treat the brain as a probability machine that constantly makes predictions about the world and then updates them based on what it senses and learns. How neuronal circuits make predictions receives scant attention. Clearly, Bayesian theory treats brain as a biological learning machine, which it certainly is. It is not clear how the newborn brain, which is almost a blank slate, can make probability estimates. Surely, the statistical error must be quite large. Viewed this way, as the brain matures, its probability estimates become more correct and reliable. In conscious brain, Bayesian principles are codified as beliefs or expectations. At a neural network level, these principles presumably are codified by circuit impulse patterns (CIPs) that represent the frame of reference against which the CIPs representing new information are matched. Bayesian predictions may operate at both subconscious and conscious levels, but it is not clear how these principles can generate consciousness.
There are experiments on sensory perception that seem to support Bayesian processing. Some cognitive processes seem to operate that way too. For example, when you are listening to someone speak, your brain anticipates and predicts what might be said next. The brain does more than just receive a stream of sound; it predicts it, adjusting the predictions according to what is actually said. One science writer, Gregory Huang, gives the example of seeing something in your peripheral vision. At first, your brain does not quite know what it is. To reduce the prediction error between what the brain predicts and what it senses, the brain has two choices: either change its prediction or change the way it collects the information.
If the initial prediction is that the brain perceives biological significance in the peripheral vision, it will make the head turn so that the eyes can register a more complete image. The idea is to minimize prediction error. We are still a long way from a simple equation for brain function. It is also not at all clear how Bayesian processes operate in creative thought or generate consciousness. Nor can I see how Bayesian processes could distinguish among non-conscious, subconscious, and conscious thought. There is also the possibility that complex life processes cannot be reduced to simple equations. For example, how would one write an equation for the theory of evolution? Or for embryonic differentiation?
Although Bayesian methods provide an elegant way to quantify the information content of the family of probabilities for a given neuron, they do not in and of themselves explain how specific functions occur. But this is a useful way to reinforce the notion that there are impulse patterns at all points in a circuit and that these patterns carry information that can be quantified by Information Theory. Thus, these population codes must contain useful information for specific functions. But how we can move from this level of analysis to consciousness is not at all evident. My main criticism of Bayesian theory is that it is only descriptive. Even when those descriptions are quite apt, the theory does not help much in understanding how consciousness is generated and represented by brain.
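Before leaving Bayesian theory, the sequential updating described by Bayes' rule above can be made concrete with a short numerical sketch. The hypothesis, the observations, and the likelihood values below are invented purely for illustration; they are not drawn from any experiment discussed in this chapter.

```python
# Minimal sketch of sequential Bayesian updating: the posterior P(A | BC)
# after one observation becomes the prior P(A | C) for the next observation.
def update(prior_A, p_B_given_A, p_B_given_notA):
    """Return P(A | B, C) given the current prior P(A | C) and two likelihoods."""
    p_B = p_B_given_A * prior_A + p_B_given_notA * (1.0 - prior_A)
    return p_B_given_A * prior_A / p_B

# A: a hypothetical explanation (say, "the stimulus is a face").
# B: a binary observation on each trial (a hypothetical detector fires or not).
prior = 0.5                                # initial frame of reference P(A | C)
observations = [True, True, False, True]   # invented stream of evidence

for b in observations:
    # Assumed likelihoods: the detector fires 80% of the time when A is true
    # and 30% of the time when it is not.
    like_A, like_notA = (0.8, 0.3) if b else (0.2, 0.7)
    prior = update(prior, like_A, like_notA)   # posterior becomes the next prior
    print(f"observation = {b}, updated P(A) = {prior:.3f}")
```

The same arithmetic underlies the spam-filter example: each new message updates the probability that a message with given characteristics is spam, and the updated value serves as the prior for the next message.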
Chaos Theory
Chaos Theory is a mathematical approach to consciousness and a relatively recent approach in science generally. Chaos theory seems to describe the behavior of whole complex systems, whether they involve the weather, the flow of traffic on congested streets, or the function of the human brain. The extensive connections among neurons create large functional networks that dynamically change their constituent members and the information contained by the network. Such networks can be enormously complex, leading theorists to look for "complex systems" methods of analysis, such as chaos theory. Chaos Theory is also somewhat controversial, inasmuch as some people think it rejects the reductionism that has enabled science to flourish in the last century. Nonetheless, Chaos Theory provides a refreshing way to think about processes that are complex, nonlinear, and a mixture of random and predictable mechanisms—the brain certainly is all that! Chaos provides a mathematical way to describe deterministic phenomena that are so complex that they seem random. Central to the idea of a chaotic system is that it is governed by deterministic controls, yet seems to behave unpredictably, at least until the system eventually settles into its "attractor" zone, which is a mode of operation that becomes more predictable. One characteristic of chaotic systems is that small errors can evolve rapidly and exponentially to "blow up." This certainly is a way to describe epilepsy, and the mathematical description of changes in the EEG during a seizure shows the dynamics and attractor states that characterize chaos.
The original ideas about chaos were promoted in the late 1800s by the French mathematician Henri Poincaré, but for the next 70 years chaos theory remained obscure and arcane. A major revival of interest began in 1963 with the work of the meteorologist Edward Lorenz (1993). Early on, scientists realized that weather is chaotic. After about a decade, scientists began to realize that the theory could be applied to a variety of natural complex-system phenomena, and that includes brain function (Krasner 1990). The mathematics of chaos allows one to track a succession of nonlinear events in time or position in space. Thus, it can predict the behavior of the governing system. There's the rub: even if chaos could explain what a system does, for example, generate consciousness, it cannot explain how it does it.
Theorists have attempted to apply chaos theory to many common dynamical systems, such as stock market prices, weather forecasting, and a variety of other physical or biological processes. Predicting the future state of a dynamical system becomes difficult when that system is non-linear, as is the case for the brain and the behavior it controls. Appropriate candidate systems for chaos theory must be sensitive to initial conditions. That is, even a slight change in the initial configuration of the system can cause dramatic changes in the resulting system output. It would seem futile to attempt accurate predictions when chaotic systems are unpredictable, and likewise futile to create an equation, because the solution will always change with the specific initial conditions. However, the totality of all possible solutions may be identifiable.
Chaos theory asserts that slight changes in initial conditions will eventually generate large and seemingly chaotic outcomes. Further, some behaviors may be too complex to be predicted, and behaviors can emerge for no apparent reason—again, depending on initial conditions. Even advocates of this view, such as William Clark and Michael Grunstein (2000, 322p), concede that the outcome is not entirely unpredictable. By analogy, the path a boulder takes as you push it off a cliff depends on the initial conditions (location on the cliff, orientation of the boulder, force of push), but it will always end up at the bottom.
In relating chaos ideas to brain function, we need to consider what it is in brain that moves through time. The most obvious answer is brain electrical activity, impulses and field potentials. These electrical phenomena are nonlinear. That means that a small change in initial conditions in a neuron or circuit of neurons can be amplified nonlinearly, ultimately producing marked output changes in attitude or behavior that are not programmed in the DNA or even in memory. That does not mean that these changes operate outside of human consciousness or cannot be aborted or re-directed by conscious intervention. So, the chaos theory view of human attitudes and behavior does not let us off the personal responsibility hook. The amount of indeterminacy in generating impulses and behavior does not automatically extend to indeterminacy of outcome. As Clark and Grunstein put it, "Chaos may indeed force us to experience things not scripted in either genes or experience, but we have extraordinary power to learn." They define the nature of free will as the ability to choose among personal and social possibilities that were dictated by neither genes nor experience. They conclude, "Each of us must struggle to maximize the genetic hand we have been dealt, played in the context of the
environment into which we are born, against a certain level of indeterminacy we must somehow learn to bring under control. It is this struggle that defines us and makes us human."
Chaos mathematics does a fair job of describing state changes of such brain functions as electrical activity, particularly when that activity is rhythmic. Commonly, chaos analysis is applied to the succession of microvoltages found in an EEG signal. A rigorous explanation of chaotic dynamics at the level of neuronal spike trains is presented in the book by Izhikevich (2007). Despite the widespread belief that the brain is chaotic, it is not clear to me how mathematical description is the same thing as an explanation of mechanisms. Moreover, chaos theory has not been tested much on a wide range of other, non-electrical, kinds of brain functions, ranging from biochemistry to behavior.
The ideas of chaos theory apply to tracking transient states—in the language of chaos, plotting the change "trajectory" over time. Brain electrical activity is a logical target for testing chaos theory. The course of a chaotic system's behavior, called its trajectory, tends toward some sort of stable state or equilibrium. Mathematicians call this an attractor state. Some attractors are neither steady-state nor periodic, yet they are stable in the sense that small changes in the system will not obliterate their trajectory. These are called "strange attractors." Chaotic systems tend to be "attracted to" certain functional states. A system that produces periodic behavior, such as a clock, is said to have a limit-cycle attractor trajectory. Aperiodic oscillation is typical in normal brain, and it produces what is called a "chaotic attractor." Walter Freeman (2009), the leading advocate of brain as a chaotic system, believes that brain fluctuates endlessly within an attractor basin until a phase transition occurs that moves the trajectory across the boundary between basins to another attractor. He believes that the cortex has many basins that correspond to previously learned classes of stimuli. Very small changes in neuronal impulse discharge in a few neurons can shift the trajectory from one basin to another.
Brain electrical activity seems to have some properties of chaos. The hope of chaos advocates is that by studying chaotic-like behavior of brain electrical activity one is in fact learning how the total system works. However, of necessity, it is not possible to study the dynamics of each and every generator of electric current in the brain. The typical electrical activity studied is the EEG, not impulses. One can only sample, as when trying to record the impulse activity from large numbers of implanted microelectrodes. Or, as usually done, one can sample large populations by recording field potentials, such as the EEG. The problem here is that the EEG waveform is a mixture of multiple frequencies that arise from multiple current generators. It is not at all clear how the trajectory of such a composite signal tells us anything about the function of the multiple current generators that concurrently generated the composite EEG signal. It could be useful to compare the chaotic dynamics of the different frequency components of the compounded EEG waveform.
To illustrate what chaos mathematics can do, you can calculate a trajectory of successive digitized amplitude values of an EEG by plotting the microvoltage of any given data point as a function of the next (or nth) data point. When you do that
you may see an intrinsic structure in the trajectory over time, especially when the EEG is oscillating. In such plots, time is not explicitly represented, but rather units of time can be thought of as plotted along the trajectory. Oscillation as such is not chaos, but may be triggered into it if a controlling parameter changes in certain ways. In the example of the simple logistic map for microbial growth rate, at equilibrium there is a periodic alternation between a large population and a lower population. However, if the growth rate parameter is raised beyond a critical point, chaotic oscillation emerges with a mixture of frequencies. Brain waves contain a mixture of frequencies, and that leads some people to think the brain is chaotic (Fig. 8.1). EEG signals shift from nearly pure sine waves, like alpha, to mixtures of multiple frequencies. Degrees of chaos also vary, ranging from a mixture of small high-frequency oscillations (in the range of 40–90/s or so), when we are aroused and alert, to large aperiodic waves with the small fast waves "riding on top," when we are drowsy or asleep. Much more definitive evidence of chaos occurs in epilepsy, when large spike-like waves replace all other electrical activity.
Fig. 8.1 Evaluation of an EEG signal in the context of chaos theory. Calculation of a succession of sampled voltages in an EEG trace that exhibits 8–12/s alpha rhythm (upper left). When sampled voltages are plotted as a function of nearby data points, as with lag = 2 (every other data point), the trajectory shows clear structure, with an attractor zone around a small hole in the center. With increasing lag, there is less dependence of one data point on the preceding data point, and the structure breaks down
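For readers who want to experiment with these ideas, here is a minimal sketch (my own illustration, not code from any of the studies cited in this chapter) of the two points just made: the logistic map's shift from periodic to chaotic behavior as its growth-rate parameter is raised, and a lag plot of a synthetic alpha-like trace in the spirit of Fig. 8.1. The signal, sampling rate, and parameter values are all invented for illustration.

    # Logistic map: periodic vs. chaotic regimes, plus a lag plot of a
    # synthetic 10/s "alpha" oscillation with added noise (illustrative only).
    import numpy as np
    import matplotlib.pyplot as plt

    def logistic_series(r, x0=0.5, n=200):
        """Iterate x -> r*x*(1-x); periodic near r=3.2, chaotic near r=3.9."""
        x = np.empty(n)
        x[0] = x0
        for i in range(1, n):
            x[i] = r * x[i - 1] * (1.0 - x[i - 1])
        return x

    # Synthetic "EEG": 10/s sine plus noise, 200 samples/s for 4 s
    t = np.arange(0, 4, 1.0 / 200)
    eeg = 20 * np.sin(2 * np.pi * 10 * t) + 5 * np.random.randn(t.size)

    fig, axes = plt.subplots(1, 3, figsize=(12, 4))
    axes[0].plot(logistic_series(3.2), ".-")
    axes[0].set_title("logistic map, r = 3.2 (periodic)")
    axes[1].plot(logistic_series(3.9), ".-")
    axes[1].set_title("logistic map, r = 3.9 (chaotic)")

    lag = 2  # plot each sample against the sample two points later, as in Fig. 8.1
    axes[2].plot(eeg[:-lag], eeg[lag:], ".", markersize=2)
    axes[2].set_title("lag plot of synthetic alpha (lag = 2)")
    axes[2].set_xlabel("x[n]")
    axes[2].set_ylabel("x[n+2]")
    plt.tight_layout()
    plt.show()

Raising the lag, or feeding in a noisier mixture of frequencies, degrades the structure of the lag plot in much the way the figure caption describes.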
Walter Freeman (2000) has made the best use of chaos theory in neuroscience. He regards the brain as a chaotic system that has an equilibrium attractor "rest" state. Stimulation produces an active state, which degrades back to the attractor state when the brain habituates to the stimulus or the stimulus stops. One problem with this view is that the brain never rests. Most of its neurons are always firing impulses, though it is true that the number and pattern of impulse discharges are changed by stimulation. Freeman's initial work involved recording the EEG from the simplest part of the brain, the olfactory bulb. He built an ingenious array of 64 electrodes that could cover most of the bulb. He found that activity at each electrode oscillates. All parts of the bulb share similar waveforms, but the amplitude at one or more electrodes will be significantly larger, depending on odor stimulation conditions. The information in these oscillations, as produced by different odors for example, is manifest in the topography of the amplitudes. The odor signal induces amplitude modulation in a spatially coded way. The amplitude variations are closely related to the discharge rate of neurons at each given electrode's location.

Buzsáki (2006) links chaos theory with EEG-behavioral correlations. For example, when the EEG exhibits alpha rhythm, as in the figure above, trajectories show structure. Alpha occurs in relaxed waking states and becomes most pronounced when visual stimuli are blocked by closing the eyes. This leads to the idea that alpha reflects an "idling state," enabled because the eyes and skeletal muscles are not moving all the time. In such a state the cortical columns that generate the EEG are free of demands, suggesting that the circuits self-organize into oscillation and "chaos." You might be tempted to say that relaxation and sleep are the default states of brain function in which self-organization dominates the system. Changing initial conditions, by sudden stimulation for example, disrupts self-organization and the subsequent trajectory of the state's output. Similar logic, however, could be used to defend alert wakefulness as the default brain state.

Neural networks seem to mediate a succession of transient states that constitute what the brain does. A model neural network, upon receiving some input signal, will gradually change its pattern of activated nodes (equivalent to neurons) until it settles into one "attractor" state. How one characterizes each state, i.e. each point in the trajectory, is a critical assumption and one that is seldom examined critically. For example, the typical data point is an instantaneous microvolt value in an EEG. Can you characterize a state with such a data point? Given that the EEG signal at a typical electrode can reflect activity of several networks that may or may not be functionally linked at any given moment, how can such a parameter indicate network state? And which network?

Freeman is a maverick opposed to the traditional information processing model of neural activity, which has been the underlying assumption of virtually everything I have tried to explain in this book. Instead, Freeman has developed his own model, based on chaotic activity in neural populations, assimilation, and amplitude modulation patterns of EEG activity. This perspective grows out of Freeman's long pioneering work on the theoretical implications of the EEG, originally manifest in his classic book, Mass Action in the Nervous System. While most neuroscientists
embrace the idea that the brain processes information, I feel that we owe Freeman a hearing for his views in deference to his prominence as a scientist of the first rank. Freeman points out a number of reasons for his discontent with the information processing model: he claims it fails to explain how we select which stimuli to attend to, and it disregards the impact of emotions on how we perceive and experience things. His main criticism, however, is that information processing does not seem to account for the unique conscious meanings we each form for experiences. Freeman's model places great, and altogether appropriate, emphasis on the dynamic nature of the mind, and on active rather than passive perception. But that does not mean that the nonlinear dynamical modeling of chaos theory accurately describes thinking, much less explains it.

Freeman uses the term "assimilation" to describe this incorporation and consequential application of environmental information. Self-organization is held to be the driving force in our environmental assimilation and generation of intention. It is achieved, according to Freeman, by means of chaotic dynamics sustained by neural populations, namely those in sensory systems. The principles of these chaotic dynamics are, as Freeman himself freely admits, difficult concepts to understand. Freeman does not tackle the issue of free will, but I suspect that he believes in it. Notwithstanding his assertion that minds self-organize, Freeman says we interact with the world in a goal-directed, intentional way by creating meaningful constructs of it (embodied in brain activity patterns called amplitude modulation patterns), rather than passively processing the information it sends to us.

One of the first tenets of these dynamics is the idea that neural populations have a resting level of activity, called a point attractor, to which they will always tend to return. Oscillation, centered at this point attractor, can emerge when these neural populations are supplied with negative feedback. When learning, and consequently synaptic modification, occurs, this oscillation extends across time such that negative feedback increases, and this point attractor becomes a limit cycle attractor. Where a point attractor is a particular level of activity (i.e. activity is centered around a line), a limit cycle attractor is a particular wave pattern of activity, with set amplitude. We form different attractors for different learning experiences, and use these attractors when generating goal-directed behavior. When different neural populations (as in a sensory system) interact to excite or inhibit each other, chaotic activity results. This activity provides a level of disorder that contributes to the brain's flexibility during learning.

When an EEG is taken from these neural populations upon presentation of a stimulus, a pattern of neural activity within a sensory system emerges that supports the notion that every neuron and neural population is involved in the perception and sensation process. Each measured population displays the same waveform of activity, but with differing magnitude. That is, they are amplitude modulated. These modulated patterns are constructed with each sensory input and are unique to each type of input. They also evolve for each specific type of input over time. This evolution over time, and across situations, of amplitude modulated (AM) patterns for specific stimuli is perhaps the key to Freeman's theory.
Such evolution is, in Freeman’s opinion, evidence for the embodiment of meaning in specific AM patterns.
He reasons that, were the information processing model to hold true, the patterns for each stimulus would be the same or averaged across time, and each stimulus-specific pattern would be stored in memory so that each time an animal is presented with a new stimulus, it would be able to access this databank and find the closest matching pattern to determine the stimulus's identity. If this is not the case, and many patterns can be developed for the same stimulus depending upon the context in which it is presented, Freeman believes there must be some further information meaning attached to these patterns. Perhaps the brain has a mechanism for tagging and meta-tagging packets of information. Also supporting this notion is the fact that with each new stimulus, while new patterns are created, old ones are altered to account for the meaning of the new sensory experience. The uniqueness of AM patterns among individuals provides additional evidence for Freeman's model of meaning. How can meaning get captured by the mechanism of amplitude modulation?

The meaning in these AM patterns is essential to the self-organization that Freeman assigns to be the driving force of our environmental interactions. The sensory cortices in the brain only receive the AM patterns (in the form of nerve impulses) from sensory system neural populations. Since these patterns hold self-created meaning, it is logical to assert that the brain receives self-created input.

Freeman also reconstructs the pathways for the flow of sensory information in his model. Whereas in the information processing model information flows from various sensory neurons through intermediates such as the brainstem and thalamus, and then ultimately to a sensory cortex and likely also some side loops for additional processing, in Freeman's model all forms of sensory input flow to the limbic system, particularly the hippocampus and entorhinal complex, which both hold sway over intentional behavior. The interactions between the entorhinal cortex, the sensory systems, and motor systems, as well as the AM patterns ultimately sent to the entorhinal cortex, allow for the integration of all activity into a global AM pattern that unites all input and enables the direction of intentions, and, I would add, the "flavoring" added by emotional context. Making the limbic system an integral part of brain function is a unique contribution of Freeman's research data and his interpretations. The involvement of the limbic system integrates the influence of emotions on intentions.

The mechanism for the direction of intentions seems rather unclear in Freeman's presentation of his model, though he does offer a few ways that the limbic system appears to regulate intention. But no grand or groundbreaking model seems to emerge. It seems as if he described how input gets to the limbic system in a unique way and then assumes that we know how the limbic system directs intention. We don't know. However, experiments have supported the role of the limbic system as an intention generator, given that animals lacking a more developed limbic system, or a fully functional one, have reduced ability to generate intention. Whether limbic-system generated intention is exclusively subconscious is unknown. Each intention formed is based on past experience. This is also why each new experience alters old AM patterns as well as the point and limit cycle attractors of neural populations; every new learning experience is accounted for in how we interact with the world.
Freeman also asserts that global AM patterns promote awareness. Again, the evidence for this assertion is missing.
I don't think Freeman's model sheds much light on the nature of consciousness. But then whose model does? He regards consciousness as an interactive process by which the brain modulates its own activity. That is fine as far as it goes. It just doesn't go very far. In fact, you could say the same thing for certain subconscious processes, such as movement coordination by the cerebellum. The complicated nature of Freeman's model makes it difficult to assess its validity. He certainly presents some novel ideas, as well as supporting scientific evidence. However, some of that evidence is presented only in contexts that favor his ideas. For example, the relationship of amplitude modulation to impulse firing rate and interval coding remains unclear. How does amplitude modulation of field potentials cause thinking? I would think such modulation is the consequence of thinking.
The Problem of Fast Transients

A main problem with attractor states as representatives of thought is that the neural networks of the brain undergo changes on such a fast time scale that there is often not enough time to reach classical attractor states. In the most complex parts of the brain, such as the cerebral cortex, equilibrium may never be reached. Once an attractor state is reached, no useful dynamics are revealed. What is revealed is the state the system has settled into. Without the complete dynamical history, one may not know the path used to reach the attractor state.

Misha Rabinovich and colleagues (2008) at the University of California, San Diego, present a new way of dealing with this enigma. Recent studies of odor processing in locusts and zebrafish provide a more refined way to use chaotic dynamic modeling. Odors generate odor- and concentration-specific patterns of responses in principal neurons. For a given odor and concentration, the trajectory of the impulse response reflects a succession of states that settle into a fixed point in state space. Even if a given stimulus condition is not sustained long enough to achieve an attractor state, stable transient states can be seen. Rabinovich argues that it is the transient paths, not fixed-point states, that are most meaningful to the total system. Experimental observations confirm that a population of neurons that receives signals from principal neurons responds mostly during transients.

Brain computations may be seen as a succession of unique patterns of transient activity, controlled by adjustments in input. The model which best suits reality may be one based on stable "heteroclinic channels." Such a channel is one in which a sequence of semi-stable states ("saddle points") dissipates successively, constituting a path that is stimulus specific. The channel path reflects how the computational system gets from the initial base state to the fixed point, and it is this path that constitutes the coding for the stimulus. All of the trajectories in the neighborhood of saddle points remain in the channel, thus assuring reliability, reproducibility, and robustness. This kind of coding need not be limited to sensations. Characteristic path trajectories can code for specific memories and a wide assortment of cognitive processes. The networks that subserve such dynamical paths create the selective patterns as a
way to store and recognize the information and to employ it for "thinking." Thus, we can think of brain operations as depending on a constellation of unique patterns of transient impulse activity that are controlled by input, either extrinsic, as in the case of sensations, or intrinsic, as a result of interactions among overlapping neuronal networks.
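The flavor of such transient, sequence-based dynamics can be captured in a few lines of simulation. The sketch below (my own construction with illustrative parameters, not Rabinovich's published model) uses a small generalized Lotka-Volterra network with asymmetric inhibition, a standard way of producing "winnerless competition": activity lingers near one saddle-like state, then hands off to the next, tracing a reproducible path rather than settling into a single attractor.

    # Winnerless competition: three mutually inhibiting units whose activity
    # visits a sequence of saddle-like states (illustrative parameters only).
    import numpy as np
    import matplotlib.pyplot as plt

    dt, steps = 0.01, 60000
    alpha, beta = 0.5, 2.0                    # asymmetric inhibition strengths
    rho = np.array([[1.0, alpha, beta],       # who suppresses whom
                    [beta, 1.0, alpha],
                    [alpha, beta, 1.0]])
    a = np.array([0.6, 0.2, 0.1])             # initial activity levels
    noise = 1e-6                              # tiny perturbation keeps the switching going

    trace = np.empty((steps, 3))
    for k in range(steps):
        a = a + dt * a * (1.0 - rho @ a) + noise * np.abs(np.random.randn(3))
        a = np.clip(a, 0.0, None)             # activities stay non-negative
        trace[k] = a

    plt.plot(trace)                           # each unit dominates in turn,
    plt.xlabel("time step")                   # pausing near each saddle state
    plt.ylabel("activity")
    plt.show()

The point of the toy model is only that the information lies in the path itself, the order and timing of the hand-offs, rather than in any final resting state.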
Fractal Dimension

Fractal geometry derives from chaos theory. Fractals are the geometry of chaos; chaos and fractals are two sides of the same dynamical coin. Fractals track irregularity in space, whereas chaos tracks irregularity in time. Mathematician Steven Strogatz gives us this metaphor: "Fractals can be thought of as the footprints that chaos leaves behind."

One way of looking at chaos is to think in terms of its fractal dimension. "Dimension" in this case refers to the ability of a space to contain a set of points and measures the degree of irregularity. As an analogy, think of a cube, which has three dimensions. If its surfaces are rough or fuzzy, the surface of the cube can have a fractional dimension, ranging for example from 2.1 to 2.9. The surface of the brain's cortex, for example, has a fractal dimension of 2.3, owing to the fissures and folds that move it toward a three-dimensional object. The dimension of a smooth cortex, as in rodents or rabbits, would be significantly less. A time series, such as the EEG, can be represented as a curve with dimension between one and two, with higher values often associated with high degrees of mental alertness and cognitive performance. Many investigators have looked at fractal dimension with the hope of identifying neural correlates of various states of cognition. While such correlations can often be found, they have not been consistent. Even when fractal analysis seems to work, we are still left with a problem. Fractal measurements reflect the mathematics of how different frequencies are compounded when they are simultaneously generated, but that tells us very little about the underlying processes that generate the various frequencies and how they relate to the cognition or behavior being studied.
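For the curious, the most common way such numbers are obtained from an EEG epoch is Higuchi's algorithm. The sketch below is my own implementation of that standard method, not code from any of the studies alluded to above; a smooth oscillation should come out with a dimension near one, and white noise should approach two.

    # Higuchi fractal dimension of a one-dimensional time series.
    import numpy as np

    def higuchi_fd(x, kmax=10):
        """Estimate the fractal dimension of signal x by Higuchi's method."""
        x = np.asarray(x, dtype=float)
        N = x.size
        ln_len, ln_inv_k = [], []
        for k in range(1, kmax + 1):
            lengths = []
            for m in range(k):                         # k down-sampled curves
                idx = np.arange(m, N, k)
                if idx.size < 2:
                    continue
                # mean absolute increment, rescaled to the full record length
                L = np.sum(np.abs(np.diff(x[idx]))) * (N - 1) / ((idx.size - 1) * k)
                lengths.append(L / k)
            ln_len.append(np.log(np.mean(lengths)))
            ln_inv_k.append(np.log(1.0 / k))
        slope, _ = np.polyfit(ln_inv_k, ln_len, 1)     # dimension = slope
        return slope

    t = np.linspace(0, 2, 400)
    print(higuchi_fd(np.sin(2 * np.pi * 10 * t)))      # near 1 for a smooth wave
    print(higuchi_fd(np.random.randn(400)))            # near 2 for white noise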
The Take-Home Message About Chaos Theory

We should remember that the phase-space plots of chaos theory that we find so fascinating are descriptive, but not necessarily explanatory. They show what happened, not why or how. The plots are nothing more than a graphical display of sequential ordering. It is this ordering that constitutes thinking processes, not the chaos mathematics. What is important is not so much how we choose to describe the ordering process, but how the brain produces and uses it. Another point to emphasize is that chaos is a subset of "complex systems," and the brain is best described as a complex system, not as a chaotic one. Chaos is most applicable for systems in which one variable at a time is "tweaked," as in adjusting
a rate or amplitude. The brain, however, is tweaked by multiple variables, often operating simultaneously. To the extent that individual oscillators in brain are chaotic, they provide a basis for self-organization and emergence of order. Spontaneous cooperation among oscillators leads to the ordering and organizing via synchronization. I don’t believe Chaos Theory explains brain function, much less consciousness. But it has enough relevance to convince me that reductionistic research has already told us all it can about conscious mind. Consistent with the requirements of a chaotic system, the brain operates as a non-linear deterministic system that generates order and disorder. But that is not a complete explanation.
The Quantum Theory of Consciousness

Fundamental to neuroscience is the belief that human mental states correlate with and are caused by physical processes in the brain. We experience and interpret sensations as a result of the graceful orchestration of electrochemical machinery in our nervous system. Given this relationship between our conscious mind and the matter that comprises the nervous system, it is only logical that Quantum Mechanics (QM), which addresses matter at its most fundamental subatomic levels, might be applicable to the generation of consciousness. But note the word "subatomic." Where is the evidence that brains use subatomic particles to create mind? This book's title is "atoms" of mind, not subatomic particles of mind. Where QM phenomena have been observable, they usually involved things like light or electrons. There is no evidence for light being generated in the brain. Electrons may be relevant, however (see below).

Quantum mechanics has a certain appeal for explaining consciousness because all attempts by traditional science to explain consciousness are inadequate. QM has a large degree of weirdness, which raises the hope that such weirdness may be exactly what is required to explain consciousness. Like consciousness, QM is known to be real (at least physicists know it is), even though I don't think they understand QM as well as they let on. Certain QM phenomena can be observed and described, but that is not the same as being able to understand what is observed. Despite almost 100 years of study, physicists are still mostly limited to performing experiments that suggest the phenomena exist.

At the heart of QM is the notion that sub-atomic particles are both waves and particles, but not both simultaneously. This view is used to describe the dual particle-like and wave-like behavior and the interactions between energy and matter. For example, light is often referred to as having different energy wavelengths, which is true, but it may also have particle-like properties, which are what are called photons. Does QM really explain this? While the particle nature of quantum-sized matter is described by various traditional concepts in classical physics, the wave nature of quantum particles such as electrons is described in a more abstract way by the concept of the probability wave.
The probability wave is a wave function extending across space that dictates where a particle is most likely to be located. It accomplishes this by assigning some statistical probability to each location in space of a particle being present in that particular location. Fortunately, the probability wave for an electron's location, though it technically extends throughout the universe, is very high for one location and essentially zero for any distant location. Thus, we don't have electrons spontaneously relocating to the opposite side of the universe. In any case, it is not clear how electron wave functions can apply to mind, since the carriers of thought in the brain are positively charged ions, not negatively charged electrons.

Quantum mechanics is not entirely intuitive, and indeed has some bizarre features that even assault our common-sense notions of reality. For instance, experiments have shown that there is a quantum connection between two atoms, even if they are spatially separated with no apparent interdependence. Many basic observable QM phenomena can be demonstrated experimentally, but cannot be explained.

When it comes to speculating on what mind is and where it comes from, there are only two likely scientific possibilities: (1) brain tissue creates mind, or (2) brain is an antenna that picks up some kind of information field in space-time and acts on it. Option one is what I have been advocating. Option two is the modern-day expression of Descartes' dualism. Most scientists dismiss the antenna idea. But Paul Nunez, a physicist-turned-EEG researcher, considers the antenna notion and QM as a possibility. His recent book, Brain, Mind, and the Structure of Reality (Nunez 2010), provides a lay-audience explanation of QM and then posits that mind may be some undiscovered quantum field that is detected by brain tissue. His strongest analogy is the retina, which he regards as an antenna that is sensitive to the narrow band of light within the electromagnetic spectrum.

The idea of empty space being filled with all sorts of fields dominates modern physics. There are fields of electrical voltage, magnetism, gravity, dark energy, QM (explained below), and perhaps unknown conscious information fields. The most obviously relevant field is electromagnetic. Neurons are batteries that store and dispense electric charge in the form of positive-ion current. Nunez reminds us that memory is known to be stored in chemical form in synaptic junctions. After all, genetic memory is stored in DNA molecules. Could not information in the mind be stored by the electron "glue" that holds molecules together? A big problem with the memory point is that memory is storage, not active processing. Conscious mind certainly uses some of what is in working memory, but consciousness is an active process involving the real-time detection, propagation, and processing of information. Messages are carried and propagated, distributed, and processed as nerve impulses, not storage macromolecules. And the impulse signal is generated by positively charged ions, not electrons. Any QM explanation of mind must deal with that reality. None has.

Many quantum enthusiasts point to electrons as the key to consciousness. Electrons certainly exhibit quantum behavior. But there is no evidence that they are information carriers in the brain. Impulses moving down nerve fibers are not electrons flowing in copper wire, and they certainly don't flow down fibers in a steady
stream like electric current in copper wires. The place in brain where you do find a surplus of electrons is inside the cells, and most of them are probably in proteins, bound and not propagating. Such bound charge density might have something to do with information storage, but not information distribution and processing. It is not clear how these electrons and their associated QM behavior can DO anything. They are present in both the resting and active state of neurons. There is no evidence that they constitute electric current in nervous tissue or are involved in signaling in any way. Nunez does not go into the various specific QM theories of consciousness, but I will do that here. To understand these theories, however, an understanding of certain fundamental quantum concepts is necessary. Since I am not a physicist, I hope that community will forgive the limitations of my explanation.
A Brief Description of Quantum Theory

Quantum theory was generated by physicists to answer questions centered on the smallest known components of matter, such as electrons and photons (Greene 2004; Nunez 2010). As technology has improved, physicists have discovered subatomic components of atoms, like neutrons and protons, and finally components of these components, like quarks. Quarks were discovered through collisions produced in high-energy particle accelerators that smash particles with such strong force that they shatter, like smashing a glass marble with a hammer. If certain particles always appear, then they must be true subatomic components, not just shatter trash. How far can we continue to divide particles before we arrive at something indivisible? And how do atomic and subatomic particles affect brain function? These are the questions that QM seeks to explain.

When the world is scrutinized at such a small scale, some strange observations are made, many of which seem to contradict reality. Quantum mechanics embodies a principle of "wave-particle duality," which states that particles of matter possess wave-like properties, and vice versa. This principle actually applies to all forms of matter, perhaps even to macroscopic forms, but only becomes observable at the quantum scale. A key question, commonly avoided, is whether such QM phenomena can have any practical impact at macroscopic scales where they are not even observable. In quantum terms, this duality is referred to as "complementarity," since the wave and particle properties of quantum matter are complementary and both necessary to explain all aspects of its behavior. An electron, for example, not only dances around as a particle but also vibrates as an oscillating wave. Some theorists think of the wave as a "proxy" for the particle. QM theory derives from the wavelike properties of subatomic particles. Thus far, I don't know that wavelike properties have been detected in whole atoms, like the sodium and potassium ions of nerve impulses that have lost one of their outer-shell electrons.
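One rough way to see why wavelike behavior is so hard to invoke for whole ions (a back-of-the-envelope estimate of my own, not drawn from the sources cited here) is to compute the thermal de Broglie wavelength of a sodium ion at body temperature:

\lambda_{\mathrm{dB}} \approx \frac{h}{\sqrt{2\pi\, m\, k_{B} T}} = \frac{6.6 \times 10^{-34}\ \mathrm{J\,s}}{\sqrt{2\pi \,(3.8 \times 10^{-26}\ \mathrm{kg})\,(1.38 \times 10^{-23}\ \mathrm{J/K})\,(310\ \mathrm{K})}} \approx 2 \times 10^{-11}\ \mathrm{m}

That is about 0.02 nm, far smaller than a hydrated ion (very roughly 0.7 nm across) or a synaptic cleft (roughly 20 nm), which is one way of seeing why quantum wave effects are not expected to be conspicuous at the scale of ionic signaling.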
A basic tenet of QM is that of wave-particle duality and that subatomic matter exists as one or the other form, but not both simultaneously. Moreover, the mere act of observing a particle, whether by eye or by a fancy instrument, determines the property of the particle/wave. Now that is really weird. In a dominant interpretation of QM, elementary particles cannot be said to have definite properties such as location, velocity, or spin until they are actually measured. This is spooky stuff, as Einstein said, and though demonstrated in certain very restrictive experimental conditions, it has never been demonstrated in connection with brain function.

The wave nature of matter examined at the quantum scale has been thoroughly tested and confirmed experimentally, but it is perhaps still difficult to grasp how something like an electron that can be seen as a wave can also have the form of a particle. Sure, this property can be described mathematically, with the Schrödinger equation, which basically describes the wave function for any system of particles. The equation calculates the statistical information about the instantaneous position and velocity of subatomic particles. The basic idea is that subatomic particles have a waveform proxy, the "quantum wave function." In the absence of any observation or measurement thereon, a system of particles exists in a state of superposition in which all particle locations and velocities are indeterminate. The wave function equation describes the probability of finding a given subatomic particle at a specific place when "we look for it." This is the heart of the famous Heisenberg "uncertainty principle." The corollary is that what a particle is doing depends on whether we are observing it, either with our eyes or with a remotely operating instrument.

Enchanting as these ideas are, how can they explain thinking? Some conjectures follow, but as far as I am concerned they all are quite a stretch. In fact, I have a problem seeing how QM even applies at the molecular level in the brain. In the nervous system, the obvious carriers of signal (sodium and potassium ions) are ionized, whether neurons are signaling or at rest. The basic point is this: when sodium and potassium ATOMS hit water, they ionize. They do not lose the electron during impulse generation. They lose it as soon as they dissolve in body fluid, even when neurons are in a resting state. Where do their electrons go? True, electrons of hydrogen are whipped through chains of proteins in mitochondria to produce energy, but neurons do not access this energy in real time to generate ionic currents, though obviously energy is needed to regenerate the stored energy of the asymmetric distribution of ions across neuron membranes that underlies the ultimate flow of ionic current. Both sodium and potassium ions are hydrated; that is, they have a small shell of water surrounding them. But where are the electrons that they lost upon ionization? Some of the electrons may just be floating around, but in a conductor (and tissue fluid is a conductor), electrons must be flowing, presumably to regions of electron deficiency. Most likely there are no free electrons. They probably attach to intracellular proteins, serving as a major source of the internal negativity of cells. Another possibility may be that free electrons may move up or down energy levels and produce light (since light is a byproduct of energy being released). Energy must go
somewhere, and I wouldn't doubt that, since the atoms are losing electrons, they are giving up energy that would then produce light and/or heat. Light inside the brain has not been observed, to my knowledge. Heat is another matter. Perhaps this is one source of body temperature.

The effect of measurement on particle behavior is analogous to sticking a thermometer in a bowl of cold water. Heat transfers from the thermometer to the water, but the water is warmed so slightly that the change is not readily detected. Nunez gives an even more relevant illustration by pointing out that EEG currents on the scalp may be perturbed by being "observed" with electronic amplifiers, but modern amplifiers operate on such small currents that the net effect on scalp potentials is undetectable. What flows in the amplifiers is, of course, electrons.

If the wave nature of subatomic particles is still difficult to grasp, it is not surprising. Even today, the foremost experts on QM seek to explain how quantum matter is linked with its probability wave—such as whether the matter is the wave, or merely described by it. Thus, for now, while we know that subatomic particles have wave properties, we are not sure if those are properties of the particles themselves or of their behavior, or something else. Regardless, the wave nature of quantum matter places on QM a fundamental limitation, known as the Heisenberg uncertainty principle: the exact location and velocity of quantum-sized matter (or, technically speaking, any matter, since all matter is supposedly imbued with a wave-particle duality) cannot be determined simultaneously. This is because when we know the velocity of a particle and attempt to measure its location, the particle's wave nature dictates that there is only a certain probability that it will be in any given space. If we are insistent, however, upon knowing a particle's location, we measure it at the expense of velocity, since the measurement process requires physically interacting with the particle. These interactions disrupt the particle's velocity because physical contact with a subatomic particle exerts enough force to affect its speed.

One author has used the uncertainty principle to explain quantum consciousness. Daegene Song, in Korea, points out that a conscious person is an observer of his own mental state. In quantum terms, how could one observe his own state without changing it? Is cognitive realization a physical thing that acts like a measurement device invoking the uncertainty principle? The axioms of quantum theory require a distinction between what is being observed and the entity that does the observing. Song presents equations that describe an observer's mental state in terms that are analogous to the state vector that is being observed. In the case of consciousness, observer and object of observation are inseparable (Song 2008). On the other hand, let us remember that a conscious person observes only the consequences of neural activity, not the neural activity itself.

A particle's wave nature leads to the idea of a particle's probability wave "collapsing" upon measurement. This quantum jargon means that when we actually measure a particle's location, the probability that it is at that location becomes 100%, so that rather than having a probability wave spread over space, the wave is condensed to one location and takes on a spiked shape there to signify this certainty of location. Correspondingly, waveform "collapse" is only observed when
measurement of a particle's location is taken (there are exceptions to this, which will be discussed in the application of quantum theory to microtubules as described below). However, the fact that collapse only occurs upon measurement raises the question of how measurement selects one probabilistic outcome among many. This issue, known as the quantum measurement problem, is still being explored.

The probability features of QM seem straightforward enough. For every future outcome there is a certain probability of its occurrence. But actually the original math and new experiments establish that something that you do over here can be simultaneously linked to something happening over there, even at enormous distances. This too is one of those weird things about QM. In a quantum world, two objects can be separated by great distances yet be regarded as one and the same object. Could this mean that the collective quantum objects in a human brain simultaneously exist someplace else? Also, given the tight link of space with time, bizarre time effects can occur.

Supporters of QM propose—and have experimentally confirmed—that nonlocality can affect particle interactions. This nonlocality, known as "entanglement," dictates that particles that are completely separate, even at astronomical distances, can somehow be connected and exert influence over each other in spite of spatial separation (Gilder 2008). This influence is apparently independent of distance and operates at greater than the speed of light. Despite such mind-boggling, crackpot weirdness, entanglement is considered by many as a defining feature of QM, one that can be experimentally verified under certain circumstances (Gisin 2009). In a quantum universe, two objects separated in space can be mutually entwined even if they are on opposite sides of the universe and even if there is not enough time, even at the speed of light, to travel between the events. Talk about weird!

The groundwork for experimental verification of such weirdness was laid in the 1960s by Irish physicist John Bell. Particles such as electrons and photons spin about an axis. Speed of spin is constant, but spin direction (clockwise or counterclockwise) and angle of the axis are variable. Experiments based on Bell's work, including those by Ed Fry, now at my university, and by others, showed that you can't determine the spin of a particle around more than one axis at a time, but you can identify the consequences of spin about any given axis. Moreover, two such spinning particles can be entangled in that both show identical spin direction and axis angle of spin. The act of measurement of one particle's spin properties "compels" another particle that may be a considerable distance away to take on the same spin properties. Separated particles, even if governed by the randomness of QM, somehow manage to stay sufficiently "in touch" so that whatever one does the other also instantly does. This implies that some kind of faster-than-light force is operating between them. This speed issue alone contradicts relativity theory, but there are other problems. How do two entangled particles communicate with each other across space? In quantum theory, space is largely irrelevant and time is a mere clock indicator.

There is no satisfying explanation for entanglement, but there is a partial explanation centered on collapse. According to this explanation, measurement and waveform collapse of one particle causes its entangled counterpart to also undergo a waveform
collapse that takes on the same characteristics as those of the measured particle. This still does not explain the emergence of entanglement. Indeed, physicist Nicolas Gisin suggests that entanglement is not based on "communication" between particles. That is, the entanglement must have some other, undiscovered, cause. His colleagues will be forced to accept this conclusion if they cannot show how communication could occur at near-infinite distances and speed. The central enigma of modern physics is that both quantum theory and relativity theory are well supported by evidence. Yet they are incompatible. The relationship of QM to the space-time continuum of general relativity needs to be reconciled, and that is where physicists have been hung up for decades. Also, nobody that I know of has tried to apply general relativity to consciousness, but if QM and relativity are entwined, then quantum consciousness would have to depend on both sets of phenomena.
Specific Possible Explanations of Consciousness

Physicists tend to think about consciousness in ways unique to their discipline. Their discipline is dominated by two ways of thinking, relativity and QM. The space-time of general relativity is said to be the embodiment of gravity. Or conversely, gravity is expressed as the space-time continuum. A physicist would ask, what is the embodiment of consciousness? While waiting for an answer, some scientists have embraced what they call quantum theories of consciousness. However bizarre QM may seem, applying it in a theory of consciousness seems worthy of effort. Indeed, a new journal, NeuroQuantology, has been created for publishing papers that relate QM to consciousness (http://www.neuroquantology.com/journal/index.php/nq/issue/current). Consciousness is also bizarre, and maybe it takes a bizarre theory to explain it. In its extreme form quantum consciousness theory might accommodate previously untenable ideas, such as "ghosts in the machine" and a "mind out there."

The main objective of all of the quantum consciousness theories, as with any theory of consciousness, is to link a seemingly immaterial mind and the clearly material brain. What is the obscure step or set of steps that somehow converts neural processes to conscious experiences and vice versa? Quantum consciousness theories, because they inherently operate at the subatomic level, address this question at the smallest possible level. If QM is only verifiable at subatomic scales, how then can it explain the level of neuronal cell assemblies? Perhaps the assumption is that quantum-sized particles and waves within such assemblies are actually where the consciousness-generating action occurs. The hope is that perhaps QM can explain consciousness in ways that other more intuitive theories cannot. Currently, despite their thought-provoking nature, none of the proposed quantum consciousness theories seem to me to be sufficiently compelling.
Diverse theories link QM and consciousness, and each has unique advantages and disadvantages. Typically these theories apply quantum theory to different levels of the nervous system, ranging from assemblies of neurons to synapses to microtubules within neurons. This is perhaps not surprising considering the myriad locations where quantum events could occur. Some of the most prominent theories applying QM to consciousness will be explored here, and their advantages as well as limitations will be examined.
Collapse

As stated in the overview on quantum theory, the collapse of a probability wave and selection of one probabilistic outcome among many is often taken to be a result of conscious observation, through automated or manual measurement of a phenomenon. The selected probabilistic outcome that follows collapse can be referred to as an actual occasion, and the probabilistic wave of possibilities as potential occasions. Using these principles, Henry Stapp developed a quantum consciousness theory stating that each selected actual occasion comprises a conscious experience (Stapp 2007, 207p). Each time collapse occurs, a particular entangled neural pattern of activity that correlates with a particular intention or form of reality is selected. I have to wonder how collapse selects and holds a particular neural pattern. Stapp also states that conscious activities can in turn influence brain activity when attention devoted to certain neural assemblies prolongs their activities, with the result that intentions tend to remain focused where they are already focused. A quantum mechanism is specified for this task, but will not be described here because of its advanced nature. It should be noted, however, that this mechanism does seem to provide a well-developed explanation for this assertion.

If quantum theory of consciousness seems rather ambiguous and lacking in detail, that's because it is. While Stapp proposes some interesting ideas, his theory generally only provides a framework that is still unsatisfying. For example, the above-mentioned mechanism on the selection of particular neural patterns following collapse remains vague and difficult for me to understand. There seems to be a larger problem, however. Quantum theorists attribute collapse to an effect of conscious observation, but Stapp's theory seems to state that consciousness is an effect of collapse. If consciousness causes collapse, how could collapse also cause consciousness? If there were some sort of feedback mechanism that causes the two to influence each other, it is not specified. In the meantime, this theory seems to be caught in a loop, and in need of more concrete scientific support.

Quantum Tunneling

Quantum tunneling refers to the phenomenon where a particle tunnels through a barrier that it classically could not surmount because its total mechanical energy is lower than the potential energy of the barrier. The phenomenon occurs because of wave-particle duality. It is the wave-packet that is able to partially penetrate the barrier. One example is with solid-state electronic devices. In a semi-conductor device with a p-n junction (a diode) where you apply an electric field (a voltage) across the
p-n junction, the energy bands within the material begin to move. When electrons and holes (the absence of an electron) begin to move from p to n and n to p, some of them are able to 'tunnel' through the semiconductor energy wall without having to take the normal path of moving up and/or down the energy bands. It is not clear, however, how this applies to a liquid-state electronics system where the current carrier is not electrons but positive ions, and where information is carried as discrete pulses of current rather than continuous current.

Luca Turin in London believes that quantum mechanisms help to explain an enigma about olfaction (Turin 1996). The prevailing view is that specific odors act by binding specific receptor molecules in the olfactory epithelium. One problem with this idea is that some odorants that have almost the same three-dimensional structure can smell quite different. Also, humans only have about 350 known receptor types, yet we can smell a much wider array of odors. Earlier, I explained that most neuroscientists think that this is accomplished by combinatorial coding. But Turin argues that quantum tunneling can explain odor sensation if we assume that when an odorant attaches to one of the odor receptors, electrons from that receptor tunnel through the odorant, jiggling it back and forth at its characteristic vibration frequency. In other words, vibration frequency can affect perceived smell. Where is the role for propagating impulse streams into the brain?

Quantum tunneling could in theory be involved in all sorts of molecular interactions at synaptic junctions. If so, the underlying mechanism of consciousness might be synaptic tunneling. Impulses, in that view, would merely be one of the signatures of consciousness, while another comes from neurotransmitters. How synaptic tunneling and impulse propagation are related in QM theory is not explained.
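For orientation, the standard textbook estimate (a generic result, not a calculation tied to any synaptic or olfactory model) is that a particle of mass m and energy E crossing a wide rectangular barrier of height V_0 and width L does so with probability

T \approx e^{-2\kappa L}, \qquad \kappa = \frac{\sqrt{2m\,(V_{0} - E)}}{\hbar}

The exponential dependence on mass and barrier width is why tunneling is routinely invoked for electrons and protons over sub-nanometer distances, and why it becomes negligible for whole hydrated ions over anything like the dimensions of a synapse.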
Quantum Mechanics and Exocytosis

This view about impulses is not universally held, in part because it requires nerve impulses to be regarded as simply "stochastic background." What matters, according to the originators of this theory, Friedrich Beck and John Eccles (Beck and Eccles 2003), is synaptic action and microtubule function. They proposed that the release of neurotransmitter is a candidate for a QM process. Though such release is certainly a macromolecular process, and therefore not accounted for by QM, the argument is that QM provides the trigger that regulates transmitter release. In their scheme, transmitter release is regulated by a two-state quantum trigger operating via quantum tunneling on a superposition of the two states, leading to quantum collapse into one definite final state.

Neurotransmitter release into synapses is one of the fundamental processes of the nervous system. It is therefore not surprising that a quantum theory of consciousness would arise centered around this fundamental process. Exocytosis is a probabilistic process that must overcome an activation barrier by means of a triggering mechanism. This triggering mechanism involves a nerve impulse, which causes excitation of what Beck calls an "electronic configuration" (Beck 2008). Precisely what is meant by "electronic configuration" is never fully specified, although Beck implies that it might be the electron transfer that occurs between biological molecules. Once this "electronic configuration" is excited,
it gains the capacity to surmount the energy barrier (perhaps by tunneling) and thus promote exocytosis. This theory rests on the assumption of “quasi-particles,” which are particle-like entities that are not actually particles. For example, a bubble in a soda would be analogous to a quasi-particle: while it behaves like a particle, it is merely a pocket of air. Beck and Eccles use quasi-particles in their theory as a means of representing the transition of the “electronic configuration” from a state where exocytosis is not occurring to a state where it is occurring. In this case, electronic configuration is an abstracted, conceptual entity that can undergo particle-like transition between energy states. Taken in the context of this theory, tunneling forces collapse of the probability function for exocytosis, which has two outcomes of either occurring or not occurring. If an electronic entity tunnels through the exocytosis energy barrier, then exocytosis will occur. If not, no exocytosis occurs. The question of how these processes are relevant to consciousness remains. In a vague way, Eccles proposed that the inherent uncertainty caused by the influence of probabilistic quantum processes would somehow correlate with increased opportunities for conscious decision-making, due to momentary increases in the probability of exocytosis. This proposal not only lacks supporting evidence, it is vague and fanciful.
Tubulin Subunits

Roger Penrose and Stuart Hameroff (Hameroff and Penrose 1996) are responsible for an elaborate theory that applies QM at the level of the tubulin subunits that compose brain microtubules. Microtubules help transport chemicals within neurons and their fibers. By the way, microtubules occur in all eukaryotic cells. Does the liver, for example, have capacity for consciousness? Or how about the heart, from which, as the ancients thought, mind and soul emanated? Physicists expect neuroscientists to understand QM. It would help if physicists understood biology.

Anyway, this theory seeks to explain the generation of each conscious event as a result of collapse of the wave function of tubulin subunits locked into a synchronous state. What exactly does this mean? First, this theory assumes that the tubulin components of many neurons somehow become locked in a state of synchrony by means of an unspecified quantum mechanism. There are some promising possibilities for this mechanism, but they are inherently complicated and best explored elsewhere. What exactly is meant by synchrony in this theory? The word is used in the same context as it is in neuroscience, to describe two waves that are oscillating at the same frequency and have a fixed phase relationship. Specifically, the waves referred to in this theory are the probability-wave functions of individual tubulin components, which are considered to all be superposed, or overlapping. This coherence becomes manifest at a macroscopic level and is supposedly shielded from the effects of decoherence. Once this coherent state is created, it can undergo collapse. Recall that collapse of a wave function means, more or less, that one outcome of a particle's wave function is observed. Also recall that traditional quantum theory
dictates that collapse results from the measurement of a particle's location, since upon measurement a particle can only be observed at one location rather than many. In this theory, by contrast, "self-collapse" occurs independently of any form of measurement. Rather, it occurs when a quantum system becomes large enough that it reaches some sort of quantum gravitational energy barrier, causing instability. Collapse returns the system, in this case the coherent tubulin components, to a stable state, and selects tubulin conformations, which in turn affect neural functioning. This selection process is random and non-computable. If so, how would one get orderly thinking out of such a process? The question remains of how these processes lead to the generation of consciousness. Penrose and Hameroff assert that self-collapse of spatially and temporally distant coherent tubulin components binds the components into the selection of one space-time state, which presents itself as the "now" experience of consciousness. A stream of these collapses in turn leads to what is experienced as a "stream of consciousness." One inherent shortcoming of this theory is its reliance on quantum gravity, an area of QM that has yet to be fully developed. Considering that quantum gravity is the assigned mediator of self-collapse, it would be preferable to have a wider knowledge of it before assigning it such an important role in a theory.
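For completeness, the quantitative core of Penrose's proposal, as I understand it, is usually written as a collapse time that shrinks as the superposition grows:

\tau \approx \frac{\hbar}{E_{G}}

where E_G is the gravitational self-energy of the difference between the superposed mass distributions. The formula itself is simple; the difficulty lies in estimating E_G for superposed tubulin conformations and in showing that the resulting times line up with anything measurable in neural activity.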
Field Theory

This quantum theory, proposed by Ricciardi and Umezawa (1967), applies the concept of quantum fields to explain consciousness and, more specifically, how brain and memory states are brought into conscious awareness. Like the other quantum theories of consciousness, this theory involves complex quantum concepts. It is probably best explained in a metaphorical manner, since at least parts of it seem inherently metaphorical.

In simplest terms, this theory states that different subconscious memories are akin to variations of the lowest-energy states seen in quantum fields. Quantum fields are systems consisting of many quantum particles, and are described by an expanded version of QM called quantum field theory (QFT). Continuing the metaphor provided by this theory, quantum fields relate to neural assemblies that are connected such that they become coherent when activated. Thus, where a quantum field corresponds with specific lowest-energy states, reciprocally connected neural assemblies correspond with certain subconsciously stored memories. We consciously access a particular memory when some sort of external stimulus leads to the activation of a neural assembly, which in turn causes the activation of reciprocally connected neural assemblies and leads to synchrony. This activation can be viewed as the temporary excitation of the lowest-energy configuration of a quantum field to a higher-energy configuration. This state of excitation, however, only lasts for a short amount of time. Equivalently, we can only access a particular memory for a short time.

As stated, this theory only applies to neural assemblies that, when activated, are locked in a state of synchrony. So how does this theory explain the mechanism by which neural assemblies become synchronous? The theory's originators suggest that
states of coherence between neural assemblies are the same as states of order (i.e. correlated activity patterns) in quantum fields. Logically, then, they assert that they can describe the emergence of synchrony by detailing the mechanism by which order emerges in quantum fields, and applying these very same quantum concepts to neural assemblies (presumably, the quantum-sized particles that compose neural assemblies). Thus, the question to ask becomes "How does order in quantum fields arise?"

The quantum mechanism of ordered states involves a concept known as spontaneous symmetry breaking. That basically means that the lowest-energy state for a system is not unique and can shift among equivalent configurations. The fluctuation of the lowest-energy state leads to the creation of specific particles called bosons, which have an integer spin. These particles, which obey Bose-Einstein statistics, can then undergo a phenomenon called Bose-Einstein condensation, which causes them to become more ordered by making them occupy the same state in space. This mechanism is admittedly difficult to grasp, so more specific details concerning it, including details on the creation of bosons and the creation of order among them, are best explained in a book on QM, and will not be explored here.

This theory also attempts to explain how we store many memories without overlapping them, as well as why our memory capacity is limited. Both of these explanations rely on a quantum concept called dissipation, which describes how a system such as the brain interacts with its environment. Again, the specifics of how dissipation leads to these memory storage outcomes are best explored in a book on QM. To paraphrase, dissipation allows for the creation of many different lowest-energy states (or subconscious memories), and also provides a mechanism for why memories are eventually forgotten if not actively recalled within a given amount of time.

This theory addresses neural activity at the level of the neural assembly, by describing the emergence of synchrony between assemblies, since this level is generally accepted as the level correlating with mental activity. However, the theory offers no explanation of how stimuli are processed so as to activate an assembly and produce coherence, which leaves some big questions unanswered.
Quantum Interference

In physics, superimposing two or more waves produces another wave. This idea forms the basis of quantum consciousness experiments by Elio Conte (2008) in Italy. His laboratory uses perception of ambiguous figures, such as vase/face or Necker cube illusions, as the experimental environment. Recall the earlier discussion of how my colleagues and I used ambiguous figures and EEG recordings. When confronted with such stimuli, the brain can attend to only one of the alternative representations at a time, yet can willfully switch from one representation to another. Two states of mind exist: the manifest state of what is currently perceived and a potential (subconscious?) state of an alternative yet currently unrealized percept. Across human subjects, uncertainty arises in the frequency of perspective reversals, the speed at which reversals occur, and the duration of "holding" a given manifest percept. Conte chooses to view these effects from a perspective of quantum interference. For example, the probability of a potential perceptual state can be represented as a vector in
Hilbert space. While the quantum perspective is interesting, it is not the only way to view human perception of ambiguous figures, as was illustrated in the earlier discussion of my lab’s experiments on oscillatory coherence.
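To make the Hilbert-space idea concrete, here is a minimal numerical sketch, not Conte's actual formalism: it assumes a two-dimensional state space spanned by a "vase" percept and a "face" percept, with made-up amplitudes and an arbitrary relative phase, and shows how an interference (cross) term appears when the two amplitudes combine.

import numpy as np

# Hypothetical two-state perceptual space spanned by |vase> and |face>.
# The amplitudes and the relative phase below are illustrative assumptions.
a, b = 1 / np.sqrt(2), 1 / np.sqrt(2)        # equal-weight superposition
phi = np.pi / 3                              # assumed relative phase

psi = np.array([a, b * np.exp(1j * phi)])    # state vector in a 2-D Hilbert space
percept_probs = np.abs(psi) ** 2             # Born-rule probabilities of each percept
print(percept_probs, percept_probs.sum())    # [0.5 0.5] 1.0

# Interference shows up in the probability of a combined ("reported") state:
# it contains the cross term 2*a*b*cos(phi), which a classical mixture lacks.
combined_amplitude = (psi[0] + psi[1]) / np.sqrt(2)
print(abs(combined_amplitude) ** 2)          # 0.75 for phi = 60 degrees

The only point of the sketch is that the cross term depends on the relative phase; that phase dependence is the formal sense in which "interference" between potential percepts differs from ordinary probability mixing.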
Spintronics

Spintronics is a new paradigm of electronics based on the spin degree of freedom of the electron. Either adding the spin degree of freedom to conventional charge-based electronic devices or using spin alone offers potential advantages over conventional semiconductor devices: nonvolatility, increased data-processing speed, decreased electric power consumption, and increased integration densities. All spintronic devices act according to a simple scheme: (1) information is stored (written) into spins as a particular spin orientation (up or down), (2) the spins, being attached to mobile electrons, carry the information along a wire, and (3) the information is read at a terminal. The spin orientation of conduction electrons survives for a relatively long time (nanoseconds, compared with the tens of femtoseconds over which electron momentum decays), which makes spintronic devices particularly attractive for memory storage and magnetic-sensor applications and, potentially, for quantum computing, in which electron spin would represent a quantum bit (qubit) of information. Given the incredible intricacies of the brain's ultrastructure and the long evolutionary time it has had to develop, it is certainly conceivable that the brain utilizes spintronics. The idea is speculative but worth further consideration, bearing in mind that one potential problem is whether spin states are stable long enough to be used in neural computation.
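The three-step write/carry/read scheme can be caricatured in a few lines of code. This is only a toy sketch of the scheme as stated above; the class and function names are invented for illustration, and a single flip probability stands in for the whole physics of spin relaxation.

import random
from dataclasses import dataclass

@dataclass
class Electron:
    spin: int          # +1 = spin "up", -1 = spin "down"

def write(bit: int) -> Electron:
    # Step 1: store the bit as a spin orientation.
    return Electron(spin=1 if bit else -1)

def transport(e: Electron, flip_prob: float = 0.0) -> Electron:
    # Step 2: the mobile electron carries the spin along a wire.
    # flip_prob crudely models spin relaxation during transit.
    if random.random() < flip_prob:
        e.spin *= -1   # a spin flip corrupts the stored bit
    return e

def read(e: Electron) -> int:
    # Step 3: read the bit back at the terminal.
    return 1 if e.spin > 0 else 0

assert read(transport(write(1))) == 1   # with no relaxation, the bit survives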
Quantum Metaphors

There is also a category of quantum theories of consciousness that, rather than seeking to describe consciousness directly in terms of quantum theory, seeks to link the two concepts metaphorically. Many comparisons can be drawn between particular characteristics of QM and characteristics of the mind. For example, both run on an evolving time scale that consists of many specific outcomes strung together. In the mind, these outcomes are particular conscious experiences, whereas in quantum theory, they are particular outcomes of the probability wave. Similarly, up until the point where a particular conscious state or outcome occurs, there exists a limitless number of potential outcomes, as dictated by the quantum probability wave and the myriad pathways within the mind. The probabilistic nature of certain mental processes, such as the exocytosis mentioned above, and the inherent probabilistic nature of QM represent another interesting parallel. Non-locality, manifested by dynamic global interactions in the brain and by entanglement of distant particles in QM, is yet another likeness between the two. Although such metaphors typically do not carry as much scientific potential, they are worthy of mention, if for no other reason than that they likely steer the direction of
quantum consciousness research. They may lead us to speculate that the parallels between quantum and brain function are not just coincidences. Certainly, this assumption has inspired some of the quantum consciousness theories listed above, and will drive scientists to explore quantum possibilities.
Conclusions About Quantum Theories of Consciousness

By no means is this a comprehensive overview of the quantum theories of consciousness. So many diverse theories exist that an entire book would be necessary to cover them all separately and in depth. However, the sampling presented here does account for some of the more prominent and intriguing ideas. Taken as a whole, these theories reveal one stark weakness in applying QM to consciousness: each theory has its holes and unanswered questions, and each needs substantially more evidence. The strange nature of QM, and how much in the field remains to be discovered, compounds the problem. Future quantum research may yet show a promising union between quantum theory and neuroscience. For now, I conclude that the problem with quantum theories of consciousness is that their advocates are trying to explain a mystery with another mystery. Even if QM were fully explained, there is no evidence that quantum phenomena are amplified from the subatomic level to the macroscopic level of neurons, or to any macroscopic level for that matter. For quantum consciousness to occur, we have to assume that subatomic processes get expressed in neuron function. At issue is whether the brain runs by molecular chemistry or by subatomic physics. The evidence for chemistry seems to me to be overwhelming. It is not clear how QM can apply to anything beyond the realm of sub-particle physics. Except for electrons, how can these ideas apply even to the ordinary chemistry that underlies the biological properties of mind? The whole story could revolve around the properties of electrons, given that their surplus or deficiency in a given atom such as sodium or potassium could account for the ionic current properties that we know are central to nerve impulses. But even for electrons, the case for QM as a basic mechanism of the impulse has not been made. More likely, in my view, the mind can be explained in terms of nerve impulses themselves, and there is no need to consider any quantum alternatives. How does the nerve impulse, which I have claimed is the currency of thought, fit into the QM mindset? We must never lose sight of the fact that the nerve impulse is a real event, not a mathematical probability, not a wave function. It is a signal that occurs whether we observe it or not. Electromagnetic waves, such as EEG voltages, are not like quantum waves. Quantum waves are mathematical functions that make predictions about the state and location of subatomic particles. What has that got to do with impulses, much less consciousness? QM is so weird that even some physicists don't believe it. Even the many who do believe it admit that the theory cannot yet be reconciled with space-time relativity. Many of those who continue to labor in vain for a unifying "Theory of Everything" naively think that it will automatically explain consciousness.
Even to the extent that QM is true, I cannot find one scintilla of evidence that quantum phenomena are relevant beyond the world of subatomic physics. What happens sub-atomically may not get expressed at the atomic and molecular level. I know of no evidence that subatomic physics is relevant to molecular chemistry. Where is the evidence that quantum effects operate at the level of the whole organism, or even at the level of macromolecules? Where is the evidence that any subatomic particle could be an information carrier and processor? Subatomic particles no doubt do exhibit quantum behavior, but it does not follow that atoms and molecules must obey the same laws of physics. Atoms and molecules obey the laws of chemistry. And everything we know about the brain is about chemistry, not particle physics. Maybe an analogy helps. Consider bulk water. It is composed of atoms of hydrogen and oxygen, and the sub-particles of each kind of atom no doubt obey the laws of QM. But water itself is a qualitatively different domain with its own unique rules and properties. For example, water has solvent properties and characteristics such as specific heat, phase transitions, hydrostatic pressure, flow dynamics, and the like that are neither applicable nor explainable at the quantum level. And don't forget: most of the brain is water! The worlds of QM and chemistry may be separate domains, each with its own rules and properties and each relatively independent of the other. The burden to prove otherwise is on the physicists. The main problem with quantum theories of consciousness, it seems to me, is that QM is so weird and counter-intuitive. Even in the limited circumstances where quantum phenomena can be experimentally documented, many of the fundamentals of the theory make no sense. One of the leaders of quantum theory who has advocated a quantum basis for consciousness, Roger Penrose, suggests that neither he nor anybody else really understands QM and that maybe it is time to start doubting its validity (Kruglinski 2009). Not all physicists believe in QM, even though many of its weird aspects have been experimentally verified. In his era, Einstein led an unsuccessful fight against QM. But resistance persists today and is led by such people as Antony Valentini in England (Folger 2009). To this day, no easily accepted explanation has been offered for such conundrums as how matter can exist as particles one moment and as waves at another, how the mere act of observation alters physical reality, and how entangled subatomic particles seem to influence each other nearly instantaneously. I have to respect physics. I agree with those who say that physics is the "mother science." But historically, this mother has given rise to some bastard ideas. Quantum consciousness may prove to be one of them. We are left with the clearly demonstrable reality that signals and commands in the nervous system are carried and conveyed in the patterns of positive ion current pulses. There is no compelling reason to invoke any other mechanism. When it comes to explaining something like consciousness, why should we abandon what we know to be clearly established about brain function in favor of some vague "spooky" science that cannot yet be shown to be relevant to thought? The two previously described theories, Chaos Theory and Bayesian Probability,
at least have demonstrable relevance to brain function. Unlike QM, these theories have great descriptive power for brain function, though I contend that they offer little explanatory power. Do the positively charged ions that constitute the nerve impulse have wave/particle duality? I submit that the question is irrelevant to consciousness. The existence of these ions as matter, not waves, seems real enough and sufficient to create consciousness from the collective impulse patterns in large neuronal assemblies. Now, a subatomic particle physicist might counter that nerve impulses only appear as particles because we have forced them away from their wave-like state by observation with electrodes and amplifiers. Apart from the complete absence of any such evidence, if wave-particle duality applied to sodium and potassium ions, the brain would have the properties of a quantum computer, with instant and near-infinite processing capacity. That clearly is not the case. To conclude, I will present my own impulse-based theory of consciousness, based on unambiguous evidence from a century of study of what happens in subconscious and conscious thinking. This theory flows logically from what we already know and does not require us to invoke "spooky science" to explain how the "ghost" of mind can materialize.
Circuit Impulse Pattern Theory of Consciousness

I am about to introduce a theory that regards consciousness as a kind of avatar, yet one grounded in physical reality, both in the experienced world and in the genesis of the avatar. We don't have to invoke some kind of "ghost in the machine" to understand consciousness. The answer to the enigma of mind might be "right under our noses," so to speak. The answer, I think, lies in the atoms of mind, the sodium and potassium currents that constitute the network patterns of nerve impulses.
A Little Common Sense Please

Common sense, as well as a great deal of neuroscientific evidence, indicates that the conscious mind emerges from the same place that houses the non-conscious and subconscious minds—circuits in the brain. A CIP theory could be assailed for being too materialistic or too rooted in traditional science. The anti-materialist argument would not be a scientific challenge, and the anti-traditional-science argument may lead us to junk science. Non-scientists are more likely to raise the anti-materialist argument. It would likely come from the same people who object to Darwin's theory of evolution, without recognizing that his theory only describes the mechanism of human evolution. Evolution does not presuppose the absence of a creator God who set in motion the laws of physics and chemistry that enabled evolution. Likewise, if human mind can be reduced to physics and chemistry, that reduction carries no anti-religious presuppositions either.
Of course, it is easy to see how the ancients (and some moderns) get the idea that mind is somehow an "out-of-body" phenomenon. Conscious mind is a mind that thinks about itself. Of course, conscious mind thinks about things outside of itself, but even here, the context is typically the frame of reference of the self. When I think about how great a sizzling steak and a cold beer taste, it really is about how good that seems to me. Neuroscientist Walter Freeman (2000) even has a fancy name for this sense of self-reference: epistemological solipsism. This idea should not be confused with general solipsism or metaphysical solipsism. General solipsism holds that only the self exists and that one can reference only changes in the self, while metaphysical solipsism holds that anything that exists beyond the self is a projection of imagination. Epistemological solipsism instead recognizes that other people and things exist, but with the caveat that each person creates their own experience and knowledge of these externals and views the world in a unique way that only that particular self can access. Thus, no one person can ever know how another truly perceives the world. We all have an impenetrable barrier of privacy established in our brain. When we think consciously about things we experience in our environment, there is a CIP representation attributed to the relevant senses: smell, taste, touch, sight, or sound. Higher-level thought, such as music, language, or mathematics, is an abstraction created by the brain, and these abstractions also have CIP representations. Disembodied mind is not possible—at least not in the four-dimensional universe that we experience.
Neocortex as the Origin of Consciousness

Recall the earlier summary of evidence showing that consciousness arises when the outer mantle of the brain, the neocortex, is activated (disinhibited) by influences from the ARAS in the brainstem's central core. But consciousness itself operates through the circuits of an aroused neocortex. The brainstem can only trigger consciousness. It cannot sustain consciousness, because it lacks the complex network architecture of the neocortex. Damage to neocortex, whether by trauma or stroke, for example, can eliminate the conscious operations of the damaged part of the brain even when the ARAS is fully operational. Moreover, heightened consciousness is the hallmark of humans, and the human brain differs most conspicuously from that of other animals in the development of the neocortex. Indeed, the reason it is called "neocortex" instead of just cortex is that it is regarded as the evolutionarily most recent cortex. The subconscious neural platform for higher levels of self-representation is the brainstem. Bob Vertes and I reviewed the scientific literature that shows how the brainstem integrates the converging signals from the viscera, internal milieu, and the bodily senses (Klemm and Vertes 1990, 595p). It also contains circuitry that regulates vital functions of the heart and respiratory system, sleep and wakefulness cycles, arousal, attention, and the emotions. Consciousness itself emanates from the actions of the brainstem's central core on the cerebral cortex. That is why people with damage to this central brainstem core become comatose, even if their neocortex is completely normal.
Fig. 8.2 Simplified diagram of the excitatory neurons in any given column of the human neocortex and the interconnections with other columns. The vertical layer location of neurons is indicated by L3, L4, etc. Shown are input sources from subthalamus (Sub) and thalamus (Thal) (From Binzegger et al. 2005). The nodes of the graph are organized approximately spatially; vertical corresponds to the layers of cortex, and horizontal to its lateral extent. Arrows indicate the direction of excitatory action. Thick edges indicate the relations between excitatory neurons in a local patch of neocortex. Thin edges indicate excitatory connections to and from subcortical structures and inter-areal connections. Each node is labeled for its cell type. For cortical cells, Lx refers to the layer in which its cell body is located; P indicates that it is an excitatory neuron. Thal thalamus, Sub other subcortical structures, such as the basal ganglia
But saying that conscious self-representation requires interaction of brainstem reticular activity and the cerebral cortex does not explain conscious mind. It merely shows where conscious mind comes from. Microscopic examination of the neocortex shows that all parts of it have a similar architecture (Fig. 8.2) (Douglas and Martin 2004). The neocortex everywhere has six layers, and inputs terminate in the same specific layers, while output projections arise in other specific layers. Neocortex has an outer surface layer of fibers. Beneath that are arranged neurons that are primary targets of input fibers. Neocortical neurons send output projections to spinal cord or other parts of the neocortex, or to other parts of brain, or to interneurons that modulate activity of the other neocortical neurons. What gives neocortex its localization of specific functions is the source of the inputs.
The rich interconnections of various neocortical areas provide a way for the whole complex to operate as one unit, despite all the diversity and specificity of the various CIPs contained therein. I contend that this mode of operation is what produces consciousness. That is, the simultaneous engagement of the whole neocortex gives it the opportunity to have a sense of self that is aware of what any particular part of neocortex is doing (seeing, hearing, deciding, planning, etc.). The outer layer is of special interest for several reasons. It contains a sprinkling of inhibitory neurons and many terminal fibers of input neurons from deeper layers and from other cortical areas. This outer layer is also the proximate source of most of the voltage field in an EEG, inasmuch as it is the closest to scalp electrodes. Signals arising from deeper in the brain are not detectable at the surface because of attenuation by the insulating barriers of skull, periosteum, and scalp. Not shown in the figure are the inhibitory neurons and the modulating brainstem inputs, such as noradrenergic neurons in the locus coeruleus, serotonergic neurons in the raphe nuclei, dopaminergic neurons in the ventral tegmental area, and the energizing cholinergic neurons in the nucleus basalis. Note that primary input comes from the thalamus, terminating in layer 4. Numerous feedback loops capable of supporting oscillatory activity are evident: L3 to other L3 neurons in the same column and back to L3; L3 to L5 to L3; L3 to L5 to L6 to L4 to L3; L5 to other L5 neurons in the same column to L3 to L5. Feedback loops also come from other columns: L3 to (L4 to L3) to L3; L3 to (L4 to L3 to L5) to L3; L3 to (L4 to L3 to L5 to L6) to L3. Another detail about this circuitry is not shown in the diagram: the L2 and L3 cells get different kinds of input at different levels of their dendrites and cell bodies. Thus the same cell, and by extension the circuits with which it is associated, can simultaneously contribute to different CIP representations. One representation might be for a specific sensory input, while another might be a representation of "I," thus enabling the conscious sense that it is "I" who sees, hears, and so on. I will elaborate on this idea of a CIP representation in the next section. Clearly, such organization suggests that cortical columns are mutual regulators. Clusters of adjacent columns can stabilize and become basins of oscillating attraction, and the output to remote regions of cortex can facilitate synchronization with distant basins of attraction. This allusion to Chaos Theory is intentional. Control in such a system is collective and cooperative. Mind-body dualists like to speak of a "ghost in the machine," a top-down executive mind that provides the brain with consciousness. The data, however, argue for a ghost of mutually regulating circuits that, when fully coordinated, operate as conscious mind. One consequence of this organization is that excitatory input arriving in a given column can ripple through its adjacent columns, like ripples that move through a pool of water after a rock is dropped in. At the same time that input spreads laterally, column by column, there is an accompanying faster spread to many other parts of cortex via the fiber tracts within and between the hemispheres. This is parallel processing on steroids. If allowed to go out of control, such an excitatory ripple effect can result in grand mal epileptic seizures.
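The feedback loops just listed can be made explicit by treating the column as a small directed graph and enumerating its cycles. The sketch below is a rough approximation of the connections read off Fig. 8.2, not the complete anatomy; node names such as L3b (a second layer-3 cell in the same column) are invented for illustration.

# Excitatory connections, approximated from the loops named in the text.
edges = {
    "L3":  ["L3b", "L5"],        # to another L3 cell in the column and to L5
    "L3b": ["L3"],
    "L4":  ["L3"],
    "L5":  ["L3", "L5b", "L6"],
    "L5b": ["L5"],
    "L6":  ["L4"],
    "Thal": ["L4"],              # primary thalamic input terminates in layer 4
}

def loops_from(start, graph, path=None, depth=5):
    """Enumerate short excitatory loops that could support oscillation."""
    path = path or [start]
    found = []
    if depth == 0:
        return found
    for nxt in graph.get(path[-1], []):
        if nxt == path[0] and len(path) > 1:
            found.append(path + [nxt])
        elif nxt not in path:
            found.extend(loops_from(start, graph, path + [nxt], depth - 1))
    return found

for loop in loops_from("L3", edges):
    print(" -> ".join(loop))     # e.g. L3 -> L5 -> L3 and L3 -> L5 -> L6 -> L4 -> L3

Running the sketch prints the intra-column loops named above; adding inter-column nodes would extend it to the between-column loops in the same way.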
The converse way to confirm and illustrate the "ripple effect" is with the phenomenon of "spreading depression," an orderly wave of depression that can be triggered in various ways (Somjen 2005). Andreas Burkhalter (2008) points out that this elemental circuit design includes recurrent excitatory and inhibitory connections within and between layers. Most of the excitatory drive is generated by local recurrent connections within the cortical layers, and the sensory inputs from the outside world are relatively sparse. The usefulness of this design is that weak sensory inputs are amplified by local positive feedback. The risk of such organization is runaway excitation; in epilepsy, the problem emerges when pathology removes the normal inhibitory influences that hold the circuitry in check. Much of this detail was determined by microelectrode studies in cats and monkeys and has not been confirmed in humans. Though human neocortex shows similar anatomical layering and cell types, there may be differences in interneuronal connectivity. Nonetheless, the animal data make clear that neocortex has rich interconnections and the capacity for generating multiple oscillatory frequencies that package impulse traffic. We need to emphasize that the amount of neocortex in humans is relatively much larger than in other primates. It may be that a certain "critical mass" of neocortical processing circuitry is needed to achieve the unique human cognitive abilities, particularly the level of consciousness and the capacity for introspection. One recent study (Molnár et al. 2008), which recorded activity from pairs of human neocortical cells, revealed that the relations between so-called chandelier neurons and pyramidal cells are different than in animals. Triggering activity in chandelier cells produces a precisely timed chain of electrical events. The pathways are much stronger than has been demonstrated in animal studies. This chain of events in the human experiments led to network activation lasting approximately an order of magnitude longer than had been detected previously in response to a single action potential in a single neuron. Inhibitory circuits are crucial for controlling oscillations and the time-chopping of impulse streams, both within and among columns. Some 10–20% of all synapses in neocortex are thought to be inhibitory. Two main types of inhibitory neurons, both using GABA as the neurotransmitter, are the so-called basket cells and chandelier cells. No one has identified the temporal succession of CIPs for a given mental state, even in a single cortical column. No one has tried to identify distinct CIPs using the mathematical techniques for identifying combinatorial coding. If the impulse activity from multiple neurons in a given column were recorded at the same time, we might have a way to examine the possibilities for combinatorial coding in a given column as it progresses over time for a given mental state. This kind of recording and analysis is technically feasible for monkeys with implanted microelectrodes. Douglas and Martin have proposed the following speculation. Patches of superficial pyramidal cells in L2 and L3 receive excitatory input from thalamus and from pyramidal cells in the same and other columns. Pyramidal cells in this patch also receive feedback (I would add, probably oscillatory) from pyramidal cells in deeper layers
of the same patch and from all layers of other patches. All of these inputs to the dendrites of superficial pyramidal cells are processed (I would add, algebraically integrated) together with the inputs arriving in the outer fiber layer. This processing may involve a "winner-take-all" selection process that determines which L2/L3 neurons are activated. Thus, the superficial pyramidal cells "interpret" input, while the pyramidal cells in deeper layers are connected to provide a feedback that "exploits the evolving" interpretations. L5 pyramidal cells not only carry much of this feedback, but they also are a primary output source to subcortical brain structures such as the basal ganglia, brainstem, and spinal cord (not shown in the figure above). Thus, the deeper layers constrain the decision-making processes in the superficial layers while at the same time delivering neocortical output to the rest of the brain. Not explicitly included in this speculation is the role of inhibitory neurons, which is profound and affects, directly or indirectly, the activity of pyramidal cells in all layers.
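As a purely illustrative rendering of the "winner-take-all" idea: given the summed drive to a patch of L2/L3 cells, only the most strongly driven cell (or cells) is allowed to fire, with the rest presumably silenced by the inhibitory interneurons just mentioned. The numbers below are arbitrary, and real cortical selection is surely softer and noisier than this.

import numpy as np

rng = np.random.default_rng(0)
summed_drive = rng.normal(size=8)            # net excitatory input to 8 L2/L3 cells

def winner_take_all(drive, k=1):
    """Keep only the k most strongly driven cells; silence the rest."""
    output = np.zeros_like(drive)
    winners = np.argsort(drive)[-k:]
    output[winners] = drive[winners]
    return output

print(winner_take_all(summed_drive))         # one nonzero entry: the "interpretation"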
CIP Representations of Consciousness

Recall that earlier in this book I discussed a subject almost never covered in books like this: the non-conscious mind of spinal and cranial nerve reflexes and brainstem functions such as neuroendocrine control. I did that because what has been learned about non-conscious mind in the last 100 years of neuroscience tells us the general principles of how nervous systems work. Why throw all that understanding out when trying to explain subconscious and conscious mind? Could they not operate as extensions of the same basic principles? In other words, study of non-conscious mind teaches us that the currency of its "thought" is the nerve impulse, enabled by biochemistry and circuit anatomy. Likewise, I would expect the currency of subconscious and conscious mind to also be the nerve impulse, or more precisely, the spatial and temporal patterns of impulses in distributed and linked microcircuits. There are, of course, many biochemical and physiological phenomena associated with nerve impulses. These give rise to a wide range of correlates of consciousness, as have been elegantly described by Christof Koch. But correlates are not always necessary or sufficient to explain consciousness (Koch 2004). To make sure we don't look for mind in "all the wrong places," I am promoting the view that mind is based on circuit impulse patterns (CIPs) and the phase relations of electrical activity among circuits. A starting point of explanation can be how sensory stimuli are registered in the brain. We know from monitoring the known anatomical pathways for specific sensations that the brain abstracts elements of the outside world and creates a representation with CIPs (recall, for example, the earlier discussion of the Hubel and Wiesel studies of vision). As long as the CIPs remain active in real time, the representation of the sensation is intact and may even be accessible to consciousness. However, if something disrupts ongoing CIPs to create a different set of CIPs, as would happen with a different stimulus, then the original representation disappears and may be lost. Of course, the original CIPs may have been sustained long enough to have been consolidated in memory, in which
case retrieval back into active working memory would presumably reconstruct the CIP representation of the original stimulus. How, then, can we relate these facts to the issue of consciousness? At a minimum, they suggest that the CIP representations in cortical architecture either enable conscious awareness or are themselves the essence of consciousness. Consider the possibility that conscious mind has its own CIP representation. Conscious mind may not emerge as more than the sum of its parts. It may be a separate representation that the brain constructs from CIPs to represent another sense, the sense of self. Specifically, when the brain constructs a sense of self, it must do so in terms of neural representation, which I argue takes the form of unique CIPs. I think most neuroscientists would agree with my assertion that an idea, for example, has a neural representation in a set of CIPs. Is that the same as saying that the CIP is the idea? Why not? Maybe not quite, because an idea, to be useful, has to be expressed in some way. If the idea can be visualized, then it becomes expressed when the visual cortex creates the image of the idea in the "mind's eye." If the idea can be described verbally, it becomes expressed when the CIPs include the language systems in the left hemisphere. By this reasoning, mind could be defined as a set of CIPs, which has a life of its own—as an avatar. Much more needs to be discovered about the nature of these CIPs. The subconscious CIPs seem to be straightforward; i.e., each circuit propagates spike trains that contain their information in terms of firing rate and inter-spike interval patterns (this can be visualized by imagining impulse traffic in a network like the one described earlier for the habenula). The same could be true for conscious CIPs, except that these circuits need not be a separate set of circuits. They could be a sub-set of CIPs operating in a different manner, for example, being phase-locked into more coherence among certain circuits. For the CIPs of consciousness especially, some of the thinking mechanisms mentioned in Chapters 3 and 6 seem likely to be involved. These include inter-spike interval coding, combinatorics, and coherent oscillation.
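To illustrate what "information carried in inter-spike interval patterns" could mean, here is a toy encoder and decoder in which a small, arbitrary alphabet of intervals carries a message in a single spike train. It is a sketch of the general idea only; the interval values are invented, and nothing here is claimed about actual neural codes.

import numpy as np

alphabet = {"A": 0.010, "B": 0.020, "C": 0.035}     # seconds between spikes (assumed)

def encode(message):
    """Turn a symbol string into spike times whose intervals carry the message."""
    t, times = 0.0, [0.0]
    for symbol in message:
        t += alphabet[symbol]
        times.append(t)
    return np.array(times)

def decode(spike_times):
    """Recover the message by matching each interval to the nearest alphabet entry."""
    inverse = {v: k for k, v in alphabet.items()}
    intervals = np.diff(spike_times)
    return "".join(inverse[min(inverse, key=lambda v: abs(v - i))] for i in intervals)

spikes = encode("ABCA")
print(decode(spikes))   # "ABCA": the same mean firing rate could carry many messages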
The Created and Remembered "I" of Consciousness

When I go to sleep at night, where do "I" go? Except for occasional re-emergence of "I" in dreams, "I" have vanished. Yet I wake up in the morning with my "I" intact, except for imperceptible changes in memory consolidation that occurred during the sleep. Why don't I wake up as somebody I would rather be? As a teenager, for example, I would rather have been Stan Musial. No, I was stuck with my usual "I," because that is the representation my brain had constructed in long-term memory over the years up to that time. The only things that change that representation are the incremental changes that come as I age, which might culminate someday in a complete destruction of my sense of self from Alzheimer's disease. I think we must consider that this sense of self is created by neurons, just as many neurons create a representation of what we see or hear in the outside world. When the "self" goes away when you fall asleep or are anesthetized, it disappears because
the CIPs that sustain it are temporarily gone. Why and how can it come back when you wake up? It comes back because the CIP for self is actually a memory that can be recalled. That memory of self began construction in the womb. As daily sensations and experiences ensue after birth, the representation of self becomes richer and more nuanced. The sense of self grows with learning from experience but is also self-programmed by its own CIP representation. As I grow up, "I" helps to decide what kind of person I want to be and participates in guiding the process. My avatar has a free will. Where does this leave the relationship of conscious mind to the subconscious? When we humans are awake, we are automatically conscious. Given that subconscious and conscious functions are so different, I submit that each must have different CIPs, which would in theory be identifiable. Non-conscious and subconscious minds are like robots, responding passively to their internal programming and to environmental contingencies. I believe that when subconscious mind achieves a certain "critical mass" of distributed circuit activity that becomes interlinked and coordinated, a conscious mind can emerge. This created conscious mind then becomes available to enrich the limited processing of subconscious operations that I elaborated in my previous book, Blame Game. How To Win It. Consciousness lies at the heart of free will and personal responsibility. Conscious mind is not aware of the processes of subconscious activity but is aware of the consequences of such activity. No longer is the brain limited to passive execution of existing programs; conscious mind allows a more comprehensive consideration of what is being experienced. More information is accessible and coordinated. More options can be analyzed, and analyzed more completely. Freely willed choices become available. Most importantly, the subconscious mind now has another source of programming. Conscious mind provides a new dimension for actively programming the subconscious. In short, conscious mind is the brain's way of intervening with itself. This goes to the heart of my theme of the biological case for personal responsibility. Conscious mind is the "I" of each person, and "I" can be in control. Conscious mind not only can control the subconscious but can also control itself. "I" choose what I read, I choose which people to associate with, I choose what is good for me, I choose my attitudes and what to believe and what to do. True, because of my pre-existing subconscious programming, some conscious choices are harder than others. Bad choices make my life hard. But because of conscious mind, I can at least become aware of the price being paid for bad choices and have the option to change course, to change my brain's programming accordingly. The great thing about consciousness is that it enables introspection. We have the innate capacity to think about how and what we think: "What did I do wrong? How do I prevent this from happening again? What did I do right? How can I replicate this reinforcing behavior?" When we use this capacity, we increase the odds of spotting flawed logic and increase the options for better alternatives. Unfortunately, the conscious mind's natural tendency is to believe its initial conclusions and not to second-guess them.
Logical thought is greatly facilitated by good language skills, because language is the medium that carries much of our thinking. Mathematical thinking is similarly enhanced by good math skills. Deductions can be made subconsciously, but consciousness allows us to make explicit the premises and propositions from which we derive conclusions. Inductive thought may begin in the subconscious, but consciousness allows us to make explicit the particulars or specific instances from which we can create a generalized conclusion. Of course, language and mathematics are not the only venues for conscious expression. Consider art. We consciously conceive and interpret images. We can be aware of our emotions and the varied input of our senses even when it is difficult to explain these things in words. How we represent ourselves resides in the brain. We can, through learned experience and self-talk, change our perceptions and representations about ourselves. Modern-day philosopher Patricia Churchland argues that self-representations are multi-dimensional and that particular representations can be spared when others are impaired. Some amnesic subjects, for instance, have no conscious awareness of things that happen outside a small window of a minute or so of current and recent time. Such people are self-aware only in near real-time. Self-representation of the past is not possible, because the parts of the brain that form memory have been damaged. A different example of self-dysfunction is schizophrenia. These patients have good autobiographical memory, but they are deeply confused about the boundary between self and non-self. For example, a schizophrenic may respond to touch stimulation by claiming that the sensation belongs to someone else or that it exists somewhere outside the body. "Voices" that schizophrenics hear are apparently self-talk that is incorrectly represented as coming from the outside. Brains that are wired to distinguish outer-world representations from inner-world ones are brains that by definition enjoy a degree of consciousness. Thus, such a system can represent an apple (outer thing) as something that I (inner thing) like to eat. Further, the color of that outer thing, red, is something that I can also represent. Given that representations in the brain are contained in neural circuitry, we can extend the analysis of conscious mind to patterns of interconnectivity among neurons in the brain. While the anatomy or "hard wiring" of the circuitry is fixed in the short term, the actual functional circuitry changes depending on which neurons are firing and how they are firing. Despite all the emphasis here on the conscious mind and the sense of self, Joseph LeDoux (2002, 416p) argues convincingly that who we are is largely learned through experience and that much of this learning has occurred and is "remembered" implicitly and subconsciously. These representations may not be accessible consciously but nonetheless affect our behavior. Though I argue that people have free will, it is nonetheless true that without consciously exerting introspection and free will, our thoughts and behaviors are driven by the subconscious mind. LeDoux contends that the self is constructed. You might say that we make it up as we go along, which is the point I have been making. If we thought more often about
this, we might be more careful about what we think and do, for that not only affects decisions and actions of the moment but also contributes to self-construction (or destruction, as the case may be). This construction is a life-long process, being most evident during childhood. Recall from the opening chapter that brain anatomy continues to change up until about age 25. The last part where visible change occurs is the prefrontal cortex, where we weigh alternatives, make judgments, plan for the future, and monitor our behavior. In addition, microscopic and biochemical changes continue beyond the age at which anatomical change can be detected.
What CIPs of Consciousness Represent

Just as the brain uses CIPs to represent the sensory and motor world, I suggest that it uses a complementary set of CIPs to create the "I" of consciousness. The brain not only creates its own version of reality, it creates a representation of selfhood to interpret and relate to the world. Moreover, the brain not only contains CIP representations of things we have experienced, it can also create CIP representations of things and events that we have never experienced. Creativity is a marvelous mystery that no one can explain. Clearly, creating a representation of things never seen or experienced requires combining, in unique ways, the CIP representations of things we have seen or experienced. No one knows how the brain decides which circuits to tap into. No one knows why some brains are better at the creative process than others. Nor do we know whether brains can be taught to be more creative or, if so, how to do it. Few ask why dream content is more creative than what we can usually generate during wakefulness. Regardless of which CIPs produce the "I" of consciousness, those processes should also be capable of freely willed modification of their own processing, according to the nature of their output, which is represented in consciousness. When we have a conscious experience, the neural processes that make us aware of the output of those circuits provide a physical substrate for self-adjustment, which may also be manifest in consciousness. When I say to myself, "stop smoking," the subconscious brain circuits that generated the desire to smoke also made me aware that I want to smoke. But other brain circuits have been programmed, by information that my brain had received and made manifest in my consciousness, to resist smoking because it is unhealthy. Conscious activation of the reasons for not smoking suppresses activity in the circuits that would otherwise make me pick up a cigarette. In other words, the brain can control its own consciousness. It is not just mind over matter. Mind is matter. How can the brain tag, match, or compare CIP representations with real-world phenomena? If the CIPs represent ongoing sensory stimuli, the matching is automatic, in that the CIPs were generated by the stimuli and therefore are an automatic representation of those stimuli. In the case of memory or internally generated information, those sources of input to the virtual working-memory scratch pad have their own CIPs, and these have to be matched and integrated, in short successive steps, with the CIPs that are being generated elsewhere for working memory.
I don't know what form such matching takes, but it almost has to relate to the fact that many circuits overlap physically. That is, one or more neurons in one circuit can be shared with other circuits. Recall the earlier discussion of such overlap in even a very simple brain structure such as the habenula. The impulses from one circuit add to the impulses coming from the other circuit, and thus in overlapping neurons the activity may be substantially increased. Maybe the "tag" is the localized increase in activity or a shift in time relationships. The overlap zone provides a way for one set of CIPs to be shared with the CIPs elsewhere. Moreover, each overlapping circuit has access to CIPs that are about to be generated in the immediate future and a back reference to the CIPs that are coming in from the senses or that emanate from memory. For all this to work, the two kinds of CIPs have to share their information. Fig. 8.3 suggests a way. Such sharing of information allows each set of CIPs to know something of what the other is doing (i.e., "thinking"). More importantly, the overlap allows exchange; i.e., each set can help program the other, even though each can maintain a degree of autonomy.
Fig. 8.3 Schema for sharing of information between two sets of CIPs. When one of the circuits becomes recruited into both sets (grey-filled circle with dashed lines), some of the information of both sets is shared and contained within the spike trains of neurons shared by both sets. One sharing mode would mix the two signals (A + B), effectively scrambling the message (unless information is packaged as bytes of serially ordered interval clusters). Alternatively, oscillatory time chopping could let one spike cluster pass through followed by the other (A, then B). Slightly different spike amplitudes are probably irrelevant
How can this conjoint information be "read"? If the spike trains in one set of CIPs merge simultaneously with the trains from the other CIP, then the merged spike train would seem to be garbled, unless, of course, neurons merge the impulse patterns sequentially. The "time chopping" of oscillation would provide a way to let spike influence from one CIP enter another without scrambling the messages. This reinforces the possibility discussed earlier that spike trains may be "byte" processed. Although the idea of spike-interval coding has long since been abandoned by most neuroscientists, the evidence for it has not been refuted. If neural circuits do carry information in the form of spike clusters of serially dependent intervals, then mixing input from two or more spike trains could produce an output that preserves the distinct packets of information in ways that could be read and differentiated by other circuits to which it is projected. This model is somewhat analogous to the genetic code. While much of DNA is junk ("noise"), there are many isolated unique pieces ("bytes") that do all the work in highly differentiated ways. In the case of abstract thought, such as occurs without any "real time" sensory input, the participating CIPs always have access to CIPs that can be generated from memories of events that do in fact have real-world representations of sensory events. You could think of certain circuits, or parts of certain brain areas, as acting as nodes that functionally hold together the circuits within their immediate domains and act as relay and communication points with other hubs. This idea is analogous to the way commercial airline hubs operate. A perturbation of any one node has the possibility of spreading to other nodes and their circuit domains. If enough of these nodes become linked, consciousness may emerge. Perhaps consciousness emerges when these hubs become extensively synchronized at certain frequencies. In the context of our "scratch pad" working-memory CIP (recall Fig. 6.13), what happens to the CIP as it enters the "thought engine"? To approach an answer to that, we need to know more about the anatomy of the scratch pad itself. Where is the scratch pad? I suspect that this virtual scratch pad is in reality a whole set of circuits that overlap considerably with the circuits that supply input and with those in the "thought engine" that act on the input.
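The two sharing modes of Fig. 8.3, simple superposition versus oscillatory time chopping, can be sketched numerically as follows. The spike times, the 10 Hz gating rhythm, and the half-cycle assignment are all illustrative assumptions; the point is only that gating by an oscillation keeps each train's packets intact, whereas raw superposition interleaves them.

import numpy as np

rng = np.random.default_rng(1)
train_a = np.sort(rng.uniform(0.0, 1.0, 20))   # spike times (s) from circuit A
train_b = np.sort(rng.uniform(0.0, 1.0, 20))   # spike times (s) from circuit B

# Mode 1: superposition (A + B) -- intervals from A and B interleave, scrambling
# any interval code unless it is packaged in separable "bytes".
superposed = np.sort(np.concatenate([train_a, train_b]))

# Mode 2: time chopping by a 10 Hz gating oscillation -- A passes during the
# first half of each cycle, B during the second, so each packet stays intact.
cycle = 0.1                                    # 10 Hz gate period (s)
a_gate = (train_a % cycle) < cycle / 2
b_gate = (train_b % cycle) >= cycle / 2
chopped = np.sort(np.concatenate([train_a[a_gate], train_b[b_gate]]))

print(len(superposed), len(chopped))           # all spikes vs. only the gated ones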
Engagement of Meta-circuits

There is an alternative way to consider working memory and the "thought engine" of conscious mind. Perhaps everything is being handled in one giant meta-circuit, which is so constructed that fragments of CIPs are sequentially accessed in conscious mind (working memory) and then used as feedstock to influence subsequent thought within that same meta-circuit. We know that when we are consciously thinking, there is still subconscious thought going on at the same time. Maybe one giant meta-circuit is handling all the processing, but some sub-set of that meta-circuit operates as a conscious mind that can successively access snippets of what is going on in the meta-circuit.
Now we come to the crucial question of the origin of a conscious mind that can "read" what is in working memory and deploy it in a conscious "thought engine." Consciousness could be thought of as being created from a meta-circuit in which a constellation of neuronal assemblies becomes sufficiently coordinated that each component circuit "knows" what other circuits are doing. This capacity may emerge from reaching the threshold of a critical mass of circuitry becoming phase-locked at certain high frequencies. The real enigma lies in explaining how such emergent circuitry can detect or represent the knowledge of certain circuit activity in a way that we can sense consciously. The meta-circuit may contain a representation of what certain other circuits are doing, but how does that make itself known to the sense of self? That is, how does the impulse activity in the meta-circuit make us aware of the information content represented by the impulse activity in the primary sensory circuit? Perhaps this is the wrong question. The correct question may be: how does the meta-circuit create a sense of self? Science thus far cannot explain consciousness. Research continues to identify neural correlates of consciousness, as elegantly summarized in the recent book by Christof Koch (2004). But such correlates do not provide an explanation. In the context of consciousness, we have yet another problem. If thoughts are tagged in the form of CIPs, how does the brain make itself consciously aware of what is represented by its own CIPs? Does the brain have some sort of meta-tagging mechanism wherein each CIP is itself tagged in a way that multiple CIPs, when merged at the same time, have an emergent property that enables an awareness of what the various CIPs represent? If so, how could any such meta-tagging be accomplished? This possibility could operate at both subconscious and conscious levels. The difference for conscious mind, however, could be that conscious mind does not "see" the original stimulus but mainly "looks in on" the CIP representation being held in subconscious mind. I propose that conscious mind contains a CIP representation of another sort. Namely, the brain creates a conscious mind that is a representation of self-identity, as opposed to representations of the external world. This view requires consciousness to be constructed rather than emergent. Thus, the sense of self-identity can grow with time, being modified by biological maturation and learning experience. CIPs are themselves very real and subject to biological forces. They are also subject to mentalistic forces, given that those forces are actually CIPs. This view holds that conscious mind has CIPs that act as the brain's active agent, a "free will" partner in brain function that operates in parallel and in conjunction with subconscious mind to make total brain function more adaptive and powerful than could be achieved with subconscious mind alone. Evolutionarily, this may be the mechanism that changed pre-human zombies into who we are today.
Consciousness as Brain-Constructed CIP Avatar

I propose that the brain creates a CIP representation of self, a seemingly virtual agent, an avatar, that acts in the world on behalf of the brain and body. This consciousness
CIP is contained within the meta-circuit of the whole brain and is therefore integrated with what goes on in subconscious processing. The avatar idea helps to reconcile the conflict between materialistic and non-materialistic views of human mind. The avatar is born of materialistic processes. Though the avatar may exist as a combinatorial code of CIPs, its essential link to its materialistic origin provides a means for the code to influence and be influenced by what the rest of the brain's circuits are doing. The brain can experience the world consciously because sensations are automatically processed in the context of "I." The avatar also allows the brain to live vicariously in the world of imagination, where levels of thought may not be directly tied to sensation. Eric Baum has written a book that views the brain in terms of computer algorithms, a popular approach. Yet this is metaphor. The brain certainly does not use integers or computations the way a computer does. Baum does make a point that I wholeheartedly endorse: namely, that "the mind is an evolved program charged with making decisions to represent the interests of the genes, and that this program evolution leads to agency. … The self is simply what I call the avatar agent whose interests the mind is representing" (Baum 2004, 403p). When the brain constructs a CIP representation of a sensation like sound or sight, as far as the brain is concerned, the representation IS the sensation. It is the representation that the brain is aware of, not the outer world as such. When consciousness is present, it is the representation that the conscious mind is aware of. Another way to say this is that the avatar is aware of the CIP representation of the sensation. So you have one set of CIPs, that of the avatar, sharing the CIP information of another set, the otherwise subconscious sensations and processes. If consciousness is a combinatorial code of CIPs, how can such a code create personality, beliefs, feelings, decisions, etc.? This question is wrongly posed. The code doesn't create these things. It is the representation of such things. The code, because it arises from materialistic CIP processes, is part of the machinery of mind. The code can therefore influence the very circuits from which it is being generated. In that way, a code in real time can change the nature of the code at some future time. In short, mind can change its mind. How do intentionality and free will arise from a code? As a representation of the current mental state, the code can be changed by external input or by feedback as the CIP code is routed through various sub-circuits and modified in the process. And let us remember what an intention really is. It, too, is a CIP representation that can be propagated to generate new thought, intent, or behavior. There are lines of evidence that support the CIP theory in addition to the rationale just developed. The evidence falls into two categories of predictions: (1) the CIPs, or some manifestation thereof such as EEG frequencies, should change as the state of consciousness changes, and (2) changing the CIPs or their manifestation should change the state of consciousness. In the first category, the whole history of EEG studies, both in laboratories and in hospital settings, attests to the fact that there is generally a clear correlation between the EEG and the state of consciousness.
There are apparent exceptions, but I have argued elsewhere that these EEG-behavioral dissociations, as they are called, are usually misinterpretations of the state of consciousness (Klemm 1993).
The general observations can be summarized as follows:
• In the highest state of consciousness and alert wakefulness, the EEG is dominated by low-voltage fast activity (beta and gamma), typically including oscillations in the frequency band of 40 and more waves per second.
• In relaxed, meditative states of consciousness, the EEG is dominated by slower activity, often including so-called alpha waves of 8–12/s.
• In emotionally agitated states, the EEG often contains a great deal of 4–7/s theta activity.
• In drowsy and sleep states, the EEG is dominated by large, irregular slow waves of 1–4/s.
• In coma, the trend toward slowing of activity continues, but the signal magnitude may be greatly suppressed, even to the point that no signal can be detected from the scalp.
• In death, there is no EEG signal anywhere in the brain.
Because the EEG is a manifestation of overall CIP activity, changes in EEG correlates of consciousness support the notion that it is changes in CIPs that create the change in the state of consciousness and in the corresponding EEG pattern. Even so, these are just correlations, and correlation is not the same as causation. More convincing evidence comes when one can show that changing the CIPs, either through disease or through some external manipulation, changes the state of consciousness. For instance, massive cerebral strokes may wipe out neural responsiveness to stimuli from large segments of the body, and the patient no longer has any conscious awareness of stimuli from such regions. Injection of a sufficient dose of anesthetic produces an immediate change in neural activity, and unconsciousness ultimately follows. Similar effects can be produced unilaterally by injecting anesthetic into only one carotid artery. Naturally occurring epilepsy causes massive, rapid bursts of neural activity that wipe out consciousness. Even during the "auras" that often precede an epileptic attack, there are localized signs of epileptic discharge, and the patient may be consciously aware that a full-blown attack may soon ensue (Schulz et al. 1995). Another line of evidence comes from the modern experimental technique of transcranial magnetic stimulation. Imposing large magnetic fields across spans of scalp is apparently harmless and produces reversible changes in brain electrical activity that in turn are associated with selective changes in conscious awareness. A wide range of changes in consciousness functions can be produced, depending on the extent of tissue exposed to the magnetic field stimulus (Grafman and Wassermann 1998). Summarizing, let us first remember that consciousness has to be triggered, typically by activation (actually disinhibition) of the cerebral cortex from the brainstem reticular formation. The large slow electrical waves that permeate the neocortex during unconsciousness presumably reflect synchronous activity of neurons that are causing the inhibition that prevents the emergence of arousal and consciousness. It is important not to equate EEG signs of arousal with consciousness: various lower animal species all show an "activated" EEG (Klemm 1973). What is not clear is the frequency bands and spatial coherences of such EEGs. Such
data, and the conclusions based thereon, were obtained before the age of digital EEG and frequency analysis. The pen and ink tracings of an EEG machine cannot display frequencies above about 30/s well. We know little about the full frequency band and the coherences of various frequency bands in the EEG in lower animals. It is entirely possible that lower animals only have beta activity (less than about 30/s), while humans have more predominance of gamma activity (40 and more/s). Moreover, the degree and topography of coherences have never been subjected to examination across species. One index of degree of consciousness could well be the ratio of gamma activity to beta activity. Another index could be frequency-band-specific differences in the level and topographic distribution of coherence.
If it turns out that gamma activity and its coherences (both topographic and with other frequencies) are central to consciousness, we then must find an answer to another question. How the underlying high-frequency clustering of impulses that accounts for gamma activity could create a consciousness avatar is not at all clear. But it is likely that gamma activity reflects more active neural circuits with larger capacity for carrying information and mediating throughput. The CIPs of subconsciousness and consciousness might be basically the same, with the exception that consciousness might arise from some transcendent parameter, such as frequency coherence. It seems likely that binding of disparate elements of thought is an element of conscious thinking. It may be that it is not binding as such that creates consciousness, but rather the kind of binding. For instance, in my lab’s study on ambiguous figures, cognitive binding was manifest in coherence in two or more frequency bands. These might even have had meaningful cross-frequency correlations, but that was not tested. Even so, it is not clear why or how consciousness would arise from multiple-frequency binding unless the coherence in different frequencies carries different information. One frequency may carry the information while another might carry the conscious awareness of the information. Another possibility is that coherence creates consciousness only if enough different areas of the brain share in the coherence. Only a fraction of subconscious processing seems accessible at any one time, suggesting that only a sub-set of CIPs could acquire the conditions necessary for consciousness. The corollary is that conscious registration has a limited “carrying capacity,” and that does seem to be the case. Maybe this is because the CIPs of consciousness have to hold in awareness not only the CIP information from the subconscious but also the CIPs for the sense of “I” and all that it entails.
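To make the proposed index concrete, here is a minimal sketch, in Python, of how one might compute conventional band powers and a gamma-to-beta ratio from a single EEG channel. It is only an illustration: the signal `eeg`, its sampling rate `fs`, the band boundaries, and the windowing choices are my own assumptions, not values taken from any study discussed in this chapter.

```python
# Sketch: band power and a gamma/beta ratio from one EEG channel.
# `eeg` (1-D array) and `fs` (samples/s) are placeholders, not real data.
import numpy as np
from scipy.signal import welch

BANDS = {"delta": (1, 4), "theta": (4, 7), "alpha": (8, 12),
         "beta": (13, 30), "gamma": (30, 80)}

def band_powers(eeg, fs):
    """Return mean spectral power in each conventional EEG band."""
    freqs, psd = welch(eeg, fs=fs, nperseg=int(2 * fs))  # 2-s windows
    return {name: psd[(freqs >= lo) & (freqs < hi)].mean()
            for name, (lo, hi) in BANDS.items()}

def gamma_beta_ratio(eeg, fs):
    """One candidate 'index of consciousness': gamma power over beta power."""
    p = band_powers(eeg, fs)
    return p["gamma"] / p["beta"]

# Demonstration with synthetic data (10 s of noise sampled at 250 Hz):
fs = 250
rng = np.random.default_rng(0)
eeg = rng.normal(size=10 * fs)
print(band_powers(eeg, fs))
print("gamma/beta ratio:", gamma_beta_ratio(eeg, fs))
```

The same ratio could be computed channel by channel to compare its topographic distribution across states or species, in the spirit of the comparisons suggested above.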
Learning by the Avatar When my brain goes to sleep at night, where does my “I” avatar go? It is no longer around to act as an agent for my brain and body. Except for occasional re-emergence of the “I” avatar in dreams, the avatar has vanished. Yet upon awakening in the morning, my “I” avatar returns intact and unchanged, except for imperceptible changes in memory consolidation that occurred during the sleep.
Why don’t I wake up as somebody I would rather be? No, I am stuck with my usual “I,” because that is the avatar representation my brain had constructed in long-term memory over the years. Yes, I do mean to say the sense of self is learned (see below). The only things that change that representation are incremental learning changes as one ages, and the complete destruction of sense of self that occurs with Alzheimer’s disease. The personal identity sense comes back after a sleep episode and changes only slightly from day to day. Because there is no hard wiring for consciousness (that we know of), the fact that the sense of identity comes back and changes only slightly from day to day confirms that the self-identity CIPs have been learned. That is, the CIPs that represent our sense of self are stored in long-term memory and always come back intact into realization when consciousness is enabled by the requisite brainstem-cortical interactions.
Unlike the traditional five senses, the brain must learn how to construct a neural representation of a sense of self. The brain must learn what circuits to recruit and what CIPs and spatio-temporal codes to use to represent a conscious sense of self. As a CIP representation, the avatar has the same opportunity to learn and be modified by experience as do the CIPs for subconscious sensory and motor operations. When “I” was a baby, my brain was teaching me who I was and what was my place in the world of mother’s milk, dirty diapers, and the crib that restrained me. As life progressed, my new experiences expanded my sense of self and the CIPs of my avatar. No doubt this was made possible by the creation of the many new circuits that form in babies as neurons grow their fiber branches and dendritic spines. And if I should be unlucky enough to develop Alzheimer’s disease, the CIPs supporting my avatar shrivel to the point where the brain no longer generates the avatar. I eventually lose what I know about myself, and even my sense of self altogether, even though I may still be able to move about and experience bodily sensations.
The Avatar and Its Sense of Self The CIPs of conscious mind represent a construct of self—the “I” of the ego. By analogy, think of a video game in which the players are computer avatars representing the real players. A good example is the increasingly popular Web environment known as Second Life, in which players create their own avatars and live vicariously through the avatar in the virtual world. My idea is that the brain creates a set of CIPs to represent a “sixth sense,” the sense of self, which because it arises from neural circuitry can readily interact with subconscious mind and with the external world. Thus, the brain can interact with the world vicariously via its avatar acting as its agent in the world. A computer avatar is controlled externally by the person who creates it. It has no direct effect on the person. However, the brain’s avatar is bi-directional. It is driven by the subconscious mind, yet it influences and even programs the subconscious.
This CIP avatar is a surveillance agent that monitors the CIP information that comes from subconscious mind and from the external environment. Just as the subconscious mind is not aware that it is aware of sensations, it may not be aware of its avatar. How then do these ideas relate to CIP representations of the avatar? One possibility could be that conscious mind does not “see” original stimuli, but mainly “looks in on” the CIP representation being held in subconscious mind. Certain sensations can only be perceived in the conscious mind, such as itch or pain. Representations of such stimuli are contained in CIPs, but when accessed by the CIPs of conscious mind, become detected with the special identity of either itch or pain. Consciousness, in this view, is not something “out there,” but something (i.e. CIPs) “in here.” In other words, the “ghost in the machine” has materialized.
How Does the Avatar Produce Consciousness? This is the hard question. Conscious mind is an entity, in the sense that it is a neural representation of self, one that is grounded in the currency of all thought, CIPs. The hardest part of this hard question is why should this sense of self be conscious? How does it accomplish a level of knowing beyond the mere recognition that operates in subconscious mind? Indeed, even subconscious mind must reference its operations to bodily sense of self. We must ask not only how is the avatar aware of its existence as a unique entity, but how does it know that it knows what the other five senses detect? Advanced nervous systems still use the same currency of thought that primitive systems use. What then, could be different about the neural activity of consciousness? First of all, we have to identify the neural correlates of consciousness, as Chris Koch so ably strives to do. Further, we must identify which of these correlates are not just coincidental, but essential causes of consciousness. I suggest that science has not completed this task, inasmuch as combinatorial codes have not been sought and temporal coherences have received only modest inquiry. Both mechanisms most likely increase the informational “carrying capacity” needed to achieve the more robust processing capacity of consciousness. Once we establish that combinatorial coding and temporal coherences provide the foundation for consciousness, as I think will eventually be demonstrated, we are still left with the problem of explaining what this unique kind of neural activity does to enable the higher levels of awareness. Clearly, this must have something to do with the fact that the avatar, by definition, processes information in the context of its own self identity. It is the avatar that is self-aware. The nervous system’s fundamental design principle is to accomplish awareness—to detect things in the environment and then generate appropriate responses. In higher animals, that capability extends to detecting more and more abstract things, ultimately the most abstract thing of all, the sense of self. Such a CIP-based system is not only able to detect and code events in the “outside” world,
but it can do the same for its inner sense of self. Thus, the conscious mind, being simultaneously aware of the outside world, and its inner world automatically has the capacity to know that it knows. Any time this sense of self is active, the brain detects the sense of self. This sense has an autonomy not found with the traditional five senses. It is an entity that has a life of its own. The circuits of the avatar overlap those of the senses and their subconscious processing circuits. Thus, the information content in the circuits mediating traditional senses is accessible by the avatar circuits. And when that information is accessed, it is processed in a self-aware system, in turn making explicit some significant portion of the traditional senses and their associated thought. The self is an explicit frame of reference by which all senses can be evaluated in the context of self-awareness. The smell of sizzling steak is only detected by my olfactory pathways, but it is perceived by my avatar. Likewise, the sight of a beautiful woman, or a touch of kindness, or the sound of great music, or the taste of fine wine, are all detected not just by my primary sensory pathways, but by my avatar’s sense of I. The brain constructed my avatar in a way to detect, evaluate, and respond to such sensations with richer adaptational capability. The avatar is what makes us human.
Implications of the Brain-Constructed Avatar
The avatar idea has some major implications:
1. Evolving brains progressively incorporated and perpetuated the capacity for generating the “sixth sense” because it was beneficial to survival of the species.
2. “I” am a construct of my brain.
3. The avatar arises out of the laws of chemistry and physics that govern brain function.
4. Conscious thought is accomplished by the avatar, which, because it is so inextricably linked to brain, can “pick the subconscious brain” for feelings, memories, and ideas.
5. This avatar is an active agent, able to:
(a) act on “my” behalf with my inner and outer worlds of experience,
(b) teach (program) subconscious mind,
(c) alter, within limits, the functions that generate it,
(d) exert its own free will.
6. Within its limited domain of representation, my avatar is aware of what it thinks. It knows that it knows.
Consider Fig. 8.4 to visualize the relationships of the “I Avatar.” The avatar shares circuitry with the CIPs that are generated from the senses, memory stores, and motor commands. The avatar can access and modify the CIPs generated subconsciously.
[Figure 8.4: a diagram with boxes labeled Conscious Mind (“I” Avatar), Subconscious Mind, Outer World, and Inner World, connected by paths A–D]
Fig. 8.4 The “I” Avatar is a sixth sense, one that senses its identity, “I.” The avatar monitors objects and events in the inner world of the body and the outer worlds detected by our senses with CIPs (path A), much the same as subconscious mind does (path B). Subconscious mind generates the “I” avatar as well as supplies it with some of its CIP information (path C). Conscious mind feeds back its CIP information to subconscious mind (path D) and thus helps to program it
This avatar accomplishes introspection by its own central processing unit, i.e., its own virtual brain. Thus the avatar is a free-will agent, because, having a mind of its own, it can operate with some independence from the subconscious mind.
The avatar explanation of consciousness helps to address the objections of blogger Michael Egnor, who asserts “Matter, even brain matter, has third-person existence; it’s a ‘thing.’ We have first-person existence; each of us is an ‘I,’ not just a thing. How can objective matter fully account for subjective experience?” He tries to argue that “Not a single first-person property of the mind — not intentionality, qualia, persistence of self-identity, restricted access, incorrigibility, nor free will — is a known property of matter.” But from an avatar perspective, these properties of mind do not have to have the properties of matter. If “I” am an avatar, acting on behalf of my brain as a material set of CIPs, then my CIPs can include representations for the features of conscious mind that Egnor thinks are fundamental: intentionality, qualia, persistence of self-identity, restricted access, incorrigibility, and free will. My CIPs, a materialist product of brain, acquire non-materialistic properties when the CIPs are constructed as an avatar. The avatar CIPs are free to perform mental acts, such as free will, because they are not totally controlled by the constraints of subconscious mind. The avatar, though it may owe its birth to subconscious mind, is released to be its own active agent in the world.
Why would a brain do that? What is the advantage for survival of the species? First, as previously implied, having an avatar is liberating for the subconscious mind. The avatar gives the brain extra capacity for introspection, planning, and correction. The avatar can also filter, organize, and facilitate the programming of the subconscious mind. The avatar gives the brain a different perspective on what works in the world and the kinds of programming that would be advantageous to the brain and its body.
Unleashing the Avatar
The brain learns, memorizes, retrieves, and interprets its representation of the “I.” Those processes appear every time the necessary CIP conditions are met, as when we wake up each morning in response to a brainstem reticular formation disinhibition of the cortical circuitry. How can the “I” avatar get unleashed from its slumber? There must be a threshold for the non-linear processes that create the conditions for emergence of the avatar and its properties of wakefulness and consciousness. Though some people wake up in the morning groggier than others, consciousness, at least in some people, suddenly “comes on,” like a light switch. Though we cannot yet specify these processes in detail, we know that once the consciousness threshold is reached, the effect must involve the brainstem reticular formation interacting with neocortical circuitry to re-instate the avatar. Subconscious CIP representations have sense organs and movement of body parts as their frame of reference. But consciousness CIP representations have “I” as the frame of reference. The avatar experiences everything in the context of “I.” What the avatar knows includes how it is interfacing with the CIP representations of the inner and outer worlds. That is, “I” can know that I know.
How the Avatar Knows It Knows
How can the consciousness avatar be aware of what it is representing with its CIPs? The representation must include more than just the self. That is, my CIP avatar becomes aware of the worlds it encounters because it either constructs other CIPs to represent worldly experience or else it can “read” the CIP representations that otherwise would remain subconscious (see earlier diagram). Maybe we should frame the issue this way: how are we consciously aware of our sense of self? This is equivalent to asking how the avatar is aware, at least in some limited sense, that it is an avatar. Perhaps the avatar is self-aware because it was created by brain to represent the self, not to represent the sensory world, but to serve as an interface to it. All sensation that is presented to the avatar is presented in the context of self. Likewise, all choices, decisions, and commands issued by the avatar’s circuitry are made with reference to self.
Fig. 8.5 Summary diagram of the “I avatar” basis for consciousness. Nonconscious and subconscious brain interact directly with the world (1 and 2), while the avatar that brain generates on its behalf (large arrow) interacts with the world indirectly via its intimate integration with brain. 1. Nonconscious and subconscious awareness. Sensory input from the world is abstracted and represented in brain in the form of CIPs. 1a. Conscious awareness. The CIPs of the avatar can access sensory input indirectly via the CIP representations in brain. Those CIPs are perceived consciously because they are registered in circuitry that is generated to reference input in terms of selfhood. 2. Subconscious action. Brain issues commands to act in the world, more or less reflexly. 2a. Conscious action. The avatar responds to input reflectively, considering choices, making decisions, and issuing commands in reference to self
The avatar CIPs, representing the sense of “I,” are accessible to the subconscious mind operations that generated the avatar (Fig. 8.5). That is, the brain knows that it has this avatar and knows what it is doing, even if only subconsciously. The avatar, however, knows consciously because its information is processed within the context of the sense of “I.” Stimuli are not isolated physical events. The representation is the awareness. Awareness is intrinsic to the CIPs. Once sensory CIP representations become integrated with the sense of “I” CIPs, the sensory information moves beyond being detected to being perceived. The avatar also registers sensory input and its own self-generated intent, decisions, plans, and the like in its own terms―all referenced to itself. Thus, it achieves these registrations consciously. CIPs contain the representation of subconscious thought. Thus a first step in enabling conscious awareness of such thought would be for the various circuits to have overlapping elements; that is, certain neurons participate in several or more circuits and in that sense monitor the information contained therein and share it among all shared circuits. This is a physical way by which circuits can be made “aware” of what is going on in other circuits.
Such monitoring need not be continuous. Neurons participating in shared circuitry could be gated, through inhibitory processes, to sample activity in other circuits by a process akin to multiplexing in an analog-to-digital converter that sequentially samples activity in multiple signals. That process would allow the brain to sample across multiple streams of its subconscious thought in parallel pathways, resulting in a combinatorial code. Multiplexing across parallel circuits provides a possible mechanism for the “binding” phenomenon.
Multiplexing may seem counter-intuitive, especially since we think we know that humans multi-task. Young people seem especially able to do multiple things at the same time, such as text messaging on a cell phone, driving a car, listening to a CD, playing a video game, and talking to a friend. But this is an illusion. The brain really does do only one thing at a time. Our brain works hard to fool us into thinking it can do more than one thing at a time. Recent fMRI studies at Vanderbilt show that the brain is not built for good multi-tasking (Dux et al. 2007). When trying to do two things at once, the brain temporarily shuts down one task while trying to do the other. In that study, even doing something as simple as pressing a button when an image was flashed caused a delay in brain operation. The fMRI images showed that a central bottleneck occurred when subjects were trying to do two things at once, such as pressing the appropriate computer key in response to hearing one of eight possible sounds and uttering an appropriate verbal response when seeing images. Activity in the brain that was associated with each task was prioritized, showing up first in one area and then in the other—not in both areas simultaneously. In other words, the brain only worked on one task at a time, postponing the second task and deceiving the subjects into thinking they were working on both tasks simultaneously. The delay between switching functions can be as long as a second. It is highly likely, though not yet studied, that the delays and confusion magnify with increases in the number of different things one tries to do simultaneously.
Could we be consciously aware of our other senses of smell, taste, sight, hearing, etc. without having a sense of self? I think not. The subconscious mind can register these sensations, but not consciously. In the real time during which subconscious mind registers sensations, the consciousness avatar must be perceiving the sensations. How can this be? Is there some shared access to sensation? How is the sharing accomplished? Consider the following example in which the eyes detect a tree (Fig. 8.6). The image is mapped in subconscious mind by a CIP representation. The brain searches its circuits for a template match in memory. When a match occurs, the brain searches further in memory stores for other associations, such as the word “tree” and any emotional associations. The memory CIPs are then accessed by the CIPs of the consciousness avatar, which becomes aware of what the subconscious mind has processed (and does so in its context of self: “I see the tree”). Conscious mind monitors and adjusts as necessary its representation of itself. It also monitors some of the CIP representations of subconscious mind, but presumably has no direct access to the operations of subconscious mind. The representations of self in conscious mind can do other free-will kinds of things, such as reflect on what it knows, plan, decide, and veto.
In other words, conscious mind is a “mind of its own.”
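The gated multiplexing idea sketched a few paragraphs back can be made concrete with a toy example: if a single sampler is passed around several parallel streams, the combined record interleaves them one element at a time, which is also why the apparent “multi-tasking” it supports is sequential rather than truly simultaneous. The three streams and the round-robin gating schedule below are hypothetical illustrations of the scheme, not a model of any real circuit.

```python
# Toy sketch of gated, round-robin sampling ("multiplexing") across
# parallel activity streams; all stream contents here are made up.
from itertools import cycle

def multiplex(streams, n_samples):
    """Interleave samples from parallel streams, visiting one stream per step."""
    iterators = [iter(s) for s in streams]
    gate = cycle(range(len(iterators)))   # which stream the "gate" opens next
    combined = []
    for _ in range(n_samples):
        k = next(gate)
        combined.append((k, next(iterators[k])))
    return combined

streams = [
    [10, 11, 12, 13],   # e.g., activity in a visual circuit
    [20, 21, 22, 23],   # an auditory circuit
    [30, 31, 32, 33],   # a memory circuit
]
print(multiplex(streams, 9))
# [(0, 10), (1, 20), (2, 30), (0, 11), (1, 21), (2, 31), (0, 12), (1, 22), (2, 32)]
```

The interleaved output is a single combinatorial sequence, yet at any one instant only one stream is being read, consistent with the one-thing-at-a-time bottleneck described above.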
[Figure 8.6: a diagram in which an image of a tree is registered as CIPs in Subconscious Mind, matched to the image in memory and to the associated word “tree,” and then mapped into the Conscious Avatar, which can “read” the subconscious representation]
Fig. 8.6 Illustration of how CIPs of a stimulus can be mapped into a memory registry that can be accessed by the proposed conscious avatar
Hypotheses cannot be proved by experiment, but they can be disproved. The theory that all three minds are contained in CIPs seems reasonable to me, and I think research effort would be better spent trying to falsify CIP hypotheses about mind than on more fanciful ideas such as chaos theory, dark energy, QM, Bayesian probability, or others.
Testability of the CIP Avatar Theory
Any scientific theory should have the potential for being tested or shown to be false. But how can one possibly test such a theory, or any other theory, of consciousness? The CIP theory does have virtues not found in the alternatives (Bayesian probability, chaos, QM). First, it is based on what we already know to be the currency of information processing in the brain, at least for the non-conscious and subconscious brain. What is it we need experiments to prove? … certainly not the idea that consciousness acts like an avatar. That is just a metaphor. Metaphors create the illusion of understanding the real thing. Here, I use the term “avatar” in an operational way; i.e., consciousness is an agent of embodied brain. It is real, not just a metaphor. We don’t live in some kind of cyberspace like the movie The Matrix. We are our consciousness.
Methodological limitations provide the major reason why this theory will be difficult to confirm. fMRI, popular though it is, can never address the CIP theory, because the method does not monitor nerve impulse patterns, not to mention the other limitations I reviewed earlier. It might, however, be useful to conduct fMRI studies in which stimulus processing is compared in conscious and subconscious states. At least that would do what fMRI does best: identify “regions of interest.” To monitor impulse activity in distributed circuits could require hundreds, even thousands, of microelectrodes implanted directly into brain. Perhaps an optical method can be developed in which impulse-sensitive dyes can display, in three dimensions, the impulse activity coming from individual neurons. Also, to the extent that coherence may be a key mechanism, we need more robust statistical methods that get beyond pair-wise correlation coefficients to detect coherence among multiple spike trains.
These limitations are not devastating. No other leading theory of mind meets the testability requirement either. In the meanwhile, the CIP theory has virtues not found in the alternatives. First, it is based on what we already know to be the currency of information processing in the brain, at least the non-conscious and subconscious brain. We do not have to invoke metaphors and mathematical models. We do not have to invoke either ghosts or science of the future (such as dark matter or dark energy). I envision several experimental approaches. One can either disrupt CIPs by external means and monitor changes in conscious thought or attempt to compare CIPs when consciousness is present versus when it is not, as, for example, when responses to stimuli are compared in sleep and wakefulness.
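As a concrete starting point for the coherence measurements mentioned above, here is a minimal sketch, in Python, of a pairwise (two-channel) spectral coherence estimate, the very kind of measure the text argues we eventually need to move beyond toward multi-channel methods. The variables `lfp_a`, `lfp_b`, and `fs` are placeholders for recorded signals, and the synthetic demonstration is mine.

```python
# Sketch: magnitude-squared coherence between two field-potential channels,
# averaged within a frequency band. All signals below are synthetic.
import numpy as np
from scipy.signal import coherence

def band_coherence(lfp_a, lfp_b, fs, band=(30, 80)):
    """Mean coherence between two signals within a band (default: gamma)."""
    freqs, cxy = coherence(lfp_a, lfp_b, fs=fs, nperseg=int(fs))  # 1-s windows
    lo, hi = band
    return cxy[(freqs >= lo) & (freqs < hi)].mean()

# Demonstration: a shared 40-Hz component should raise gamma-band coherence.
fs = 1000
t = np.arange(0, 5, 1 / fs)
rng = np.random.default_rng(1)
shared = np.sin(2 * np.pi * 40 * t)
lfp_a = shared + rng.normal(scale=0.5, size=t.size)
lfp_b = shared + rng.normal(scale=0.5, size=t.size)
print("gamma coherence:", band_coherence(lfp_a, lfp_b, fs))
```

Extending this beyond pairs, for example to coherence shared jointly by many circuits at once, is precisely the statistical gap identified above.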
Disrupting CIPs
We already know that consciousness can be abolished or dramatically disrupted by drastic disruption of CIPs, as with anesthesia, heavy drug sedation, or electroconvulsive shock. A more nuanced approach can be achieved with transcranial magnetic stimulation (TMS) (Walsh and Pascual-Leone 2003). Such stimulation indiscriminately affects both excitatory and inhibitory neurons, but it most certainly disrupts whatever CIPs are present at the time of stimulation. The technique is usually applied focally on specific parts of the neocortex. By selectively altering the CIPs of parts of neocortex that have specific conscious functions, such as language comprehension, musical analysis, or certain conscious spatial tasks, one could demonstrate an association between disruption of CIPs in a given area and disruption of the conscious operations usually performed by that area. A related technique, transcranial direct current stimulation (tDCS), passes weak current through scalp electrodes: anodal stimulation increases spike firing rates, while cathodal stimulation decreases them. TMS also changes power and phase of oscillation of brain electrical activity. One study shows that TMS changed the ratio of alpha to gamma activity over the human parietal cortex, while at the same time increasing the accuracy of a cognitive task (Johnson et al. 2009). An effect on conscious choice behavior has recently been reported with tDCS. Anodal stimulation of the dorsolateral prefrontal cortex of the left hemisphere and simultaneous cathodal stimulation of the corresponding area in the right hemisphere
changed the freely chosen strategy for choosing whether a random draw from a deck of cards would be red or black, but the opposite stimulus condition did not (Hecht et al. 2010). Cognitive responses to TMS would confirm only a role for CIPs and not the proposed avatar. But it might be possible, by manipulating TMS pulsing parameters and topography, to dissect implicit from explicit operations and show that implicit processing remains while the explicit avatar function disappears. Perception of ambiguous figures might be a useful cognitive task. TMS, however, has its limits. One should expect, for example, that applying focal TMS over the part of cortex that recognizes specific objects would alter the subject’s cognitive responses. But such studies might yield interesting findings about whether the subject knows errors are being made if corrective feedback is not supplied.
Monitoring CIPs
We already have an indirect indication that states of consciousness are governed by CIPs and their oscillatory properties. The EEG clearly reveals changes associated with different states of consciousness: low-voltage, fast activity during alert wakefulness, alpha rhythms during relaxation or meditation, a progression of large, slow waves in the sequential stages of sleep, and low-voltage, fast activity during dreaming. To detect CIPs most meaningfully, investigators need not only to identify combinatorial patterns of nerve impulses at successive time increments, but also to look for embedded serially ordered impulse interval “bytes” across each neuron in the circuit. For example, if a +++– pattern occurs non-randomly in one neuron during a given cognitive state, there may be temporal linkage to that or some other ordered pattern elsewhere in the circuit.
We can never describe the CIPs of consciousness until we can record what they are. That would require simultaneous recording of impulse patterns from many neurons in identified circuits. Coherence of local field potentials may suffice as an index of those CIPs. Such a process is expedited by knowing in advance which brain areas are necessary for generating a given conscious operation. With language processing, which is best done consciously, we know, for example, where the two major speech centers are and the other areas they interact with. CIPs in this circuitry surely account for the conscious language operations.
The two monitoring approaches need not be mutually exclusive. It is entirely possible that if combinatorial-code CIP changes cause consciousness, they would be expressed as changes in field-potential patterns of topographic and cross-frequency coherence. For both CIP and field potential monitoring, there are several important comparisons that should be made. The most obvious is to do this monitoring as the brain switches among the various stages of sleep. We can, for example, compare normal adults with babies at various stages of their brain’s maturation. Another way to test is to compare activity in behaviorally unresponsive patients (locked-in state and persistent vegetative state) with normal subjects.
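The serially ordered interval “bytes” mentioned above can be searched for quite simply once a spike train is in hand. The sketch below counts occurrences of one short pattern and compares the count against shuffled intervals as a crude non-randomness check. Note the hedges: reading “+” as an interval longer than the one before it and “–” as shorter is only one interpretation of the +++– notation, and all the spike data here are synthetic.

```python
# Sketch: count a serially ordered interspike-interval "byte" in one neuron's
# spike train and compare against an interval-shuffled reference distribution.
import numpy as np

def interval_symbols(spike_times):
    """Encode each interspike interval as '+' (longer than previous) or '-'."""
    isi = np.diff(np.sort(spike_times))
    return "".join("+" if b > a else "-" for a, b in zip(isi[:-1], isi[1:]))

def byte_count(spike_times, byte="+++-"):
    """Overlapping occurrences of the pattern in the symbol sequence."""
    s = interval_symbols(spike_times)
    return sum(s.startswith(byte, i) for i in range(len(s)))

def shuffled_counts(spike_times, byte="+++-", n_shuffles=1000, seed=0):
    """Reference distribution: the same intervals in random order."""
    rng = np.random.default_rng(seed)
    isi = np.diff(np.sort(spike_times))
    return np.array([byte_count(np.concatenate([[0.0], np.cumsum(rng.permutation(isi))]), byte)
                     for _ in range(n_shuffles)])

# Usage with a synthetic, roughly 20-Hz spike train:
rng = np.random.default_rng(2)
spikes = np.cumsum(rng.exponential(0.05, size=500))
observed = byte_count(spikes)
null = shuffled_counts(spikes)
print(observed, null.mean(), (null >= observed).mean())  # count, chance level, crude p-value
```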
To detect combinatorial codes requires simultaneous recording of impulses from many identified neurons and better methods of pattern detection and description. There is also a pressing need for neuroscientists to learn combinatorial mathematical tools. Perhaps new methods to characterize combinatorial coding may be needed. Animal studies provide the best chance to place multiple-electrode arrays into multiple cortical columns and thereby observe associations of certain CIPs with certain cognitive processes. Nanotechnology may lead the way in providing the needed electrode arrays. At a minimum, such electrodes would have to be placed in one cortical column, one or more adjacent columns, and one or more remote columns that are known to have hard-wired connections. The problem of course is the assumptions we have to make about animal consciousness, a problem that diminishes the higher up the phylogenetic scale one goes. Studies in humans, such as in patients with severe epilepsy, might be feasible. Studies of this kind could also shed some light on the role of combinatorial coding for conscious processes.
An alternative to recording of multiple units is to let averaged evoked responses in multiple cortical locations serve as the index of differential CIPs. This approach could allow us to monitor impulse activity at a population level and correlate evoked responses in different cortical areas under conditions when conscious awareness is manipulated, as with sleep or drugs. Changing conscious state (i.e., the avatar) would surely produce topographical changes in evoked response, which in turn can only be caused by changes in CIPs. This would not prove the existence of the avatar, but it would certainly prove a role for CIPs in consciousness.
The second research thrust should focus on the role of oscillation and synchrony in consciousness. This is most conveniently done by field potential monitoring, as in EEG, where the signal approximates an envelope of the underlying spike trains. Learning more about consciousness will require us to cover a wide spectrum of frequencies, ranging from DC to over 200 Hz. Moreover, frequency coherence needs to be studied not only in relation to many different areas of cortex but also as coherence between and among different frequencies in the same general area of cortex. We may need better methods for examining shifting patterns of synchrony among multiple impulse generators. Two recent EEG studies confirm that conscious sensory perception, for example, is governed by the phase of ongoing EEG oscillations (Wyart and Sergent 2009). Both of the recent studies showed that the phase of 5–10 Hz oscillations just before presentation of a near-threshold visual stimulus determined the actual conscious detection.
It could be argued that identifying combinatorial CIP codes or specific field potential coherences is just a more refined way of identifying electrophysiological correlates of consciousness. Showing a causal relationship may be impossible, because you can’t change one without changing the other. But, like the correlation between cigarette smoking and lung cancer, the correlation may become so robust, precisely defined, and even self-evident that assuming causality is warranted. In the case of neural correlates of consciousness, causality seems likely if changing CIPs changes conscious thought and if changing thought changes CIPs.
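Returning to the phase finding cited above (Wyart and Sergent 2009), the basic analysis is straightforward to sketch: band-pass the EEG around 5–10 Hz, take the instantaneous phase at each stimulus onset, and compare the phase distributions for detected and missed trials. The code below is only an illustration of that logic; `eeg`, `fs`, `onsets`, and `detected` are placeholder variables, and the data are synthetic rather than from the cited studies.

```python
# Sketch: pre-stimulus 5-10 Hz phase for detected vs. missed trials.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def phase_at(eeg, fs, onsets, band=(5, 10)):
    """Instantaneous phase of band-limited EEG at each stimulus-onset sample."""
    b, a = butter(4, band, btype="bandpass", fs=fs)
    phase = np.angle(hilbert(filtfilt(b, a, eeg)))
    return phase[np.asarray(onsets)]

def circular_mean(phases):
    """Mean direction of a set of phase angles (radians)."""
    return np.angle(np.mean(np.exp(1j * phases)))

# Synthetic demonstration: random "EEG", random onsets, random detection labels.
fs = 500
rng = np.random.default_rng(3)
eeg = rng.normal(size=60 * fs)                      # one minute of noise
onsets = rng.integers(fs, eeg.size - fs, size=100)  # 100 stimulus onsets
detected = rng.random(100) < 0.5                    # which trials were "seen"
ph = phase_at(eeg, fs, onsets)
print("mean phase, detected:", circular_mean(ph[detected]))
print("mean phase, missed:  ", circular_mean(ph[~detected]))
```

With real data, a systematic separation of the two phase distributions would be the signature of phase-dependent conscious detection.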
For purposes of characterizing how the brain creates self-conscious awareness, it doesn’t matter whether the process is manifest as combinatorial codes in CIPs or the field potential coherence patterns that are driven by CIPs. They both seem to be core mechanisms.
Specific Test Designs
The CIPs and frequency coherences must surely differ between subconscious thinking and conscious thinking. Both processes presumably operate in parallel at roughly the same time. With these basic assumptions, a few questions arise that could influence design of experiments aimed at testing the idea of an “I Avatar.”
1. How can we dissect specific conscious functions and their associated neural activity? For example, we can design experiments that will record from or manipulate specific cortical areas known to mediate specific conscious functions, such as speech centers, somatosensory cortex, premotor neocortex, and mirror-neuron zones. More studies are needed with ambiguous figures. The beauty of evaluating perception of ambiguous figures is that one can compare the same image when it is consciously perceived and when it is not. Evaluating combinatorially coded CIPs from defined circuits in humans may not be feasible (electrodes implanted to detect epileptic foci are not normally placed in the areas of neocortex that would be most useful for study of consciousness). CIPs might be amenable to study in monkeys, assuming some clever artist can design ambiguous figures that have biological meaning to monkeys (such as a drawing that could be interpreted either as an apple or as a pear). Certainly coherence studies such as the one my lab performed can be extended in humans, and focal TMS can be used to see how the percept can be changed.
2. How can we distinguish subconscious and conscious thinking under otherwise comparable conditions? Perhaps this might be accomplished by comparing a classically conditioned response (subconscious) with the same motor activity generated through conscious and voluntary decision. Even Libet-type experiments could be useful if one compares spontaneous, spur-of-the-moment action with pre-planned action.
3. Is the distinction between subconscious and conscious processes attributable to CIPs or to frequency coherences or both? Obviously, the experiment ideally would examine both combinatorial coding of CIPs and frequency coherences of field potentials recorded at the same time and under the same conditions.
4. Are the distinguishing characteristics of subconscious and conscious thinking restricted to the specific cortical area under investigation, or do other more distant brain areas differentially participate, depending on whether the thought is subconscious or conscious?
Widespread interactions seem likely, even with focal cognitive processes such as speech. The design should also include monitoring of other cortical areas that directly connect to the specific conscious processing areas.
5. What kinds of discrete conscious thoughts might be useful? Possible tasks could include word priming (speech centers), willful intent to make certain movements (premotor cortex), or situations where an observer witnesses an action by another that takes place within and without the observer’s personal space (mirror neuron sites). The latter approach can be tested in monkeys, where distinct mirror neurons can be identified, or in humans, where fMRI methods can identify areas that appear to function as a “mirror neuron system” (Iacoboni 1999).
6. Can we know if the CIPs and frequency coherences of subconscious thinking occupy the same circuitry as do those of conscious thinking? Is the neural activity synchronous during both kinds of thinking, or is there a phase lag? It would seem necessary to simultaneously monitor neural activity in several places, such as adjacent cortical columns and columns in the other hemisphere that are directly connected.
7. How can we distinguish between the “noise” of background neural activity of consciousness as a global state of special awareness and the activity associated with specific conscious thought? Experiments must include a conscious null-state in which daydreaming is minimized, and perhaps avoided altogether by including some kind of conscious focus on a single task concurrent with the conscious thought task under investigation. For example, one might require a subject to operate a joystick that tracks a slowly moving target on a computer screen while at the same time performing the conscious task under investigation.
8. What possible neural mechanisms could provide the “I Avatar” circuitry with a “free will agency” capability that is not found in subconscious mind? Experiments should compare a subject’s performance of the conscious task at freely selected intervals, rather than on cue. Alternatively, the subject could function in a cued mode, but freely choose whether to generate or withhold response to the cue. The experiments can be based on electrical recordings of previously discovered CIP or frequency coherence signatures of a specific conscious thought, or on recordings made when subjects attempt the task while the cortical areas are temporarily disabled, as with local anesthetic or focal TMS.
Is neuroscience research headed in these directions? I think so. Attending the annual meeting of the Society for Neuroscience is a great way to sense what is happening in brain research. Before submitting the final draft of Atoms of Mind, I attended again, as I have, as a charter member, for most of the meetings over the 40 years of the Society’s existence. The meeting this year featured over 31,000 scientists, most of whom gave presentations on the kind of research they were doing. The “hot areas” of research are converging around the theme of explaining a material basis for mind. The most dominant approach involves brain scans, particularly using fMRI. I have explained earlier my reservations about the usefulness of brain
scans for explaining how the brain does things, so I won’t belabor the point here. More promising, I think, is the growing interest by researchers in studying time relationships among spike trains and field potentials. The limitations of computational methods have been holding this field back, but I see many innovative developments in computational neuroscience. The tools needed are emerging. It is less clear that scientists recognize the right questions to address with their powerful new mathematical tools. I think we need to apply these tools to compare the timing relationships of neural systems in various cognitive states from several perspectives, using for example:
• Spike trains from different neurons in the same circuit.
• Spike train synchrony with simultaneously occurring field potentials.
• Synchrony of field potentials at a given frequency with field potentials at other frequencies.
• Synchrony of field potentials from a given brain area with those from all other areas at the same time.
Such studies should focus on changes of the various electrical events as cognitive states transition from one to another, as for example being awake, going to sleep, shifts among the various sleep stages, and REM sleep. Other useful comparisons include monitoring the various electrical indicators of thought as the mind shifts among thoughts and tasks, particularly comparing thoughts that are explicitly realized in consciousness and those that are not. I don’t see much of this kind of research yet. But it is coming, and soon.
There is another area of research that scientists are ignoring. Recall my earlier comments about the huge DC shifts in brain activity when rats are sacrificed by decapitation. Well, I never followed that up, nor has anybody else. That result may be telling us something about consciousness that no one to my knowledge has considered. Consider the possibility that one of the things the ARAS does during triggering of consciousness is to push the brain into a global DC or ultra-slow shift that reaches a certain threshold point at which the ongoing population coding of impulses and synchronies of field potentials can operate in consciousness. Below that set point, such processes still go on, but at subconscious levels. Such shifts are likely to come from glial cells, which are known to produce large slow potentials when they are depolarized. Suppose the really important function of the ARAS is to induce a synchronous activation of cortical glial cells. Consciousness may well be mediated as much by the non-neural glial cells as it is by neurons!
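The third bullet in the list above, synchrony of one frequency band with another, can be illustrated with one standard measure: phase-amplitude coupling, in which the amplitude of a fast rhythm is binned by the phase of a slow one. This particular measure is my own choice of illustration rather than a method described in this book, and the signal `lfp` and its sampling rate `fs` are placeholders.

```python
# Sketch: bin gamma-band amplitude by theta-band phase (phase-amplitude
# coupling) for one field-potential channel; all data here are synthetic.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def bandpass(x, fs, lo, hi, order=4):
    b, a = butter(order, (lo, hi), btype="bandpass", fs=fs)
    return filtfilt(b, a, x)

def phase_amplitude_profile(lfp, fs, phase_band=(4, 7), amp_band=(30, 80), n_bins=12):
    """Mean gamma amplitude within each theta-phase bin."""
    phase = np.angle(hilbert(bandpass(lfp, fs, *phase_band)))
    amp = np.abs(hilbert(bandpass(lfp, fs, *amp_band)))
    bins = np.linspace(-np.pi, np.pi, n_bins + 1)
    idx = np.digitize(phase, bins) - 1
    return np.array([amp[idx == k].mean() for k in range(n_bins)])

# A flat profile suggests no coupling; a peaked one suggests gamma amplitude
# prefers a particular theta phase.
fs = 1000
rng = np.random.default_rng(4)
lfp = rng.normal(size=20 * fs)
print(np.round(phase_amplitude_profile(lfp, fs), 3))
```

Computing such profiles across states (waking, sleep stages, anesthesia) is one concrete way to pursue the comparisons proposed in the list above.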
References
Baum, E. B. (2004). What is thought? Cambridge: MIT Press.
Beck, F. (2008). Synaptic quantum tunneling in brain activity. NeuroQuantology, 6(2), 140–151.
Beck, F., & Eccles, J. C. (1992). Quantum aspects of brain activity and the role of consciousness. Proceedings of the National Academy of Sciences, 89, 11357–11361.
Beck, F., & Eccles, J. C. (2003). Quantum processes in the brain. In N. Osaka (Ed.), Neural basis of consciousness (pp. 141–200). Amsterdam: John Benjamins.
Binzegger, T., et al. (2005). Cortical architecture. In M. De Gregorio et al. (Eds.), BVAI 2005, LNCS, Vol. 3704 (pp. 15–28). Berlin/Heidelberg: Springer.
Burkhalter, A. (2008). Many specialists for suppressing cortical excitation. Frontiers in Neuroscience, 2, 155–167.
Buzsáki, G. (2006). Rhythms of the brain. New York: Oxford University Press.
Clark, W. R., & Grunstein, M. (2000). Are we hardwired? The role of genes in human behavior. New York: Oxford University Press.
Conte, E. (2008). Testing quantum consciousness. NeuroQuantology, 6(2), 126–139.
Douglas, R. J., & Martin, K. A. (2004). Neuronal circuits of the neocortex. Annual Review of Neuroscience, 27, 419–451.
Doya, K., et al. (2007). Bayesian brain. Probabilistic approaches to neural coding. Cambridge: MIT Press.
Dux, P. E., et al. (2007). Isolation of a central bottleneck of information processing with time-resolved fMRI. Neuron, 52(6), 1109–1120.
Folger, T. (2009). Is quantum mechanics tried, true, wildly successful, and wrong? Science, 324, 1512–1513.
Freeman, W. J. (2000). How brains make up their minds. New York: Columbia University Press.
Freeman, W. J. (2009). Consciousness, intentionality, and causality. In S. Pockett, W. P. Banks, & S. Gallagher (Eds.), Does consciousness cause behavior? (pp. 73–105). Cambridge: MIT Press.
Gilder, L. (2008). The age of entanglement. New York: A. A. Knopf.
Gisin, N. (2009). Quantum nonlocality: how does nature do it? Science, 326, 1357–1358.
Grafman, J., & Wassermann, E. (1998). Transcranial magnetic stimulation can measure and modulate learning and memory. Neuropsychologia, 37(2), 159–167.
Greene, B. (2004). The fabric of the cosmos. New York: Vintage Books.
Hameroff, S., & Penrose, R. (1996). Orchestrated objective reduction of quantum coherence in brain microtubules: the “Orch OR” model for consciousness. In S. R. Hameroff, A. W. Kaszniak, & A. C. Scott (Eds.), Toward a science of consciousness – The first Tucson discussions and debates (pp. 507–540). Cambridge: MIT Press.
Hecht, D., Walsh, V., & Lavidor, M. (2010). Transcranial direct current stimulation facilitates decision making in a probabilistic guessing task. Journal of Neuroscience, 30(12), 4241–4245.
Izhikevich, E. M. (2007). Dynamical systems in neuroscience. Cambridge: MIT Press.
Janssen, P., & Shadlen, M. N. (2005). A representation of the hazard rate of elapsed time in the macaque area LIP. Nature Neuroscience, 8, 234–241.
Johnson, J. S., Hamidi, M., & Postle, B. R. (2009). It’s not a virtual lesion. Evaluating the effects of rTMS on neural activity and behavior. Program No. 92.6. Neuroscience Meeting Planner. Chicago: Society for Neuroscience [Online].
Klemm, W. R. (1973). Typical electroencephalograms: vertebrates. In P. L. Altman & D. S. Dittmer (Eds.), Biology data book (Vol. II, 2nd ed., pp. 1254–1260). Bethesda: Federation of American Societies for Experimental Biology.
Klemm, W. R. (1993). Are there EEG correlates of animal thinking & feeling? Neuropsychobiology, 26, 151–165.
Klemm, W. R., & Vertes, R. (1990). Brainstem mechanisms of behavior. New York: Plenum.
Koch, C. (2004). The quest for consciousness. A neurobiological approach. Englewood: Roberts & Company.
Krasner, S. (Ed.). (1990). The ubiquity of chaos. Washington: American Association for the Advancement of Science.
Kruglinski, S. (2009). The Discover interview: Roger Penrose. Discover Magazine, September, pp. 54–57.
Latham, P., & Pouget, A. (2007). Computing with population codes. In K. Doya et al. (Eds.), Bayesian brain. Probabilistic approaches to neural coding. Cambridge: MIT Press.
LeDoux, J. (2002). Synaptic self. How our brains become who we are. New York: Viking.
Lorenz, E. N. (1993). The essence of chaos. Seattle: University of Washington Press.
Molnár, G., et al. (2008). Complex events initiated by individual spikes in the human cerebral cortex. PLoS Biology, 6(9), e222. doi: 10.1371/journal.pbio.0060222.
Nunez, P. L. (2010). Brain, mind, and the structure of reality. New York: Oxford University Press.
Platt, M. L., & Glimcher, P. W. (1999). Neural correlates of decision variables in parietal cortex. Nature, 400, 233–238.
Rabinovich, M., Huerta, R., & Laurent, G. (2008). Transient dynamics for neural processing. Science, 321, 48–50.
Ricciardi, L., & Umezawa, H. (1967). Brain and physics of many body problems. Kybernetik, 4, 44–48.
Schultz, W., et al. (2000). Reward processing in primate orbitofrontal cortex and basal ganglia. Cerebral Cortex, 10, 272–284.
Schulz, R., et al. (1995). Amnesia of the epileptic aura. Neurology, 45(2), 231–235.
Somjen, G. G. (2005). Aristides Leão’s discovery of cortical spreading depression. Journal of Neurophysiology, 94, 2–4.
Song, D. (2008). Incompatibility between quantum theory and consciousness. NeuroQuantology, 6, 46–52.
Stapp, H. (2007). Mindful universe: quantum mechanics and the participating observer. Berlin: Springer.
Turin, L. (1996). A spectroscopic mechanism for primary olfactory reception. Chemical Senses, 21, 773–791.
Walsh, V., & Pascual-Leone, A. (2003). Transcranial magnetic stimulation. Cambridge: MIT Press.
Wyart, V., & Sergent, C. (2009). The phase of ongoing EEG oscillations uncovers the fine temporal structure of conscious perception. Journal of Neuroscience, 29(41), 12839–12841.
9 Conclusions
So where does all this leave us? Before answering that question, there is yet one other alternative that I should mention in the interests of completeness. That is the notion of solipsism. Solipsism is a philosophical doctrine that comes in many flavors, but the one most relevant here is the idea that there is no reality other than consciousness. More specifically, the idea is that only consciousness exists and that all else is constructed in consciousness. It is as if the whole universe and our role in it were a gigantic simulation. Even some scientists actually believe this, in spite of the complete absence of evidence. How scientific is that? The other problem with solipsism is that it is the ultimate anti-science. Most advocates of solipsism are just contriving to be intellectually cute.
In considering the old “ghost in the machine” idea, we confronted the enigma of what happens to mind, at least conscious mind, when we sleep or are anesthetized or experience certain kinds of head trauma that produce coma. In those circumstances, conscious mind is clearly gone, though the potential to reactivate it may still be there. Non-conscious mind, however, still can operate in such circumstances. Blood pressure, heart rate, and respiratory control still operate, though perhaps not optimally (as in sleep apnea, for example). Certain neuro-hormone cycles still operate, most obviously the daily rhythmic controls over melatonin from the pineal gland and cortisol from the adrenal gland. Subconscious mind also still operates in sleep, as evidenced by active consolidation of the day’s memories.
But where did conscious mind go during sleep or anesthesia? Most likely, it just ceased to be expressed. It is still possible, however, that conscious mind still tries to operate, even in the deep, non-dreaming stage of sleep. As Mircea Steriade showed, the high-frequency gamma EEG activity normally associated with higher levels of conscious thought is still present, riding on top of the larger, slow, irregular voltage waveforms that preclude any manifestation of consciousness. Some level of “conscious” processes may still be going on, as suggested by a very recent sleep-learning study in
which sensory learning cues presented during the deepest level of non-dream sleep improved the formation of memories that involved those cues during the immediately preceding wakefulness period. In other words, perhaps consciousness is a semantic convenience and not as distinct a biological state as we think.
I started my own quest to understand mind in the hope that I could explain mind on the basis of what scientists have already discovered. After considering leading alternative ideas, I now conclude that CIPs must lie at the heart of what we call “mind,” whether such mind is non-conscious, subconscious, or conscious. The other theories of consciousness seem less defensible. Whatever theory of consciousness is correct, it should accommodate all the basic facts that have been discovered about brain function over the last century. That is why I argue that the correct theory of mind will be based on such fundamentals as neural circuits and networks, and on the primary carrier of thought, which is the patterning of nerve impulses and the attendant effects.
In the past 100 years, researchers have clearly established that the nervous system creates a CIP abstraction of what it detects in the environment. That representation takes the form of nerve impulses, propagating through neural circuits. The abstraction serves to represent what is sensed and is the currency by which the brain evaluates and generates intentions, choices, and decisions. Circuits connect and overlap with each other to create huge networks of CIPs. The anatomical and biochemical changes associated with CIPs allow such CIPs to be re-generated when cued and are the way the brain remembers what it knows.
But what of conscious mind? Thought, whether conscious or not, is a CIP representation of neural processing. If we are explicitly aware of those representations, then we call that conscious thought. Consciousness is a CIP representation, but differs in that what is represented is an abstract sixth sense, an awareness of self and what the self encounters and engages. As with the ordinary five senses, this sixth sense is embodied. But the sixth-sense CIPs represent the five ordinary senses in two contexts simultaneously, the context of the avatar world as well as of the outer world. Thus, this mind is empowered at the outset to know that it knows. Long-term memory stores these representations, and they are released for operation and updating whenever consciousness is triggered. This sense of self may even exist in a less robust way in higher animals.
In objective terms, to know is to register information. The conscious avatar knows information the same way the subconscious mind does; that is, through CIP representations of that information. So, the key question is “What is different about the CIPs of consciousness and those of subconsciousness?” Surely, the CIPs must differ in some way, most likely in spatial and temporal distribution. Conscious mind, I contend, operates like everything else the brain does. We don’t necessarily have to invoke mathematical models or particle physics, and certainly not solipsism or “ghosts in the machine.” Conscious mind constructs CIP representations just as do non-conscious and subconscious minds, with the difference that what is represented is not the outside world or the world of the body, but rather the world of the ego.
Self identity is learned, beginning with the fetus and newborn, and develops as the brain develops capacity to represent itself consciously. In short, you have learned to be you. I have learned to be me. Choices we make and our learning experiences continue the growth and development of our avatars. Many of these intentions, choices, and decisions are freely willed, and in that sense we are personally responsible for what we become as individuals.
Central to my explanation for consciousness in biologically advanced brains is the ability of brain to create the equivalent of an avatar, an agent that acts in the world on behalf of brain. The avatar CIPs, when generated, act as a self-aware interpreter of other representations in the brain. It interprets not just linguistically, but also in such terms as qualia of sensation, reinforcement contingencies, emotions, and probable outcomes of action alternatives. The avatar is also a “free will” partner in brain function that operates in parallel and in conjunction with subconscious mind to make the total brain function more adaptive and powerful than could be achieved with subconscious mind alone. This explanation suggests that the Holy Grail sought in the search for consciousness has been missed because it was so close under our noses that we could not see it.
To restate my position, mind is matter. The ghost in the machine is material. Being constituted as CIPs, the avatar has the innate capacity to exert control, including free will. The avatar CIPs are accessible to subconscious mind operations. The brain knows that it has this avatar and knows what it is doing. But stimuli and assorted thoughts are not isolated physical or CIP representations. The avatar knows consciously because its information is processed within its CIP representation of the sense of “I.” This representation IS the awareness. Conscious awareness is inherent in those CIPs. These CIPs are not only correlates of mind, but are its actual physical basis, including conscious thought and the sense of self. The CIP avatar is not only “in the loop,” as far as conscious action is concerned, it IS the loop. Just as the CIPs of subconscious mind can make certain intentions, choices, and decisions, the avatar’s CIPs can do likewise, except here the actions can be freely chosen or vetoed in the light of self awareness. When consciousness is present, it is the CIP representation of sensations or thought processes that the conscious mind is aware of. The “I Avatar” is not just a virtual embodiment of conscious thought: it physically embodies thought by virtue of its own CIP representations.
As for mechanisms of mind, it seems clear that the “Ghost” has indeed materialized—in the form of nerve impulses flowing around in circuits. Without this impulse flow, there is no active mind, not even non-conscious mind. As for mind operations in consciousness, it also seems clear that the time-chopping effect of high-frequency oscillation and synchronization processes is a necessary mechanism for packaging and routing impulse traffic. These processes may well manifest certain Bayesian and chaotic properties. We cannot exclude a role for QM, which has the advantage that its materialistic phenomena could serve as a “ghost in the machine.” After all, Einstein famously derided quantum entanglement as “spooky action at a distance.”
The “I Avatar” idea is a crucial perspective on our humanness. Our avatar is the ultimate expression of who we are. Our avatar is the physical basis of Freud’s ego
and super-ego. Though our avatar is an inevitable creation of brain function, the nature of the avatar (personality, intellectual competence, and value systems) is created not only by genes but also by learning experiences and free-will capabilities. Each person’s avatar gets to choose many of those experiences through exercise of its own free will. Thus the “we” contained in our avatars is personally responsible for what we have made of ourselves and what we can become.

The CIP theory could be assailed for being too materialistic, especially by people of religious faith. But such an argument would not be a scientific challenge. Moreover, the idea of a materialistic mind is not inherently atheistic. The basic idea of evolution should be agnostic on the possibility of a creator God who set the laws of chemistry, mathematics, and physics in motion as the mechanism of creation and evolution. Likewise, the fact that human minds are materialistic is no reason to reject religious faith. Creation of mind also needs a mechanism. As with evolution, religious beliefs need to accommodate science.

Yet I realize that the CIP hypothesis is not yet sufficient for our understanding of conscious mind. The “Holy Grail” of science is still well hidden. The most telling objection to the CIP avatar idea of consciousness is that it still leaves major unanswered questions. Critics will argue that consciousness is so mysterious that it can never be understood, or else that understanding it requires study of simpler systems. Well, the CIP idea did originate in the nervous systems of simpler animals, such as invertebrates, where we learned about postsynaptic potentials, nerve impulses, and neuronal circuits. But beyond that, there is no evidence that simpler animal systems can help us understand consciousness. Some scholars say we have to look to computer models because they are simpler than the human mind. Simpler may be better, but how could you expect a computer system to have rudimentary consciousness, which would be necessary in order to study how the simple system does it? Indeed, how could you define consciousness for a simple system, or even know whether it is experiencing it, given that consciousness is, by definition, subjective and “in the eye of the beholder”? Even for fellow humans, we can only infer their consciousness. The only consciousness we can actually experience is our own.

As intuitive and irrefutable as I believe CIP explanations of mind are, we scientists should be open to the possibility that at least some dimensions of mind may exist “out there,” in some form separate from brain, or at least in some form that our present scientific understanding requires us to consider as non-materialistic, or spiritual. In the process of their Holy Grail quest, scientists should be a little more humble. We don’t know as much about mind as we think we know. We certainly do not fully comprehend whatever spiritual realities exist. The apparent incompatibility of science and religion might be explained by the limitations of both. We need a twenty-first-century neuroscience. When that arrives, we will need a twenty-first-century religion.

Absolute truth is like a mirage: it tends to disappear when you approach it… Passionately though I may seek certain answers, some will remain, like the mirage, forever beyond my reach.
―Richard Leakey and Roger Lewin
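As promised earlier in this chapter, the following is a minimal, purely illustrative sketch of the two mathematical notions invoked for oscillatory impulse traffic. It is not drawn from any study cited in this book; the numbers are arbitrary and the logistic map is a standard textbook example. The first half shows Bayesian updating, in which a prior belief is revised by evidence; the second shows chaotic sensitivity to initial conditions, in which a fully deterministic rule makes two nearly identical starting states diverge rapidly.

```python
# A purely illustrative sketch of Bayesian updating and chaotic sensitivity;
# the numbers are arbitrary and are not drawn from any study cited in this book.

def bayes_update(prior, p_obs_if_true, p_obs_if_false):
    """Posterior probability of a hypothesis after one observation (Bayes' rule)."""
    evidence = prior * p_obs_if_true + (1 - prior) * p_obs_if_false
    return prior * p_obs_if_true / evidence

# A 0.5 prior that a stimulus is present, and an observation four times more
# likely if it is, yields a posterior of 0.8.
print(round(bayes_update(0.5, 0.8, 0.2), 3))

def logistic_step(x, r=3.9):
    """One step of the logistic map in its chaotic regime (r around 3.9)."""
    return r * x * (1 - x)

# Two trajectories that start one part in a million apart.
a, b = 0.500000, 0.500001
for _ in range(30):
    a, b = logistic_step(a), logistic_step(b)
print(abs(a - b))  # typically of order 0.1 to 1: the tiny difference has exploded
```

The second half is the sense in which deterministic brain dynamics could still be practically unpredictable.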
Index
A Aarts, H., 83 Acetylcholine, 46, 68 Achlioptas, D., 153 Ackerman, J.M., 83 Addiction, 141, 250, 251, 254, 275 Adenosine, 59 Adey, W.R., 138, 148 Adrenal gland, 67–69, 343 Adrian, L., 26, 54, 123–124 Agnew, H.W. Jr., 242 Alle, H., 43 Alpha activity, 119 Alzheimer’s disease (AD), 73, 209, 316, 326 Ambiguous figures, 5, 95, 177, 212–214, 216, 224, 269, 306, 307, 325, 335, 337 Amnesia, 171, 253 Amphibians, 91, 187 Amplitude modulation (AM), 138, 152, 290–293 Amygdala, 24, 44, 53, 72–74, 86, 96, 108, 163, 166, 167, 184 Amzica, F., 133 Analog computing, 142–143 Anesthesia, 15, 56, 60, 61, 86, 89, 133, 147, 248, 249, 334, 343 Animal hypnosis, 65, 147 Anterior commissure, 184 Aphasia, 185 Armadillo, 241 Arousal, ARAS, 56, 60 Aserinsky, E., 240 Atheism, 5 Attention, 15, 44, 46, 57, 65, 72, 74, 93, 94, 102, 146, 152, 153, 157, 159, 162, 195–197, 202, 206–208, 215, 226, 228, 250, 261, 267, 275, 285, 302, 311 Autonomic nervous system (ANS), 52, 58, 66–70, 75, 76 Avatar, 89, 310, 316, 317, 322–338, 344–346
Awareness, 15, 22, 24, 25, 29, 32, 51, 59, 64, 73, 80, 86, 87, 96, 98, 202, 209, 223, 224, 226, 228, 229, 231, 235, 249, 253, 254, 256–259, 261, 264–266, 268, 270, 277, 292, 305, 316, 318, 322, 324, 325, 327, 331, 336–338, 344, 345 B Babies, 37, 186, 240, 245, 247, 326, 335 Baccus, S.A., 166 Bakker, A., 179 Bandura, A., 280 Barco, A., 140 Barry, C., 161 Basal ganglia, 15, 54, 72, 73, 113, 167, 183, 312, 315 Baum, E.B., 284, 323 Baumeister, R.F., 252 Bayesian probability, 3, 167, 283–286, 309, 333 Bayne, T., 256 Bear, M., 14 Beauregard, M., 6 Beck, F., 303, 304 Behavior, 8, 16, 17, 28, 35–37, 40, 44–45, 51–60, 64, 65, 70, 73–75, 77–80, 82–85, 87, 94–96, 107–109, 111, 118, 120, 127, 132, 135, 136, 143, 147, 148, 152, 159, 161–163, 168, 169, 172, 178, 180, 184, 189, 197, 206, 208, 211, 217, 223, 225, 226, 229–231, 233, 241, 250–255, 257, 258, 263, 271–276, 279, 286–288, 290–292, 294–297, 299, 309, 317–319, 323, 334 Behaviorism, 250, 251 Bennett, M.R., 19 Beta activity, 325 Bhattacharjee, Y., 6 Bias, 3, 8, 22, 79–81, 83, 84, 94, 96, 110, 119, 121, 126, 140, 168, 265, 268, 279
Binding (sensory/cognitive), 196, 204, 205, 211–213, 216, 325 Biochemistry, 14, 21, 22, 24, 28, 34, 39, 40, 139–142, 251, 288, 315 Blind, 80, 88, 91, 181, 276 Blindsight, 80 Bonifazi, P., 199 Bouquet effect, 105, 106, 191, 192 Boyle, P.J., 60 Brain scans, 41–44, 79, 86, 108, 179, 257, 338 Brainstem, 2, 17, 22, 30, 34, 46, 51–54, 56, 57, 59–61, 63, 65–67, 70, 73, 74, 76, 82, 85, 109, 113, 114, 119, 126, 132, 147, 150, 159, 160, 167, 182–184, 201, 241, 242, 292, 311–313, 315, 324, 326 Brainstem reticular formation, 56, 57, 59, 66, 74, 119, 147, 167, 244, 246, 249, 324, 330 Brazier, M.A.B., 59 Brenner, N., 191 Broca, P., 128, 150, 177, 182, 185 Buck, L., 192 Bucy, P.C., 74 Buonomano, D., 158 Burkhalter, A., 314 Buzsáki, G., 46, 50, 51, 149, 161, 162, 209, 211, 215, 216, 290 “Byte” processing, 130–132, 149, 321 C Caggiano, V., 230 Cajal, S.R.y., 102–103 Calcium, 25, 26, 121, 141, 199 cAMP response element-binding (CREB) proteins, 108, 140, 141 Canolty, R.T., 215 Cantero, J.L., 206 Capgras syndrome, 229 Cat, 1, 25, 37, 43, 56, 58, 70, 116, 128, 137, 187, 196, 202, 208, 228, 240, 247, 314 Caudate nucleus, 72, 167 Cavanagh, J.F., 206 Cerebellum, 15, 24, 53, 56, 66, 72, 117, 126, 127, 173, 182, 183, 293 Cerebral lateralization, 185–190 Chaos theory, 3, 120, 148, 149, 195, 283, 286–295, 309, 313, 333 Children, 4, 78, 79, 88, 96, 185, 228, 229, 274 Choice, 10, 20, 80, 81, 84–86, 96, 183, 206, 209, 225, 226, 234, 250–253, 255, 256, 260–263, 266, 269, 271–280, 285, 317, 330, 331, 334, 344, 345 Cholinergic, 59, 313
Christ, 92 Christophel, T., 258 Churchland, P.S., 255, 318 Cimenser, A., 202 Circadian rhythm, 150 Circuit (types), 13, 17, 20, 26–32, 38, 39, 45, 46, 71, 73, 93, 98, 101, 104, 110, 112, 117, 118, 120, 121, 124, 125, 132, 143, 145, 149, 151, 168, 173, 190–194, 198–200, 202, 205, 207, 230, 244, 247, 286, 287, 314–317, 320–322, 335, 339 Circuit impulse patterns (CIPs), 15, 28, 31–33, 45, 120, 121, 132, 173, 185, 201, 235, 283, 285, 310–339, 344–346 Cisek, P., 153, 199 Clark, W.R., 275, 287 Cognitive therapy, 86, 229 Coherence, 36, 47, 61, 92, 137, 177, 196, 201–217, 223, 224, 244, 269, 304, 306, 307, 316, 324, 325, 327, 334–338 Colgin, L.L., 41, 215 Coma, 53, 56, 57, 59, 60, 65, 74, 89, 223, 245, 324, 343 Combinatorial coding, 30–32, 165, 166, 169, 191–194, 223, 244, 303, 314, 323, 327, 332, 335–337 Combinatorics, 106, 190–194, 316 Comi, G., 209 Command neurons, 16, 17, 128 Compatibilism, 255 Complex spikes, 126–127 Complex system, 148, 286, 287, 294 Compulsions, 24, 34, 250–251, 254, 275 Computers, comparison with, 26–27 Conscious consciousness, 3, 20, 63, 106, 160, 187, 223, 283, 343 mind, 2, 21, 63, 108, 160, 192, 224, 283, 343 Consilience, 92 Conte, E., 306 Cook, E.P., 126 Corkin, S., 171 Corpus callosum, 184, 186, 187 Correlation coefficients, 42, 190, 334 Cortex, 11, 24, 64, 104, 160, 173, 229, 285 Cortical columns, 113–114, 117, 153, 191, 268, 290, 313, 314, 336, 338 Cowan, N., 236 Cranial nerve, 59, 63, 65, 69, 108, 182, 315 Cross-frequency synchronization, 200, 216 Custers, R., 83 Czisch, M., 237
D Dahlin, E., 185 Dalal, S.S., 238 Danquah, A.N., 265 Dark energy, 3, 6–10, 38, 296, 333, 334 Dark matter, 3, 4, 6–10, 38, 334 Darvas, F., 216 Darwin, C., 7, 250, 271, 310 Davie, J.T., 126 DC potentials, 119, 336, 339 DC shifts, 339 Decapitation, 119, 216, 217, 339 Decision, 4, 10, 15–17, 20, 31, 34, 38, 39, 81, 83–87, 89, 90, 118, 165, 167–169, 206, 209, 223, 225–227, 234, 236, 237, 250–254, 256–274, 276, 277, 280, 284, 285, 304, 315, 319, 323, 330, 331, 337, 344, 345 Deconstruction, 10–14, 33, 34, 204, 233 Dehaene-Lambertz, G., 186 Dement, W., 240 Demiralp, T., 215 Dennett, D.C., 223 Descartes, R., 89, 92 Desmurget, M., 259, 268 Destexhe, A., 137 Diaz, J., 138 Differentiation (math), 142, 143, 165, 207, 286 DiLorenzo, P.M., 132 Disinhibition, 44, 119, 201, 203, 324, 330 Doeller, C., 161 Dog, 1, 21, 25, 36, 37, 51, 58, 179, 180, 228, 230, 239, 240, 247, 250 Dopamine, 57, 76, 78, 97, 167 Dosenbach, N.U.F., 90 Douglas, R.J., 123, 312, 314 Doya, K., 284 Dreaming, 1, 35–37, 60, 65, 66, 86, 92–94, 176, 195, 205, 208, 212, 223, 227, 237–245, 247–250, 335, 338 Dreams, 1, 23, 35–37, 51, 60, 64, 66, 92–94, 135, 147, 176, 188, 205, 206, 208, 211, 227, 238–250, 278, 316, 319, 325 Dux, P.E., 332 E Eagleman, D.M., 15 Eccles, J.C., 115–117, 303, 304 Eckhorn, R., 216 Edden, R.A.E., 199 Edelman, G.M., 231, 232 Einstein, A., 2, 5, 189, 226, 250, 271, 298, 309, 345
Eisenberger, N.I., 169 Electrical synapses, 117 Electroencephalogram (EEG), 28, 36, 40–43, 54, 56, 59, 61, 73, 91, 95, 97, 110, 120, 124, 133, 136, 137, 145–148, 152, 177, 189, 194–198, 203, 205–210, 212–217, 286, 288–291, 294, 296, 299, 306, 308, 313, 323–325, 335, 336, 343 Electrons, 25, 27, 189, 295–300, 303, 307, 308 Electrostatic effect, 138, 147, 148, 208 Emergent property, 14, 15, 24, 148, 187, 190, 283, 322 Emotions, 17, 24, 30, 33, 34, 44, 51, 53, 54, 63, 70, 72–74, 76, 78, 79, 82, 84–89, 93, 96, 97, 109, 166–169, 183, 185, 224, 227, 229, 231, 232, 240, 243, 252, 263, 266, 271, 291, 292, 311, 318, 324, 332, 345 Endocrine system, 70–71 Engel, K., 233 Entanglement (quantum), 3, 230, 300, 301, 304, 307 Entorhinal cortex, 72, 74, 160, 163, 164, 174, 177, 179, 191, 215, 292 Epilepsy, 50, 71, 94, 119, 124, 146, 171, 174, 186, 187, 195, 217, 286, 289, 314, 324, 336 Evolution, 2–4, 7, 9, 10, 34, 37, 53, 113, 228, 277, 280, 286, 291, 310, 311, 322, 323, 346 Excitatory postsynaptic potential (EPSP), 116–120, 140, 151 Exocytosis, 303–304, 307 F Face recognition, 165–166 Farah, M., 80 Feedback, 17, 28, 29, 33, 39, 45, 48–50, 69–71, 96, 108–110, 145, 187, 199, 232, 244, 261, 271, 291, 302, 313–315, 323, 335 Fell, J., 177, 212 Fenton, A.A., 164 Fiete, I.R., 164 “Fight or flight,” 52, 57, 68, 69 Fingelkurts, A., 216 Foffani, G., 132 Forgetting curve, 49 Fornix, 184 Fractal dimension, 195, 294 Freeman, W.J., 20, 148, 288, 290–293, 311
Free will, 2, 21, 83, 95, 165, 199, 202, 206, 209, 225, 250–255, 258–261, 263, 264, 266, 268–273, 275–280, 287, 291, 317, 318, 322, 323, 328, 329, 332, 338, 345, 346 Free will debates, 21, 72, 73, 206, 238, 253–280 Free will, purpose, 275–280 French, J.D., 61 Freud, S., 87, 94, 241, 275, 345 Friedman, R., 79 Fry, E., 300 Functional MRI (fMRI), 42–44, 150, 158, 165, 169, 174, 179, 186, 189, 203, 217, 257, 334, 338. See also Brain scans G GABA, 199, 314 Galdi, S., 80, 81 Gall, 181 Gallup, 228 Gamma activity, 133, 135, 149, 152, 177, 198, 204, 208, 209, 215, 239, 240, 325 Gangestad, S.W., 22 Gao, 78 Gating, 143–144, 149 Gazzaniga, M.S., 92, 95, 255, 273 Gebhart, G.F., 66 Geiger, J.R.P., 43 Gelbard-Sagiv, H., 173 Gerstein, 131 Gestalt, 137, 187 Ghost in the machine, 34, 37, 89, 277, 310, 313, 327, 343, 345 Giambrone, S., 228 Gilder, L., 300 Gisin, N., 300, 301 Glia, 1, 28, 30 Glial cells, 41, 43, 239, 339 Glimcher, P.W., 167 Globus pallidus, 72 God, 2–6, 8–10, 35, 240, 252, 310, 346 Golgi, C., 59, 102, 103 Goodman, M., 91 Grafman, J., 322 Grastyán, A., 51, 53 Greene, J.D., 183 Gregoriou, G.G., 107 Grey, 196 Grid cells, 164, 191 Grill-Spector, K., 165, 261 Groh, J.M., 126
H Habenula, 108–111, 232, 316, 320 Hacker, P.M.S., 19 Haenschel, C., 198 Haggard, P., 257 Hagman, P., 114 Hallucinatory thought, 23 Hameroff, S., 304, 305 Han, J.-H., 108 Hasson, U., 158 Hawking, S., 5 Haynes, J.D., 258 Hecht, D., 335 Hetherington, A.W., 75 Hickok, G., 150 Hinton, G., 285 Hippocampus, 32, 42, 44, 51, 53, 78, 91, 103, 136, 140, 141, 147, 160–164, 171–173, 175–179, 185, 189, 202, 206, 210, 211, 215, 216, 237, 292 Hirabayashi, T., 212 Histology, 102–103 H.M., 171–174, 176 Hobson, J.A., 59 Hodgkin, A.L., 25, 26, 117 Homeostasis, 66, 70, 71, 75 Hotz, R.L., 97 Hubel, 11–14, 20, 173, 178, 233, 315 Hueretz, C.P., 150 Huxley, A., 25, 115, 117, 250, 274, 275 Hypothalamus, 52, 53, 58, 66, 67, 70, 72–75, 77, 78, 113, 150, 157, 163, 167, 183, 284 I “I,” 94, 254, 273, 316–319, 326, 328–332 Ilg, R., 189 Illusion, 3, 5, 60, 87, 95, 117, 214, 223, 253–255, 276, 306, 332, 334 Imai, J., 66 Immobility reflex, 65 Impulse interval, 129, 132, 335 Information, 1, 20, 63, 101, 157, 171, 223, 283 Information processing, 43, 45, 65, 122, 127, 141, 148, 193, 198, 290–292, 334, 335 Inhibition, 31, 44, 48–50, 56, 60, 71, 95, 96, 117, 119, 121, 135, 142, 146, 149, 211, 324 Inhibitory postsynaptic potential (IPSP), 116, 117, 119, 120, 140, 151 Intention, 56, 112, 209, 223, 225, 227, 250, 251, 253, 255–257, 259, 263, 267, 271, 274, 291, 292, 302, 323, 329, 344, 345 Internal capsule, 184
Interpeduncularis nucleus, 110 Interval code, 125, 127, 129 Ions, 27, 29, 119, 122, 296 Ions, ion flow, 27 IQ, 207, 236 Izhikevich, E.M., 32, 238 J Jaeggi, S.M., 236 Janssen, P., 285 Jaynes, J., 35, 36, 235, 254 Jeannerod, M., 281 Jeffries, L.A., 159 Jiang, Z.Y., 209 John, B., 300 John, C.E., 115–117, 303 John, L., 92 John O’Keefe, 160, 161 John, R.E., 14, 178 Johnson, J.S., 82, 192 Joordens, 265 Jutras, 202 K Kalra, K., 216 Kane, R., 277 Kanwisher, N., 165 Karrass, C.L., 87 Kasanetz, F., 251 Kawahara, J.-I., 264 Kelso, J.A.S., 202 King, E.E., 61 Kirschfeld, K., 265 Kjelstrup, K., 163 Klein, S., 264 Kleitman, 240 Klemm, W.R., 34, 52, 63, 65, 86, 88, 109–111, 128, 129, 131, 137, 147, 188, 189, 195, 196, 212, 217, 272, 311 Klimesch, W., 215 Klüver, H., 74 Knee jerk reflex, 28, 51, 63, 79, 81, 231 Kobayashi, A., 191 Koch, C., 120, 313, 325, 327 Korman, J., 176 Krasner, S., 287 L Labeled lines, 104–107, 112, 191 Lakatos, P., 152 Language, 10, 24, 54, 86, 88, 96, 112, 150, 151, 159, 185, 186, 188, 189, 224, 226, 227, 288, 311, 316, 318, 334, 335
Lashley, K., 177 Lateralization, 185–190 Latham, P., 285 Lau, H.C., 167, 257 Laurent, G., 293 Lawrence, C.J., 72, 80 Lazarus, R.S., 85 Learning curve, 48 and memory, 28, 54, 57, 78, 111, 210 LeDoux, J., 98, 318 Lee, B., 279 Lee, D., 269 Lepore, F., 181 Leptin, 75 Lestienne, R., 132 Le Van Quyen, M., 136 Libet, B., 93, 254–258, 260, 261, 263, 265–267, 269, 273, 276 Lieberman, M.D., 169 Limbic system, 52–54, 58, 73–76, 109, 110, 183, 232, 263, 292 Lindsley, D., 57 Lipton, B., 35, 278 Liquid-state electronics, 303 Lisman, J., 216 Llinas, R.R., 207, 208 Lockery, S., 143 Locus coeruleus, 57, 76, 313 Logothetis, N.K., 198 Long-term post-tetanic potentiation (LTPTP), 140 Lopez-Fernandez, M.A., 141 Lorenz, P.M., 132, 287 Luo, H., 150 M MacLean, P.D., 53, 54 Macphail, E.M., 92 Magnetic resonance imaging (MRI), 42, 87, 186, 189, 257, 332 Magnetoencephalography, 205, 208, 216 Magoun, H.W., 54, 56, 57, 61 Makeig, S., 269 Mantini, D., 217 Markov transition, 129 Marshall, L., 14 Martin, K.A., 312, 314 Masquelier, T., 207 Masse, N.Y., 126 Massimini, M., 91 Maurer, A.P., 210 May, H.U., 78, 131
McCarley, R.W., 59, 242 McGowan, K., 79 McNaughton, B.L., 210, 235 Medulla, 65, 67, 69, 244 Mele, A.R., 263 Melloni, L., 209 Melzack, R., 143, 144 Memory consolidation, 36, 37, 175, 202, 211, 215, 225, 236, 237, 241, 248, 250, 316, 321 declarative, 172–174, 176, 235 procedural, 86, 171–173, 176 Merelogical fallacy, 19 Merzenich, M.M., 90, 180 Metabolism, 59, 60, 121 Meta-circuit, 321–323 Microtubules, 300, 302–304 Mikeska, J.A., 119 Miller, G., 87, 183 Minnebusch, D.A., 165 Minzenberg, M.J., 82 Miodinow, L., 5 Mirror neurons, 229, 230, 280, 337, 338 Miyashita, Y., 212 Modularity, 181–184 Moeller, S., 165 Mogenson, G., 185 Molnár, G., 314 Mongillo, G., 121 Montgomery, S.M., 211 Monto, S., 217 Morgan, V.L., 203 Mormann, F., 215 Morrison, S.E., 168 Moruzzi, G., 54, 56, 61 Motor cortex, 17, 39, 56, 199, 204, 216, 237, 257–259, 264, 266, 267 Mountcastle, V.B., 113, 128 Movshon, J.A., 205, 206 Mukamel, R., 43 Multiple-unit activity (MUA), 29, 60, 109, 137, 152, 196, 198, 208 Multi-plexing, 332 Musallam, 268
N Nadasdy, Z., 207 Nakahama, H., 131 Narcolepsy, 248, 249 Near-death experience, 5, 8 Neocortex, 28, 35, 37, 46, 52, 56, 59–61, 71, 73, 74, 110, 113, 148, 150, 151, 172, 176, 247, 311, 312, 314, 334, 337
Nerve impulse, 3, 11, 20–22, 26, 27, 33, 39, 40, 64, 107, 112, 117, 118, 121–123, 212, 242, 292, 308, 310, 315 Networks (neural), 23, 26, 29, 39 Neural cell-adhesion molecules (NCAM), 141 Neuroendocrine, 2, 27, 30, 69–70, 76, 315 Neurohormone, 71, 75, 183 Neuromodulation, 183 Neurotransmitters, 1, 10, 22, 23, 29, 35, 37, 40, 46, 69, 75, 76, 118, 121, 129, 140, 141, 249, 301, 303 Niessing, 1, 10, 22, 23, 37, 40, 46, 48, 56, 68, 69, 75, 76, 82, 97, 118, 121, 123, 139–143, 199, 303, 314 Nijhawan, R., 264, 265 Nocebo, 278 Nonconscious mind, 22, 24, 34 Non-linearity, 47, 49, 50 Norepinephrine, 46, 57, 68, 69, 76, 77, 167 Nunez, P.L., 3, 10, 296, 297
O Obhi, S.S., 257 Ohm’s law, 26 O’Keefe, J., 160, 161, 211 Olds, J., 76–78 O’Leary, D., 6 Olfaction/olfactory, 36, 59, 74, 105, 106, 124, 137, 138, 168, 184, 192, 201, 290, 303, 328 Olten, D., 173 Ono, F., 264 Orbitofrontal cortex, 166, 168 Orekhova, E.V., 210 Orienting response, 52, 53 Osborne, L.C., 191 Oscillation, 7, 14, 15, 20, 31, 32, 41, 47, 61, 74, 78, 109, 110, 119, 125, 126, 133–138, 143–153, 157, 158, 163, 177, 188, 191, 193, 195–210, 213, 215–217, 233, 234, 269, 288–291, 314, 316, 321, 324, 334, 336, 345 Osipova, D., 177
P Pacherie, E., 89 Pain, 46, 57, 61, 64–66, 97, 124, 143, 144, 167, 169, 253, 327 Paleocortex, 71 Palva, J.M., 216 Parallel universe, 3, 5, 7, 9, 10 Parasympathetic, 67–69
Parnia, S., 8 Pascual-Leone, A., 334 Pavlov, 226 Pecka, M., 160 Peña, M., 186 Penrose, R., 304, 305, 309 Perception, 12, 13, 21, 57, 66, 95, 105, 106, 144, 172, 188, 192, 193, 209, 213, 224, 231, 233, 269, 285, 291, 306, 307, 318, 335–337 Personal responsibility, 184, 253, 271–275, 287, 317, 345, 346 Personal space, 230, 338 PET scan, 42, 97, 178 Phase, 14, 15, 26, 92, 110, 120, 125, 126, 133, 135, 136, 138, 149–153, 157, 161–163, 195, 196, 198, 201–203, 205–217, 233, 239, 288, 294, 304, 309, 315, 316, 322, 334, 336, 338 Philosophers, 8, 19, 34, 85, 89, 92, 223, 255, 276, 318 Piriform, 72, 74 Pituitary, 66, 70, 75, 108 Placebo, 6, 96, 97, 278 Place fields, 160–164, 210, 211 Plasticity, 94, 135, 179–181, 210 Platt, M.L., 285 Pockett, S., 205, 238, 255 Poeppel, D., 150 Poincaré, J., 287 Pollmacher, T., 237 Pons, 59, 66, 72, 182, 244 Population spike, 121, 126, 191, 198, 283 Post-stimulus time histogram, 130 Postsynaptic, 23, 30, 41, 43, 67, 69, 115–121, 123, 139, 140, 142, 194 Post-synaptic potentials (PSPs), 43, 116, 117, 120–122, 127, 128, 137, 139, 148, 200, 346 Potassium, 25, 26, 102, 117, 118, 122, 297, 298, 308, 310 Pouget, A., 285 Povinelli, D.J., 228 Precession, 162, 200, 202, 210, 211, 216 Protein kinase, 141 Pupil reflex, 65 Q Qualia, 329, 345 Quantum, 7, 117, 206, 230, 295–310 Quantum mechanics (QM), 2, 3, 5, 8–10, 22, 117, 230, 283, 295–310, 333, 345 Quarks, 7, 297
R Rabinovich, M., 293 Rakic, P., 91 Randic, A., 66 Ranson, S.W., 75 Raphe, 57, 110, 313 Rapid-eye-movement (REM), 36, 66, 135, 202, 206, 239–240, 242–250, 339 Rate code, 125, 126, 131 Ray, S., 144 Readiness response, 46, 52, 57–61, 65, 73, 147, 162, 241 Receptive fields, 11–13, 103–104, 113, 126, 180, 233, 285 Receptor molecules, 1, 10, 29, 75, 118, 303 Reece, M.L., 161, 211 Re-entrant mapping, 232, 233 Reflex, 2, 16, 22, 28, 34, 46, 51, 57, 58, 63–66, 69, 85, 96, 106, 108, 119, 231, 242, 243, 266, 277, 315 Reich, D.S., 191 Reinforcement, 76–78, 84, 166, 167, 210, 251, 284, 345 Releasing factor, 75 Religion/religious, 2–10, 22, 24, 35, 38, 79, 80, 83, 92, 94, 223, 240, 252, 271, 274, 278, 346 Renshaw circuit, 145 Representation, 11–14, 20–22, 30–33, 45, 89, 90, 95, 108, 112, 113, 120, 121, 124, 140, 153, 160–162, 166, 168, 169, 173, 174, 176–178, 181, 184, 196, 197, 209, 224, 225, 229, 230, 232, 234–236, 245, 246, 248, 268, 269, 306, 311, 313, 315–319, 321–323, 326–332, 344, 345 Reptiles, 36, 91, 240, 247 Reward, 76–79, 84, 85, 108, 111, 167–169, 207, 250–252, 263, 268, 269, 285 Ribrary, U., 208 Ricciardi, L., 305 Robertson, E.M., 128, 237 Rodriguez, E., 212 Rogers, B.P., 44 Rolls, E.T., 166 Roopun, A.K., 153 Ropper, A.H., 52 Rossini, P.M., 209 Roth, A., 43 Rudolpher, S.M., 131 S Saalmann, Y., 208 Saenz, M., 181 Salzman, C.D., 168
Samuelson, R.J., 173 Sarrazin, J.-C., 265 Sartre, J.P., 85 Sauseng, P., 215 Schizophrenia, 15, 35, 36, 209, 249, 318 Schneidman, E., 191 Schooler, J., 252 Schrödinger, 2 Schultz, W., 285 Schulz, R., 324 Second messengers, 1, 25, 47, 48, 141, 142 Sejnowski, T.T., 285 Self recognition, 228, 229 Senior, T.J., 135 Sense of identity, 171, 228–231, 326 Sense of self, 24, 25, 98, 171, 205, 224, 225, 228, 230, 231, 242, 243, 248, 311, 313, 316–318, 322, 326–328, 330, 332, 344, 345 Sergent, C., 336 Servo-system, 22, 30, 149, 151, 244, 247 Shadlen, M.N., 205, 285 Sherry, C.J., 128–131 Shrager, Y., 234 Shreeve, J., 91 Simpson, D., 181 Simpson, J.A., 22 Singer, W., 196, 197, 208, 211, 233, 234 Skinner, B.F., 77, 250 Skinner, J.E., 60 Sleep, 1, 24, 34, 36, 37, 46, 52, 54, 57, 59–61, 66, 70, 73, 74, 86, 89, 91–94, 108, 111, 128, 133–135, 137, 147, 157, 176, 177, 201, 202, 204–206, 208, 210, 211, 217, 223, 225, 237–250, 290, 311, 316, 324–326, 334–336, 339, 343, 344 Slow-wave sleep (SWS), 66, 133, 134, 176, 201, 202, 206, 208, 239 Sodium, 25, 26, 117, 118, 122, 297, 298, 308, 310 Solipsism, 311, 343, 344 Solstad, T., 164 Somjen, G.G., 314 Sommers, T., 271 Song, D., 299 Soon, C.S., 258, 269 Sound localization, 159–160 Sperry, R.W., 107, 186–188 Spinal cord, 22, 30, 34, 45, 46, 51, 53, 59, 63–69, 72, 74, 76, 112, 116, 143–145, 183, 312, 315 Spinal reflexes, 22, 34, 51, 64, 65, 106, 119
Spino-bulbo-spinal reflex, 65 Spintronics, 307 Split brain, 94, 97, 186–188 Spreading depression, 314 Stage IV sleep, 241–243, 245–249 Stapp, H., 302 Starzl, T.E., 59 Staude, B., 191 Steady state evoked response, 136, 137, 188 Steriade, M., 59, 133–135, 238 Stillman, T.F., 252 St. Paul, 92 Stress, 57, 73, 74, 79, 87, 108, 111, 113, 121, 240, 273 String theory, 3, 7 Subconscious mind, 2, 10, 15, 24, 25, 30, 32, 34, 54, 61, 70, 72, 74, 79–85, 87, 88, 90, 91, 93–96, 110, 209, 225–227, 233, 237, 238, 250, 253, 254, 266, 268, 270, 271, 276, 277, 310, 317, 318, 322, 326–332, 338, 343–345 Subliminal stimuli, 83, 209 Substantia nigra, 57, 72 Subthalamus, 72, 312 Suzuki, H.H., 143 Swartz, K.B., 228 Sympathetic, 67–69 Synapses, 1, 17, 23, 26–30, 38, 40, 51, 67, 68, 71, 90, 101, 115, 117, 118, 120, 121, 126, 139, 140, 149, 165, 173, 177, 180, 200, 201, 206, 232, 261, 302, 303, 314 Synchrony, 31, 61, 125, 138, 146, 149, 165, 191, 193, 196–209, 211, 212, 214–216, 233, 269, 304–306, 336, 339 Szegedy-Maszk, M., 88 T Takehara-Nishiuchi, K., 235 Tancredi, L., 273, 274 Taylor, C.W., 59 Tchumatchenko, T., 190 Temporal lobe, 74, 151, 165, 171–173, 184, 185, 207 Tero, A., 112 Thalamus, 12, 45, 46, 52, 53, 61, 64, 65, 72–74, 104, 108, 113, 119, 133, 143, 147, 150, 167, 184, 189, 202, 203, 244, 246, 284, 292, 312–314 Thatcher, R.W., 207 Theory of mind, 45, 224, 228, 255, 334, 344 Theta, 41, 59, 73, 133, 135, 136, 147, 149, 151, 152, 161–163, 177, 195, 198, 202, 206, 209–211, 213, 215, 216, 324
Thinking, 1, 7, 9, 16, 19–61, 65, 72, 73, 79, 82–87, 89, 93, 95, 96, 109, 119, 120, 122, 133, 142–152, 157–169, 179, 194, 198, 199, 201, 203, 207, 211, 213, 215, 224–227, 231, 234–240, 243, 244, 254, 257, 270, 274–276, 278, 279, 291, 293, 294, 298, 301, 305, 310, 316, 318, 320, 321, 325, 332, 337, 338 Thinking, thought, 15, 19–23, 25, 28–46, 51–54, 93, 234–236, 238, 245, 251, 254 Thivierge, J.-P., 153, 199 Thought engine, 236, 237, 321, 322 Time processing, 157–159 Timing, time chopping, 14–15, 21, 92, 132, 136, 149–152, 157, 159, 162, 167, 169, 202, 207, 208, 234, 257, 264, 265, 269, 314, 320, 321, 339, 345 TMS, 334, 335, 337, 338 Topographic mapping, 91, 112–113, 180, 232 Touch, 45, 73, 81, 83, 84, 88, 103, 106, 144, 172, 181, 224, 226, 228, 238, 247, 271, 300, 311, 318, 328 Transcranial magnetic, 91 Transcranial magnetic stimulation, 324 Triune brain, 53, 54, 73 Tubulin, 304–305 Tunneling (quantum), 302–303 Turin, L., 303 U Uhlhaas, P.J., 61, 201 Ulrich, R., 264 Ultraslow oscillation, 216–217 Umezawa, H., 305 Uncertainty principle, 298, 299 V Value, 4, 42, 53, 78, 84, 86, 141, 167–169, 212, 225–228, 238, 249, 252, 263, 274, 284, 288, 290, 294, 346 Van der Werf, J., 199 Van Duuren, E., 168 Van Gaal, S., 95 Vertes, R.P., 34, 63, 66, 241, 311 Vinck, M., 200
Visceral control, 22, 67 Visual cortex, 11–13, 33, 43, 80, 104, 112, 113, 136, 137, 146, 152, 166, 173, 178, 180–182, 191, 196, 198–200, 202–204, 206, 233, 238, 268, 316 Visual motion, 166–167, 191 Vohs, K.D., 252 Vul, E., 42 W Wada, Y., 36 Walker, M., 237 Wall, P.D., 143, 144 Walsh, V., 334 Walter, F., 20, 148, 288, 290, 311 Walter, H., 255 Wang, H.-P., 202 Warrington, E., 253, 256 Wassermann, E., 324 Wavelet, 133, 203, 212 Webb, W.B., 242 Wegner, D.M., 255, 256, 263 Weiskrantz, L., 80, 253, 256 Werner-Reiss, U., 126 Wernicke, C., 150, 177, 182, 185, 186 Whitlock, J.R., 140 Whittingstall, K., 198 Wiesel, T., 11–14, 20, 173, 178, 204, 233 Wills, T.J., 161 Wilson, E.O., 92 Wilson, F.A.W., 178 Withdrawal reflex, 64 Womelsdorf, T., 208 Working memory, 120, 121, 131, 173, 175, 178, 185, 198, 210, 216, 225, 226, 234–238, 247, 259, 263, 264, 266, 267, 269, 271, 296, 316, 319, 321, 322 Wyart, V., 336 Y Yingling, C.D., 60 Yoo, S., 176, 237 Z Zombie, 30, 84, 250, 253–256, 273, 274, 322 Zombiism, 250, 255 Zou, Q., 203 Zou, Z.A., 192