Lecture Notes in Artificial Intelligence 5436
Edited by R. Goebel, J. Siekmann, and W. Wahlster
Subseries of Lecture Notes in Computer Science

Bernhard Sendhoff, Edgar Körner, Olaf Sporns, Helge Ritter, Kenji Doya (Eds.)

Creating Brain-Like Intelligence: From Basic Principles to Complex Intelligent Systems
Series Editors
Randy Goebel, University of Alberta, Edmonton, Canada
Jörg Siekmann, University of Saarland, Saarbrücken, Germany
Wolfgang Wahlster, DFKI and University of Saarland, Saarbrücken, Germany

Volume Editors
Bernhard Sendhoff, Edgar Körner
Honda Research Institute Europe GmbH, 63073 Offenbach/Main, Germany
E-mail: {bs,edgar.koerner}@honda-ri.de

Olaf Sporns
Indiana University, Dept. of Psychological and Brain Sciences, Bloomington, IN 47405, USA
E-mail: [email protected]

Helge Ritter
Bielefeld University, Neuroinformatics Group, 33615 Bielefeld, Germany
E-mail: [email protected]

Kenji Doya
Okinawa Institute of Science and Technology, Neural Computation Unit, Okinawa 904-2234, Japan
E-mail: [email protected]

The cover illustration is the work of Dr. Frank Joublin, Honda Research Institute Europe GmbH

Library of Congress Control Number: Applied for
CR Subject Classification (1998): I.2, F.1, I.2.10, I.4, I.5, J.3-4, H.5.2
LNCS Sublibrary: SL 7 – Artificial Intelligence

ISSN 0302-9743
ISBN-10 3-642-00615-9 Springer Berlin Heidelberg New York
ISBN-13 978-3-642-00615-9 Springer Berlin Heidelberg New York

This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, re-use of illustrations, recitation, broadcasting, reproduction on microfilms or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer. Violations are liable to prosecution under the German Copyright Law.

springer.com

© Springer-Verlag Berlin Heidelberg 2009
Printed in Germany on acid-free paper
Typesetting: Camera-ready by author, data conversion by Scientific Publishing Services, Chennai, India
Preface
The International Symposium "Creating Brain-Like Intelligence" was held in February 2007 in Germany. The symposium brought together notable scientists from different backgrounds and with different expertise related to the emerging field of brain-like intelligence.

Our understanding of the principles behind brain-like intelligence is still limited. After all, we have had to acknowledge that, despite tremendous advances in areas like neural networks, computational and artificial intelligence (a field that had just celebrated its 50th anniversary) and fuzzy systems, we are still not able to mimic even the lower-level sensory capabilities of humans or animals. We asked what the biggest obstacles are and how we could gain ground toward a scientific understanding of the autonomy, flexibility, and robustness of intelligent biological systems as they strive to survive. New principles are usually found at the interfaces between existing disciplines, and traditional boundaries between disciplines have to be broken down to see how complex systems become simple and how the puzzle can be assembled.

During the symposium we identified some recurring themes that pervaded many of the talks and discussions. The triad of structure, dynamics and environment, the role of the environment as an active partner in shaping systems, adaptivity on all scales (learning, development, evolution), and the amalgamation of an internal and an external world in brain-like intelligence rate high among them.

Each of us is rooted in a certain community which we have to serve with the results of our research. Looking beyond our own fields and working at the interfaces between established areas of research requires effort and an active process. We believe that the symposium and this volume of contributions from the symposium fueled this active process. We know that for a good endeavor there can never be too much fuel; therefore, this symposium will certainly have its successors, as soon as we feel that the time has again arrived for enriching our own thoughts by listening to someone else's.

We would like to thank all speakers and participants for their active contributions during the symposium. The inspiring discussions during the two days, and particularly in the workgroups, are reflected in the chapters assembled in this volume. All authors who went through the extra effort of committing the thoughts they shared during the symposium to paper, and of merging them with their most recent research results, deserve our particular gratitude. We believe the spectrum of the chapters nicely demonstrates where we currently stand in our quest for brain-like intelligence. It represents the current state of the art in several research fields that are embraced by brain-like intelligence. Furthermore, it aims at connecting those research fields toward the common goal of brain-like intelligence.
Finally, we would like to thank the Honda Research Institute Europe, which organized and sponsored the 2007 symposium.

December 2008
Bernhard Sendhoff
Edgar Körner
Olaf Sporns
Helge Ritter
Kenji Doya
Table of Contents
Creating Brain-Like Intelligence
Bernhard Sendhoff, Edgar Körner, and Olaf Sporns ............ 1

From Complex Networks to Intelligent Systems
Olaf Sporns ............ 15

Stochastic Dynamics in the Brain and Probabilistic Decision-Making
Gustavo Deco and Edmund T. Rolls ............ 31

Formal Tools for the Analysis of Brain-Like Structures and Dynamics
Jürgen Jost ............ 51

Morphological Computation – Connecting Brain, Body, and Environment
Rolf Pfeifer and Gabriel Gómez ............ 66

Trying to Grasp a Sketch of a Brain for Grasping
Helge Ritter, Robert Haschke, and Jochen J. Steil ............ 84

Learning Actions through Imitation and Exploration: Towards Humanoid Robots That Learn from Humans
David B. Grimes and Rajesh P.N. Rao ............ 103

Towards Learning by Interacting
Britta Wrede, Katharina J. Rohlfing, Marc Hanheide, and Gerhard Sagerer ............ 139

Planning and Moving in Dynamic Environments: A Statistical Machine Learning Approach
Sethu Vijayakumar, Marc Toussaint, Giorgios Petkos, and Matthew Howard ............ 151

Towards Cognitive Robotics
Christian Goerick ............ 192

Approaches and Challenges for Cognitive Vision Systems
Julian Eggert and Heiko Wersing ............ 215

Some Requirements for Human-Like Robots: Why the Recent Over-Emphasis on Embodiment Has Held Up Progress
Aaron Sloman ............ 248

Co-evolution of Rewards and Meta-parameters in Embodied Evolution
Stefan Elfwing, Eiji Uchibe, and Kenji Doya ............ 278

Active Vision for Goal-Oriented Humanoid Robot Walking
Mototaka Suzuki, Tommaso Gritti, and Dario Floreano ............ 303

Cognitive Adequacy in Brain-Like Intelligence
Christoph S. Herrmann and Frank W. Ohl ............ 314

Basal Ganglia Models for Autonomous Behavior Learning
Hiroshi Tsujino, Johane Takeuchi, and Osamu Shouno ............ 328

Author Index ............ 351
Creating Brain-Like Intelligence

Bernhard Sendhoff¹, Edgar Körner¹, and Olaf Sporns²

¹ Honda Research Institute Europe GmbH, Carl-Legien-Str. 30, 63073 Offenbach, Germany
  {bs,edgar.koerner}@honda-ri.de
² Department of Psychological and Brain Sciences, Indiana University, Bloomington, Indiana 47405, USA
  [email protected]

Abstract. In this chapter, we discuss the new research field of brain-like intelligence, introduce the contributions to this volume, and relate them to each other.
1 Brain-Like Intelligence
This volume brings together contributions from researchers across a broad range of disciplines who attempt to define promising avenues towards the creation of brain-like intelligence. What is brain-like intelligence? Although it seems necessary to have a good understanding of what one wants to create before one starts, there is no crisp and clear definition. As is often the case¹, we have to be content with first identifying a list of ingredients and with relating brain-like intelligence to other types of "intelligence" that have been put forward in the past.

¹ There is no unambiguous definition of "life".

Over 50 years ago, the field of artificial intelligence was founded during the now famous Dartmouth conference [1]. After an enthusiastic start it quickly became clear that the original goals would be much harder to achieve than anticipated. Since then, artificial intelligence has proceeded in many different directions with a number of research "spin-offs", but without realizing the goal of achieving general-purpose intelligent computing applications. Maybe the most successful applications of artificial intelligence have been in the areas of search engines and of logical reasoning systems, leading eventually to expert systems. Expert systems succeed when a problem is sufficiently well structured and its complexity can be controlled. Some problems have these properties; natural environments, however, do not. This is one of the reasons why classical AI has not been successful in building systems that can autonomously operate in natural environments. Shakey was the first mobile robot that appeared to "reason" about its actions [2]. While it was a technological masterpiece, it was plagued by slow response times, resulting in sluggish interactions with its environment. Even if we acknowledge the tremendous increase of computational capacity during the last 50 years (roughly a factor of between 10⁷ and 10¹¹), there are several factors that still prevent a strictly rule-based approach, for example the richness of the natural environment, its inherent unpredictability and ambiguity, and the hidden combinatorial explosion. Nevertheless, as Sloman argues [3], lessons learned from and within classical artificial intelligence remain relevant to the research program of creating brain-like intelligence, and the reaction against everything related to classical AI may even have held up progress.

Computational intelligence is a research spin-off of AI that currently serves as an umbrella for several disciplines, with three major directions represented by neural networks, evolutionary computation and fuzzy systems. While these directions started off as separate endeavors, they all addressed weaknesses of AI: its lack of flexibility is addressed by fuzzy systems, while its lack of adaptivity and learning is addressed by neural networks and evolutionary computation. These research areas have all been, and still are, successful in many disciplines; however, their contribution to a better understanding of the brain, and hence to achieving brain-like intelligence, has been limited (although recently Zadeh [4] has argued that fuzzy systems are a prerequisite for achieving human-level machine intelligence). Nevertheless, they continue to serve at least as starting points for our quest for brain-like intelligence. In this volume, we find a range of approaches based on neural networks and evolutionary computation; see the chapters by Suzuki et al. [5], by Elfwing et al. [6] and also the one by Sporns [7].

So where does this leave brain-like intelligence? As the name suggests, we would like to achieve intelligence as demonstrated by brains, preferably those of highly evolved creatures. We could call it pragmatic intelligence or, in an evolutionary sense, selected intelligence. This form of intelligence is required for the expression of certain behaviours which are in turn required to guarantee survival and reproduction (constituting the most basic values). This type of intelligence is not an end in itself; it is a means, and it is expensive (in humans the brain consumes 25% of the body's energy at 2% of body weight, and it also requires the involvement of a very large proportion of genetic information, with about 70% of all genes expressed in the brain). At the same time, this type of intelligence must be available under all circumstances, it must be versatile and flexible, and since the environment constantly changes it must also be evolvable. It must allow the completely autonomous control of a body. It adapts and learns and, if necessary – higher up on the phylogenetic ladder – it reasons. In a way, this line of thought also demystifies the brain, much as Charles Jennings [8] did in his review of the book Accidental Mind [9]: "... the evolutionary history of the brain is a series of design compromises, culminating in a jerry-built assemblage of redundant systems that make us who we are today."

Brain-like intelligence is a system property; it is the result of a well-orchestrated interaction between control, body and environment, as demonstrated by the experiments described by Sporns [7]. All three elements of the system grow and develop together; even if the environment in itself does not change, the perception of the environment changes in the course of development. We emphasize the role of the environment because it is the driving force behind it all.
In an abstract sense, the only source of information during evolution and development is the environment, a point also made in the comments by Sloman [3]. In part, this information has been genetically stored, thus influencing development and then contributing to learning and adaptation. However, learning should not only be thought of as confined to task-specific machine learning; like intelligence, it has to be newly interpreted from a systems perspective².

² This "systems perspective" [10] is currently advocated all over the life sciences, e.g. in the emerging area of Systems Biology and in pharmaceutical research, where there is a growing trend to build data-based models of whole organs and even organisms [11] to predict the effect of newly developed drugs.

Brain-like intelligence maintains a representation of the environment, including the system itself. It has to cope with a continuous influx of an immense amount of information that is mostly unspecific and irrelevant for the current system state. Therefore, brain-like intelligence has to relate what it perceives to what it knows. There is abundant evidence that the brain actively creates its own perception, i.e., it predicts and manipulates the sensory input by feedback from its representation of the world. This seems to be achieved by hierarchically and dynamically organizing and controlling the interplay between different processing streams and areas in a highly organized structure or architecture (e.g. columnar structure). More detailed hypotheses of these processes have been put forward, e.g. [12].

Brain-like intelligence cannot be identified with a singular functionality; it is its versatility, its robustness and its plasticity which make it the object of our quest. Superior functionality in a single problem domain can often already be achieved by specialized technological systems (think of a simple calculator or a chess programme); however, no technological system can robustly cope with the plethora of tasks encountered by any animal during its lifetime.

Brain-like intelligence is not the only extension of artificial and computational intelligence that has been proposed in recent years: autonomous mental development [13], cognitive architectures [14] and brain-based devices [15] rate among them. As these designations already suggest, autonomy based on a developmental program to acquire knowledge during interaction with humans lies at the core of Weng et al. [13]. Krichmar and Edelman [15] formulate a list of six basic principles and properties of an intelligent machine, which includes active and adaptive behavior developing from a set of basic innate rules (values). While Krichmar and Edelman place emphasis on the relation to the neural details, Weng et al. do not make this a prerequisite for their approach. Krichmar and Edelman clearly state that the analogy to the natural brain must be taken seriously. Indeed, their last criterion suggests using experimental data from neuroscience to measure performance; we will come back to this point in Section 7.

The review article by Vernon et al. [14] nicely summarizes the trend that the two traditionally opposing camps relying on physical symbol systems and on emergent development systems merge into hybrid systems (e.g. shown in Fig. 1 in [16]). Vernon et al. promote thinking in terms of phylogenetic and ontogenetic development of environmentally embedded systems, as also outlined in Section 6 of this chapter. They relate autonomous mental development, as it
has been put forward by Weng et al., to cognitive architectures. At the same time, development is entangled with the environment, and the right sequence of tasks [17] is important for an iterative refinement of skills. All of these approaches are closely related to our understanding of brain-like intelligence, with most of the differences found in the details of the structural and developmental constraints of the model and in the ways by which phylogenetic and ontogenetic development and learning can and must be integrated.

While rightly focusing on the interaction of systems with their environments (low-level perceptual and high-level social/cultural), we should not forget that the system is also internally driven by different layers of microscopic and macroscopic dynamics. It is generally agreed that brain-like systems will have to continuously consolidate and re-arrange their internal state. However, on a microscopic level there is a second continuous internal dynamics, including cellular signalling and gene regulatory processes, which guarantees continuous operability at the macroscopic level.

The accumulation of ever more detailed biological, cognitive and psychological data cannot substitute for general principles that underlie the emergence of intelligence. It is our belief that we have to pursue more intensively research approaches that aim at a holistic and embedded view of intelligence from many different disciplines and viewpoints. We must augment "locality" by a global control architecture, and functional isolation by environmentally integrated systems that are capable of autonomous (phylogenetic and ontogenetic) development and self-organisation consistent with brain evolution. The goal is to create systems that are vertically complete, spanning all relevant levels, and yet horizontally simplified, i.e. ignoring something in everything without ignoring everything about something. This line of thought is mirrored by the diverse contributions, which we will briefly relate and cluster in the next sections.
2 The Theoretical Brain: Structure, Dynamics and Information
The aim of theoretical neuroscience is to understand the general principles behind the organization and operation of nervous systems. Insights from theoretical neuroscience would therefore be the ideal starting point for creating brain-like intelligence. However, in many ways the foci are different: brain-like intelligence is concerned with nervous systems inside organisms inside environments. This perspective has theoretical as well as practical consequences. It is necessary to study the transition from microscopic to macroscopic levels of structural and temporal organization, and it is necessary to embrace the interaction with the environment in the formulation of the mathematics of brain-like processing. Not surprisingly, extensions of information theory remain the best candidate for approaching the latter problem. There have been several early attempts, e.g. [18,19], and in [7] Sporns describes an intuitive way in which the information exchange between system and environment can be formalized.
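Such a formalization can be made concrete in a few lines. The sketch below is our own minimal illustration, not taken from any chapter in this volume: it estimates the mutual information, in bits, between a binary "environment state" and a noisy binary "sensor reading", the simplest possible instance of quantifying information exchange between a system and its environment. All signal and noise parameters are invented.

```python
import numpy as np

def mutual_information(x, y):
    """Estimate I(X;Y) in bits from two discrete sample sequences."""
    x, y = np.asarray(x), np.asarray(y)
    mi = 0.0
    for xv in np.unique(x):
        for yv in np.unique(y):
            pxy = np.mean((x == xv) & (y == yv))
            px, py = np.mean(x == xv), np.mean(y == yv)
            if pxy > 0:
                mi += pxy * np.log2(pxy / (px * py))
    return mi

rng = np.random.default_rng(0)
env = rng.integers(0, 2, size=10_000)      # hypothetical environment states
flip = rng.random(10_000) < 0.1            # 10% sensor noise (assumed)
sensor = np.where(flip, 1 - env, env)      # sensor mostly copies the environment

print(f"I(env; sensor) = {mutual_information(env, sensor):.3f} bits")
# For a binary channel with 10% flip noise this is about 1 - H(0.1), roughly
# 0.53 of the maximum 1 bit; a sensor decoupled from the environment scores 0.
```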
Information theory is intrinsically connected to entropy and thermodynamics, whose statistical formulation has been one of the most successful approaches towards connecting a microscopic with a macroscopic level of description. In their contribution [20], Deco and Rolls aim at bridging the gap between cognitive psychology and neuroscience by formulating microscopic models at the level of populations of neurons to explain macroscopic behavior, i.e. reaching a decision. Secondly, they argue that statistical fluctuations due to the spiking activity of the network of neurons are responsible for information processing with respect to Weber's psychophysical law. The probabilistic settling into one attractor is therefore related to finite-size noise effects, which would not be observable in a mean-field or rate simulation model. The operation of networks in the brain is inherently noisy, which, for example, facilitates symmetry breaking. Thus, stochastic aspects of neural processing in the brain seem to be important for understanding global brain function, and if this is the case, we can expect that they will be equally important for creating brain-like intelligence.

The role of structure and dynamics for brain-like intelligence is also the focus of the chapter by Jost [21]. This collection of mathematical tools for the analysis of brain-like systems highlights the different sources of rich dynamics (network structure, various update functions, e.g. the logistic map) as well as the importance of temporal synchronization of flexible neuron groups in a network. Many different organizational levels of dynamics make up the overall information processing patterns in brains. Neural dynamics is self-contained as well as actively stimulus-modulated (the brain moves), while being organized on an equally dynamical structure that changes on different time scales during learning, development and evolution. According to Jost, all this must be seen in the light of information theory, which is also a driving force behind the experiments described in Sporns' chapter [7].

Sporns chooses an abstraction level that is closer to the actual biological system. However, the triad of structure, dynamics and environment is also the central focus of Sporns' chapter. The brain is a complex system because it consists of numerous elements that are organized into structural and functional networks, which in turn are embedded in a behaving and adapting organism. Brain anatomy has long attempted to chart the connection patterns of complex nervous systems, but only recently, with the arrival of modern network analysis tools, have we been able to discern principles of organization within structural brain networks. One of the overarching structural motifs points to the existence of segregated communities (modules) of brain regions that are functionally similar within each module and less similar between modules. These modules are interconnected by a variety of hub regions that serve to functionally integrate the overall architecture. This duality of segregation and integration can be assessed with information-theoretic tools and is at the root of brain complexity.
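Before moving on, the role that Deco and Rolls assign to finite-size noise can be caricatured in a toy model. This is our own construction with invented parameters, not their spiking network: two rate-coded populations with self-excitation and mutual inhibition receive identical inputs, so only noise decides which attractor each trial settles into; the noiseless mean-field version would remain stuck at the symmetric fixed point.

```python
import numpy as np

rng = np.random.default_rng(1)

def decide(noise_sd=0.15, steps=2000, dt=0.01):
    """One trial of a two-population winner-take-all race; returns the winner."""
    r = np.array([0.1, 0.1])               # firing rates of the two pools
    w_self, w_inh, inp = 2.0, 2.5, 0.5     # coupling and (equal) external drive
    for _ in range(steps):
        drive = w_self * r - w_inh * r[::-1] + inp
        noise = noise_sd * np.sqrt(dt) * rng.standard_normal(2)
        r += dt * (-r + np.tanh(np.clip(drive, 0.0, None))) + noise
        r = np.clip(r, 0.0, None)
    return int(r[1] > r[0])

wins = [decide() for _ in range(200)]
print("fraction of trials won by pool 1:", np.mean(wins))
# The symmetric state is a saddle: fluctuations along the difference of the
# two rates grow, so each trial ends with one pool high and the other
# silenced, in roughly half of the trials each -- a probabilistic decision.
```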
3 The Embodied Brain
Embodied cognition advocates the view that the mind is shaped (or, to put it even more strongly, defined) by the body. Although the basic idea of embodiment was proposed some time ago (see, for example, the writings of the French philosopher Maurice Merleau-Ponty), it was not prominently pursued in classical artificial intelligence, which instead focused on abstract and deliberately disembodied symbols and rules. Today, Rolf Pfeifer is an ardent advocate of embodied cognition, a view that is reflected in his contribution to this volume [22]. Pfeifer and Gómez outline a number of different cases where complex control problems are fundamentally simplified by appropriate morphology (e.g. sensory morphology and material properties). They argue that information from the environment is structured by the physical characteristics and morphology of both the sensory and the motor systems (information self-structuring), a point also made in the chapter by Sporns. At the same time, Pfeifer and Gómez point towards a trade-off or balance between the maximal exploitation of the dynamics (requiring less sophisticated neural control) and maximal flexibility during operation. In other words, morphological computation is efficient and necessary; however, it is also restrictive with respect to the level of adaptation that can be achieved on a short time scale, i.e., the individual lifetime. It would be interesting to explore how this trade-off is realized by different species in biology.

Embodied cognition is strongly related to how our control of the environment shapes what we experience and how we develop. Possibly the most important means to both control and experience our environment are our hands. Ritter et al. argue in their contribution [23] that manual control and grasping are strongly related to the development of language, which is usually seen as an instantiation of "pure" cognition (i.e. symbolic and independent of sensorimotor processes). The term manual intelligence is used by Ritter to denote a paradigm shift towards an understanding of the environment as a source of contact and interaction that is embraced, and not avoided, by robots. As a result, language has developed as an extension of the ability to physically manipulate objects towards the skill of mentally re-arranging and assembling ideas and descriptions of objects.

Although we will discuss embodiment in the light of evolution in Section 6, the embodied evolution framework put forward in the chapter by Elfwing et al. [6] complements our picture of embodied cognition. The population aspect inherent in any evolutionary interpretation adds a new perspective to embodied cognition, one that is intrinsically linked to the aspect of communication put forward by Ritter et al. and to the notion of social and imitation learning discussed in [24,25,26] and summarized in Section 4.
4 The Social Brain
The brain resides in a body which lives in an environment. However, the environment is more than just a complex assembly of physical objects to explore and manipulate. Especially in more highly evolved species, the physical environment of an individual organism includes conspecifics, which often take the role of social partners. The importance of this type of interaction, often termed social interaction, has long been studied, in particular in animal behaviour [27]. It is known that the development of brain structure is altered by social deprivation.
In developmental robotics, social interaction has been studied, for example, in the context of imitation learning, which is the focus of the chapter by Grimes and Rao [24] and which is also described in the chapter by Vijayakumar et al. [25]. In kinematic imitation learning, the observed behaviour of the teacher has to be mapped into the kinematic space of the observer. Grimes and Rao build a dynamic Bayesian network model of the imitation learning process, including sensorimotor learning, and implement it on the humanoid robot HOAP. The probabilistic structure of their model allows them to deal with the uncertainties inherent in imitation learning and to incorporate prior knowledge in a very natural way. HOAP learns from two sources of information, demonstration and exploration, under two types of probabilistic constraints: matching (observing the teacher's state) and egocentric (constraints of the learner's state).

Imitation, being basically unidirectional from the teacher to the observer, can be seen as a first step towards social interaction. Wrede et al. argue in their contribution [26] that truly bi-directional interaction and communication is the basis of successful infant learning. They describe the necessity of joint attention (a kind of mental focus) for artificial systems that learn in interaction. In this way, the attention of each interaction partner can be directed to a common focus. The top-down process of joint attention can be triggered by bottom-up saliency strategies, and the synchrony of information across different modalities helps to achieve this focus on a subset of the available information. Wrede et al. suggest that interaction strategies derived from verbal and non-verbal interaction, such as turn-taking and feedback, are required in order to build systems that can engage successfully in social interaction.
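A deliberately minimal caricature of the matching and egocentric constraints described above may help make the idea concrete. The toy numbers below are ours, not Grimes and Rao's model: the learner forms a posterior over its own joint angle by multiplying a "matching" likelihood centred on the observed teacher pose with an "egocentric" prior encoding the feasibility of its own body.

```python
import numpy as np

angles = np.linspace(-90, 90, 181)            # candidate joint angles (degrees)

def gaussian(x, mu, sd):
    return np.exp(-0.5 * ((x - mu) / sd) ** 2)

teacher_pose = 70.0                               # observed teacher joint angle
matching = gaussian(angles, teacher_pose, 10.0)   # "match the teacher" likelihood

# Egocentric prior: the learner's own joint is only comfortable near zero
# (a hypothetical stability constraint of the learner's body).
egocentric = gaussian(angles, 0.0, 35.0)

posterior = matching * egocentric
posterior /= posterior.sum()

print("imitated angle:", angles[np.argmax(posterior)])
# The MAP estimate (about 65 degrees) lies between the teacher's pose and the
# learner's feasible range: the demonstration is mapped into the learner's
# kinematic space rather than copied verbatim.
```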
5 The World Inside the Brain – The Brain Inside the World
Systems displaying brain-like intelligence need an appropriate structure or architecture of processing layers. In Section 1, we mentioned the research field of cognitive architectures, which is deeply related to brain-like intelligence. Such architectures allow the system both to operate in the world and to represent the world. It is the amalgamation of these traditionally separate views that has been referred to as hybrid systems and that will be a prerequisite for success. There is a spectrum of approaches, ranging from the more operational to the more representational philosophy.

Vijayakumar et al. suggest in their contribution [25] an adaptive control and planning system for robot motion within a statistical machine learning framework that can be coupled to a number of different information sources. They argue that the probabilistic and statistical level of description allows one to abstract from the detailed underlying neural organization while still representing, processing and fusing information in a functionally similar way. In particular, such system architectures can cope well with missing information and are suitable for statistical learning and inference mechanisms. Vijayakumar et al. compare the identification and learning of random latent (i.e. not directly observable) variables to the development of "internal" representations in cognitive architectures. They discuss and extend classical robot control schemes and apply the proposed probabilistic framework to imitation learning in humanoid robotics.

In the chapter by Goerick [28], an architecture called PISA for an autonomously behaving system that learns and develops in interaction with the environment is outlined. PISA stands for Practical Intelligence Systems Architecture; it consists of a large number of different elements that are described in the chapter. Goerick puts emphasis on the role of internal needs and motivations in PISA and on the approach of incrementally realizing such a complex system architecture embedded in the environment. He proceeds with the description of interactive systems, developed within the proposed framework, that allow motion control, online learning and interaction in different contexts on the humanoid robot ASIMO. Furthermore, he suggests a notation called "Systematica" that is especially designed to describe incremental hierarchical control architectures.

Suggesting an architecture in between the cognitivist and the emergent view, Eggert and Wersing outline an approach toward a cognitive vision system in their chapter [29]. They focus on the conceptual role of control processes in the visual system in keeping the combinatorial complexity of natural visual scenes under control. Questions concerning the high-level representational framework, the low-level sensory processes, the mediating structures of the control, and the optimization criteria under which the control processes operate are discussed in the chapter. Eggert and Wersing proceed to highlight a few visual processing "subsystems" that are relevant for the general architecture; examples are image segmentation, multicue tracking, and object online learning for classification and categorization. The self-referential build-up of a visual knowledge representation is an important element of Eggert and Wersing's chapter. At the same time, they emphasize that visual scene representations are sparse and volatile and therefore only store what is needed and what was accessible under given resource constraints.

We started out this section with a more "brain inside the world" focus, represented by the work of Vijayakumar et al. and Goerick, then proceeded discreetly to a slightly (note that we are being very careful here) more "world inside the brain" view in the cognitive vision system discussed by Eggert and Wersing, arriving now at Sloman's chapter [3], where he argues that embodiment is not the solution to the intelligence problem; it is merely a facet of it, arguably an important one. Sloman moves further along the brain-world axis and puts forward that early AI failed to put sufficient emphasis on the embodiment aspect, but that this does not mean that all of the earlier work is meaningless in our quest for brain-like intelligence. In his chapter, Sloman addresses several aspects of cognition, including the development of children, where it is evident that the explanatory power of primarily sensory-driven systems with a relatively straightforward dynamics connecting the sensor with the motor side is not sufficient. Instead, he makes a case for a multi-level dynamical system where the majority of processing happens
decoupled from the direct environmental I/O. For Sloman, however, this does not mean less emphasis on the environment, but a different one. He suggests studying the features of the environment relevant for animal and robot competences and the different ways in which biological (and, we would add, artificial) evolution has responded, and would respond, to them.

Tsujino et al. [30] outline two models of the basal ganglia for autonomous behavior learning. The system-level model uses a reinforcement learning framework, whereas the neuron-level model employs a spiking neural network. The two different levels of abstraction allow the authors to address different questions related to the function of the basal ganglia. While the issues of reward setting and input selection are the central focus of the system-level model, the spiking neural network is used to investigate, for example, mechanisms of timing.
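The reinforcement learning framework used at the system level can be illustrated generically. The following is a standard tabular, bandit-style value-learning loop under invented task settings, not Tsujino et al.'s actual basal ganglia model: an agent learns which of two actions pays off in each of two contexts.

```python
import numpy as np

rng = np.random.default_rng(2)
n_states, n_actions = 2, 2
Q = np.zeros((n_states, n_actions))           # action values per context
alpha, epsilon = 0.1, 0.1                     # learning rate, exploration rate

def reward(state, action):
    # Hypothetical task: the action must match the context to pay off.
    return 1.0 if action == state else 0.0

for _ in range(5000):
    s = int(rng.integers(n_states))           # context presented by the world
    if rng.random() < epsilon:                # occasionally explore
        a = int(rng.integers(n_actions))
    else:                                     # otherwise exploit current values
        a = int(np.argmax(Q[s]))
    r = reward(s, a)
    Q[s, a] += alpha * (r - Q[s, a])          # reward-prediction-error update

print(np.round(Q, 2))   # rows = contexts; the matching action dominates
```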
6 The Evolved Brain
Brain-like intelligence is selected intelligence; it is therefore inherently placed in an evolutionary context. But what are the consequences? There is an iterative development of functionalities that relates to the phylogenetic development of the architecture, which is genetically represented and environmentally adapted. Although Haeckel's original statement that "ontogeny recapitulates phylogeny" is false in its literal interpretation, it is true that phylogenetically older structures generally occur earlier during ontogenetic development. Since, in a cascade of hierarchically organized processes executed during development, it is easier to implement change at a later stage of the process than at an earlier one, this is a natural result of an evolving system. However, this puts an immense strain on the richness of the architectural and organizational (in the dynamical-systems sense) primitives which evolution could manipulate.

The evolvability of brains heavily constrains their processing principles. Robustness is a consequence: were the processing principles brittle, any evolutionary change would result in system failure. The genetic representation is itself a complex information processing structure. Gene regulatory networks build cascading nonlinear dynamical systems that encode information indirectly. High structural and temporal precision is expensive (in energy) and cannot be achieved globally. The huge complexity of the brain can, in general, only be represented by the genetic apparatus in a coarse way; the result is an inherent requirement for flexibility. If there is no means to specify each neuron location and neural connection precisely, a system has to evolve that is flexible. In a sense, the shortcomings of the evolutionary process are thus, to a certain degree, responsible for the desirable properties of brain-like intelligence.

Evolution is situated design, i.e. system development and system operation are not spatially decoupled as they are in traditional system design. Therefore, the embodiment discussion is meaningless from an evolutionary perspective (Sloman [3] calls it a tautology); only both together constitute an individual which is subject to selection. However, the current discussion of development generally assumes a developing control system inside a fixed (chosen) body; from an evolutionary perspective this makes little sense, and it remains to
be seen to what degree this separation can be upheld. Note that this goes beyond the co-evolution of body and brain that has been demonstrated by Lipson and Pollack [31].

Besides the principal relation between brain and evolution, there is also a more pragmatic one. Evolutionary computation offers a powerful approach to the optimization of complex structures on non-differentiable, noisy and multi-modal quality landscapes. In particular in combination with faster, more local search techniques (reinforcement learning, gradient descent, BFGS), evolutionary algorithms have proven very successful for the adaptation of systems. The field of evolutionary robotics [32] demonstrates this. In their chapter [6], Elfwing et al. successfully integrate both methods of adaptation in their cyber rodent project. They present a framework for embodied evolution consisting of both a simulation environment and a few hardware robots, using an elaborate mating scheme without explicit fitness assignment. The genotypes of the cyber rodents contain information on the neural top-layer controller and on the learning parameters; reinforcement learning is used for lifetime adaptation. The two-layered control architecture selects learning modules depending on behavior, environment and internal energy.

Suzuki et al. co-evolve active vision and feature selection in a neural architecture in their chapter [5]. Active vision is the process of selecting and analyzing parts of a visual scene. Although the degrees of freedom of the neural system are limited (the structure is fixed), the authors nicely demonstrate the selective advantage of active vision in their evolutionary set-up.
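The interplay of evolution and lifetime learning exploited by Elfwing et al. can also be sketched generically. The toy below is ours, not the cyber rodent code: it evolves a single meta-parameter, the learning rate of a bandit-learning agent. Each individual inherits a learning rate, adapts during its "lifetime", and reproduces in proportion to the reward it collected; all task parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(3)

def lifetime_reward(alpha, trials=200):
    """Reward collected over one lifetime of bandit learning with rate alpha."""
    p = np.array([0.2, 0.8])                  # hypothetical arm payoff rates
    q = np.zeros(2)
    total = 0.0
    for _ in range(trials):
        a = int(np.argmax(q)) if rng.random() > 0.1 else int(rng.integers(2))
        r = float(rng.random() < p[a])
        q[a] += alpha * (r - q[a])            # learning with inherited alpha
        total += r
    return total

pop = rng.uniform(0.0, 1.0, size=20)          # initial learning rates
for _ in range(30):
    fitness = np.array([lifetime_reward(a) for a in pop])
    parents = rng.choice(pop, size=pop.size, p=fitness / fitness.sum())
    pop = np.clip(parents + 0.05 * rng.standard_normal(pop.size), 1e-3, 1.0)

print("evolved mean learning rate:", round(float(pop.mean()), 3))
```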
7 The Benchmarked Brain
The objective measurement of success and progress is as vital an element in brain-like intelligence as it is in any other field of science and engineering [33,34]. Although they vary between science and engineering, the different aspects of system verification and validation are well established. In brain-like intelligence, however, we face additional difficulties. Firstly, we are not yet clear whether we should position ourselves more within the scope of science or of technology. In the first case, success has to be judged by neurophysiological or psychological experimental data, as is done in computational neuroscience. In the second case, the target is to provide evidence that the realized system accomplishes its intended requirements; of course, in this case we carry the initial burden of clearly defining what the intended requirements are against which we want to judge our progress. The fact that there is a continuous transition between both extreme standpoints makes the measurement process even more ambiguous.

In [35], Herrmann and Ohl clearly follow the "science path" by suggesting "cognitive adequacy" as a measure. They argue that if it is possible to observe the same or similar behaviour in an artificial system as the real brain demonstrates, then the system is likely to work like the real brain. They proceed to identify a number of anchor points suitable for the comparison between a system and the real thing: reaction times (differences and ratios); error rates (cognitively adequate algorithms should make errors under the same circumstances as humans would); and perception measures (showing illusory or ambiguous percepts similar to those in human perception). This view is similar to the one put forward by Krichmar and Edelman, which we discussed in Section 1.

If we follow the technological standpoint, we have to start by stating our system requirements or by laying out the rules for a competition. There are small-scale (benchmarks for machine learning or image processing) and large-scale competitions (RoboCup, the DARPA Urban Challenge); however, from the perspective of brain-like intelligence they always leave a feeling of dissatisfaction behind. The reason is that functionality can be achieved in many different ways, and a functional approach to judging the level of "brain-like intelligence" inside a system or machine would end up in the endeavour of defining brain-like intelligence. The results are typically lists of various items, and this is where our dissatisfaction comes from: we know that lists are only of temporary validity. As we argued above, we would like to judge the complete system, but against what?

In a recent BBC interview [36], Dharmendra Modha, manager of Cognitive Computing at IBM, proposed a radical solution to the problem: "We are attempting a 180 degree shift in perspective: seeking an algorithm first, problems second." Although at first sight an intriguing and perhaps even plausible statement, it would by definition preclude an objective and unambiguous measurement of progress: this is a dangerous path for technology to choose. At the same time, the path which is currently pursued is hardly preferable, as more and more publications solve more and more specific problems that a particular research group has committed itself to. This often does not allow comparison or objective measurement of quality. It does not even guarantee reproducibility, because the complexity of most systems is too high and the provided detail of implementation information is too low. This problem receives less attention in the scientific community than it should, because it lies at the core of any successful approach towards creating brain-like intelligence. In particular for those who invest in this field of research (research agencies, industry), a solution to this question is vital. In our opinion, the truth will lie somewhere between the scientific and the technological standpoints, where function can be combined with experimental observation to lead the way through the jungle of brain-like system architectures.
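One of the anchor points above suggests a simple quantitative recipe, sketched here with invented numbers: compare a model's per-condition error rates against human data and report the correlation of the two error profiles as one crude indicator of cognitive adequacy.

```python
import numpy as np

conditions = ["easy", "medium", "hard", "ambiguous"]
human_err = np.array([0.02, 0.08, 0.21, 0.35])   # hypothetical human error rates
model_err = np.array([0.01, 0.10, 0.18, 0.40])   # hypothetical model error rates

# A cognitively adequate model should not merely be accurate overall; it
# should fail in the same conditions humans do, so correlate the profiles.
r = np.corrcoef(human_err, model_err)[0, 1]
print(f"error-profile correlation: {r:.2f}")
for c, h, m in zip(conditions, human_err, model_err):
    print(f"{c:>9}: human {h:.2f}   model {m:.2f}")
```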
8 Summary and Conclusion
We are facing a puzzle in which we believe we have identified a couple of important pieces (those are the focus of our research), but the overall picture remains fuzzy; we cannot be sure about the importance or centrality of the pieces, and we have not yet figured out which pieces will connect to one another. To summarize our situation: we aim for a fuzzy target, using mostly relatively brittle approaches, while having difficulty measuring progress. So where is the good news?

The good news is that we are beginning to move in the right direction, even beyond the progress that is driven by the increase in computing power. The systems we build now are more open, more flexible and more adaptive than those of the past. We have understood that we do not build systems to operate in the environment, but with the environment and because of the environment. This collection of papers is evidence of this progress.

It is interesting to note that there seems to be a certain reluctance to bridge the gap from neuroscience to advances and new developments in intelligence research, e.g. brain-like intelligence. Judging by conceptual proximity, the number of chapters in this volume that relate to neuroscience is relatively small (mainly the ones by Deco and Rolls, by Sporns, and one of the models by Tsujino et al.). This observation is not restricted to our effort in brain-like intelligence; indeed, the recent collection of papers [37] from researchers in artificial intelligence dedicated to the 50th anniversary of AI contains a similarly limited number of neuroscience-related contributions. What could be the reason? Of course we can only speculate. On the one hand, neuroscience is (from the viewpoint of brain-like intelligence) too focused on the details of neural processing instead of on large-scale processing principles. On the other hand, research in brain-like intelligence must take care to emancipate itself, to a certain degree, from the grasp of technology.

There are a number of interesting and important questions that we have not addressed in this chapter and which are also not addressed by any of the contributions to this book. High on this list of omissions rates the question concerning the computational substrate for systems exhibiting brain-like intelligence. It seems that over the last five decades, research in intelligent systems has proceeded by incorporating more and more biological principles into the blueprint for our approach towards intelligence. In this quest, can we ignore whether we compute with cells in an organic system or with gates in a silicon system? In the brain, we cannot distinguish between hardware and software. The architecture, structure and algorithms have evolved together, and it is impossible to say where one ends and the other starts. It would be an evolutionary accident if we could extract some principles out of the context of the remaining ones and still expect them to perform well.

The community has learned to scale down its expectations over the last 50 years. So where can we go in the next fifty? We will build machines that support us robustly and autonomously both in the real and the virtual world. Will they challenge us cognitively? We do not think so. However, every reader is invited to speculate about the future after meeting the present on the next 336 pages. Finally, we cannot phrase it better than Turing [38]: "We can only see a short distance ahead, but we can see plenty there that needs to be done."
References

1. McCarthy, J., Minsky, M.L., Rochester, N., Shannon, C.E.: A proposal for the Dartmouth summer research project on artificial intelligence (1955), http://www-formal.stanford.edu/jmc/history/dartmouth/dartmouth.html
2. Nilsson, N.J.: Shakey the robot. Technical Report 323, SRI International AI Center (1984)
3. Sloman, A.: Some requirements for human-like robots: Why the recent over-emphasis on embodiment has held up progress. In: Sendhoff, B., Körner, E., Sporns, O., Ritter, H., Doya, K. (eds.) Creating Brain-Like Intelligence. LNCS (LNAI), vol. 5436, pp. 248–277. Springer, Heidelberg (2009)
4. Zadeh, L.A.: Toward human level machine intelligence – is it achievable? The need for a paradigm shift. IEEE Computational Intelligence Magazine 3(3), 11–22 (2008)
5. Suzuki, M., Gritti, T., Floreano, D.: Active vision for goal-oriented humanoid robot walking. In: Sendhoff, B., Körner, E., Sporns, O., Ritter, H., Doya, K. (eds.) Creating Brain-Like Intelligence. LNCS (LNAI), vol. 5436, pp. 303–313. Springer, Heidelberg (2009)
6. Elfwing, S., Uchibe, E., Doya, K.: Co-evolution of rewards and meta-parameters in embodied evolution. In: Sendhoff, B., Körner, E., Sporns, O., Ritter, H., Doya, K. (eds.) Creating Brain-Like Intelligence. LNCS (LNAI), vol. 5436, pp. 278–302. Springer, Heidelberg (2009)
7. Sporns, O.: From complex networks to intelligent systems. In: Sendhoff, B., Körner, E., Sporns, O., Ritter, H., Doya, K. (eds.) Creating Brain-Like Intelligence. LNCS (LNAI), vol. 5436, pp. 15–30. Springer, Heidelberg (2009)
8. Jennings, C.: The most wonderful organ – not! Nature Neuroscience 10(11), 1339 (2007)
9. Linden, D.J.: The Accidental Mind. Harvard University Press (2007)
10. de Schutter, E.: Why are computational neuroscience and systems biology so separate? PLoS Computational Biology 4(5) (2008)
11. Zheng, Y., Kreuwel, H.T., Young, D., Shoda, L.K.M., Ramanujan, S., Gadkar, K.G., Atkinson, M., Whiting, C.C.: The virtual NOD mouse: Applying predictive biosimulation to type 1 diabetes research. Annals of the New York Academy of Sciences 1103, 45–62 (2007)
12. Körner, E., Matsumoto, G.: Cortical architecture and self-referential control for brain-like computation. IEEE Engineering in Medicine and Biology 21(5), 121–133 (2002)
13. Weng, J., McClelland, J., Pentland, A., Sporns, O., Stockman, I., Sur, M., Thelen, E.: Autonomous mental development by robots and animals. Science 291(5504), 599–600 (2001)
14. Vernon, D., Metta, G., Sandini, G.: A survey of artificial cognitive systems: Implications for the autonomous development of mental capabilities in computational agents. IEEE Transactions on Evolutionary Computation 11(2), 151–180 (2007)
15. Krichmar, J.L., Edelman, G.M.: Brain-based devices for the study of nervous systems and the development of intelligent machines. Artificial Life 11, 63–77 (2005)
16. Duch, W., Oentaryo, R.J., Pasquier, M.: Cognitive architectures: Where do we go from here? In: Wang, P., Goertzel, B., Franklin, S. (eds.) Frontiers in Artificial Intelligence and Applications, pp. 122–136. IOS Press, Amsterdam (2008)
17. Sendhoff, B., Kreutz, M.: A model for the dynamic interaction between evolution and learning. Neural Processing Letters 10(3), 181–193 (1999)
18. De Vree, J.: A note on information, order, stability and adaptability. BioSystems 38, 221–227 (1996)
19. Sendhoff, B., Pötter, C., von Seelen, W.: The role of information in simulated evolution. In: Bar-Yam, Y. (ed.) Unifying Themes in Complex Systems – Proceedings of the International Conference on Complex Systems 1997, pp. 453–470 (2000)
20. Deco, G., Rolls, E.T.: Stochastic dynamics in the brain and probabilistic decision making. In: Sendhoff, B., Körner, E., Sporns, O., Ritter, H., Doya, K. (eds.) Creating Brain-Like Intelligence. LNCS (LNAI), vol. 5436, pp. 31–50. Springer, Heidelberg (2009)
21. Jost, J.: Formal tools for the analysis of brain-like structures and dynamics. In: Sendhoff, B., Körner, E., Sporns, O., Ritter, H., Doya, K. (eds.) Creating Brain-Like Intelligence. LNCS (LNAI), vol. 5436, pp. 51–65. Springer, Heidelberg (2009)
22. Pfeifer, R., Gómez, G.: Morphological computation – connecting brain, body and environment. In: Sendhoff, B., Körner, E., Sporns, O., Ritter, H., Doya, K. (eds.) Creating Brain-Like Intelligence. LNCS (LNAI), vol. 5436, pp. 66–83. Springer, Heidelberg (2009)
23. Ritter, H., Haschke, R., Steil, J.J.: Trying to grasp a sketch of a brain for grasping. In: Sendhoff, B., Körner, E., Sporns, O., Ritter, H., Doya, K. (eds.) Creating Brain-Like Intelligence. LNCS (LNAI), vol. 5436, pp. 84–102. Springer, Heidelberg (2009)
24. Grimes, D.B., Rao, R.P.N.: Learning actions through imitation and exploration: Towards humanoid robots that learn from humans. In: Sendhoff, B., Körner, E., Sporns, O., Ritter, H., Doya, K. (eds.) Creating Brain-Like Intelligence. LNCS (LNAI), vol. 5436, pp. 103–138. Springer, Heidelberg (2009)
25. Vijayakumar, S., Toussaint, M., Petkos, G., Howard, M.: Planning and moving in dynamic environments: A statistical machine learning approach. In: Sendhoff, B., Körner, E., Sporns, O., Ritter, H., Doya, K. (eds.) Creating Brain-Like Intelligence. LNCS (LNAI), vol. 5436, pp. 151–191. Springer, Heidelberg (2009)
26. Wrede, B., Rohlfing, K.J., Hanheide, M., Sagerer, G.: Towards learning by interacting. In: Sendhoff, B., Körner, E., Sporns, O., Ritter, H., Doya, K. (eds.) Creating Brain-Like Intelligence. LNCS (LNAI), vol. 5436, pp. 139–150. Springer, Heidelberg (2009)
27. Harlow, H., Harlow, M.: Social deprivation in monkeys. Scientific American 205(5), 136–146 (1962)
28. Goerick, C.: Towards cognitive robotics. In: Sendhoff, B., Körner, E., Sporns, O., Ritter, H., Doya, K. (eds.) Creating Brain-Like Intelligence. LNCS (LNAI), vol. 5436, pp. 192–214. Springer, Heidelberg (2009)
29. Eggert, J., Wersing, H.: Approaches and challenges for cognitive vision systems. In: Sendhoff, B., Körner, E., Sporns, O., Ritter, H., Doya, K. (eds.) Creating Brain-Like Intelligence. LNCS (LNAI), vol. 5436, pp. 215–247. Springer, Heidelberg (2009)
30. Tsujino, H., Takeuchi, J., Shouno, O.: Basal ganglia models for autonomous behavior learning. In: Sendhoff, B., Körner, E., Sporns, O., Ritter, H., Doya, K. (eds.) Creating Brain-Like Intelligence. LNCS (LNAI), vol. 5436, pp. 328–350. Springer, Heidelberg (2009)
31. Lipson, H., Pollack, J.B.: Automatic design and manufacture of robotic lifeforms. Nature 406, 974–978 (2000)
32. Nolfi, S., Floreano, D.: Evolutionary Robotics: The Biology, Intelligence, and Technology of Self-Organizing Machines. MIT Press, Cambridge (2000)
33. Oreskes, N., Shrader-Frechette, K., Belitz, K.: Verification, validation and confirmation of numerical models in earth science. Science 263(5147), 641–646 (1994)
34. Oberkampf, W.L., Trucano, T.G., Hirsch, C.: Verification, validation and predictive capability in computational engineering and physics. Applied Mechanics Review 57(5), 345–384 (2004)
35. Herrmann, C.S., Ohl, F.W.: Cognitive adequacy in brain-like intelligence. In: Sendhoff, B., Körner, E., Sporns, O., Ritter, H., Doya, K. (eds.) Creating Brain-Like Intelligence. LNCS (LNAI), vol. 5436, pp. 314–327. Springer, Heidelberg (2009)
36. Palmer, J.: IBM plans brain-like computers. BBC News (November 2008)
37. Lungarella, M., Iida, F., Bongard, J.C., Pfeifer, R. (eds.): 50 Years of Artificial Intelligence. LNCS (LNAI), vol. 4850. Springer, Heidelberg (2007)
38. Turing, A.M.: Computing machinery and intelligence. Mind 59, 433–460 (1950)
From Complex Networks to Intelligent Systems

Olaf Sporns

Department of Psychological and Brain Sciences, Indiana University, Bloomington, Indiana 47405, USA
[email protected]

Abstract. Much progress has been made in our understanding of the structure and function of brain networks. Recent evidence indicates that such networks contain specific structural patterns and motifs and that these structural attributes facilitate complex neural dynamics. Such complex dynamics enables brain circuits to effectively integrate information, a fundamental capacity that appears to be associated with a broad range of higher cognitive functions. Complex networks underlying cognition are not confined to the brain, but extend through sensors and effectors to the external world. Viewed in a quantitative framework, the information processing capability of the brain depends in part on the embodied interactions of an autonomous system in an environment. Thus, the conceptual framework of complex networks might provide a basis for the understanding and design of future intelligent systems.
1 Brain Complexity
The human brain has been called "the most complicated material object in the known universe" [1], a statement that most brain scientists would find unobjectionable. But why is this so? And does the brain's indisputable complexity offer clues to the origin of intelligence? The key thesis of this chapter is that complexity is necessary for flexible and robust intelligence to emerge in both natural and artificial systems. Understanding and harnessing complexity is therefore an important first step towards creating brain-like intelligence. In fact, what makes the brain "special" even among other complex systems is that it displays complexity across many levels of organization.

Brain morphology is a challenging subject for any brain, vertebrate or invertebrate. In the case of the human brain, its neural architecture is still only incompletely captured despite the development of ever more powerful tracing and imaging methods. The brain is complex not only in terms of structure but also in terms of its dynamics and embodiment. Brain structure is characterized by a very large number of heterogeneous components that are linked by an intricate network of physical connections (synaptic linkages). The brain's morphology and connectivity generate complex neural dynamics that is thought to underlie cognitive and behavioral function. To make matters even more complex, the brain grows and matures over time, as the organism develops and collects experience. Thus,
brain structure and the dynamics it supports are not fixed, but time-dependent. And finally, in a very fundamental sense, the brain is embodied. It is inextricably linked to the body of an organism that in turn is embedded in a physical and often a social environment. The organism’s ongoing interactions with this environment leave traces in brain and body that shape future inputs and outputs, thus binding brain and environment through perception and action. In this chapter we will briefly examine these three dimensions of brain complexity: brain structure, brain dynamics, and embodiment. We will argue that all three dimensions are essential for flexible and robust intelligence. We will highlight some connections between these distinct facets of the brain’s complexity and we will attempt to identify some potential avenues for utilizing our insights towards the design of intelligent systems.
2 Brain Structure
Even a cursory glance at brain anatomy shows that the anatomical structure of the brain is the product of evolution [2,3]. Even in simple brains, the structural arrangement of cells and connections continues to defy the most sophisticated methods of neuroanatomy and neuroimaging. Very few nervous systems have been comprehensively mapped at the cellular level, and the neural structures of the vast majority of species remain completely uncharted. Across the animal kingdom, neural structures show great diversity [4,5], a reflection of each species’ ecological demands, as well as its body structure, sensory capabilities and motor repertoire. In each case, nervous systems seem closely matched to the “life style” of the species, a fact that is unsurprising in light of the brain’s evolutionary origins. In more highly evolved (i.e. phylogenetically younger) brains, neuroanatomical investigations have revealed intricate networks that link neurons and neuronal populations at different levels of scale, ranging from single cells at the microscopic level, to mesoscopic structures such as columns, and to macroscopic structural units such as nuclei and brain regions [6,7,4,8]. Across individuals these anatomical arrangements exhibit both constancy and variability. While the gross morphology of the brain, as well as its various cell types and basic physiology, appear largely under genetic control, a number of aspects of brain structure exhibit significant variability and remain malleable throughout the life of the organism. Mapping the structure of the brain and, specifically, the pattern of connections between its constituent elements is an important first step to understanding the brain’s functional capacity. As we will discuss later, a close consideration of brain anatomy and brain networks may lead us to identify a main source of functional flexibility and robustness. The human cerebral cortex has been mapped and studied for over a century. Despite rapid progress in functional neuroimaging technologies, the large-scale interregional networks of human cortex have not yet been mapped at a level of detail that would allow us to relate structural features of cortex to patterns of functional activation [9]. While histological and cytoarchitectonic studies
[10] have yielded a parcellation of the human cortical surface into approximately 50 brain regions, there is mounting evidence that suggests that the true number of anatomically distinct cortical areas is much greater, perhaps on the order of a few hundred. How these areas are interconnected is largely unknown, due to the lack of reliable noninvasive fiber tracing techniques and the sheer complexity and intricacy of cortical fiber pathways. Currently the most promising avenue is provided by diffusion imaging techniques, which are beginning to provide human connectivity maps [11,12,13,14]. In the future, the study of human brain networks will provide important insights into how the human cortex operates as an organized system, and how distinct brain regions coordinate their joint activity. For several mammalian species, comprehensive descriptions of anatomical patterns of cortical connectivity have been collated into structural connection matrices (e.g. [15,16]), whose analysis has offered a unique opportunity to characterize the structure of large-scale brain networks. Such analyses often utilize methods from graph theory. Brain networks are graphs (a set of nodes linked by edges) that can be analyzed using graph theoretic methods. Similar methods have also been applied to the characterization of other biological networks [17,18]. When applying these techniques to large-scale cortical connection patterns, cortical networks were found to contain clusters of densely and reciprocally coupled cortical areas [19]. These clusters comprise regions that are functionally related and are interlinked by polymodal association areas that serve as connector hubs [20]. Overall, the architecture combines high clustering with short path lengths, generally considered a hallmark of small-world networks, a type of network often found within the natural, social and technological world (e.g. [21,22,23]). The two main attributes of small-world networks (high clustering and short path lengths) map directly onto known features of cortical organization, cortical segregation and integration [24]. The high degree of local clustering is consistent with the existence of dense local communities and a high level of local specialization or segregation. The capacity to communicate between all constituent vertices along short paths, measured as the characteristic path length, is consistent with global integration across the entire network. Small-world attributes are also associated with other structural characteristics such as near-minimal wiring lengths. Numerous studies (e.g. [25]) have argued that wiring length must be conserved (perhaps minimized) in development and evolution, as only a limited amount of brain volume is available. Others have suggested that the actual “wiring diagram” found in present-day mammalian brains tends to preserve a number of long-range projections despite the cost they impose on wiring volume. These long-range projections may have been preserved since they help to minimize the number of processing steps (i.e. the path length) between spatially distant brain regions [26]. These findings point to at least one major principle of how large-scale brain networks are structurally organized. Their connectivity appears to combine the existence of local communities (clusters, modules) with the capacity to integrate their operation across the entire network. In fact, this structural organization
is commensurate with the idea that cortical function requires both segregation and integration. Segregation is linked to specialization, as important correlates of specific functional brain states are found in localized changes of neuronal activity within specialized populations. However, segregated and specialized brain regions and neuronal populations must interact to generate function. Coherent perceptual and cognitive states require the coordinated activation, i.e. the functional integration, of very large numbers of neurons within the distributed system of the cerebral cortex [27,28,29,30,31]. The need simultaneously to achieve segregation and integration poses a major challenge for neural information processing, and has consequences for brain dynamics.
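As a concrete illustration of the two small-world diagnostics discussed above, the following sketch (my own illustration, not from this chapter; the graph sizes and rewiring probability are arbitrary choices) compares the average clustering coefficient C and the characteristic path length L of a Watts-Strogatz small-world graph [21] against a fully rewired, degree-matched control, assuming the networkx library is available:

```python
import networkx as nx

n, k = 100, 6                                                  # nodes, neighbours per node
sw = nx.connected_watts_strogatz_graph(n, k, p=0.1, seed=1)    # small-world graph
rnd = nx.connected_watts_strogatz_graph(n, k, p=1.0, seed=1)   # fully rewired control

for name, g in [("small-world", sw), ("random", rnd)]:
    c = nx.average_clustering(g)                 # local segregation
    l = nx.average_shortest_path_length(g)       # global integration
    print(f"{name:12s} C = {c:.3f}  L = {l:.2f}")
# Expected pattern: the small-world graph retains high clustering (C)
# while its path length (L) stays close to that of the random control.
```

High C combined with near-random L is the same signature reported above for cortical connection matrices [19,24].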
3 Brain Dynamics
The anatomical connectivity of the brain gives rise to functional connectivity patterns, most generally defined as deviations from statistical independence (often measured as correlations, or mutual information) between distinct neural sites [32,33]. The brain’s structural connectivity is the substrate that shapes dynamic interactions as the brain becomes active, either in the course of spontaneous activity, or in response to external perturbations which may correspond to stimuli or tasks (Figure 1). There is a growing body of evidence indicating that dynamic interactions (functional connectivity patterns) are critical for the emergence of coherent cognitive states and organized behavior. Early experimental work in visual cortex revealed the existence of synchronous coupling both within and between brain regions [34], and led to numerous theoretical proposals for how such synchronous activity might underlie perceptual phenomena such as grouping, binding and object segregation [35,36]. Synchrony is one way in which a functional connection may manifest itself, and synchronous or coherent patterns of activity have since been demonstrated across a broad range of cognitive functions, including sensorimotor coordination [37], attention [38], working memory [39], and awareness [40]. Much of this work is carried out by simultaneously recording from multiple sites in the brain, while an animal or human subject is performing a task. Crucially, it has been shown in a number of experiments that synchrony or coherence is correlated with measures of behavioral performance, or that the disruption of synchrony degrades performance on a specific perceptual or cognitive task. Noninvasive methods for observing large-scale brain activity such as EEG, MEG and fMRI allow the collection of data sets that cover large portions of the human brain as subjects are engaged in behavior. Functional connectivity analyses of such data sets produce patterns of cross-correlation, synchrony, or coherence. Such patterns can be viewed as undirected weighted graphs whose edges represent the strengths of the statistical relationships between the linked vertices. Studies of patterns of functional connectivity (based on coherence or correlation) among cortical regions have demonstrated that functional brain networks exhibit small-world [41,42,43,44] and scale-free properties [45]. Functional connectivity patterns change as task conditions are varied [44], and they show
Fig. 1. Structural and functional connectivity in the brain. (A) The image of the brain represents the surface of the macaque monkey cerebral cortex and the boxes mark the positions of four specialized cortical regions: V1 = primary visual cortex, 7a = visual cortex in the parietal lobe, IT = visual cortex in the inferior temporal lobe, and 46 = an area of prefrontal cortex. In the panel showing “structural connectivity” these areas are shown to be linked by fiber pathways. In the panel showing “functional connectivity” these areas are dynamically coupled, indicated by undirected statistical relationships of varying strengths (bidirectional arrows). (B) Functional connectivity is dynamic and time-dependent. Even in the absence of external inputs, functional brain networks undergo fluctuations due to spontaneous coupling and uncoupling of brain regions, here represented by bidirectional arrows of different widths. This process occurs on multiple time scales, with a lower bound at around 100 milliseconds.
characteristic alterations in the course of brain dysfunction and disease, for example in Alzheimer’s dementia [46]. Characteristically, functional connectivity patterns measured for any given task or stimulus include a large portion of the brain and a wide range of brain regions, often going beyond those regions that are thought to be central for the task. This has led to the idea that the functions of brain regions are partly defined by “neural context” [47,48]. Functional connectivity is of interest not only in association with behavioral or cognitive tasks. Numerous studies of functional connectivity in the human brain have revealed that the brain is continually active even “at rest”, i.e. when a human subject is awake and alert, but does not engage in any specific cognitive or behavioral task [49,50]. These observations raise the possibility that this “cortical resting state” may be much more than a “noisy background” that can and should be subtracted away. Instead, the cortical resting state may contribute to cognitive processing. If this view were correct, it would be insufficient to characterize cognition as the transformation of input into output representations. Rather, mental function emerges from the interaction of exogenous perturbations (stimuli, motor activity) and endogenous spontaneous dynamics.
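In practice, a functional connectivity pattern of the kind described above can be estimated directly from recorded time series. The sketch below is illustrative only (the surrogate signals, region count and threshold are my own choices, not taken from any study cited here); it computes a correlation-based functional connectivity matrix and reads off the suprathreshold links of the resulting undirected weighted graph:

```python
import numpy as np

rng = np.random.default_rng(0)
n_regions, n_samples = 8, 1000
common = rng.standard_normal(n_samples)             # shared endogenous signal
ts = 0.5 * common + rng.standard_normal((n_regions, n_samples))  # regional time series

fc = np.corrcoef(ts)                                # functional connectivity matrix
np.fill_diagonal(fc, 0.0)                           # ignore self-connections
links = np.argwhere(np.triu(np.abs(fc) > 0.15))     # suprathreshold edges (upper triangle)
print(f"{len(links)} suprathreshold edges, mean |r| = {np.abs(fc).mean():.3f}")
```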
The structural anatomy of the brain may play a unique role in shaping spontaneous dynamics, and individual differences in structural connections due to genetic factors and developmental history may contribute to individual differences in cognitive and behavioral performance. Recent empirical evidence suggests that patterns of structural and functional connectivity in the human brain are highly interrelated [12] (Figure 2).
Fig. 2. Structural and functional connectivity of the human brain. (A) Diagrams of the sparse structural network of fiber pathways in the cerebral cortex (left) and of the fMRI correlation pattern recorded during resting state activity (right). The correlation pattern was derived by computing correlations over the entire cortical surface relative to a seed region placed in the posterior cingulate/precuneus (light gray = positively correlated regions; dark gray = negatively correlated regions). Both diagrams represent data obtained from the same human participant. (B) Structural and functional connectivity in matrix format (for the same data sets as shown in panel A) for 66 anatomical subregions of the cerebral cortex. (C) Scatter plot of the strengths of structural and functional connections shown in panel B. Note the presence of a significant correlation between connection strengths (r2 = 0.62, p < 0.001). Images and data sets are modified from [12].
Systems-level modeling of the cortical resting state suggests that structural connections can shape functional connectivity patterns at multiple time scales [51], and that variations in structural connections lead to changes in functional connectivity patterns. Furthermore, defects in the structural connection pattern, induced by lesions or brain disease, are increasingly linked to disturbances
in functional connectivity, i.e. a disruption of dynamic interactions between brain regions. As defined above, functional connectivity is measured as statistical dependencies between variables. Statistical dependencies may be expressed as mutual information, information that is shared between sets of system components that may be widely distributed. Statistical information theory can provide a formal framework for network dynamics. Of particular interest are statistical measures that would allow us to measure the extent to which a system is both functionally segregated (i.e. small subsets of the system tend to behave independently) and functionally integrated (i.e. large subsets tend to behave coherently). A statistical measure that can assess the balance within a given system between segregation and integration was originally proposed by [27] (Figure 3). The measure works by examining the spectrum of statistical dependencies across all scales of the system, from individual units all the way to large subsets. If many of the system’s subsets, at many scales, share mutual information, then the system contains a high amount of structured interactions and in that sense may strike a balance between local and global organization. Systems that contain a high amount of structured interactions across multiple scales may be considered complex. In fact, the measure proposed by [27] represents an information-theoretic measure of complexity that takes into account multi-scale network interactions. While there are numerous other measures of complexity [52], a case could be made that the hierarchical nature of this measure, spanning all levels of scale within the system, is inherently well suited for a system such as the brain, which is characterized by modularity at several different levels, ranging from single neurons to brain regions. Notably, small-world architectures of the kind discussed in the previous section are well suited to generate neuronal dynamics of high complexity. Large-scale connection matrices of cortical networks of the macaque and the cat have been shown to generate dynamics with high complexity [53], and networks that have been optimized for high complexity show structural attributes that match those of small-world networks [53,54]. The information-theoretic approach to complexity can be extended to cases where neural systems are coupled to their environment, either responding to inputs or generating outputs. Input patterns relayed to a nervous system will induce changes (perturbations) in the pattern of functional connectivity within the network. Some of the statistical relationships between neural elements will likely be increased, while others might be decreased, depending upon the nature of the stimulus. If a stimulus “integrates well” with the endogenous dynamics of the nervous system, its presence will promote and enhance functional connectivity, resulting in an increase in complexity. If a stimulus results in a perturbation that attenuates functional connectivity, its presence will result in a decrease in complexity. In this sense, some stimuli match the pattern of endogenous dynamics, while others do not. This matching can be quantified as a differential change in the complexity of functional connectivity [55,28]. Highly matching stimuli trigger differentiated patterns of functional connectivity that combine local segregation
Fig. 3. Complexity as defined by [27] and its dependency on the balance between segregation and integration of the functional elements of a system. (A) In systems that are highly segregated there are no dynamic couplings or, more generally, statistical relationships between their constituent elements. Such systems exhibit behavior where each of the elements displays its individual dynamics, but there are no higher-order structures. These systems have low complexity because their internal mutual information tends to be very low, since no part of the system has any information about the present or future state of any other part. (B) A system that is in lockstep, i.e. totally integrated and therefore completely regular in its dynamical evolution, will display some mutual information, but this information will be identical across all partitions. All parts of the system behave as if they are copies of each other, and their complexity is low. Finally, (C) a system that combines some degree of local segregation with a degree of global integration will display a mixture of segregation and integration. In this type of system, mutual information will be high across many partitions and the content will be rich and differentiated. Many parts of the system influence many other parts in many different ways.
and global integration. Matching complexity has interesting applications in the context of the cortical resting state, which we discussed earlier. The performance of goal-directed tasks is accompanied by differential changes in functional connectivity across the entire brain, which may reflect the integration of an exogenous task-related perturbation with an existing set of endogenously rehearsed functional relations.
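To make the measure of [27] concrete, here is a minimal sketch (my own illustration; the function names and covariance values are mine, and it relies on a Gaussian assumption so that subset entropies reduce to log-determinants of covariance submatrices). One standard form of the measure is C(X) = sum over subset sizes k of [average subset entropy minus (k/n) times the total entropy], which the code evaluates by exact enumeration, feasible only for small n:

```python
import itertools
import numpy as np

def gauss_entropy(cov):
    """Differential entropy of a zero-mean Gaussian with covariance cov."""
    k = cov.shape[0]
    return 0.5 * (k * np.log(2 * np.pi * np.e) + np.log(np.linalg.det(cov)))

def tse_complexity(cov):
    """Neural complexity in the sense of [27]: sum over subset sizes k of
    (average entropy of size-k subsets) - (k/n) * (entropy of the whole system)."""
    n = cov.shape[0]
    h_total = gauss_entropy(cov)
    c = 0.0
    for k in range(1, n):
        subsets = itertools.combinations(range(n), k)
        h_avg = np.mean([gauss_entropy(cov[np.ix_(s, s)]) for s in subsets])
        c += h_avg - (k / n) * h_total
    return c

n = 6
segregated = np.eye(n)                                # independent elements
mixed = 0.3 * np.ones((n, n)) + 0.7 * np.eye(n)       # moderate global correlations
print(f"C(segregated) = {tse_complexity(segregated):.3f}")  # exactly 0: no interactions
print(f"C(mixed)      = {tse_complexity(mixed):.3f}")       # positive
```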
The production of outputs by neural structures is often the end-result of the activation and co-activation of different sets of neural processing elements. The capacity of a system to produce identical outputs in many different ways has been called degeneracy [56]. It has been suggested that degeneracy is responsible for the great variety of functional activation patterns and lesion effects seen in the human brain across individuals [57]. In a systematic study of network patterns capable of degeneracy we found that only particular types of architectures can generate high degeneracy, and that these architectures balance the capacity of their elements to act redundantly with the capacity to generate independent effects. Thus, degeneracy is poised between the extremes of functional duplication (redundancy) and functional independence of system elements. Complexity, matching and degeneracy form a unified information-theoretic framework illustrating the relationship between structural connection patterns and functional connectivity patterns. So far, we have considered neural structures to be disembodied, engaging in spontaneous dynamics, or passively responding to inputs. The brain, however, is part of the body, and the connection between brain and body is never lost, neither during an individual life span, nor during the time course of evolution.
4 Embodiment
The study of brain structure and dynamics provides us with important insight into how brains are built and how they function. But if our goal is to create brain-like intelligence, is it sufficient to look at the brain in isolation? As we mentioned at the very outset of this chapter, the nervous systems of different organisms are evidently the product of evolution, finely tuned to the specific demands of the organism’s econiche and body structure. The human brain is no different. This may seem a trivial point at first, but as we will try to illustrate in this section of the chapter, the connections between brain, body and environment are not only pervasive but also fundamentally important for information processing within the nervous system. The area of embodied cognition has received much attention in recent years and even a cursory overview of the field is beyond the scope of this chapter. Most theories of embodied cognition are based on the notion that brain, body and environment are dynamically interactive and that coherent, coordinated or intelligent behavior is at least in part the result of this interaction [58,59,60]. According to embodied cognition, cognitive function, including its development, cannot be understood without making reference to interactions between brain and environment [61,62]. Our key thesis is that these interactions may serve to shape sensory inputs in ways that facilitate neural coding and information processing, thus providing material support to the internal workings of the neural control architecture. In other words, the dynamical coupling between brain, body and environment in an embodied system has consequences for the structure of information within the agent’s “control architecture”, i.e. its brain. Embodied interactions actively shape inputs, i.e. statistical patterns that serve as inputs to the agent’s brain (Figure 4).
Fig. 4. External perturbations (for example sensory stimuli) are continuously sampled by our sensors. They are often the result of motor activity in the environment. Functional connectivity can be altered as a result of the arrival of external perturbations.
To put it even more simply, outputs shape inputs just as much as inputs shape outputs, and for this simple reason the brain is not autonomous, but depends on embodied interactions for structured information. Embodied systems are informationally bound to their surroundings, and the statistical interactions within their brain networks are subject to influences that result from these networks’ actions in the real world. To approach this issue from a modeling perspective and to evaluate appropriate formal frameworks, we investigated the role of embodied interactions in actively structuring the sensory inputs of embodied agents or robots [63,64] (for related approaches see also [65,66,67]). We found that coordinated and dynamically coupled sensorimotor activity induced quantifiable changes in sensory information, including decreased entropy, increased mutual information, integration, and complexity within specific regions of sensory space. We were able to plot these changes by comparing two different sets of systems. In one set, sensorimotor coupling was unperturbed, “naturally” leading to well-coordinated behavior. In the other set, we disabled the link between sensory inputs and motor outputs by substituting motor time series from another experiment, thus decoupling motor outputs from their sensory consequences. When comparing these two sets of systems, we found that intact sensorimotor coupling led to greater amounts of information in most sensors, which in turn could benefit the operation of brain networks. This additional information was not contained in the stimulus itself; it was created by the sensorimotor interaction — effectively allowing the system to go “beyond the information given”. On the basis of these studies, we proposed that active structuring of sensory information may be a fundamental principle of embodied systems, supporting a range of psychological processes such as perceptual categorization, multimodal sensory integration and sensorimotor coordination.
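A toy version of this decoupling comparison can be sketched as follows (entirely illustrative: the loop structure, gains and bin edges are my own choices, not the robotic setups of [63,64]). A sensor reads the error between a drifting target and the motor position; in the coupled condition the motor command tracks the sensor, while in the decoupled condition the commands recorded from a coupled run are replayed against a fresh target, mimicking the substitution of motor time series described above. Coupling concentrates the sensor signal and lowers its entropy:

```python
import numpy as np

def run(replayed=None, steps=20_000, seed=0):
    """One run of a toy sensorimotor loop; returns sensor and command series."""
    rng = np.random.default_rng(seed)
    target = motor = 0.0
    sensed, commands = np.empty(steps), np.empty(steps)
    for t in range(steps):
        sensed[t] = target - motor                      # sensor sees target relative to body
        commands[t] = 0.4 * sensed[t] if replayed is None else replayed[t]
        motor = 0.9 * motor + commands[t]               # leaky motor integrates commands
        target = 0.95 * target + rng.normal(0.0, 0.3)   # target drifts (stationary)
    return sensed, commands

def entropy_bits(x, edges=np.linspace(-4.0, 4.0, 65)):
    p, _ = np.histogram(np.clip(x, -4.0, 4.0), bins=edges)
    p = p[p > 0] / p.sum()
    return -(p * np.log2(p)).sum()

s_coupled, u = run()                          # intact sensorimotor coupling
s_decoupled, _ = run(replayed=u, seed=1)      # motor decoupled from its sensory consequences
print(f"H(sensor), coupled:   {entropy_bits(s_coupled):.2f} bits")
print(f"H(sensor), decoupled: {entropy_bits(s_decoupled):.2f} bits")
```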
The notion of brain-body-environment interaction implicitly (or explicitly) refers to causal effects. Somewhat simplistically, we may take a minimally but effectively embodied system as one in which sensory inputs causally affect motor outputs, and these motor outputs in turn causally affect sensory inputs. Such “perception-action loops” may be viewed as fundamental building blocks for learning and development in embodied systems. As dynamic structures, we would expect such loops to be intermittent and transient, waxing and waning in the course of behavior and linking sensory and motor events at specific time scales. Thus, mapping causal relations between sensory and motor states is likely to uncover networks that involve specific subsets of sensory and motor units localized in time. We applied a set of measures designed to extract undirected and directed informational exchanges between coupled systems, such as brain, body and environment [64] (Figure 5). We found that patterns of non-causal as well as causal relations exist which can be mapped between a variety of sensory and motor variables sampled by two morphologically different robotic platforms (a humanoid robot and a mobile quadruped). We demonstrated that causal information structure can be measured at various levels of the robots’ control architectures and that the extracted causal structure can be a useful quantitative tool to reveal the pattern and strength of embodied interactions. We also examined the relation of information to body morphology. Using a simulated system and varying the spatial arrangement and density of photoreceptors on a simulated retina, we found that different morphological arrangements resulted in different patterns and quantities of information flow. This indicates that information processing within the control architecture (e.g. the brain) depends not only on sensorimotor interactions, but also on body morphology, for example the physical arrangement of sensory surfaces and motor structures. In an evolutionary context, it has been suggested that this interaction between body morphology and brain structure may have contributed to sudden jumps in the complexity of organisms. According to a new theory [68], the so-called Cambrian explosion, which around 540 million years ago resulted in the sudden appearance of numerous new life forms, was triggered by the evolution of vision, which allowed the formation and representation of images of the environment within the nervous system. The ability to sense other organisms at a distance led to the emergence of complex predator-prey relationships, requiring numerous adaptations in body structure, for example hard exoskeletons and more elaborate motor structures for tracking and evasion. The availability of information via a new sensor thus may have had numerous effects on both brain and body, resulting in organisms with significantly more complex perceptual and motor capabilities. Does complexity provide a possible direction for the evolution of adaptive and intelligent biological systems? The measure of complexity introduced in the previous section can be applied in simulations of simple behaving creatures and, if chosen as a cost function, its maximization can produce creatures that show coordinated behavior. Evolving artificial creatures (or agents) to perform specific
Fig. 5. The information collected by sensors depends upon embodied interactions (images and data from [64]). (A) A small “humanoid” robot capable of tracking colored objects (e.g. the ball attached to its hand) with a pan-tilt CCD camera. (B) Diagram of the known sensorimotor interactions in the system. Arm movements displace the visual target which in turn is tracked with the pan-tilt unit. Under normal operation (condition “fov” in panel C) target movements result in smooth visual tracking. (C) Two conditions are contrasted in these plots, smooth visual tracking (“fov” or foveation) and random de-coupled movements (“rnd”). Plots show distributions of several information-theoretic measures (entropy, mutual information, integration and complexity) across the visual sensors of the robot. Note that in the “fov” condition, visual inputs contain greater amounts of information as indicated by lower entropy, and higher mutual information, integration and complexity near the center of the visual field. For details see [64].
tasks has been successfully demonstrated in a variety of contexts. However, most models require the a priori definition of a cost function that drives the evolutionary process, and the definition of such a cost function in turn requires knowledge about the desired end state that has to be supplied by the programmer. Using an information-theoretic measure such as complexity as a cost function does not involve the specification of desired goal states, except that the sensori-motor-neural system should incorporate statistical structure that combines information that is
both specialized (segregated) and coherent/coordinated (integrated). While initial applications of this information-theoretic approach are encouraging, much more work is needed to explore the applicability of this proposal to more complex systems.
5 Ingredients for Creating Brain-Like Intelligence
What then are the key ingredients that we should utilize in our attempts to create artificial systems capable of brain-like intelligence? In this chapter I have argued that the complexity of the brain in all its various dimensions, involving structure, function and information, is central for the capacity of the brain to generate intelligent behavior. Complexity is conceptualized not as a vague theoretical notion akin to “complicatedness”; rather, complexity is formalized and applied in the design of neural structures and autonomous agents. Undoubtedly, other ingredients are necessary, and more will be discovered in future studies of brain and behavior. These ingredients will be most effective for our scientific understanding and for our engineering efforts if they can be expressed as quantitative principles that underlie intelligence. It will be futile and scientifically meaningless to attempt to build intelligent systems by slavishly imitating or replicating the real brain. On the other hand, the extreme computational functionalism of the classical AI movement has done little to advance flexibility and robustness in intelligent systems. What is needed is a new synthesis of brain, cognitive and engineering sciences to harness the complexity of biological systems for the design of a new generation of more capable brain-like intelligent systems.
Acknowledgement. The author was supported by a grant from the JS McDonnell Foundation.
References

1. Edelman, G.M.: Bright Air, Brilliant Fire. Basic Books, New York (1992)
2. Striedter, G.F.: Principles of Brain Evolution. Sinauer, Sunderland, MA (2005)
3. Barton, R.A.: Primate brain evolution: Integrating comparative, neurophysiological, and ethological data. Evol. Anthrop. 15, 224–236 (2006)
4. Swanson, L.W.: Brain Architecture. Oxford University Press, Oxford (2003)
5. Greenspan, R.: An Introduction to Nervous Systems. Cold Spring Harbor Press (2007)
6. Mountcastle, V.B.: The columnar organization of the neocortex. Brain 120, 701–722 (1997)
7. Braitenberg, V., Schüz, A.: Cortex: Statistics and Geometry of Neuronal Connectivity. Springer, Berlin (1998)
8. Douglas, R., Martin, K.: Neuronal circuits of the neocortex. Annu. Rev. Neurosci. 27, 419–451 (2004)
9. Sporns, O., Tononi, G., Kötter, R.: The human connectome: A structural description of the human brain. PLoS Comput. Biol. 1, 245–251 (2005)
10. Brodmann, K.: Vergleichende Lokalisationslehre der Grosshirnrinde in ihren Prinzipien dargestellt auf Grund des Zellenbaues. J.A. Barth, Leipzig (1909)
11. Hagmann, P., Kurant, M., Gigandet, X., Thiran, P., Wedeen, V.J., Meuli, R., Thiran, J.P.: Mapping human whole-brain structural networks with diffusion MRI. PLoS ONE 2(7), e597 (2007)
12. Hagmann, P., Cammoun, L., Gigandet, X., Meuli, R., Honey, C.J., Wedeen, V.J., Sporns, O.: Mapping the structural core of human cerebral cortex. PLoS Biology 6, e159 (2008)
13. Iturria-Medina, Y., Canales-Rodriguez, E.J., Melia-Garcia, L., Valdes-Hernandez, P.A., Martinez-Montes, E., Aleman-Gomez, Y., Sanchez-Bornot, J.M.: Characterizing brain anatomical connections using diffusion weighted MRI and graph theory. NeuroImage 36, 645–660 (2007)
14. Gong, G., He, Y., Concha, L., Lebel, C., Gross, D.W., Evans, A.C., Beaulieu, C.: Mapping anatomical connectivity patterns of human cerebral cortex using in vivo diffusion tensor imaging tractography. Cerebral Cortex (2008)
15. Felleman, D.J., Van Essen, D.C.: Distributed hierarchical processing in the primate cerebral cortex. Cereb. Cortex 1, 1–47 (1991)
16. Scannell, J.W., Burns, G.A.P.C., Hilgetag, C.C., O’Neil, M.A., Young, M.P.: The connectional organization of the cortico-thalamic system of the cat. Cereb. Cortex 9, 277–299 (1999)
17. Alm, E., Arkin, A.P.: Biological networks. Curr. Opin. Struct. Biol. 13, 193–202 (2003)
18. Barabási, A.L., Oltvai, Z.N.: Network biology: Understanding the cell’s organization. Nature Reviews Genetics 5, 101–113 (2004)
19. Hilgetag, C.C., Burns, G.A., O’Neill, M.A., Scannell, J.W., Young, M.P.: Anatomical connectivity defines the organization of clusters of cortical areas in the macaque monkey and the cat. Philos. Trans. R. Soc. Lond. B Biol. Sci. 355, 91–110 (2000)
20. Sporns, O., Honey, C.J., Kötter, R.: Identification and classification of hubs in brain networks. PLoS ONE 2, e1049 (2007)
21. Watts, D.J., Strogatz, S.H.: Collective dynamics of small-world networks. Nature 393, 440–442 (1998)
22. Strogatz, S.: Exploring complex networks. Nature 410, 268–277 (2001)
23. Albert, R., Barabási, A.L.: Statistical mechanics of complex networks. Rev. Mod. Phys. 74, 47–97 (2002)
24. Sporns, O., Chialvo, D., Kaiser, M., Hilgetag, C.C.: Organization, development and function of complex brain networks. Trends Cogn. Sci. 8, 418–425 (2004)
25. Chklovskii, D., Schikorski, T., Stevens, C.: Wiring optimization in cortical circuits. Neuron 34, 341–347 (2002)
26. Kaiser, M., Hilgetag, C.C.: Nonoptimal component placement, but short processing paths, due to long-distance projections in neural systems. PLoS Comput. Biol. 2, e95 (2006)
27. Tononi, G., Sporns, O., Edelman, G.M.: A measure for brain complexity: relating functional segregation and integration in the nervous system. Proc. Natl. Acad. Sci. USA 91, 5033–5037 (1994)
28. Tononi, G., Edelman, G.M., Sporns, O.: Complexity and coherency: Integrating information in the brain. Trends Cogn. Sci. 2, 474–484 (1998)
29. Tononi, G., Edelman, G.M.: Consciousness and complexity. Science 282, 1846–1851 (1998)
30. Friston, K.J.: Beyond phrenology: what can neuroimaging tell us about distributed circuitry? Annu. Rev. Neurosci. 25, 221–250 (2002)
31. Friston, K.J.: Models of brain function in neuroimaging. Annu. Rev. Psychol. 56, 57–87 (2005)
32. Friston, K.J.: Functional and effective connectivity in neuroimaging: A synthesis. Hum. Brain Mapping 2, 56–78 (1994)
33. Sporns, O.: Brain connectivity. Scholarpedia 2(10), 4695 (2007)
34. Singer, W., Gray, C.M.: Visual feature integration and the temporal correlation hypothesis. Annu. Rev. Neurosci. 18, 555–586 (1995)
35. Sporns, O., Tononi, G., Edelman, G.: Modeling perceptual grouping and figure-ground segregation by means of active reentrant circuits. Proc. Natl. Acad. Sci. USA 88, 129–133 (1991)
36. Ross, W.D., Grossberg, S., Mingolla, E.: Visual cortical mechanisms of perceptual grouping: interacting layers, networks, columns, and maps. Neural Networks 13, 571–588 (2000)
37. Brovelli, A., Ding, M., Ledberg, A., Chen, Y., Nakamura, R., Bressler, S.L.: Beta oscillations in a large-scale sensorimotor cortical network: Directional influences revealed by Granger causality. Proc. Natl. Acad. Sci. USA 101, 9849–9854 (2004)
38. Steinmetz, P.N., Roy, A., Fitzgerald, P.J., Hsiao, S.S., Johnson, K.O., Niebur, E.: Attention modulates synchronized neuronal firing in primate somatosensory cortex. Nature 404, 131–133 (2000)
39. Sarnthein, J., Petsche, H., Rappelsberger, P., Shaw, G.L., von Stein, A.: Synchronization between prefrontal and posterior association cortex during human working memory. Proc. Natl. Acad. Sci. USA 95, 7092–7096 (1998)
40. Engel, A.K., Singer, W.: Temporal binding and the neural correlates of sensory awareness. Trends Cogn. Sci. 5, 16–25 (2001)
41. Stam, C.J.: Functional connectivity patterns of human magnetoencephalographic recordings: A small-world network? Neurosci. Lett. 355, 25–28 (2004)
42. Salvador, R., Suckling, J., Coleman, M., Pickard, J.D., Menon, D.K., Bullmore, E.T.: Neurophysiological architecture of functional magnetic resonance images of human brain. Cereb. Cortex 15, 1332–1342 (2005)
43. Achard, S., Salvador, R., Whitcher, B., Suckling, J., Bullmore, E.: A resilient, low-frequency, small-world human brain functional network with highly connected association cortical hubs. J. Neurosci. 26, 63–72 (2006)
44. Bassett, D.S., Meyer-Lindenberg, A., Achard, S., Duke, T., Bullmore, E.: Adaptive reconfiguration of fractal small-world human brain functional networks. Proc. Natl. Acad. Sci. USA 103, 19518–19523 (2006)
45. Eguiluz, V.M., Chialvo, D.R., Cecchi, G.A., Baliki, M., Apkarian, A.V.: Scale-free brain functional networks. Phys. Rev. Lett. 94, 018102 (2005)
46. Stam, C.J., Jones, B.F., Nolte, G., Breakspear, M., Scheltens, P.: Small-world networks and functional connectivity in Alzheimer’s disease. Cereb. Cortex 17, 92–99 (2007)
47. McIntosh, A.R.: Towards a network theory of cognition. Neural Netw. 13, 861–870 (2000)
48. McIntosh, A.R.: Contexts and catalysts. Neuroinformatics 2, 175–181 (2004)
49. Biswal, B., Yetkin, F.Z., Haughton, V.M., Hyde, J.S.: Functional connectivity in the motor cortex of resting human brain using echo-planar MRI. Magn. Reson. Med. 34, 537–541 (1995)
50. Gusnard, D., Raichle, M.E.: Searching for a baseline: Functional imaging and the resting human brain. Nature Rev. Neurosci. 2, 685–694 (2001)
51. Honey, C.J., Kötter, R., Breakspear, M., Sporns, O.: Network structure of cerebral cortex shapes functional connectivity on multiple time scales. Proc. Natl. Acad. Sci. USA 104, 10240–10245 (2007)
52. Sporns, O.: Complexity. Scholarpedia 2(10), 1623 (2007)
53. Sporns, O., Tononi, G., Edelman, G.M.: Theoretical neuroanatomy: Relating anatomical and functional connectivity in graphs and cortical connection matrices. Cereb. Cortex 10, 127–141 (2000)
54. Sporns, O., Tononi, G.: Classes of network connectivity and dynamics. Complexity 7, 28–38 (2002)
55. Tononi, G., Sporns, O., Edelman, G.M.: A complexity measure for selective matching of signals by the brain. Proc. Natl. Acad. Sci. USA 93, 3422–3427 (1996)
56. Tononi, G., Sporns, O., Edelman, G.M.: Measures of degeneracy and redundancy in biological networks. Proc. Natl. Acad. Sci. USA 96, 3257–3262 (1999)
57. Price, C.J., Friston, K.J.: Degeneracy and cognitive anatomy. Trends Cogn. Sci. 6, 416–421 (2002)
58. Chiel, H.J., Beer, R.D.: The brain has a body: adaptive behaviour emerges from interactions of nervous system, body, and environment. Trends in Neurosciences 20, 553–557 (1997)
59. Sporns, O.: Embodied cognition. In: Arbib, M. (ed.) Handbook of Brain Theory and Neural Networks, pp. 395–398. MIT Press, Cambridge (2003)
60. Iida, F., Pfeifer, R., Steels, L., Kuniyoshi, Y. (eds.): Embodied Artificial Intelligence. LNCS, vol. 3139. Springer, Heidelberg (2004)
61. Varela, F.J., Thompson, E., Rosch, E.: The embodied mind: Cognitive science and human experience. MIT Press, Cambridge (1991)
62. Thelen, E., Smith, L.B.: A dynamic systems approach to the development of cognition and action. MIT Press, Cambridge (1994)
63. Lungarella, M., Pegors, T., Bulwinkle, D., Sporns, O.: Methods for quantifying the information structure of sensory and motor data. Neuroinformatics 3(3), 243–262 (2005)
64. Lungarella, M., Sporns, O.: Mapping information flow in sensorimotor networks. PLoS Comput. Biol. (2006)
65. Klyubin, A.S., Polani, D., Nehaniv, C.L.: Empowerment: A universal agent-centric measure of control. In: Proc. CEC. IEEE, Los Alamitos (2005)
66. Philipona, D., O’Regan, J.K., Nadal, J.P.: Is there something out there? Inferring space from sensorimotor dependencies. Neural Computation 15(9), 2029–2050 (2003)
67. Bertschinger, N., Olbrich, E., Ay, N., Jost, J.: Autonomy: An information theoretic perspective. BioSystems 91, 331–345 (2008)
68. Parker, A.: In the Blink of an Eye. Perseus, Cambridge (2003)
Stochastic Dynamics in the Brain and Probabilistic Decision-Making

Gustavo Deco¹ and Edmund T. Rolls²

¹ Institució Catalana de Recerca i Estudis Avançats / Universitat Pompeu Fabra, Barcelona, Spain
² University of Oxford, Dept. of Experimental Psychology, Oxford, England

Abstract. The stochastic spiking of neurons is a source of noise in the brain. We show that this noise is important in brain dynamics, by producing probabilistic settling into attractor states. This can account for probabilistic decision-making, which we show can be advantageous. Similar stochastic dynamics contributes to multistable states such as pattern rivalry and binocular rivalry. Stochastic dynamics also contributes to the detectability of signals in the brain that are close to threshold. Stochastic dynamics provides an interesting way to understand a number of important aspects of brain function.
1 Introduction
We show how an attractor network can model probabilistic decision-making. Attractor or autoassociation memory networks that can implement short-term memory consist of networks with recurrent associatively modifiable connections that can maintain the firing of a set of neurons whose activity has been associated together during training [22,38,37]. The attractor network is trained to have two (or more) attractor states, each one of which corresponds to one of the decisions. Each attractor set of neurons receives a biasing input which corresponds to the evidence in favour of that decision. When the network starts from a state of spontaneous firing, the biasing inputs encourage one of the attractors to gradually win the competition, but this process is influenced by the Poisson-like firing (spiking) of the neurons, so that which attractor wins is probabilistic. If the evidence in favour of the two decisions is equal, the network chooses each decision probabilistically on 50% of the trials. The model not only shows how probabilistic decision-making could be implemented in the brain, but also how the evidence can be accumulated over long periods of time because of the integrating action of the attractor short-term memory network; how this accounts for reaction times as a function of the magnitude of the difference between the evidence for the two decisions (difficult decisions take longer); and how Weber’s Law appears to be implemented in the brain. Details of the implementation of the model are provided elsewhere [15]. We show that the same formalism helps to understand multistable states; and that effects related to probabilistic firing of neurons can be important in the detectability of stimuli.
A good paradigm for studying the mechanisms of decision-making is the vibrotactile sequential discrimination task, because evidence on the neuronal basis is available. In the two-alternative, forced-choice task used, subjects must decide which of two mechanical vibrations applied sequentially to their fingertips has the higher frequency of vibration. Neuronal recording and behavioural analyses [42,41,21,43,44,45,7] have provided sufficient detail about the neuronal bases of these decisions for a quantitative model to be developed. In particular, single neuron recordings in the ventral premotor cortex (VPC) reveal neurons whose firing rate was dependent only on the difference between the two applied frequencies, the sign of that difference being the determining factor for correct task performance [45,7]. They used a task where trained macaques (Macaca mulatta) must decide and report which of two mechanical vibrations applied sequentially to their fingertips has the higher frequency of vibration by pressing one of two pushbuttons. This decision-making paradigm requires therefore the following processes: (1) the perception of the first stimulus, a 500 ms long vibration at frequency f1; (2) the storing of a trace of the f1 stimulus in short-term memory during a delay of typically 3 s; (3) the perception of the second stimulus, a 500 ms long vibration at frequency f2; and (4) the comparison of the second stimulus f2 to the trace of f1, and choosing a motor act based on this comparison (f2-f1). The vibrotactile stimuli f1 and f2 utilized were in the range of frequencies called flutter, i.e. within approximately 5–50 Hz. We [15] were particularly interested in modelling the responses of ventral premotor cortex (VPC) neurons [45]. The activity of VPC neurons reflects the current and the remembered sensory stimuli, their comparison, and the motor response, i.e. the entire cascade of decision-making processing linking the sensory evaluation to the motor response. Many VPC neurons encode f1 during both the stimulus presentation and the delay period. During the comparison period, the averaged firing rate of VPC neurons after a latency of a few hundred milliseconds reflects the result of the comparison, i.e. the sign of (f2-f1), and correlates with the behavioural response of the monkey. In particular, we are interested in VPC neurons which show the strongest response only during the comparison period and reflect the sign of the comparison f2-f1, i.e. these neurons are only activated during the presentation of f2, with some responding to the condition f1>f2 and others to the condition f1<f2. These neurons [45] reflect the decision-making step of the comparison, and therefore we will model here their probabilistic dynamical behaviour as reported by the experimental work; and through the theoretical analyses we will relate their behaviour to Weber’s law.
2 Theoretical Framework: A Probabilistic Attractor Network
The theoretical framework within which the new model was developed [54] is based on a neurodynamical model [4] which has been recently extended and successfully applied to explain several experimental paradigms [38,8,9,10,16,49,12]. In this framework, we model probabilistic decision-making by a single attractor network (Fig. 1).

Fig. 1. The architecture of the attractor network model of decision-making. The two decisions are represented by the two selective populations of neurons, (f1>f2) and (f1<f2). If the population (f1>f2) is active, this corresponds to the decision that stimulus f1 is greater than stimulus f2. There is also a population of non-specific excitatory neurons, and a population of inhibitory neurons (GABA). Pool (f1>f2) is biased by λ1 which reflects the strength of stimulus f1, and pool (f2>f1) is biased by λ2 which reflects the strength of stimulus f2. External inputs λext (mediated by AMPA and NMDA receptors) are applied to pool (f1>f2) as well as to pool (f1<f2); W+ denotes the strengthened recurrent weights within each selective pool, W− the weakened weights between the selective pools, and WI the weights involving the inhibitory pool. (In the simulations performed f1 is the frequency of vibrotactile stimulus 1, f2 is the frequency of vibrotactile stimulus 2, and the stimuli must be compared to decide which is the higher frequency.) The integrate-and-fire network is subject to finite size noise, and therefore probabilistically settles into either an attractor with the population (f1>f2) active, or with the population (f1<f2) active.

Fig. 2. The time course of the firing rates (Hz) of the populations (f1>f2) and (f1<f2), the inhibitory pool, and the non-specific neurons, in a simulation in which f1>f2. We observe that after 200 ms the populations (f1>f2) and (f1<f2) separate: the population (f1>f2) wins the competition and the network performs a transition to a single-state final attractor corresponding to a correct discrimination (i.e. high activity in the population (f1>f2) and low activity in the population (f1<f2)).
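To convey the flavour of this probabilistic settling, here is a deliberately simplified rate-based caricature of the two-pool competition (my own sketch, not the integrate-and-fire implementation of [15]; all parameter values are illustrative). Each pool excites itself, inhibits the other, and receives a small bias proportional to its stimulus frequency; the finite-size noise term scales as the square root of (firing rate / number of neurons), and the decision criterion is a firing rate of 20 Hz in one of the pools:

```python
import numpy as np

def decision_trial(f1, f2, n=1000, dt=1e-3, t_max=2.0, rng=None):
    """One trial of a toy two-pool competition; returns (winner, decision time).
    winner 0 means pool (f1>f2) won, winner 1 means pool (f1<f2) won."""
    if rng is None:
        rng = np.random.default_rng()
    r = np.array([2.0, 2.0])                    # spontaneous rates (Hz)
    bias = 1.0 + 0.002 * np.array([f1, f2])     # λ1, λ2 analogues
    w_self, w_cross, tau = 1.5, 1.0, 0.05       # recurrent weights, time constant (s)
    for step in range(int(t_max / dt)):
        drive = w_self * r - w_cross * r[::-1] + bias
        phi = np.clip(drive, 0.0, 50.0)         # threshold-linear gain, saturating at 50 Hz
        sigma = np.sqrt(np.maximum(r, 1.0) / n) # finite-size noise ~ sqrt(rate / n)
        r = r + (phi - r) * dt / tau + sigma * np.sqrt(dt) * rng.standard_normal(2)
        r = np.maximum(r, 0.0)
        if r.max() > 20.0:                      # decision criterion: 20 Hz
            return int(np.argmax(r)), (step + 1) * dt
    return -1, t_max                            # no attractor reached on this trial

rng = np.random.default_rng(42)
outcomes = [decision_trial(30.0, 22.0, rng=rng)[0] for _ in range(200)]
print("P(correct) over 200 trials:", outcomes.count(0) / len(outcomes))
```

With f1 = 30 Hz and f2 = 22 Hz this sketch settles on the correct attractor on most but not all trials, echoing the probabilistic behaviour described in this chapter; none of its quantitative values should be read back into the full spiking model.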
Fig. 3. Probability of correct discrimination (± sd) as a function of the difference between the two presented vibrotactile frequencies to be compared, plotted in four panels for base frequencies f2 = 17.5, 20, 25 and 30 Hz (y-axis: probability (f1>f2), in %; x-axis: Delta frequency (f1-f2), in Hz). In the simulations, we assume that f1>f2 by a Δ-value (labelled ‘Delta frequency (f1-f2)’), i.e. f1=f2+Δ. The horizontal dashed line represents the threshold of correct classification for a performance of 85% correct discrimination. The second panel down includes actual neuronal data described by Romo and Salinas (2003) for the f2=20 Hz condition (indicated by *). (After [15])
The just noticeable difference is the minimal amount of change needed for us to recognize that a change has occurred. Weber’s Law (enunciated by Ernst Heinrich Weber, 1795–1878) states that the ratio of the difference-threshold to the background intensity is a constant; in symbols, if Δf denotes the just noticeable difference and f the background intensity, then Δf/f = k for some constant k.
Figure 3 shows the probability of correct discrimination as a function of the difference between the two presented vibrotactile frequencies to be compared. We assume that f1>f2 by a Δ-value, i.e. f1=f2+Δ. (In Fig. 3 this value is called “Delta frequency (f1-f2)”.) Each diamond-point in the Figure corresponds to the result calculated by averaging 200 trials of the full spiking simulations. The lines were calculated by fitting the points with a logarithmic function. A correct classification occurs when, during the 500 ms comparison period, the network evolves to a ‘single-state’ attractor that shows a high level of spiking activity (larger than 10 Hz) for the population (f1>f2), and simultaneously a low level of spiking activity for the population (f1<f2). For networks of more than approximately 1,000 neurons, the network reaches the correct (f1>f2) attractor state on a proportion of occasions that is in the range 85–93%, increasing only a little as the number of neurons reaches 4,000 (top panel). The settling remains probabilistic, as shown by the standard deviations in the probability that the (f1>f2) attractor state will be reached (top panel). When N is less than approximately 1,000, the finite size noise effects become very marked, as shown by the fact that the network reaches the correct attractor state (f1>f2) much less frequently, and the time for a decision to be reached can be prematurely fast, as the large fluctuations in the stochastic noise can cause the system to reach the criterion [in this case a firing rate of 20 Hz in the pool (f1>f2)] too quickly. The overall conclusion of the results shown in Fig. 5 is that the size of the network, N, does influence the probabilistic settling of the network to the decision state. None of these probabilistic attractor and decision-related settling effects would of course be found in a mean-field or purely rate simulation, without spiking activity. The size of N in the brain is likely to be greater than 1,000 (and probably in the neocortex in the range 4,000–12,000) [38,37]. It will be of interest to investigate further this scaling as a function of the number of neurons in a population with a firing rate distribution that is close to what is found in the brain, namely close to exponential [18].
Fig. 5. The effects of altering N, the number of neurons in the network, on the operation of the decision-making network. The simulations were for f1=30 Hz and f2=22 Hz. The top panel shows the probability that the network will settle into the correct (f1>f2) attractor state. The mean ± the standard deviation is shown. The middle panel shows the time for a decision to be reached, that is for the system to reach a criterion of a firing rate of 20 Hz in the pool (f1>f2). The mean ± the standard deviation of the sampled mean is shown. The bottom panel shows the standard deviation of the reaction time. (After [15])
4 Properties of This Model of Decision-Making
Key properties of this biased attractor model of decision-making are now considered.
The decisions are taken probabilistically because of the finite size noise due to spiking activity in the integrate-and-fire dynamical network, with the probability that a particular decision is made depending on the biasing inputs provided by the sensory stimuli f1 and f2. We (Deco and Rolls [15]) showed that the relevant parameter for the network to reach a decision at a criterion of a given percent correct about whether f1 is different from f2 is not the absolute value of f1 or f2, but the difference between them scaled by their absolute value. If Δf = f1-f2 is the difference at which the two stimuli can just be discriminated, then it is found that Δf increases linearly as a function of the base frequency f2, which is Weber’s Law. The results show that Weber’s law does not depend on the final firing rates of neurons in the attractor, but instead reflects the nature of the probabilistic settling into a decision-related attractor, which depends on the statistical fluctuations in the network, the synaptic connectivity, and the difference between the bias input frequencies f1 and f2 scaled by the baseline input f2. Weber’s law is usually formulated as Δf / (f0 + f) = a constant, where f0 allows the bottom part of the curve to asymptote at f0. In vision, f0 is sometimes referred to as “dark light”. The result is that there is a part of the curve where Δf is linearly related to f, and the curve of Δf vs f need not go through the origin. This corresponds to the data shown in Fig. 4. An analysis of the non-stationary evolution of the dynamics of the network model, performed by explicit full spiking simulations, shows that Weber’s law is implemented in the probability of transition from the initial spontaneous firing state to one of the two possible attractor states. In this decision-making paradigm, the firing rates of neurons in the VPC encode the outcome of the comparison and therefore the decision and motor response, but not how strong the stimuli are, i.e. what Weber called “sensation” (as described for example in a detection task [7]). The probability of obtaining a specific decision, i.e. of detecting a just noticeable difference, is encoded in the stochastic dynamics of the network. More specifically, the origin of the fluctuations that will drive the transitions towards particular decisions depends on the connectivity between the different populations, on the size of the populations, and on the Poisson-like spike trains of the individual neurons in the system. In other words, the neural code for the outcome of the decision is reflected in the high rate of one of the populations of neurons, but whether the rate of a particular population becomes high is probabilistic. This means that an essential part of how the decision process is encoded is contained in the synapses, in the finite size of the network, and in the Poisson-like firing of individual neurons in the network. The statistical fluctuations in the network are due to the finite size noise, which approximates to the square root of the (firing rate / number of neurons in the population) [29], as shown in Fig. 5. This is, to our knowledge, the first case in which the implementation of a psychophysical law is based not on the firing rates of the neurons, nor on spike timing, nor on single neurons, but instead on the synaptic connectivity of the network and on statistical fluctuations due to the spiking activity in the network.
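The quoted square-root scaling is easy to check numerically. The short sketch below (my own illustration; it assumes independent Poisson spike counts, which neglects the correlations present in a recurrent network) compares the standard deviation of the population-averaged firing rate, measured over 1 s windows, with sqrt(r/N):

```python
import numpy as np

rng = np.random.default_rng(0)
r, T, windows = 40.0, 1.0, 10_000      # mean rate (Hz), window length (s), samples
for n in (100, 400, 1600, 6400):
    counts = rng.poisson(r * T, size=(windows, n))   # spike counts of n Poisson neurons
    pop_rate = counts.mean(axis=1) / T               # population-averaged rate per window
    print(f"N={n:5d}  measured std = {pop_rate.std():.3f}  sqrt(r/N) = {np.sqrt(r / n):.3f}")
```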
Another interesting aspect of the model is that the recurrent connectivity and the relatively long time constant of the NMDA receptors [54] together enable the attractor network to accumulate evidence over a long time period of several hundred milliseconds. This is an important aspect of the functionality of attractor networks.

Although the model described here is effectively a single attractor network, we note that the network need not be localized to one brain region. Long-range connections between cortical areas enable networks in different brain areas to interact in the way needed to implement a single attractor network. The requirement is that the synapses between the neurons in any one pool be set up by Hebb-like associative synaptic modification, and this is likely to be a property of connectivity between areas as well as within areas [39,38]. In this sense, the decision could be thought of as distributed across different brain areas. Consistent with this, Romo and colleagues have found neurons related to vibrotactile decisions not only in the VPC, but in a number of connected brain areas, including the medial premotor cortex [21].

In order to achieve the desired probabilistic settling behaviour, the network we describe must not have very high inhibition and, related to this, may sometimes not settle into one of its attractor states. In a forced-choice task in which a decision must be reached on every trial, a possible solution is to have a second decision-making network, with parameters adjusted so that it will settle into one of its states (chosen at chance) even if a preceding network in the decision-making chain has not settled. This could be an additional reason for having a series of networks in different brain regions involved in the decision-making process.

The model described here differs in a number of ways from accumulator or counter models, which may include a noise term and which undergo a random walk in real time, i.e. a diffusion process [34,5,54,52]. First, in accumulator models a mechanism for computing the difference between the stimuli is not described, whereas in the current model this computation is achieved, and scaled by f, by the feedback inhibition included in the attractor network. Second, in the current model the decision corresponds to high firing rates in one of the attractors, and there is no arbitrary threshold that must be reached. Third, the noise in the current model is not arbitrary, but is accounted for by finite-size noise effects of the spiking dynamics of the individual neurons, with their Poisson-like spike trains, in a system of limited size. Fourth, because the attractor network has recurrent connections, the way in which it settles into a final attractor state (and thus the decision process) can naturally take place over quite a long time, as information gradually and stochastically builds up due to the positive feedback in the recurrent network, the weights in the network, and the biasing inputs, as shown in Fig. 2.

The model of decision-making described here is also different from a model in which it is suggested that the probabilistic relative value of each action directly dictates the instantaneous probability of choosing each action on the current trial [47]. The present model shows how probabilistic decisions could be taken depending on the two biasing inputs (λ1 and λ2 in Fig. 1, which could be equal) to a biased competition attractor network subject to statistical fluctuations related to finite-size noise in the dynamics of the integrate-and-fire network.
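To make the contrast with accumulator models concrete, the following is a deliberately reduced rate-model caricature of biased competition (entirely our sketch, with arbitrary parameters: two populations with self-excitation, mutual inhibition, a sigmoidal activation, and additive noise; it is not the authors' integrate-and-fire network). The noise, not an imposed threshold, decides which attractor wins, and the bias difference only tilts the probabilities:

```python
import numpy as np

rng = np.random.default_rng(1)

def trial(bias1, bias2, dt=1e-3, tau=0.1, steps=1500, sigma=0.15):
    """One decision trial of a toy two-population biased competition model."""
    r = np.zeros(2)                               # population firing rates
    for _ in range(steps):
        inp = np.array([bias1, bias2]) + 2.0 * r - 2.0 * r[::-1]
        drive = 1.0 / (1.0 + np.exp(-4.0 * (inp - 1.0)))   # sigmoid activation
        r += dt / tau * (-r + drive) + sigma * np.sqrt(dt) * rng.standard_normal(2)
        r = np.clip(r, 0.0, None)
    return int(r[0] > r[1])                       # which attractor won

for delta in (0.0, 0.05, 0.1, 0.2):               # bias difference, playing the role of f1 - f2
    p = np.mean([trial(0.5 + delta, 0.5) for _ in range(200)])
    print(f"bias difference {delta:.2f}: population 1 wins with probability {p:.2f}")
```

Note that there is no decision threshold anywhere in the loop; a trial ends in whichever high-firing state the noisy competition has settled into.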
We may raise the conceptually important issue of why the operation of what is effectively memory retrieval is probabilistic. Part of the answer is shown in Fig. 5, in which it is seen that even when a fully connected recurrent attractor network has 4,000 neurons, the operation of the network is still probabilistic. In this network of 4,000 neurons, there were 3,600 excitatory neurons, and 360 neurons represented each pattern or decision (that is, the sparseness a was 0.1). The firing rates of the neurons corresponded to those found in the VPC, with rates above 20 spikes/s considered a criterion for the attractor state being reached, and 40–50 spikes/s being typical when fully in the attractor state. Under these conditions, the probabilistic spiking of the excitatory (pyramidal) cells is what introduces noise into the network. (It is this noise in the recurrent collateral firing, rather than external noise due to variability in the inputs, which makes the major contribution to the probabilistic behaviour of the network [15].) Thus, once the firing in the recurrent collaterals is implemented by the spiking of integrate-and-fire neurons, the probabilistic behaviour seems inevitable, even up to quite large attractor network sizes.

We may then ask why the spiking activity of any neuron is probabilistic, and what advantages this may confer. The answer suggested [37] is that the spiking activity is approximately Poisson-like (as if generated by a random process with a given mean rate) because the neurons are held close to their firing threshold, so that any incoming input can rapidly cause sufficient further depolarization to produce a spike. It is this ability to respond rapidly to an input, rather than having to charge up the cell membrane from the resting potential to the threshold (a slow process determined by the time constant of the neuron and influenced by that of the synapses), that enables neuronal networks in the brain, including attractor networks, to operate and retrieve information so rapidly [51,39,37,33]. The spike trains are essentially Poisson-like because the cell potential hovers noisily close to the threshold for firing, the noise being generated in part by the Poisson-like firing of the other neurons in the network.

The implication of these concepts is that the operation of networks in the brain is inherently noisy because of the Poisson-like timing of the spikes of the neurons, which itself is related to the mechanisms that enable neurons to respond rapidly to their inputs. The consequence of the Poisson-like firing, however, is that, even with quite large attractor networks of thousands of neurons with hundreds of neurons representing each pattern or memory, the network inevitably settles probabilistically into a given attractor state. This results, inter alia, in decision-making being probabilistic. Factors that influence the probabilistic behaviour of the network include the strength of the inputs (with the difference in the inputs divided by the magnitude of the inputs, as shown here, being relevant to decision-making and Weber's Law); the depth and position of the basins of attraction, which, if shallow or correlated with other basins, will tend to slow the network; and, perhaps, the mean firing rates of the neurons during the decision-making itself, and the firing rate distribution (see below).
In this context, the probabilistic behaviour
of the network gives progressively poorer and slower performance as the difference between the inputs divided by the base frequency is decreased, as shown in Fig. 3. The current model of decision-making is part of a unified approach to attention, reward reversal, and sequence learning, in which biasing inputs influence the operation of attractor networks that operate using biased competition [38,11,37]. The same approach is now seen to be useful in understanding decision-making and its relation to Weber's Law, and in understanding the details of the neuronal responses that are recorded during decision-making.
5 Applications of This Model of Decision-Making
This approach to how networks take decisions probably has implications throughout the brain. For example, the model is effectively a model of the dynamics of the recall of a memory in response to a recall cue. The way in which the attractor is reached depends on the strength of the recall cue, and on the inherent noise in the attractor network performing the recall because of the spiking activity in a finite-size system. The recall will take longer if the recall cue is weak. Spontaneous stochastic effects may suddenly lead to the memory being recalled, and this may be related to the sudden recovery of a memory which one tried to remember some time previously.

This framework can also be extended very naturally to account for the probabilistic decision taken about, for example, which of several objects has been presented in a perceptual task. The model can also be extended to the case where one of a large number of possible decisions must be made. An example is a decision about which of a set of objects, perhaps with different similarity to each other, has been shown on each trial, and where the decisions are only probabilistically correct. When a decision is made between different numbers of alternatives, a classical result is Hick's Law: the reaction time increases linearly with log2 of the number of alternatives from which a choice is being made. This has been interpreted as supporting a series of binary decisions, each one taking a unit amount of time [55]. As the integrate-and-fire model we describe works completely in parallel, it will be very interesting to investigate whether Hick's Law is a property of the network. If so, this could be related to the fact that the activity of the inhibitory interneurons is likely to increase linearly with the number of alternatives between which a decision is being made (as each one adds additional bias to the system through a λ input), and to the fact that the GABA inhibitory interneurons implement a shunting, that is divisive, operation.

Another application is to changes in perception. Perceptions can change 'spontaneously' from one interpretation of the world to another, even when the visual input is constant; a good example is the Necker cube (a drawing showing all the edges of a cube, that is, both the front and back edges), in which visual perception occasionally flips to make a different edge of the cube appear nearer to the observer. We hypothesize that the switching between these multistable states is due in part to the statistical fluctuations in the network caused by the Poisson-like spike firing, which is a form of noise in the system.
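A minimal caricature of such noise-driven perceptual switching (ours; the double-well potential and all constants are arbitrary stand-ins for two attractor states, not a network model) already shows spontaneous flips under constant input:

```python
import numpy as np

# Overdamped motion in a double well V(x) = x**4/4 - x**2/2 plus noise: the two
# wells stand in for the two interpretations of an ambiguous figure.
rng = np.random.default_rng(7)
dt, sigma, steps = 1e-3, 0.6, 200000
noise = sigma * np.sqrt(dt) * rng.standard_normal(steps)
x, state, flips = 1.0, 1, 0
for n in noise:
    x += -(x**3 - x) * dt + n          # gradient descent on V, plus noise
    if (x > 0) != (state > 0):         # crossed into the other well
        flips += 1
        state = -state
    # no external input changes anywhere: the flips are purely noise-driven
print(f"{flips} perceptual 'flips' in {steps * dt:.0f} seconds of simulated time")
```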
It will be possible to test this hypothesis in integrate-and-fire simulations. (This may or may not be supplemented by adaptation effects, of the synapses or of the neurons, in integrate-and-fire networks.) The same approach should provide a model of pattern and binocular rivalry, where one image is seen at a time even though two images are presented simultaneously. When these are images of objects or faces, the system that is especially important in the selection is the inferior temporal visual cortex [3,28], for it is here that representations of whole objects are present [37], and the global interpretation of one object can compete with the global interpretation of another object. These simulation models are highly feasible, in that the effects of synaptic adaptation and neuronal adaptation in integrate-and-fire simulations have already been investigated [14,13].

Another potential application of this model of decision-making is to probabilistic decision tasks. In such tasks, the proportion of choices reflects, and indeed may be proportional to, the expected value of the different choices. This pattern of choices is known as the Matching Law [47]. An example of a probabilistic decision task in which the choices of the human participants clearly reflected the expected value of the choices is described elsewhere [40]. A network of the type described in this chapter, in which the biasing inputs λ1 and λ2 are the expected values of the different choices, will alter the proportion of the decisions it makes as a function of the relative expected values, in a way similar to that shown in Fig. 3, and so provides a model of this type of probabilistic reward-based decision-making.

Another application of this approach is to the detectability of signals. In a perceptual signal detection task, noise, of which one type is that contributed by statistical fluctuations related to spiking dynamics, can help as follows. If we had a deterministic neuron without noise and a fixed threshold above which spikes were emitted, then a signal below threshold would produce no output, and a signal above threshold would make the neuron emit a spike, and indeed a continuous spike train if the signal remained above threshold. In particular, if the signal was just below the threshold of the neuron, there would be no evidence that a signal close to threshold was present. However, if noise is present in the system (due, for example, to the afferent neurons having probabilistic spiking activity similar to that of a Poisson process), then with a signal close to threshold a spike would occasionally occur, due to the summation of the signal and the noise. If the signal was a bit weaker, the neuron might still occasionally spike, but at a lower average rate; if the signal was a bit closer to threshold, the neuron would emit spikes at a higher average rate. Thus, in this way, some evidence about the presence of a subthreshold signal can be made evident in the spike trains emitted by a neuron if there is noise in the inputs to the neuron. The noise in this case is useful, and may have an adaptive function.
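The argument can be reproduced with a one-line threshold detector (our toy, with arbitrary numbers): with zero noise all three subthreshold signals are invisible, whereas with noise the spike probability grades with signal strength:

```python
import numpy as np

rng = np.random.default_rng(3)
threshold = 1.0
for signal in (0.7, 0.8, 0.9):                   # all below threshold
    for noise_sd in (0.0, 0.2):
        x = signal + noise_sd * rng.standard_normal(100000)
        p_spike = np.mean(x > threshold)         # fraction of steps with a 'spike'
        print(f"signal={signal}, noise_sd={noise_sd}: spike probability {p_spike:.3f}")
```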
A similar effect is exploited in electronic engineering: adding a small amount of noise, within the range of the least significant bit, to the input of an analogue-to-digital converter can provide evidence about where the signal lies in relation to the least significant bit (the smallest value) that can be
discriminated. If the input is close to the upper value of the range of the least significant bit, then the noise will mean that the analogue-to-digital converter signals the higher level more often than the lower one. The source of the noise in the detection mechanism in the brain, and indeed the noise in signal detection theory [20], may be at least in part due to the statistical fluctuations caused by the probabilistic spiking of neurons in networks. We may note that decisions about whether a signal has been detected are not typically taken at the periphery, in that the distribution of false positive decisions, etc., does not necessarily reflect, on a trial-by-trial basis, variations at the periphery, but instead fluctuations in more central brain areas [7]. A good example of this is that the explicit, conscious recognition of which face was seen is set with a threshold which is higher than that at which information is present in the inferior temporal visual cortex, and at which guessing can be much better than chance [36].

It is also interesting to note that noise can be beneficial in the context of the selection decision in biological evolution. This is referred to as developmental noise, and it has been shown that it can positively influence selection and accelerate evolution (see [32]).

Another application of this type of model is to taking decisions between the implicit and explicit systems in emotional decision-making [35,37], where again the two different systems could provide the biasing inputs λ1 and λ2 to the model.

It is of interest that the noise that contributes to the stochastic dynamics of the brain through the spiking fluctuations may be behaviourally adaptive, and that the noise should not be considered only as a problem in terms of how the brain works. Consider, for example, the choice dilemma described in the medieval Duns Scotus paradox, in which a donkey situated between two equidistant food rewards might never make a decision and might starve. The problem raised is that with a deterministic system there is nothing to break the symmetry, and the system can become deadlocked. In this situation, the addition of noise can produce probabilistic choice, which is advantageous. We have shown in this chapter that stochastic neurodynamics, caused for example by the relatively random spiking times of neurons in a finite-sized cortical attractor network, can lead to probabilistic decision-making, so that in this case the stochastic noise is a positive advantage.

Probabilistic decision-making can be evolutionarily advantageous in another sense, in that sometimes taking a decision that is not optimal based on previous history may provide information that is useful, and which may contribute to learning. Consider, for example, a probabilistic decision task in which choice 1 provides rewards on 80% of the occasions, and choice 2 on 20% of the occasions. A deterministic system with knowledge of the previous reinforcement history would always make choice 1. But this is not how animals, including humans, behave. Instead (especially when the overall probabilities are low), the proportion of choices made approximately matches the outcomes that are available, in what is called the Matching Law [47,40]. By making the less favoured choice sometimes, the organism can keep obtaining evidence on whether the environment
is changing (for example, on whether the probability of a reward for choice 2 has increased), and by doing this approximately according to the Matching Law it minimizes the cost of the disadvantageous choices incurred in obtaining information about the environment. This probabilistic exploration of the environment is very important in trial-and-error learning, and indeed has been incorporated into a simple reinforcement algorithm in which noise is added to the system, and if this improves outcomes above the expected value, then changes are made to the synaptic weights in the correct direction [48,1,37]. In perceptual learning, probabilistic exploratory behavior may be part of the mechanism by which perceptual representations can be shaped to have appropriate selectivities for the behavioural categorization being performed [46,50]. Another example is food foraging, which may probabilistically reflect the outcomes [26,24], and which is a way, optimal in terms of costs and benefits, of keeping sampling and exploring the space of possible choices.

Another way in which probabilistic decision-making may be evolutionarily advantageous is in creative thought, which is influenced in part by associations between one memory, representation, or thought and another. If the system were deterministic, i.e. for the present purposes without noise, then the trajectory through a set of thoughts would be deterministic and would tend to follow the same furrow each time. However, if the recall of one memory or thought from another were influenced by the statistical noise due to the random spiking of neurons, then the trajectory through the state space would be different on different occasions, and we might be led in different directions on different occasions, facilitating creative thought. Of course, if the basins of attraction of each thought were too shallow, then the statistical noise might lead one to have very unstable thoughts that were too loosely and even bizarrely associated with each other, and indeed this is an account that we have proposed for some of the positive symptoms of schizophrenia [37,27]. Similar noise-driven processes may lead to dreams, where the content of the dream is not closely tied to the external world, as the role of sensory inputs is reduced in paradoxical (desynchronized) sleep, and the cortical networks, which are active in fast-wave sleep, may move under the influence of noise somewhat freely on from states that may have been present during the day [25,23]. In slow-wave sleep, and more generally in resting states, the activity of neurons in many cortical areas is on average low, and stochastic spiking-related noise may contribute strongly to the states that are found.

Although it is not clear whether dreams per se have beneficial effects, we can note that an area where the spiking-related noise in the decision-making process may be evolutionarily advantageous is in the generation of unpredictable behaviour, which can be advantageous in a number of situations, for example when a prey is trying to escape from a predator, and perhaps in some social and economic situations in which organisms may not wish to reveal their intentions [30,31,6]. Indeed, some animals use mixed strategies, in which, for example, in a given encounter either a dove or a hawk strategy may be selected probabilistically. Conditional strategies may be used when there is some information available, for example to
be a hawk if the opponent is small, or a dove if the opponent is large. However, even in this case the behaviour selected may be probabilistic, but weighted by the evidence. The model we propose for this is our standard biased competition model of probabilistic decision-making, where whatever information one has (e.g., that the other animal is larger) is just one of what could be a number of biasing inputs. This would provide a mechanism for conditional strategies. If there was no information available, the network would operate in the absence of biasing inputs, at chance. The approach described here thus provides a unifying account of mixed and conditional strategies (and 'reasoning' about what decision to take in the light of the evidence to produce a conditional strategy would not be needed). We note that decisive behaviour is necessary in many of these types of situation: when an aggressive encounter is resolved probabilistically, the decision taken must be firm, and this is achieved as a result of the non-linear aspects of the attractor model of decision-making, in which whichever attractor state is reached is then stable, with a high firing rate of the neurons in the attractor. We also note that such probabilistic decisions may have long-term consequences. For example, a probabilistic decision in a 'war of attrition', such as staring down a competitor, e.g. in dominance hierarchy formation, may fix the relative status of the two individual animals involved, who then tend to maintain that relationship stably for a considerable period of weeks or more [30,31,6].

An interesting comparison can be made between the probabilistic behaviour of brain dynamics caused by the spiking-related noise, and genetic mutation, which is also like a jump or escape over an energy barrier to a new part of the landscape, where the system can then settle into a local energy minimum (using recombination genetically, or attractor settling in a neural network). It is interesting that the probability of mutations can be affected by the position on the chromosome. Correspondingly, the noise in different networks in the brain is likely to be different, and indeed may be optimized in different networks by having different numbers of synapses on each neuron (with large numbers smoothing the statistical fluctuations), different average firing rates of the afferent neurons (with high average rates smoothing the statistical fluctuations), and different sparsenesses of the output representations. Indeed, to investigate the 'decision-related' switching of these systems, it may be important to use a firing rate distribution of the type found in the brain, in which few neurons have high rates, more neurons have intermediate rates, and many neurons have low rates [37,18]. It is also important to model correctly the proportion of the current that is passed through the NMDA receptors (which are voltage-dependent), as these receptors have a long time constant, which will tend to smooth out short-term statistical fluctuations caused by the stochastic firing of the neurons [53], and this will affect the statistics of the probabilistic switching of the network. This can only be done by modelling integrate-and-fire networks with the firing rates and the firing rate distributions found in a cortical area [37,18].

More generally, we can conceive of each cortical area as performing a local type of decision-making using attractor dynamics of the type described. Even memory
recall is in effect the same local 'decision-making' process. The orbitofrontal cortex, for example, is involved in decisions about which visual stimulus is currently associated with reward, as in a visual discrimination reversal task. Its computations are about stimuli, primary reinforcers, and secondary reinforcers [35]. The dorsolateral prefrontal cortex takes an executive role in decision-making in working memory tasks, in which information must be held available, in some cases across intervening stimuli [37]. The dorsal and posterior part of the dorsolateral prefrontal cortex may be involved in short-term memory-related decisions about where to move the eyes [37]. The parietal cortex is involved in decision-making when the stimuli are, for example, optic flow patterns [19]. The hippocampus is involved in decision-making when the allocentric places of stimuli must be associated with rewards or objects [37]. The somatosensory cortex and ventral premotor cortex are involved in decision-making when different vibrotactile frequencies must be compared (see above). The cingulate cortex may be involved when action–outcome decisions must be taken [37]. In each of these cases, local cortical processing that is related to the type of decision being made takes place, and not all cortical areas are involved in any one decision. The style of the decision-making-related computation in each cortical area appears to be of the form described in this chapter, in which the local recurrent collateral connections enable the decision-making process to accumulate evidence in time, falling gradually into an attractor that represents the decision made in the network. Because there is an attractor state into which the network falls, this can be described statistically as a non-linear diffusion process, with the noise for the diffusion being the stochastic spiking of the neurons, and the driving force being the biasing inputs.
References

1. Barto, A.G.: Learning by statistical cooperation of self-interested neuron-like computing elements. COINS Tech. Rep. 85-11, University of Massachusetts, Department of Computer and Information Science, Amherst (1985)
2. Battaglia, F., Treves, A.: Stable and rapid recurrent processing in realistic autoassociative memories. Neural Computation 10, 431–450 (1998)
3. Blake, R., Logothetis, N.K.: Visual competition. Nature Reviews Neuroscience 3, 13–21 (2002)
4. Brunel, N., Wang, X.J.: Effects of neuromodulation in a cortical network model of object working memory dominated by recurrent inhibition. Journal of Computational Neuroscience 11, 63–85 (2001)
5. Carpenter, R.H.S., Williams, M.: Neural computation of log likelihood in control of saccadic eye movements. Nature 377, 59–62 (1995)
6. Dawkins, M.S.: Unravelling Animal Behaviour, 2nd edn. Longman, Harlow (1995)
7. de Lafuente, V., Romo, R.: Neuronal correlates of subjective sensory experience. Nature Neuroscience 12, 1698–1703 (2005)
8. Deco, G., Rolls, E.T.: Object-based visual neglect: a computational hypothesis. European Journal of Neuroscience 16, 1994–2000 (2002)
9. Deco, G., Rolls, E.T.: Attention and working memory: a dynamical model of neuronal activity in the prefrontal cortex. European Journal of Neuroscience 18, 2374–2390 (2003)
10. Deco, G., Rolls, E.T.: A neurodynamical cortical model of visual attention and invariant object recognition. Vision Research 44, 621–644 (2004)
11. Deco, G., Rolls, E.T.: Attention, short term memory, and action selection: a unifying theory. Progress in Neurobiology 76, 236–256 (2005a)
12. Deco, G., Rolls, E.T.: Neurodynamics of biased competition and cooperation for attention: a model with spiking neurons. Journal of Neurophysiology 94, 295–313 (2005b)
13. Deco, G., Rolls, E.T.: Sequential memory: a putative neural and synaptic dynamical mechanism. Journal of Cognitive Neuroscience 17, 294–307 (2005c)
14. Deco, G., Rolls, E.T.: Synaptic and spiking dynamics underlying reward reversal in the orbitofrontal cortex. Cerebral Cortex 15, 15–30 (2005d)
15. Deco, G., Rolls, E.T.: A neurophysiological model of decision-making and Weber's law. European Journal of Neuroscience 24, 901–916 (2006)
16. Deco, G., Rolls, E.T., Horwitz, B.: 'What' and 'where' in visual working memory: a computational neurodynamical perspective for integrating fMRI and single-neuron data. Journal of Cognitive Neuroscience 16, 683–701 (2004)
17. Deco, G., Scarano, L., Soto-Faraco, S.: Weber's law in decision-making: integrating behavioral data in humans with a neurophysiological model. Journal of Neuroscience 27, 11192–11200 (2007)
18. Franco, L., Rolls, E.T., Aggelopoulos, N.C., Jerez, J.M.: Neuronal selectivity, population sparseness, and ergodicity in the inferior temporal visual cortex. Biological Cybernetics 96, 547–560 (2007)
19. Glimcher, P.: The neurobiology of visual-saccadic decision making. Annual Review of Neuroscience 26, 133–179 (2003)
20. Green, D., Swets, J.: Signal Detection Theory and Psychophysics. Wiley, New York (1966)
21. Hernandez, A., Zainos, A., Romo, R.: Temporal evolution of a decision-making process in medial premotor cortex. Neuron 33, 959–972 (2002)
22. Hopfield, J.J.: Neural networks and physical systems with emergent collective computational abilities. Proceedings of the National Academy of Sciences USA 79, 2554–2558 (1982)
23. Horne, J.: Sleepfaring: A Journey through the Science of Sleep. Oxford University Press, Oxford (2006)
24. Kacelnik, A., Brito e Abreu, F.: Risky choice and Weber's Law. Journal of Theoretical Biology 194, 289–298 (1998)
25. Kandel, E.R., Schwartz, J.H., Jessel, T.H. (eds.): Principles of Neural Science, 4th edn. Elsevier, Amsterdam (2000)
26. Krebs, J.R., Davies, N.B.: Behavioural Ecology, 3rd edn. Blackwell, Oxford (1991)
27. Loh, M., Rolls, E.T., Deco, G.: A dynamical systems hypothesis of schizophrenia. PLoS Computational Biology (2007)
28. Maier, A., Logothetis, N.K., Leopold, D.A.: Global competition dictates local suppression in pattern rivalry. Journal of Vision 5, 668–677 (2005)
29. Mattia, M., Del Giudice, P.: Population dynamics of interacting spiking neurons. Physical Review E 66, 051917 (2002)
30. Maynard Smith, J.: Evolution and the Theory of Games. Cambridge University Press, Cambridge (1982)
31. Maynard Smith, J.: Game theory and the evolution of behaviour. Behavioral and Brain Sciences 7, 95–125 (1984)
32. Paenke, I., Sendhoff, B., Kawecki, T.: Influence of plasticity and learning on evolution under directional selection. American Naturalist 170(2), 1–12 (2007)
33. Panzeri, S., Rolls, E.T., Battaglia, F., Lavis, R.: Speed of feedforward and recurrent processing in multilayer networks of integrate-and-fire neurons. Network: Computation in Neural Systems 12, 423–440 (2001)
34. Ratcliff, R., Zandt, T.V., McKoon, G.: Connectionist and diffusion models of reaction time. Psychological Review 106, 261–300 (1999)
35. Rolls, E.T.: Emotion Explained. Oxford University Press, Oxford (2005)
36. Rolls, E.T.: A computational neuroscience approach to consciousness. Neural Networks (2008a) (in press)
37. Rolls, E.T.: Memory, Attention, and Decision-Making: A Unifying Computational Neuroscience Approach. Oxford University Press, Oxford (2008b)
38. Rolls, E.T., Deco, G.: Computational Neuroscience of Vision. Oxford University Press, Oxford (2002)
39. Rolls, E.T., Treves, A.: Neural Networks and Brain Function. Oxford University Press, Oxford (1998)
40. Rolls, E.T., McCabe, C., Redoute, J.: Expected value, reward outcome, and temporal difference error representations in a probabilistic decision task. Cerebral Cortex (2007), doi:10.1093/cercor/bhm097
41. Romo, R., Salinas, E.: Touch and go: decision-making mechanisms in somatosensation. Annual Review of Neuroscience 24, 107–137 (2001)
42. Romo, R., Salinas, E.: Flutter discrimination: neural codes, perception, memory and decision making. Nature Reviews Neuroscience 4, 203–218 (2003)
43. Romo, R., Hernandez, A., Zainos, A., Lemus, L., Brody, C.D.: Neural correlates of decision-making in secondary somatosensory cortex. Nature Neuroscience 5, 1217–1225 (2002)
44. Romo, R., Hernandez, A., Zainos, A., Salinas, E.: Correlated neuronal discharges that increase coding efficiency during perceptual discrimination. Neuron 38, 649–657 (2003)
45. Romo, R., Hernandez, A., Zainos, A.: Neuronal correlates of a perceptual decision in ventral premotor cortex. Neuron 41, 165–173 (2004)
46. Sigala, N., Logothetis, N.K.: Visual categorisation shapes feature selectivity in the primate temporal cortex. Nature 415, 318–320 (2002)
47. Sugrue, L.P., Corrado, G.S., Newsome, W.T.: Choosing the greater of two goods: neural currencies for valuation and decision making. Nature Reviews Neuroscience 6, 363–375 (2005)
48. Sutton, R.S., Barto, A.G.: Towards a modern theory of adaptive networks: expectation and prediction. Psychological Review 88, 135–170 (1981)
49. Szabo, M., Almeida, R., Deco, G., Stetter, M.: Cooperation and biased competition model can explain attentional filtering in the prefrontal cortex. European Journal of Neuroscience 19, 1969–1977 (2004)
50. Szabo, M., Deco, G., Fusi, S., Del Giudice, P., Mattia, M., Stetter, M.: Learning to attend: modeling the shaping of selectivity in infero-temporal cortex in a categorization task. Biological Cybernetics 94, 351–365 (2006)
51. Treves, A.: Mean-field analysis of neuronal spike dynamics. Network 4, 259–284 (1993)
52. Usher, M., McClelland, J.: On the time course of perceptual choice: the leaky competing accumulator model. Psychological Review 108, 550–592 (2001)
53. Wang, X.J.: Synaptic basis of cortical persistent activity: the importance of NMDA receptors to working memory. Journal of Neuroscience 19, 9587–9603 (1999)
54. Wang, X.J.: Probabilistic decision making by slow reverberation in cortical circuits. Neuron 36, 955–968 (2002)
55. Welford, A.T. (ed.): Reaction Times. Academic Press, London (1980)
Formal Tools for the Analysis of Brain-Like Structures and Dynamics

Jürgen Jost

Max Planck Institute for Mathematics in the Sciences, Inselstr. 22, 04103 Leipzig, Germany
1 Introduction
Brains and artificial brainlike structures are paradigms of complex systems, and as such, they require a wide range of mathematical tools for their analysis. One can analyze their static structure as a network abstracted from neuroanatomical results on the arrangement of neurons and the synaptic connections between them. Such structures could underlie, for instance, feature binding, when neuronal groups coding for specific properties of objects are linked to neurons that represent the spatial location of the object in question. One can then investigate what types of dynamics such abstracted networks can support, and what dynamical phenomena can readily occur. An example is synchronization. In fact, flexible and rapid synchronization between specific groups of neurons has been suggested as a dynamical mechanism for feature binding in brains [54]. In order to identify non-trivial dynamical patterns with complex structures, one needs corresponding complexity measures, as developed in [51,52,5]. Ultimately, however, any such dynamical features derive their meaning from their role in processing information. Neurons filter and select information, encode it by transforming it into an internal representation, and possibly also decode it, for instance by deriving specific motor commands as a reaction to certain sensory information.

The program sketched in this contribution will eventually consist of three layers of analysis:

1. A brain possesses a structure that supports dynamics, investigated by network theory.
2. It processes information, leading to neurodynamics.
3. Its function is cognition, which then needs to be understood in its own terms.
2 Structural Analysis of Networks
Many empirical networks share certain structural properties that lead to functional properties like robustness against local failures, speed of signal transmission, etc. Nevertheless, one should suspect that a neurobiological network is structurally different from other networks, because it fulfills a specific function different from those of networks in other domains. Our research paradigm therefore is to distinguish universal features of biological (gene regulation, protein-protein interaction, cell-cell interaction, trophic chains, ...), technical (internet,
flight connections, power grids, ...) and social networks (friendships, citations, www) from properties that distinguish specific classes of networks. That is, we look for properties that are shared by networks in a given class, but not by other networks. The method that we have developed for this purpose in [6,7,8] is spectral analysis. For that method, we represent a neuronal network as a graph, with its nodes corresponding to the neurons and its edges to the synaptic connections. For simplicity, we suppress the direction of synaptic connections and consider only the underlying undirected graph. When the strength of the connection between i and j is denoted by wij (assumed to be symmetric, that is, wij = wji), we consider the diffusion operator L, defined by

\[
L x_i(t) = \frac{1}{\sum_k w_{ik}} \sum_j w_{ij} \, \bigl( x_j(t) - x_i(t) \bigr). \tag{1}
\]
Here, xi(t) is the state value of the node i at time t, assumed to be a real number. In order to avoid singular cases, we also assume that all wij ≥ 0 and that ∑k wik > 0 for every i. As investigated in [6,7,8],¹ the spectral density of this operator then yields characteristic features that allow us to distinguish networks from a specific domain from those of other domains. Also, in contrast to many formal models like random or scale-free graphs, (neuro-)biological systems seem to have their own natural scale. Even though the size of brains varies considerably between different species, the functional capabilities are not some monotonic function of brain size, but rather emerge from the cooperation of structures that have some intrinsic scale.
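As a concrete illustration of the method (our sketch; the random-graph parameters are arbitrary, and this is not the authors' analysis pipeline), the spectrum associated with the operator (1) can be computed for any weighted graph, and spectral plots of the kind used in [6,7,8] are essentially histograms of these eigenvalues:

```python
import numpy as np

# Spectrum of the normalized graph Laplacian I - D^{-1} W, which is the negative
# of the diffusion operator in (1); W is a symmetric weight matrix, D its row sums.
rng = np.random.default_rng(4)
n = 200
W = rng.random((n, n)) * (rng.random((n, n)) < 0.1)   # sparse random weights
W = np.triu(W, 1); W = W + W.T                        # symmetric, zero diagonal
deg = W.sum(axis=1)
assert np.all(deg > 0), "the normalization in (1) requires sum_k w_ik > 0"
Lap = np.eye(n) - W / deg[:, None]                    # I - D^{-1} W
ev = np.sort(np.linalg.eigvals(Lap).real)             # similar to a symmetric matrix,
print(ev[:3], ev[-3:])                                # so the eigenvalues are real
```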
3 Dynamical States
The neuronal structure, however, is only the static substrate for the neural dynamics. More precisely, according to the paradigm that learning in neuronal networks is accomplished by synaptic weight changes, the structure is static only at a time scale that is short enough that such learning cannot show significant effects. Here, we first look at that short time scale, where we can separate the dynamics from their static substrate. The general question for understanding cognitive systems at this temporal scale then is which dynamic patterns correspond to which cognitive processes. This is a difficult question for which at this stage no general answer is available, but as a first step, we can look at those dynamical patterns that can be produced and supported by neuronal networks. Our interest here will be not so much in the response to some external input, but rather in intrinsically generated and sustained dynamical behavior.
¹ In fact, in those references, only the case of unweighted graphs, that is, where the wij can take only the values 0 and 1, is considered, but the formal analysis readily generalizes to the situation described here. Frank Bauer, Anirban Banerjee and I are presently investigating the analysis of weighted and directed networks in detail.
At the level of abstraction of the present article, we formulate this as a general question about coupled dynamical systems. The question then is about the difference in the dynamical capabilities of a collection of independent dynamical elements, perhaps modelling the behavior of individual neurons or small local networks, and a system consisting of the same elements, but now coupled according to some underlying connection scheme. That is, the state changes of the elements no longer depend only on their own states, but also on the states of those other elements with which they are connected.

Let us first address a general principle. In order that the system be capable of richer dynamical behavior than asymptotic convergence to static states, the state changes must not be monotonic functions of the present states. Otherwise, the states would just grow or decrease towards their limits. Such a non-monotonic update rule could already operate in the individual elements, or it could be introduced through the coupling structure. In the latter case, the coupling must also include inhibitory connections, so that increasing state values of certain elements can decrease the state values of other elements. This is the principle in many neural network models. There, the individual elements follow an update rule

\[
x(t+1) = f(x(t)) \tag{2}
\]

with a bounded, monotonically increasing function f, for example a sigmoid

\[
f(s) = \frac{1}{1 + e^{-\kappa s}} \tag{3}
\]

with some positive parameter κ. (Here, we only consider dynamics with discrete time t ∈ ℕ, but the treatment of systems with continuous time can proceed analogously.) The coupled dynamics then is of the form

\[
x_i(t+1) = \sum_j w_{ij} \, f(x_j(t)), \tag{4}
\]

where i, j now are indices for the elements in the network and wij is the strength of the connection from j to i. As explained, wij can assume values of either sign (and can be put to 0 when there is no direct connection from j to i). Negative values then correspond to inhibitory connections, positive values to excitatory ones. There could also be self-coupling, that is, non-zero values of wii. In fact, the uncoupled dynamics (2) is then included as the special case wii = 1 and wij = 0 for i ≠ j. In principle, also the update function f in (4) could depend on the element i, but to keep our discussion simple, we do not introduce that generality here. An alternative consists in assigning the non-monotonic behavior to the individual elements, that is, in considering (2) with a non-monotonic update function, like the logistic map

\[
f(x) = 4x(1-x) \quad (0 \le x \le 1) \tag{5}
\]

or the tent map

\[
f(x) = \begin{cases} 2x & \text{for } 0 \le x \le 1/2, \\ 2 - 2x & \text{for } 1/2 \le x \le 1. \end{cases} \tag{6}
\]
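The sensitivity to initial conditions discussed next is easy to demonstrate for the logistic map (5). In this little check (ours, with an arbitrary starting point), two trajectories that begin 10⁻¹⁰ apart differ by order one after a few dozen iterations:

```python
# Sensitivity to initial conditions of the logistic map (5): two trajectories
# starting 1e-10 apart diverge to order one within a few dozen iterations.
x, y = 0.3, 0.3 + 1e-10
for t in range(1, 61):
    x, y = 4 * x * (1 - x), 4 * y * (1 - y)
    if t % 10 == 0:
        print(f"t={t:2d}: |x - y| = {abs(x - y):.3e}")
```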
In such a situation, the coupling coefficients wij could all be non-negative without making the resulting dynamics trivial. In fact, a non-monotonic coupling function like (5) or (6) leads to chaotic behavior of the isolated dynamics (2). That is, small fluctuations in the initial values x(0) can get amplified over time, so as to make long-term predictions impossible without full precision of the initial state. (Clearly, this is even a problem for computer simulations of such dynamics, as those can only be carried out with bounded precision, but this is not our concern here.) We shall here discuss the case of a non-monotonic coupling function of the type (5) or (6) in place of a monotonic one of the type (2), as in many simplified neuron models. As discussed, we can then generate rich dynamic patterns with purely positive interactions. In terms of neural network models, the elements would then not correspond to individual neurons, but rather to local groups of neurons, with the combined effects of couplings between neurons in different groups being positive, even though some individual couplings may be negative, that is, have inhibitory effects. The principles of the analysis presented below, however, are also applicable to the other case. In either case, the essential point is that the coupled dynamics (4) can produce much richer or more interesting phenomena than the isolated dynamics (2), at least for appropriate parameter constellations.

A phenomenon that is perhaps not so dynamically rich, but at least rather surprising, is the synchronization of chaotic behavior. That is, the dynamics (4) with an update function of the type (5) or (6) can become synchronized for suitable values of the wij. In any case, synchronization is perhaps the simplest type of coordination arising at the collective level of a system of coupled dynamical elements. Synchronization has also been proposed in neurobiology as an important mechanism, for the binding of features into a coherent object, see [54]. Therefore, we shall now briefly discuss the underlying dynamical mechanism.

Synchronization in non-linear coupled dynamical systems is a phenomenon that apparently was first observed by the Dutch physicist Christiaan Huygens in the 17th century. Since then, it has been studied in much detail, in particular by physicists, and we refer to the monograph [48] for a modern account. The more special phenomenon of the synchronization of chaotic oscillators alluded to above was first studied by Kaneko [41]. For a systematic investigation of the dependence of this phenomenon on the underlying network topology, we refer to [39,49,18,35]. We now present the essential aspects. It is convenient to slightly rewrite the dynamics (4) as

\[
x_i(t+1) = (1-\varepsilon) f(x_i(t)) + \varepsilon \frac{1}{\sum_k w_{ik}} \sum_j w_{ij} \, f(x_j(t)), \tag{7}
\]

introducing some normalization and a parameter ε. Since the first term on the right-hand side reflects the self-coupling, we then put wii = 0 in the second term. (We consider here the case of non-negative connection weights wij, and we assume that ∑k wik > 0 for every i, as otherwise there would be no coupling for such an i.) The crucial assumption needed for the technical results below, however, is the symmetry assumption wij = wji for all i, j.
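This synchronized regime is easy to observe numerically. The sketch below (ours; the graph model, size, and ε values are arbitrary illustrative choices, not taken from the chapter) iterates the dynamics (7) with the logistic map (5) on a random symmetric graph and prints the spread max_i x_i − min_i x_i, which collapses to zero when the coupling strength is suitable:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 50
W = (rng.random((n, n)) < 0.5).astype(float)      # random symmetric 0/1 weights
W = np.triu(W, 1); W = W + W.T                    # w_ij = w_ji, w_ii = 0
deg = W.sum(axis=1)                               # sum_k w_ik (> 0 here)

def f(x):
    return 4.0 * x * (1.0 - x)                    # logistic map (5)

for eps in (0.1, 0.5, 0.9):
    x = rng.random(n)
    for _ in range(2000):                         # iterate the coupled dynamics (7)
        fx = f(x)
        x = (1.0 - eps) * fx + eps * (W @ fx) / deg
    print(f"eps={eps}: spread max-min = {x.max() - x.min():.2e}")
```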
In (7), ε = 0 then is the uncoupled case, and this parameter therefore allows us to tune the coupling strength in the network globally. The dynamics (7) then becomes (globally) synchronized when

\[
x_i(t) = x_j(t) \quad \text{for all } i, j \text{ and all } t \ge t_0, \text{ for some } t_0 \in \mathbb{N}. \tag{8}
\]

In fact, theoretical results usually only predict asymptotic synchronization, that is,

\[
\lim_{t \to \infty} \bigl( x_i(t) - x_j(t) \bigr) = 0 \quad \text{for all } i, j, \tag{9}
\]

and perhaps only when the initial values xi(t) are sufficiently close to each other; but in computer simulations, one typically finds (8) when the parameter constellations are appropriate, as now to be discussed. The mathematical analysis derives a stability condition for the synchronized solution (8), in the sense that after sufficiently small perturbations the dynamics will return to it asymptotically, as in (9) (for the stability concept employed here, see also [44]). The condition derived and discussed in [39] is

\[
1 - e^{-\mu} < \varepsilon \lambda_k < 1 + e^{-\mu}
\]

for the nonzero eigenvalues λk of the graph Laplacian underlying (7), where μ is the Lyapunov exponent of the map f.

2. Γ(s) > 0 whenever the presynaptic spike shortly precedes the postsynaptic one, i.e., when s is negative and of small absolute value (long term potentiation, LTP).
3. Similarly, Γ(s) < 0 whenever the presynaptic spike occurs shortly after the postsynaptic one, i.e., when s is positive and of small absolute value (long term depression, LTD).
4. For mathematical reasons that will become apparent below, Γ(s) has to be balanced:

\[
\int_{-\infty}^{\infty} \Gamma(s) \, ds = 0. \tag{21}
\]
After some transformation of the time axis, we may then assume that

\[
\Gamma(-s) = -\Gamma(s), \tag{22}
\]

which we shall henceforth do. While the balancing condition may not be strictly realized physiologically, its conceptual foundation is that the effects that lead to increasing synaptic weights should balance, at some level, those that decrease weights, in order to prevent unlimited growth or complete decay.
The learning rule then averages the positive contributions, that is, those where the presynaptic spike shortly precedes the postsynaptic one, minus the negative ones, where the spike order is reversed, over some time interval. In formal terms, it convolves a spike-triggered average of the input–output correlation with the learning window function Γ, to obtain a differential equation for the synaptic weight wij from the presynaptic neuron j to the postsynaptic neuron i:

\[
\dot w_{ij} = \epsilon \int_{-\infty}^{\infty} \frac{1}{T_l} \int_{t-T_l}^{t} \sum_{\mu} \delta(\tau + s - t_{j,\mu}) \, u(\tau) \, d\tau \; \Gamma(s) \, ds, \tag{23}
\]

where the sum extends over all spikes between t and t − Tl, and Tl is large compared to the typical interspike interval. ε is assumed small, in order to achieve a suitable average. (23) is

\[
\dot w_{ij} = \epsilon \frac{1}{T_l} \sum_{\mu} \int_{t_{j,\mu}-t}^{t_{j,\mu}-t+T_l} u(t_{j,\mu} - s) \, \Gamma(s) \, ds. \tag{24}
\]

By some approximation, we may extend the integration boundaries for s to ±∞ and then obtain

\[
\dot w_{ij} = \epsilon \frac{1}{T_l} \sum_{\mu} \int_{-\infty}^{\infty} u(t_{j,\mu} - s) \, \Gamma(s) \, ds
= \epsilon \frac{1}{T_l} \sum_{\mu} \int_{0}^{\infty} \bigl( u(t_{j,\mu} + s) - u(t_{j,\mu} - s) \bigr) \bigl( -\Gamma(s) \bigr) \, ds \tag{25}
\]

because of (22). This expression is seen to receive a positive contribution if u is larger after the incoming spike tj,μ than before. We can also rewrite this as

\[
\dot w_{ij} = \epsilon \frac{1}{T_l} \sum_{\mu} \int_{0}^{\infty} \int_{-s}^{s} \dot u(t_{j,\mu} + \tau) \, d\tau \; \bigl( -\Gamma(s) \bigr) \, ds, \tag{26}
\]

which receives a positive contribution if u is increasing between tj,μ − s and tj,μ + s. Thus, we have a mathematically clear rule that shows the desired and experimentally observed features of STDP. Equipped with this rule, we can then also investigate how the input–output correlation changes as the result of learning.
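As a quick numerical companion (entirely our construction: the exponential form of Γ, the random-walk stand-in for the postsynaptic activity u, and all constants are illustrative assumptions), one can discretize (25) and evaluate the resulting weight change for a batch of presynaptic spikes; the result is positive exactly when u tends to increase around the spike times:

```python
import numpy as np

rng = np.random.default_rng(6)
dt, tau = 1.0, 10.0                                   # ms; window time constant
s = np.arange(-100.0, 100.0, dt)                      # lags for the learning window
Gamma = -np.sign(s) * np.exp(-np.abs(s) / tau)        # satisfies (21) and (22)

T = 10000
t = np.arange(0.0, T, dt)
u = np.cumsum(rng.standard_normal(t.size)) * 0.1      # toy postsynaptic activity u(t)
spikes = rng.choice(t[200:-200], size=50, replace=False)  # presynaptic spike times

# Discretization of (25): w_dot proportional to sum_mu int u(t_mu - s) Gamma(s) ds
w_dot = sum(np.interp(t_mu - s, t, u) @ (Gamma * dt) for t_mu in spikes) / T
print(f"weight change per unit time (arbitrary units): {w_dot:.4f}")
```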
7 Perspectives
In this contribution, we have sketched a general framework for the structural, dynamical, and information theoretic analysis of brains and brainlike systems. The interplay between the dynamics on neuronal networks and the processing of information here is a fundamental question. Not only do we want to understand the dynamics of information processing, see Section 5, but information theory in turn can also be used for analyzing the dynamics itself, see Section 4.
Brains are assemblies of neurons that are connected in specific patterns and according to specific rules. These neuronal systems support dynamical processes on several different scales, including that of the on-line processing of sensory information, as well as the slower one of learning correlations in input patterns and internal activities by synaptic modifications. These dynamics, in turn, represent the processing of information. On one hand, relevant information is extracted from sensory inputs. On the other hand, inputs and specific features within these inputs are selected according to internal criteria that might be based on hypotheses about the stimulus set and predictions of its temporal changes. This recurrent interplay and the interaction of different temporal scales lead to the selection of that information that is relevant and meaningful for the system (for a conceptual discussion, see [32]).

In fact, brains are parts of animals, and these animals actively explore their environment instead of just gathering sensory information. By moving around in the environment, or even actively shaping it, they also select what stimuli they in turn receive from that environment. As a principle guiding that exploration, Der [24,25,29] has proposed homeokinesis. Those stimuli then do not determine an output, but rather affect the internal dynamics of the cognitive system. Dynamical systems theory, as indicated in Section 3, provides some abstract tools for understanding, classifying, and analyzing qualitative types of behavior, of neuronal as well as of other systems. These dynamical patterns operate on a substrate that itself changes on a slower time scale, as the result of learning, individual development, and evolution. In Section 2, we have indicated some formal tools for a static analysis of such a substrate. In Section 6, we have discussed a rule for local modifications on the basis of temporal activity relationships. These temporal activities, however, go beyond simple input–output relations; they rather represent the internal activity of the system as affected by inputs. From that point of view, the input might be considered as a perturbation of the otherwise self-contained internal dynamics, see [15] for a general discussion. These internal dynamics, however, in turn reflect a long history of past inputs. The dynamics represent, in a sense still to be clarified, the internal categories, hypotheses about the input, and the memory of the system. The approach described at the end of Section 5 constitutes only a first step in this direction.

Acknowledgements. Part of the work described here has been obtained in collaborations with Fatihcan Atay, Nihat Ay, Anirban Banerjee, Frank Bauer, Nils Bertschinger, Wenlian Lu, and Eckehard Olbrich. Some of this work has been supported by Stiftung Volkswagenwerk. Discussions with Olaf Breidbach have profoundly shaped some of the views presented in this paper.
References

1. Abbott, L., Nelson, S.: Synaptic plasticity: taming the beast. Nature Neuroscience (suppl. 3), 1178–1183 (2000)
2. Atay, F., Jost, J.: On the emergence of complex systems on the basis of the coordination of complex behaviors of their elements. Complexity 10, 17–22 (2004)
3. Atay, F., Jalan, S., Jost, J.: Randomness, chaos, and structure. Complexity (to appear)
4. Atay, F., Jost, J., Wende, A.: Delays, connection topology, and synchronization of coupled chaotic maps. Phys. Rev. Lett. 92(14), 144101 (2004)
5. Ay, N., Olbrich, E., Bertschinger, N., Jost, J.: A unifying framework for complexity measures of finite systems. In: Proc. ECCS 2006 (2006)
6. Banerjee, A., Jost, J.: Spectral plots and the representation and interpretation of biological data. Theory Biosci. 126, 15–21 (2007)
7. Banerjee, A., Jost, J.: Graph spectra as a systematic tool in computational biology. Discrete Appl. Math. (in press)
8. Banerjee, A., Jost, J.: Spectral plot properties: towards a qualitative classification of networks. Networks Het. Med. 3, 395–411 (2008)
9. Bauer, F., Atay, F., Jost, J.: Emergence and suppression of synchronized chaotic behavior in coupled map lattices (submitted)
10. Bertschinger, N., Olbrich, E., Ay, N., Jost, J.: Autonomy: an information theoretic perspective. BioSystems 91, 331–345 (2008)
11. Bertschinger, N., Olbrich, E., Ay, N., Jost, J.: Information and closure in systems theory. In: Artmann, S., Dittrich, P. (eds.) Proc. 7th German Workshop on Artificial Life, pp. 9–21. IOS Press BV, Amsterdam
12. Bi, G.-Q., Poo, M.-M.: Synaptic modification by correlated activity: Hebb's postulate revisited. Annu. Rev. Neurosci. 24, 139–166 (2001)
13. Bi, G.-Q., Poo, M.-M.: Distributed synaptic modification in neural networks induced by patterned stimulation. Nature 401, 792–796 (1999)
14. Bialek, W., Nemenman, I., Tishby, N.: Predictability, complexity, and learning. Neural Computation 13, 2409–2463 (2001)
15. Breidbach, O., Holthausen, K., Jost, J.: Interne Repräsentationen – Über die "Welt"generierungseigenschaften des Nervengewebes. Prolegomena zu einer Neurosemantik. In: Ziemke, A., Breidbach, O. (eds.) Repräsentationismus – Was sonst?, Vieweg, Braunschweig/Wiesbaden (1996)
16. Breidbach, O., Jost, J.: On the gestalt concept. Theory Biosci. 125, 19–36 (2006)
17. Castiglione, P., Falcioni, M., Lesne, A., Vulpiani, A.: Chaos and Coarse Graining in Statistical Mechanics. Cambr. Univ. Press, Cambridge (2008)
18. Chen, Y.H., Rangarajan, G., Ding, M.Z.: General stability analysis of synchronized dynamics in coupled systems. Phys. Rev. E 67, 26209–26212 (2003)
19. Cover, T., Thomas, J.: Elements of Information Theory. Wiley, Chichester (1991)
20. Crutchfield, J.P., Young, K.: Inferring statistical complexity. Phys. Rev. Lett. 63, 105–108 (1989)
21. Crutchfield, J.P., Feldman, D.P.: Regularities unseen, randomness observed: levels of entropy convergence. Chaos 13(1), 25–54 (2003)
22. Dayan, P., Abbott, L.: Theoretical Neuroscience. MIT Press, Cambridge (2001)
23. Deneve, S.: Bayesian spiking neurons I: inference. Neural Comp. 20, 91–117 (2008)
24. Der, R.: Self-organized acquisition of situated behavior. Theory Biosci. 120, 179–187 (2001)
25. Der, R.: Homeokinesis and the moderation of complexity in neural systems (to appear)
26. Eckhorn, R., et al.: Coherent oscillations: a mechanism of feature linking in the visual cortex? Multiple electrode and correlation analyses in the cat. Biol. Cybern. 60, 121–130 (1988)
27. Grassberger, P.: Toward a quantitative theory of self-generated complexity. Int. J. Theor. Phys. 25(9), 907–938 (1986)
28. Gray, C., König, P., Engel, A., Singer, W.: Oscillatory responses in cat visual cortex exhibit inter-columnar synchronization which reflects global stimulus properties. Nature 338, 334–337 (1989)
29. Hesse, F., Der, R., Herrmann, J.: Reflexes from self-organizing control in autonomous robots. In: 7th Intern. Conf. on Epigenetic Robotics: Modelling Cognitive Development in Robotic Systems. Cognitive Studies, vol. 134, pp. 37–44 (2007)
30. Jalan, S., Amritkar, R.: Self-organized and driven phase synchronization in coupled maps. Phys. Rev. Lett. 90, 014101 (2003)
31. Jalan, S., Atay, F., Jost, J.: Detecting global properties of coupled dynamics using local symbolic dynamics. Chaos 16, 033124 (2006)
32. Jost, J.: External and internal complexity of complex adaptive systems. Theory Biosci. 123, 69–88 (2004)
33. Jost, J.: Dynamical Systems. Springer, Heidelberg (2005)
34. Jost, J.: Temporal correlation based learning in neuron models. Theory Biosci. 125, 37–53 (2006)
35. Jost, J.: Dynamical networks. In: Feng, J.F., Jost, J., Qian, M.P. (eds.) Networks: From Biology to Theory, pp. 35–62. Springer, Heidelberg (2007)
36. Jost, J.: Neural networks: concepts, mathematical tools, and questions. Monograph (in preparation)
37. Jost, J., Bertschinger, N., Olbrich, E.: Emergence (submitted)
38. Jost, J., Bertschinger, N., Olbrich, E., Ay, N., Frankel, S.: An information theoretic approach to system differentiation on the basis of statistical dependencies between subsystems. Physica A 10303 (2007)
39. Jost, J., Joy, M.P.: Spectral properties and synchronization in coupled map lattices. Phys. Rev. E 65, 16201–16209 (2001)
40. Kahle, T., Olbrich, E., Ay, N., Jost, J.: Testing complexity measures on symbolic dynamics of coupled tent maps (submitted)
41. Kaneko, K.: Period-doubling of kink-antikink patterns, quasi-periodicity in antiferro-like structures and spatial-intermittency in coupled map lattices – toward a prelude to a field theory of chaos. Prog. Theor. Phys. 72, 480–486 (1984)
42. Kempter, R., Gerstner, W., van Hemmen, J.L., Wagner, H.: Extracting oscillations: neuronal coincidence detection with noisy periodic spike input. Neural Comput. 10, 1987–2017 (1998)
43. Kempter, R., Gerstner, W., van Hemmen, J.L.: Hebbian learning and spiking neurons. Phys. Rev. E 59, 4498–4514 (1999)
44. Lu, W.L., Atay, F., Jost, J.: Synchronization of discrete-time dynamical networks with time-varying couplings. SIAM J. Math. Anal. 39, 1231–1259 (2007)
45. Lu, W.L., Atay, F., Jost, J.: Chaos synchronization in networks of coupled maps with time-varying topologies. Eur. Phys. J. B 63, 399–406 (2008)
46. Lu, W.L., Atay, F., Jost, J.: Consensus and synchronization in discrete-time networks of multi-agents with Markovian jump topologies and delays (submitted)
47. Markram, H., Lübke, J., Frotscher, M., Sakmann, B.: Regulation of synaptic efficacy by coincidence of synaptic APs and EPSPs. Science 275, 213–215 (1997)
48. Pikovsky, A., Rosenblum, M., Kurths, J.: Synchronization. Cambridge University Press, Cambridge (2001)
49. Rangarajan, G., Ding, M.Z.: Stability of synchronized chaos in coupled dynamical systems. Phys. Lett. A 296, 204–212 (2002)
50. Rieke, F., Warland, D., de Ruyter van Steveninck, R., Bialek, W.: Spikes: Exploring the Neural Code. MIT Press, Cambridge (1997)
Formal Tools for the Analysis of Brain-Like Structures and Dynamics
65
51. Tononi, G., Sporns, O., Edelman, G.M.: A measure for brain complexity: Relating functional segregation and integration in the nervous system. Proc. Natl. Acad. Sci. USA 91, 5033–5037 (1994) 52. Tononi, G., Sporns, O., Edelman, G.M.: A complexity measure of selective matching of signals by the brain. PNAS 93, 3257–3267 (1996) 53. van Hemmen, J.L.: Theory of synaptic plasticity. In: Moss, F., Gielen, S. (eds.) Handbook of biological physics. Neuro-informatics, neural modelling, vol. 4, pp. 771–823. Elsevier, Amsterdam (2001) 54. von der Malsburg, C., Schneider, W.: A neural cocktail-party processor. Biol. Cybern. 54, 29–40 (1986) 55. Wu, C.W.: Synchronization in networks of nonlinear dynamical systems coupled via a directed graph. Nonlinearity 18, 1057–1064 (2005)
Morphological Computation – Connecting Brain, Body, and Environment

Rolf Pfeifer¹ and Gabriel Gómez²

¹ Artificial Intelligence Laboratory, Department of Informatics, University of Zurich, Andreasstrasse 15, CH-8050 Zurich, Switzerland
[email protected], http://www.ifi.unizh.ch/ailab/people/pfeifer/

² Humanoid Robotics Group, CSAIL, MIT, 32 Vassar St., Room 32-380, Cambridge, MA 02139, USA
[email protected], http://people.csail.mit.edu/g_gomez/
Abstract. Traditionally, in robotics, artificial intelligence, and neuroscience, there has been a focus on the study of the control or the neural system itself. Recently there has been an increasing interest in the notion of embodiment not only in robotics and artificial intelligence, but also in neuroscience, psychology, and philosophy. In this paper (parts of the ideas presented here have appeared in previous publications; they will be referenced throughout the text), we introduce the notion of morphological computation and demonstrate how it can be exploited on the one hand for designing intelligent, adaptive robotic systems, and on the other for understanding natural systems. While embodiment has often been used in its trivial meaning, i.e. “intelligence requires a body”, the concept has deeper and more important implications, concerned with the relation between physical and information (neural, control) processes. Morphological computation is about connecting body, brain, and environment. A number of case studies are presented to illustrate the concept. We conclude with some speculations about potential lessons for neuroscience and robotics, in particular for building brain-like intelligence, and we present a theoretical scheme that can be used to embed the diverse case studies.

Keywords: Embodiment, sensor morphology, material properties, information self-structuring, morphological change, dynamics, system-environment interaction.
1 Introduction
The main goal of artificial intelligence and biorobotics is to work out the principles underlying intelligent behavior. These principles will enable us on the one hand to understand natural forms of intelligence (humans, animals), and on the other to design and build intelligent systems (computer programs, robots, other artifacts) for research and application purposes. The overall “philosophy” of our research program is centered around the notion of “embodiment”, the idea that intelligence always requires a body, a complete organism that interacts with the real world. The implications of this concept can hardly be overestimated; they are summarized in [1,2]. Traditionally, in robotics, artificial intelligence, and neuroscience, there has been a focus on the study of the control or the neural system, the brain, itself: in robotics one typically starts with a particular given morphology (the hardware), and the robot is then programmed to do certain tasks. In other words, there is a clear separation between hardware and software. In computational neuroscience, the focus is essentially on the simulation of certain brain regions. For example, in the “Blue Brain” project [3], the focus is, for the better part, on the simulation of cortical columns - the organism into which the brain is embedded does not play a major role in these considerations. However, recently there has been an increasing interest in the notion of embodiment in all disciplines dealing with intelligent behavior, including psychology, philosophy, artificial intelligence, linguistics, and to some extent neuroscience. In this paper, we explore the far-reaching and often surprising implications of embodiment and argue that if we want to understand the function of the brain (or the control in the case of robots), we must understand how the brain is embedded into the physical system and how the organism interacts with the real world. While embodiment has often been used in its trivial meaning, i.e. “intelligence requires a body”, there are deeper and more important consequences, concerned with connecting brain, body, and environment, or more generally with the relation between physical and information (neural, control) processes. Often, morphology and materials can take over some of the functions normally attributed to the brain (or the control), a phenomenon called “morphological computation”. It can be shown that through the embodied interaction with the environment, in particular through sensory-motor coordination, information structure is induced in the sensory data, thus facilitating perception and learning, a phenomenon called “information self-structuring” ([4], [5], [6]). It is interesting to note that this phenomenon is the result of a physical interaction with the real world - it cannot be understood by looking at internal brain (or control) mechanisms only. The advantage of using robots is that embodiment can be investigated quantitatively, which is much harder to do with biological systems: in robots, all the data (sensory stimulation, motor signals, internal state) can be recorded as time series for further analysis. We will present a number of case studies to illustrate the concepts introduced and then attempt to integrate the diverse case studies into a general overarching scheme that captures the essence of embodiment and morphological computation.
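To illustrate this last point, the following sketch (our own illustration in Python, not code from the cited studies [4], [5], [6]) shows the kind of analysis such recorded time series permit: the mutual information between a motor channel and a sensory channel quantifies the statistical dependence - the “information structure” - that sensory-motor coordination induces, and that is absent when the channels are unrelated.

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Estimate the mutual information (in bits) between two recorded
    channels from their binned joint histogram."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()                 # joint distribution p(x, y)
    px = pxy.sum(axis=1, keepdims=True)   # marginal p(x)
    py = pxy.sum(axis=0, keepdims=True)   # marginal p(y)
    nz = pxy > 0                          # skip empty bins (0 log 0 = 0)
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

# Toy comparison: a sensory channel coupled to a motor signal (as in
# sensory-motor coordination) versus a statistically independent one.
rng = np.random.default_rng(0)
motor = rng.normal(size=5000)
coordinated = motor + 0.5 * rng.normal(size=5000)  # coupled channel
independent = rng.normal(size=5000)                # unrelated channel
print(mutual_information(motor, coordinated))      # clearly positive
print(mutual_information(motor, independent))      # close to zero
```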
2 Case Studies
We start with a case study on sensory morphology, which is followed by how morphology and materials can be exploited for grasping using an artificial hand. We then turn to sensory-motor coordination and information self-structuring. This is followed by an example of how to control complex bodies by exploiting morphological changes. We then present a few cases that illustrate the subtle relations between morphology, materials, and dynamics in the generation of diverse behavior. We also show how, by capitalizing on the interaction with the environment, a seemingly unsolvable problem can be resolved (i.e., reaching any position in 3D space with only one degree of freedom of actuation). Finally, we will try to integrate what has been said so far into a coherent picture, discuss what has been achieved, and what lessons there might be for robotics research.
2.1 Exploitation of Sensory Morphology
Eyebot: In previous papers we have investigated in detail the effect of changing sensor morphology on neural processing (e.g. [7]; [8]; [9]; [1]). Here we only summarize the main results. The morphology of the sensory system has a number of important implications. In many cases, more efficient solutions can be found by having the proper morphology for a particular task. For example, it has been shown that for many objectives (e.g. obstacle avoidance) motion detection is all that is required. Motion detection can often be simplified if the light-sensitive cells are not spaced evenly but arranged non-homogeneously.
Fig. 1. Morphological computation through sensor morphology - the Eyebot. The specific non-homogeneous arrangement of the facets compensates for motion parallax, thereby facilitating neural processing. (a) Insect eye. (b) Picture of the Eyebot. (c) Front view: the Eyebot consists of a chassis, an on-board controller, and sixteen independently controllable facet units, which are all mounted on a common vertical axis. A schematic drawing of the facet is shown on the right. Each facet unit consists of a motor, a potentiometer, two cog-wheels, and a thin tube containing a sensor (a photo diode) at the inner end. These tubes are the primitive equivalent of the facets in the insect compound eye.
For instance, Franceschini and his co-workers found that in the compound eye of the house fly the spacing of the facets is more dense toward the front of the animal ([10]). This non-homogeneous arrangement, in a sense, compensates for the phenomenon of motion parallax, i.e. the fact that at constant speed, objects on the side travel faster across the visual field than objects towards the front: it performs the “morphological computation”, so to speak. Allowing for some idealization, this implies that under the condition of straight flight, the same motion detection circuitry - the elementary motion detectors, or EMDs - can be employed for motion detection for the entire eye, a principle that has also been applied to the construction of navigating robots (e.g. [11]). Because the sensory stimulation is only induced when the robot (or the insect) moves in a particular way, this is also called information self-structuring (or more precisely, self-structuring of the sensory stimulation; see the paragraph on information self-structuring in Section 2.2). In experiments with artificial evolution on real robots, it has been shown that certain aims, e.g. keeping a constant lateral distance to an obstacle, can be solved by a proper morphological arrangement of the ommatidia, i.e. frontally more dense than laterally, without changing anything inside the neural controller ([7]; Fig. 1).
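To make the underlying geometry concrete: for an agent moving straight at constant speed v at lateral distance d from a wall, a feature at azimuth φ (with φ = 0 pointing in the direction of travel) sweeps across the eye with angular velocity ω = (v/d)·sin²φ. Making the angular gaps between neighbouring facets proportional to sin²φ therefore equalizes the transit time of a feature between any two neighbouring facets, which is exactly the condition under which one and the same EMD circuit can serve the whole eye. The sketch below (our illustration, not the original Eyebot code; the constants are arbitrary) computes such a layout:

```python
import math

def facet_layout(c=0.65, phi0=0.1, phi_max=math.pi / 2):
    """Facet azimuths with gaps proportional to sin(phi)**2, so that a
    feature on a wall at constant lateral distance takes the same time
    to travel from one facet to the next (equal-transit-time spacing).
    The layout automatically comes out denser toward the front."""
    angles = [phi0]
    while angles[-1] < phi_max:
        phi = angles[-1]
        angles.append(phi + c * math.sin(phi) ** 2)
    return angles

layout = facet_layout()
print(f"{len(layout)} facets; front gap {layout[1] - layout[0]:.4f} rad, "
      f"side gap {layout[-1] - layout[-2]:.4f} rad")
```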
2.2 Exploitation of Morphological and Material Properties
Cheap Grasping: In this case study, we discuss how morphology, materials, and control interact to achieve grasping behavior. The 18 degrees-of-freedom (DOF) tendon-driven “Yokoi hand” ([12]; Fig. 2), which can be used as a robotic and a prosthetic hand, is partly built from elastic, flexible, and deformable materials (this hand comes in many versions with different materials, morphologies, sensors, etc.; here we only describe one of them). The tendons are elastic, the finger tips are deformable, and between the fingers there is also deformable material. When the hand is closed, the fingers will, because of its anthropomorphic morphology, automatically come together. For grasping an object, a simple control scheme, a “close”, is applied. Because of the morphology of the hand, the elastic tendons, and the deformable finger tips, the hand will automatically self-adapt to the object it is grasping. Thus, there is no need for the agent to “know” beforehand what the shape of the to-be-grasped object will be (which is normally the case in robotics, where the contact points are calculated before the grasping action [13]). The shape adaptation is taken over by the morphology of the hand, the elasticity of the tendons, and the deformability of the finger tips, as the hand interacts with the shape of the object. In this setup, control of grasping is very simple, or in other words, very little “brain power” is required. Another way of achieving decentralized adaptation is to add pressure sensors to the hand and bend the fingers until a certain threshold is reached (a minimal sketch of this scheme is given after Fig. 2). By placing additional sensors on the hand (e.g., angle, torque, pressure) the abilities of the hand can be improved and feedback signals can be provided to the agent (the robot or the human), which can then be exploited by the neural system for learning and categorization purposes ([14]; [15]; [16]).
Fig. 2. “Cheap” grasping: exploiting system-environment interaction. (a) The Yokoi hand exploits deformable and flexible materials to achieve self-adaptation through the interaction between environment and materials. (b)-(c) Final grasp of different objects. The control is the same, but the behavior is very different.
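The decentralized pressure-threshold scheme mentioned above can be sketched in a few lines (our illustration, not the actual Yokoi-hand software; read_pressure and send_flexion stand for hypothetical hardware hooks). Each finger simply keeps flexing until its own fingertip pressure crosses a threshold, while the shape adaptation itself is left to the compliant mechanics:

```python
import random

def close_hand(n_fingers, read_pressure, send_flexion,
               step=0.02, threshold=0.5, max_flexion=1.0):
    """Decentralized grasp closure: no object model, no planned contact
    points. Every finger keeps flexing until it feels enough pressure
    (or is fully flexed); the deformable hand does the rest."""
    flexion = [0.0] * n_fingers
    active = set(range(n_fingers))
    while active:
        for i in list(active):
            if read_pressure(i) >= threshold or flexion[i] >= max_flexion:
                active.discard(i)            # this finger has made contact
            else:
                flexion[i] = min(max_flexion, flexion[i] + step)
                send_flexion(i, flexion[i])  # command the tendon motor
    return flexion

# Crude stand-in for the physics: pressure builds up once a fingertip
# passes the (unknown) object surface; the controller never sees the shape.
surface = [random.uniform(0.3, 0.8) for _ in range(5)]
commanded = [0.0] * 5

def send_flexion(i, q):
    commanded[i] = q                         # would drive the motor

def read_pressure(i):
    return max(0.0, (commanded[i] - surface[i]) * 5.0)

print([round(q, 2) for q in close_hand(5, read_pressure, send_flexion)])
```

Each finger ends up conforming to the object at a different flexion, even though all of them run the identical trivial control loop.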
Clearly, these designs only work for a class of simple hand movements; for fine manipulation, more sophisticated sensing, actuation, and control are required [17]. For prosthetics, there is an interesting implication. EMG signals can be used to interface the robot hand non-invasively to a patient: even though the hand has been amputated, he or she can still intentionally produce muscle innervations, which can be picked up on the surface of the skin by EMG electrodes. If EMG signals, which are known to be very noisy, are used to steer the movement of the hand, control cannot be very precise and sophisticated. But by exploiting the self-regulatory properties of the hand, there is no need for very precise control, at least for some kinds of grasping: the relatively poor EMG signals are sufficient for the basic movements ([18,19]).

Physical Dynamics and Information Self-structuring: Imagine that you are standing upright and your right arm is just loosely swinging from the shoulder; only the shoulder joint is slightly moved, whereas the elbow, the wrist, and the finger joints are mostly passive. The brain does not control the trajectories of the elbow: its movements come about because the arm acts a bit like a pendulum that swings due to gravity and due to the dynamics set by the muscle tone (which is in turn controlled by the brain). If we look at the trajectory of the hand, it is actually quite complex, a curved movement in 3D space. In spite of this (seeming) behavioral complexity, the neural control is comparatively simple (an instance of the notorious frame-of-reference problem; e.g. [2]). While it is nice to be able to say that the neural control (or the control in general) for a complex movement (e.g., a loosely swinging arm, grasping) is simple, it would be even nicer if this movement were also useful. What could it be good for? Assuming that evolution typically comes up with highly optimized solutions, we can expect these movements to benefit the organism. The natural movements of the arm and hand are - as a result of their intrinsic dynamics - directed towards the front center of the body; in this way, proper morphology and the exploitation of intrinsic dynamics facilitate the induction of information structure (see below).
For instance, if the hand is swinging from the right side towards the center and it encounters an object, the palm will be facing left, and the object can, because of the morphology of the hand, be easily grasped. Through this act of grasping, sensory stimulation is generated in the palm and on the fingertips. Moreover, the object is brought into the range of the visual field, thereby inducing visual stimulation. Because of gravity, proprioceptive stimulation is also induced, so that cross-modal associations can be formed between the visual, the haptic, and the proprioceptive modality (a process that is essential for development). Thus, through sensory-motor coordinated behaviors such as foveation, reaching, and grasping, sensory stimulation is not only induced but also tends to contain information structure - i.e., stable patterns of stimulation characterized by correlations within and between sensory channels - which can strongly simplify perception and learning, as demonstrated by [20].
2.3 Managing Complex Bodies: Adaptation through Morphological Changes
The Fin Ray Effect: Inspired by observations of the internal structure of fish fins, Bannasch and his colleagues have developed a robot based on the so-called “fin ray effect”™. The basic structure consists of two ribs interconnected by elastic connective tissue; it bends towards the direction of an external force, instead of moving away as one would normally expect. These elastic structures provide an equal distribution of pressure in a dynamic system and have been used to build the “Aqua ray” robot by Festo [21]. This robot, which resembles a manta ray, has a minimal setup with only three motors, one for each wing and one for the tail. Although each motor is just driven back and forth, the “fin ray effect” provides the “Aqua ray” robot with an effective and cheap way to bend the wings and exploit the fluid dynamics [22].

Exploiting Self-adaptation: Cockroaches Climbing over Obstacles: Cockroaches can not only walk fast over relatively uneven terrain, but they can also, with great skill, negotiate obstacles that exceed their body height. They have complex bodies with wings and three thoracic segments, each with a pair of legs. There is one thoracic ganglion at each segment (i.e., prothoracic, mesothoracic, and metathoracic; see Fig. 4) controlling each pair of legs. Although both the brain and the thoracic ganglia have a large number of neurons, only around 250 neurons - a very small number - descend from the brain to the body [23]. Each leg has more than seven degrees of freedom (legs and feet are deformable to various extents), some joints in the legs are very complicated, and there is also an intricate set of joints in the feet. Now, how is it possible to control all these degrees of freedom with such a small number of descending neurons? In what follows, we discuss one potential solution that, although entirely hypothetical, does have a certain plausibility and further illustrates the idea of morphological computation. Suppose that the animal is walking on the ground using its local neural circuits, which account for stable forward locomotion.
Fig. 3. Cockroach climbing over an obstacle. (a) Touching the block with an antenna. (b) Front leg put on the block, middle legs pushing down, posture changing to a reared-up position. (c) Front legs pushing down on the block, middle leg fully stretched (picture courtesy of Roy Ritzmann, Case Western Reserve University).
If it now encounters an obstacle - whose presence and approximate height it can sense by means of its antennae - the brain, rather than changing the movement pattern by evoking a different neural circuit, “re-configures” the shoulder joint (i.e., the thoracic-coxa joint; see Fig. 4) by altering local control circuits in the thoracic ganglion. As a result, the middle legs are rotated downward so that extension now generates a rearing motion that allows the animal to easily place its front legs on top of the block with typical walking movements (Fig. 3). The local neural circuits continue doing essentially the same thing - perhaps with increased torques on the joints - but because the configuration of the shoulder joint is now different, the effect of the neural circuits on the behavior, the way in which the joints are actuated, changes (see [24]; [25]). Interestingly, this change is not so much the result of a different neural circuit, but of an appropriately altered joint morphology. Instead of manipulating the movements in detail, the brain orchestrates the dynamics of locomotion by “outsourcing” some of the functionality to local neural circuits and morphology through an ingenious kind of cooperation with the body and decentralized neural controllers. In order to modify the morphology, only a few global parameters need to be altered to achieve the desired movement. This kind of control through morphological computation has several advantages. First, the control problem in climbing over obstacles can be solved efficiently with relatively few neurons. Second, this is a “cheap solution” because much of what the brain would otherwise have to do is delegated to the morphology, thereby freeing it from unnecessary control tasks. Third, it takes advantage of the inherent stability of the local feedback
Fig. 4. Schematic representation of cockroach anatomy. As the cockroach climbs over an obstacle, the middle (mesothoracic) shoulder joint is reconfigured. The region where the change takes place is roughly marked with a red rectangle (picture courtesy of Roy Ritzmann, Case Western Reserve University).
circuits rather than working in opposition to them. And fourth, it illustrates a new way of designing controllers by exploiting mechanical change and feedback (see the principle of “cheap design”, [6]).
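The suggested division of labor can be caricatured in a few lines (a deliberately simplified sketch of the hypothetical mechanism described above, with made-up numbers, not a model of the actual neural circuitry): the local rhythm-generating circuit is left untouched, and the brain merely sets one global parameter, the orientation of the thoracic-coxa joint, which redirects the same extension drive from propulsion into rearing.

```python
import math

def local_circuit(t):
    """Unchanged local rhythm: the same oscillation in both modes."""
    return math.sin(2 * math.pi * 1.5 * t)

def middle_leg_effect(t, joint_rotation):
    """The brain only sets joint_rotation (one global parameter).
    With 0.0, the extension drive pushes the body forward; rotating
    the joint downward turns the very same drive into a rearing motion."""
    drive = local_circuit(t)
    return (drive * math.cos(joint_rotation),   # forward thrust
            drive * math.sin(joint_rotation))   # upward (rearing) thrust

walking = lambda t: middle_leg_effect(t, joint_rotation=0.0)
climbing = lambda t: middle_leg_effect(t, joint_rotation=math.radians(40))
```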
2.4 Exploitation of Dynamics
In this section, inspired by passive dynamic walkers, we introduce three locomotion robots developed at our lab: the walking and hopping robot “Stumpy”, the quadruped robot “Puppy”, and the “Crazy Bird”. These robots demonstrate the exploitation of materials and dynamics.

Passive Dynamic Walkers: The passive dynamic walker, which goes back to [26,27], is a robot (or, if you like, a mechanical device) capable of walking down an incline without any actuation and without control. In other words, there are no motors and there is no microprocessor on the robot; it is brainless, so to speak. In order to achieve this task, the passive dynamics of the robot, of its body and its limbs, must be exploited. This kind of walking is very energy-efficient - the energy is supplied by gravity only - and there is an intrinsic naturalness to it. However, its “ecological niche” (i.e. the environment in which the robot is capable of operating) is extremely narrow: it only consists of inclines of certain angles. Energy efficiency is achieved because in this approach the robot is - loosely speaking - operated near one of its eigenfrequencies. To make this work, a lot of attention was devoted to morphology and materials. For example, the robot is
equipped with wide feet of a particular shape to guide lateral motion, soft heels to reduce instability at heel strike, counter-swinging arms to compensate for the yaw induced by leg swinging, and lateral-swinging arms to stabilize side-to-side lean ([28]). In order to walk over flat terrain, a passive dynamic walker can be augmented with some actuation, for instance by adding electrical motors, as in the case of “Stumpy”, designed by Raja Dravid at the University of Zurich; by adding pneumatic actuators at the hip to move the legs, as in the case of “Denise”, developed at the Technical University of Delft by Martijn Wisse ([29]); or, more recently, as in the biped robot with antagonistic pairs of pneumatic actuators designed at Osaka University by Koh Hosoda [30].
Fig. 5. The walking and hopping robot Stumpy. (a) Photograph of the robot. (b) Schematic drawing (details, see text).
Stumpy: Stumpy’s lower body is made of an inverted “T” mounted on wide springy feet (Fig. 5). The upper body is an upright “T” connected to the lower body by a rotary joint, the “waist” joint, providing one degree of freedom in the frontal plane. Weights are attached to the ends of the horizontal beam to increase its moment of inertia. The horizontal beam is connected to the vertical one by a second rotary joint, providing one rotational degree of freedom in the plane normal to the vertical beam, the “shoulder” joint. Stumpy’s vertical axis is made of aluminum, while both its horizontal axes and feet are made of oak wood. Although Stumpy has no real legs or feet, it can locomote in many interesting ways: it can move forward in a straight or curved line, it has different gait patterns, it can move sideways, and it can turn on the spot. Interestingly, this can all be achieved by actuating only two joints with one degree of freedom each. In other words, control is extremely simple - the robot is virtually “brainless”. The reason this works is that the dynamics,
given by its morphology and its materials (elastic, spring-like materials, surface properties of the feet), is exploited in clever ways. There is a delicate interplay of momentum exerted on the feet by moving the two joints in particular ways (for more detail, see [31,32]).

Puppy: One of the fundamental problems in rapid locomotion is that feedback control loops can no longer be employed because the response times are too slow. One of the fascinating aspects of “Puppy” is that not only fast but also robust locomotion can be achieved with no sensory feedback [33]. The design of “Puppy” was inspired by biomechanical studies. Each leg has two standard servomotors and one springy passive joint as a very simple kind of artificial “muscle” (Fig. 6).
Fig. 6. The quadruped robot “Puppy”. (a) Picture of the robot. (b) The spring system in the hind legs. (c) Schematic design. (d) Running on a treadmill.
To demonstrate a running gait, we applied a synchronized, oscillation-based control to the four motors - two in the “hip” and two in the “shoulder” - where each motor oscillates under sinusoidal position control (the control parameters being the amplitude and frequency of the motor oscillation). No sensory feedback is used for this controller ([34]). Even though the legs are actuated by simple oscillations, in the interaction with the environment, through the interplay of the spring system, the flexible spine, and gravity, a natural running gait emerges. Because the robot has no sensors, it cannot distinguish between the stance and the flight phases, and it cannot measure acceleration or inclination. Nevertheless, the robot maintains a stable periodic gait, which is achieved by properly exploiting its intrinsic dynamics. The behavior is the result of the complex interplay of agent morphology, material properties (in particular the “muscles”, i.e. the springs), control (amplitude, frequency), and environment (friction and slippage, shape of the ground, gravity). Exploiting morphological properties and the intrinsic dynamics of materials makes “cheap” rapid locomotion possible because physical processes are
fast - and they come for free (for further references on cheap locomotion, see e.g. [35]; [36]; [37]; [38]). Because stable behavior is achieved without control - simply due to the intrinsic dynamics of the physical system - we use the term “self-stabilization”. Now, if sensors - e.g. pressure sensors on the feet, angle sensors in the joints, and vision sensors on the head - are mounted on the robot, structured - i.e. correlated - sensory stimulation will be induced that can potentially be exploited for learning about the environment and the robot’s own body dynamics (for more detail see, e.g., [37]; [2], chapter 5; [39]; [40]).

Crazy Bird: To investigate how to gain controllability and increase the behavioral diversity of robots like “Puppy”, a quadrupedal robot was built with the LEGO MINDSTORMS™ NXT kit (see Fig. 7a).
Fig. 7. Controllability and behavioral diversity. (a) The quadrupedal robot. (b) “Crazy Bird”.
It only has two motors and no sensors; the two motors are synchronized and turn at the same speed and in the same direction. The front legs are attached eccentrically to the wheels driven by the motors and are physically connected to the hind legs. Four experimental setups were tested with different phase delays between the left and right legs (i.e., 0°, 90°, 180°, and 270°). Despite its simplicity, the robot could reach every point on a table by changing a single control parameter (i.e., the phase delay between its legs). The controller only has to induce a global bias rather than exactly controlling the joint angles or other parameters of the agent’s movement ([41]). If we keep the controller of the quadrupedal robot as is, but modify the morphology by removing the hind legs and adding two loosely attached rubber feet, which can turn and move a bit, we obtain the “Crazy Bird” (see Fig. 7b), which gets its name from its rich behavioral diversity.
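Both Puppy’s open-loop running and the Crazy-Bird-style steering can be summarized in one controller sketch (ours; the amplitudes, frequencies, and phases are illustrative, not the values used on the real robots). All the controller ever outputs are sinusoidal position targets; gait and direction are shaped entirely by a handful of numbers, and everything else is left to the body and the environment:

```python
import math

def motor_targets(t, amplitude, frequency, phases_deg):
    """Open-loop sinusoidal position targets, one per motor.
    No sensory feedback enters this computation."""
    return [amplitude * math.sin(2 * math.pi * frequency * t
                                 + math.radians(p))
            for p in phases_deg]

# Puppy-style running gait: all four motors oscillate in synchrony.
puppy_targets = lambda t: motor_targets(t, 0.4, 2.0, [0, 0, 0, 0])

# Crazy-Bird-style steering: one parameter, the left/right phase delay
# (0, 90, 180, or 270 degrees), selects where on the table the robot goes.
def crazy_bird_targets(t, delay_deg):
    return motor_targets(t, 0.4, 2.0, [0.0, delay_deg])
```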
2.5 Exploitation of System-Environment Interaction
Leg Coordination in Insect Walking: Leg movements in insects are controlled by largely independent local neural circuits that are connected to their neighbors; there is no central controller that coordinates the legs during walking. The leg coordination comes about through the exploitation of the interaction with the environment [42,43]. If the insect stands on the ground and tries to move forward by pushing backwards with one of its legs, then, as an unavoidable implication of being embodied, all the joint angles of the legs standing on the ground will instantaneously change. The insect’s body is pushed forward, and consequently the other legs are also pulled forward and their joints will be bent or stretched. This fact can be exploited to the animal’s advantage. All that is needed are angle sensors in the joints - and they do exist - for measuring the change, and there is global communication between the legs! But the communication is through the interaction of the agent with the environment, not through neural processing. Inspired by the fact that the local neural leg controllers need only exploit this global communication, a neural network architecture called WalkNet has been developed which is capable of controlling a six-legged robot [44]. In this instance of morphological computation, part of the task that would otherwise have to be done by the brain - the communication between the legs and the calculation of the angles of all the joints - is performed by the interaction between the insect and the world.

Wanda: The artificial fish “Wanda”, developed by Marc Ziegler and Fumiya Iida ([45]; [46]; Fig. 8), shows how the interaction with the environment can be exploited in interesting ways to reach any position in 3D space with only one degree of freedom (DOF) of actuation. All “Wanda” can do is wiggle its tail fin back and forth. The tail fin is built from elastic materials, chosen such that it will on average produce a high forward thrust. It can move forward, left, right, up, and down. Turning left and right is achieved by setting the zero-point of the wiggle movement either left or right at a certain angle.
Fig. 8. The artificial fish “Wanda”. (a) With one degree of freedom for wiggling the tail fin. (b) The forces acting on its body are illustrated by arrows. (c) A typical sequence of snapshots of an upward movement.
The buoyancy is such that if it moves forward slowly, it will sink, i.e. move down gradually. The speed is controlled by the wiggling frequency. If it moves fast and turns, its body, because of the weight distribution, will tilt slightly to one side, which produces upthrust, so that it will move upwards. In other words, through its own movements, it induces turbulences that it can then exploit to move upwards. The fascinating point about this robot is that its high behavioral diversity is achieved through “morphological computation”. If material properties and the interaction with the environment were not properly exploited, one would need more complicated actuation, e.g. additional fins or a flexible spine, and thus more complex control.
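The entire control interface of such a swimmer fits into one function (a sketch under our own assumed parameter values, not Wanda’s actual firmware): the single actuated quantity is the tail-fin angle, and the full 3D behavioral repertoire is encoded in just the wiggle frequency and the zero-point offset, with sinking and rising arising as body-environment effects rather than as controller outputs.

```python
import math

def tail_angle(t, frequency, zero_offset, amplitude=0.5):
    """The one and only actuated degree of freedom: the tail-fin angle.
    Frequency sets forward speed; shifting the zero-point of the wiggle
    steers; up and down come for free from body and fluid dynamics."""
    return zero_offset + amplitude * math.sin(2 * math.pi * frequency * t)

cruise    = lambda t: tail_angle(t, frequency=1.0, zero_offset=0.0)
turn_left = lambda t: tail_angle(t, frequency=1.0, zero_offset=0.3)
sink      = lambda t: tail_angle(t, frequency=0.4, zero_offset=0.0)  # slow cruise: body sinks
rise      = lambda t: tail_angle(t, frequency=2.5, zero_offset=0.3)  # fast turn: tilt gives upthrust
```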
3 Discussion
We have seen a large variety of case studies. The question that immediately arises is whether there are general overarching principles governing all of them. A recently published scheme [6] shows a potential way of integrating all of these ideas.
Fig. 9. Overview of the implications of embodiment – the interplay of information and physical processes (Adapted from [6]; see text for details)
We will use Fig. 9 to summarize the most important implications of embodiment and to embed our case studies into a theoretical context. Driven by motor commands, the musculoskeletal system (mechanical system) of the agent acts on the external environment (task environment or ecological niche). The action leads to rapid mechanical feedback characterized by pressure on the bones, torques in the joints, and passive deformation of skin tissue. In parallel, external stimuli (pressure, temperature, and electromagnetic fields) and internal physical stimuli (forces and torques developed in the muscles and joint-supporting ligaments, as well as accelerations) impinge on the sensory receptors (sensory
system). The patterns induced thus depend on the physical characteristics and morphology of the sensory systems and on the motor commands. Especially if the interaction is sensory-motor coordinated, as in foveation, reaching, or grasping movements, information structure is generated. The effect of the motor command strongly depends on the tunable morphological and material properties of the musculoskeletal system, where by tunable we mean that properties such as shape and compliance can be changed dynamically: during the forward swing of the leg in walking, the muscles should be largely passive, whereas when hitting the ground, high stiffness is required, so that the materials can take over some of the adaptive functionality on impact, which leads to the damped oscillation of the knee joint. All parts of this diagram are crucial for the agent to function properly, but only one part concerns the controller or the central nervous system. The rest can be seen as “morphological computation”. Let us now go through the case studies, starting with the Eyebot. Given certain behavioral patterns, e.g. moving straight, the robot induces sensory stimulation which, because of the specific morphology of the facet distribution, is highly structured and easy to process for the nervous system (information self-structuring). This process corresponds to the outer loop from the controller via mechanical system to task environment, back to sensory system and controller. The loosely swinging arm benefits from the mechanical feedback generated through a particular movement (left side of figure 9), internal stimulation (forces generated in the muscles through gravity), and external sensory stimulation whenever the hand encounters an object or touches the agent’s own body; the behavior cannot be understood if only the motor commands are taken into account. Note that the control of the joint angles in the arm is largely passive for this motion. In grasping, the movement is constrained by the mechanical system, which in turn provides mechanical feedback as the fingers wrap around an object. This leads to internal physical stimulation (forces in the muscles and torques in the joints of the fingers) as well as to external sensory stimulation from the touch sensors on the skin, which closes the loop to the controller. The fin-ray effect is a very direct consequence of mechanical feedback, demonstrating that mechanical feedback can indeed take over non-trivial tasks (such as bending in the direction from where it is poked). In the cockroaches climbing over obstacles, the morphology of the mechanical system is changed as the insect encounters an obstacle, which changes the effect of the motor commands. The passive dynamic walker, Stumpy, the quadruped Puppy, and Crazy Bird are all examples demonstrating the exploitation of intrinsic dynamics. Because in their basic versions sensors are entirely missing, the pathway back to the controller on the right in figure 9 simply does not exist: control is open loop and the stabilization is solely achieved through the mechanical feedback loops shown in the lower left of the figure. In these examples, the feedback is generated through ground reaction forces. Insects, when walking, also exploit mechanical feedback generated through ground reaction forces, but rather than exploiting it for gait stabilization, they capitalize on exploiting the internal sensory stimulation generated in the joint angles as one leg pushes back (thus inducing changes
in the joint angles of all the other legs that are standing on the ground). This process corresponds to the lower left part of figure 9 and the arrow pointing from the mechanical system to the sensory system. Finally, in the artificial fish Wanda, and presumably in biological fish as well, the mechanical feedback to the agent is provided through the induction of turbulences rather than ground reaction forces, a strategy that is no less effective and leads to natural and energy-efficient movement. There are two main conclusions that can be drawn from these case studies. First, it is important to exploit the dynamics in order to achieve energy-efficient and natural kinds of movements. The term “natural” not only applies to biological systems; artificial systems also have their intrinsic natural dynamics. Second, there is a kind of trade-off or balance: the better the exploitation of the dynamics, the simpler the control and the less neural processing will be required. Note that all this only works if the agent is actually behaving in the real world and therefore is generating sensory stimulation. Once again, we see the importance of the motor system for the generation of sensory signals, or more generally for perception. It should also be noted that motor actions are physical processes, not computational ones, but they are computationally relevant, or put differently, relevant for neural processing, which is why we use the term “morphological computation”. Having said all this, it should be mentioned that there is an additional trade-off. The more the specific environmental conditions are exploited - and the passive dynamic walker is an extreme case - the more the agent’s success will be contingent upon them. Thus, if we really want to achieve brain-like intelligence, the brain (or the controller) must have the ability to quickly switch to different kinds of exploitation schemes, either neurally or mechanically through morphological change.
Acknowledgments

Rolf Pfeifer would like to thank the European Project “ROBOTCUB: ROBotic Open Architecture Technology for Cognition, Understanding and Behavior”, Nr. IST-004370. Funding for Gabriel Gómez has been provided by the Swiss National Science Foundation through a scholarship for advanced young researchers, Nr. PBZH2-118812. The authors would like to thank Edgar Körner and Bernhard Sendhoff for the opportunity to contribute to the workshop proceedings, and Roy Ritzmann for discussing the manuscript.
References

1. Pfeifer, R., Scheier, C.: Understanding Intelligence. MIT Press, Cambridge (1999)
2. Pfeifer, R., Bongard, J.: How the body shapes the way we think. MIT Press, Cambridge (2007)
3. Markram, H.: The blue brain project. Nature Reviews Neuroscience 7, 153–159 (2006)
4. Lungarella, M.: Exploring principles towards a developmental theory of embodied artificial intelligence. Ph.D. dissertation, University of Zurich, Switzerland (2004)
5. Lungarella, M., Pegors, T., Bulwinkle, D., Sporns, O.: Methods for quantifying the information structure of sensory and motor data. Neuroinformatics 3(3), 243–262 (2005)
6. Pfeifer, R., Lungarella, M., Iida, F.: Self-organization, embodiment, and biologically inspired robotics. Science 318, 1088–1093 (2007)
7. Lichtensteiger, L.: On the interdependence of morphology and control for intelligent behavior. Ph.D. dissertation, University of Zurich (2004)
8. Pfeifer, R.: On the role of morphology and materials in adaptive behavior. In: Sixth International Conference on Simulation of Adaptive Behavior (SAB), pp. 23–32 (2000)
9. Pfeifer, R.: Morpho-functional machines: basics and research issues. In: Morpho-functional Machines: The New Species. Springer, Tokyo (2003)
10. Franceschini, N., Pichon, J.M., Blanes, C.: From insect vision to robot vision. Philos. Trans. R. Soc. London B 337, 283–294 (1992)
11. Hoshino, K., Mura, F., Shimoyama, I.: Design and performance of a micro-sized biomorphic compound eye with a scanning retina. Journal of Microelectromechanical Systems 9, 32–37 (2000)
12. Yokoi, H., Arieta, A.H., Katoh, R., Yu, W., Watanabe, I., Maruishi, M.: Mutual adaptation in a prosthetics application. In: [47], pp. 147–159 (2004)
13. Molina-Vilaplana, J., Feliu-Batlle, J., López-Coronado, J.: A modular neural network architecture for step-wise learning of grasping tasks. Neural Networks 20(5), 631–645 (2007)
14. Takamuku, S., Gómez, G., Hosoda, K., Pfeifer, R.: Haptic discrimination of material properties by a robotic hand. In: 6th IEEE International Conference on Development and Learning (ICDL) (in press, 2007)
15. Gómez, G., Hernandez, A., Eggenberger Hotz, P.: An adaptive neural controller for a tendon driven robotic hand. In: Arai, T., Pfeifer, R., Balch, T., Yokoi, H. (eds.) Proceedings of the 9th International Conference on Intelligent Autonomous Systems (IAS-9), Tokyo, Japan, pp. 298–307. IOS Press, Amsterdam (2006)
16. Gómez, G., Hotz, P.E.: Evolutionary synthesis of grasping through self-exploratory movements of a robotic hand. In: IEEE Congress on Evolutionary Computation (CEC 2007) (2007)
17. Borst, C., Fischer, M., Hirzinger, G.: Calculating hand configurations for precision and pinch grasps. In: IEEE/RSJ Int. Conference on Intelligent Robots and Systems (IROS 2002), vol. 2, pp. 1553–1559 (2002)
18. Yu, W., Yokoi, H., Kakazu, Y.: An interaction based learning method for assistive device systems. In: Liu, J.X. (ed.) Focus on Robotics Research, pp. 123–159. Nova Publishers (2006)
19. Hernandez, A., Katoh, R., Yokoi, H., Yu, W.: Development of a multi-DOF electromyography prosthetic system using the adaptive joint mechanism. Applied Bionics and Biomechanics 3(2), 101–112 (2006)
20. Lungarella, M., Sporns, O.: Mapping information flow in sensorimotor networks. PLoS Comp. Biol. 2, e144 (2006)
21. http://a1989.g.akamai.net/f/1989/7101/1d/www3.festo.com/C1256D56002E7B89.nsf/html/Aqua_ray_en.pdf/FILE/Aqua_ray_en.pdf
22. http://www.eurekamagazine.co.uk/article/9658/Novel-actuators-copy-structures-from-fish.aspx
23. Staudacher, E.: Distribution and morphology of descending brain neurons in the cricket Gryllus bimaculatus. Cell Tissue Res. 294, 187–202 (1998)
24. Watson, J., Ritzmann, R.: Leg kinematics and muscle activity during treadmill running in the cockroach, Blaberus discoidalis: I. Slow running. J. Comp. Physiol. A 182, 11–22 (1998)
25. Watson, J., Ritzmann, R., Pollack, A.: Control of climbing behavior in the cockroach, Blaberus discoidalis. II. Motor activities associated with joint movement. J. Comp. Physiol. A 188, 55–69 (2002)
26. McGeer, T.: Passive dynamic walking. The International Journal of Robotics Research 9(2), 62–82 (1990)
27. McGeer, T.: Passive walking with knees. In: IEEE Conference on Robotics and Automation, vol. 2, pp. 1640–1645 (1990)
28. Collins, S., Ruina, A., Tedrake, R., Wisse, M.: Efficient bipedal robots based on passive dynamic walkers. Science 307, 1082–1085 (2005)
29. Wisse, M.: Three additions to passive dynamic walking: actuation, an upper body, and 3D stability. International Journal of Humanoid Robotics 2(4), 459–478 (2005)
30. Takuma, T., Hosoda, K.: Controlling the walking period of a pneumatic muscle walker. The International Journal of Robotics Research 25(9), 861–866 (2006)
31. Paul, C., Dravid, R., Iida, F.: Control of lateral bounding for a pendulum driven hopping robot. In: International Conference on Climbing and Walking Robots, Paris, France (2002)
32. Paul, C., Dravid, R., Iida, F.: Design and control of a pendulum driven hopping robot. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2002), Lausanne, Switzerland (2002)
33. Iida, F., Pfeifer, R.: “Cheap” rapid locomotion of a quadruped robot: Self-stabilization of bounding gait. In: Groen, F., Amato, N., Bonarini, A., Yoshida, E., Kröse, B. (eds.) Proc. of the 8th Int. Conf. on Intelligent Autonomous Systems (IAS-8), pp. 642–649. IOS Press, Amsterdam (2004)
34. Iida, F., Pfeifer, R.: Sensing through body dynamics. Robotics and Autonomous Systems 54(8), 631–640 (2006)
35. Kubow, T.M., Full, R.J.: The role of the mechanical system in control: a hypothesis of self-stabilization in hexapedal runners. Philosophical Transactions of the Royal Society B 354, 849–861 (1999)
36. Blickhan, R., Wagner, H., Seyfarth, A.: Brain or muscles? Rec. Res. Devel. Biomechanics 1, 215–245 (2003)
37. Iida, F.: Cheap design and behavioral diversity for autonomous adaptive robots. Ph.D. dissertation, Faculty of Mathematics and Science, University of Zurich, Switzerland (2005)
38. Buehler, M.: Dynamic locomotion with one, four and six-legged robots. J. of the Rob. Soc. of Japan 20(3), 15–20 (2002)
39. Schmitz, A., Gómez, G., Iida, F., Pfeifer, R.: Adaptive control of dynamic legged locomotion. In: Concept Learning Workshop, IEEE International Conference on Robotics and Automation (ICRA 2007) (2007)
40. Schmitz, A., Gómez, G., Iida, F., Pfeifer, R.: On the robustness of simple speed control for a quadruped robot. In: Proceedings of the International Conference on Morphological Computation Workshop 2007, pp. 88–90 (2007)
41. Rinderknecht, M., Ruesch, J., Hadorn, M.: The lagging legs: exploiting body dynamics to steer a quadrupedal agent. In: International Conference on Morphological Computation, Venice, Italy (2007)
42. Cruse, H.: What mechanisms coordinate leg movement in walking arthropods? Trends in Neurosciences 13, 15–21 (1990)
43. Cruse, H., Dean, J., Dürr, V., Kindermann, T., Schmitz, J., Schumm, M.: A decentralized, biologically based network for autonomous control of (hexapod) walking. In: Neurotechnology for Biomimetic Robots, pp. 384–400. MIT Press, Cambridge (2002)
44. Dürr, V., Krause, A.F., Schmitz, J., Cruse, H.: Neuroethological concepts and their transfer to walking machines. International Journal of Robotics Research 22(3-4), 151–167 (2003)
45. Ziegler, M., Iida, F., Pfeifer, R.: Cheap underwater locomotion: Morphological properties and behavioral diversity. In: IROS 2005 Workshop on Morphology, Control, and Passive Dynamics (2005)
46. Pfeifer, R., Iida, F., Gómez, G.: Morphological computation for adaptive behavior and cognition. International Congress Series 1291, 22–29 (2006)
47. Iida, F., Pfeifer, R., Steels, L., Kuniyoshi, Y. (eds.): Embodied Artificial Intelligence. LNCS, vol. 3139. Springer, Heidelberg (2004)
Trying to Grasp a Sketch of a Brain for Grasping

Helge Ritter, Robert Haschke, and Jochen J. Steil

Cognition and Robotics Laboratory (CoR-Lab) & Cognitive Interaction Technology Institute (CITEC), Bielefeld University
Abstract. Brain-like behavior is intimately connected with the ability to actively manage a rich set of interactions with the environment. Originating with very simple movements in homogeneous domains, the gradual evolution of movement sophistication endowed animals with an increasing ability to control their environment, ultimately advancing from the physical into the mental object domain with the advent of language-based communication and thinking. Appearing at the high complexity end of the physical movement evolution ladder, the ability of dextrous manipulation seems to play the role of a “transition technology”, leading from movement control into the mental capabilities of language use and thinking. We therefore argue that manual actions and their replication in robots are positioned as a “Rosetta stone” for understanding cognition. Using the example of grasping, we contrast the “clockwork building style” of traditional engineering with more holistic, biologically inspired solutions for grasp synthesis and discuss the potential of the research field of “Manual Intelligence” and its speculative connections with language for making progress towards robots with more brain-like behavior.
1 Introduction
We are witnessing unprecedented increases in the storage capacity and computing power of man-made chips. The raw power of these devices begins to approach the estimated storage and computing capacities of small brains. As a result, our excuses for not being able to realize brain-like behavior in robots due to a lack of adequate processing power will soon have lost their factual basis. Looking at real brains for realizing brain-like behavior in artificial systems confronts us with many potential levels of analysis: at a micro-level, we can study the morphology of neurons, their interconnections, their activity patterns. On longer time scales, we observe adaptive changes in neural response properties which appear to be correlated with adaptive changes at the behavioral level. From a more abstract dynamics perspective, we have to understand what keeps a highly non-linear and recurrent system stable while admitting a remarkable degree of adaptivity at the same time. Changing to a functional viewpoint, we finally may ask how all these phenomena are connected with the abilities of a brain. To what extent can we decompose the brain into functional modules that can be assigned subtasks? And is it realistic to assume that any such subtasks can be characterized in terms of familiar functional concepts? To what extent
are paradigms such as computation, information, goal, optimization, homeostasis, and other favorites of familiar disciplines useful and adequate, and where are new paradigms needed? And, finally, can our hypotheses about brain phenomena at the neurophysiological and functional levels lead to quantitative models that can replicate the observed phenomena in some reasonably “deep” sense, for instance, leading to testable predictions, or allowing us to replicate cognitive abilities in technical systems such as robots? In view of this daunting pile of questions, it may be helpful to step back for a moment and reconsider what might have been the original “driving force” for endowing organisms with nervous systems and with brains. Put succinctly, “why should a body hire a brain?”. Brains always seem intimately connected with action and change: as soon as evolution started to endow organisms with modestly rich capabilities of motion, we also see the advent of nerve cells that are involved in the organization of such motion. As the bodies became more sophisticated, the necessary “machinery” to control such movements had to implement complex interaction patterns that can be instantiated in very short time frames, down to fractions of a second. To do this with chemical processes was a big challenge that led evolution to the invention of neurons, axons, and synapses, permitting a new degree of speed and flexibility. If the implementation of sophisticated movement was the decisive driving force behind brain evolution, we should have a closer look at the evolution of movement sophistication, hoping that it can give us clues about the major factors that decisively shaped what a brain does and how it achieves it.
2 Evolution of Movement Sophistication
A basic reference point for the evolution of movement sophistication is movement in the absence of any active control. This type of movement is observed when an inanimate body such as a piece of rock is tumbling down a hill. Contrary to first expectations, even this kind of movement can become remarkably complex through the interplay of Newton’s laws of motion, laws of friction, and the possibility of temporary or permanent deformations of the moving bodies. Even nowadays, such motions are highly non-trivial to replicate accurately in simulation; a major reason is that even tiny errors in modeling friction and elasticity can rapidly become amplified by the dynamics to give rise to macroscopic errors that can make the movement grossly unphysical. Even in the absence of such errors, Newtonian dynamics is sufficiently rich to lead to sometimes amazing “emergent” properties. Well-known manifestations are the formation of patterns on sand dunes (many moving sand grains), or the de-mixing of muesli into different grain sizes under vibration. And the example of the well-known “passive walkers” – entirely passive leg-pair constructions that can convert potential energy on a ramp into downward-moving periodic walking motions – demonstrates that passive mechanisms can even sustain motion strikingly resembling some nontrivial forms of biological motion.
Fig. 1. Five stages in the evolution of movement sophistication: inanimate motion (left), control of periodic motions in homogeneous environments (middle left), egomotion in highly variable environments (middle), complex manipulator control (middle right), communication and thinking as control of mental objects (right)
The evolution of nervous systems enabled organisms to modulate purely passive motion in increasingly sophisticated ways. The earliest nervous systems had to deal with swimming motion within water. Such motion strongly resembles “free movement” since the surrounding water offers only smooth viscous forces and a very homogeneous contact situation between body and environment. Neural controllers for this type of motion have been found to act as central pattern generators that can impose much of the required gross motion in a feedforward fashion, with parsimonious use of sensory feedback. When creatures left their aqueous environments, they faced a dramatic complexity increase in the contact situation between their body and the environment due to discontinuous transitions between free motion and various types of rigid body contact. In engineering, a major strategy to cope with such situations is to combine several controllers, each capable of handling one regime, plus suitable “machinery” for detecting or predicting transitions and switching control behaviors suitably. For very rigid bodies, this requires extremely fast responses to keep impact forces within manageable bounds. Since neural response times are much too slow for this, biological solutions are based on the use of elastic structures, which complicate the control problem from an engineer’s point of view, but allow controllers to be realized with more slowly reacting components. By and large, all walking motions are about coordinating a number of legs to advance the body across a more or less planar ground roughly perpendicular to the direction of gravity. When the complexity of the ground increases, the control problem becomes more difficult, since a simple periodic leg coordination pattern soon becomes inadequate and legs may slip, hit against obstacles, or get trapped in holes. To cope with such challenges, the controller must somehow obtain information about non-uniformities and be able to select from a significant repertoire of suitable corrective motions. Even richer variability in the mechanical interaction between a set of actuators and a mechanical counterpart occurs in grasping and manipulation with multifingered hands. One may speculate that the corresponding increased challenge for the control system that enables hands to grasp and manipulate many different kinds of objects paved the way for brains towards thinking and communication.
Both of these latter two abilities are characterized by the manipulation of mental (instead of physical) objects; such a capability appears as a natural generalization of the manipulation of physical objects and of using them as tools to control other physical objects. We may ask: what are the main innovations that are connected with the above evolutionary steps? From an implementation perspective, we may notice five major levels:

– a fixed, rigid body characterizes the motion of many lifeless objects
– introducing elasticity and joints leads to articulated bodies and dramatically increases the sophistication of possible movements
– addition of actuators enables active movements
– sensors for feedback control open richer ways to adjust movements to environmental conditions
– finally, brains provide highly flexible mappings between sensors and actuators to enable biological motion
3 Why Are Hands Interesting
From a control perspective, we may argue that biological control can be very grossly “clustered” into three main types:

Egomotion: control of one’s own physical state w.r.t. the environment
Object manipulation: control of the physical state of some object(s) in the physical environment
Communication and thinking: control of the mental state of oneself and of others.

Here, “object manipulation” is a category which besides grasping also includes manipulation categories in which body parts other than hands may be involved. For instance, in birds manipulation is largely achieved through the beak. Due to their high evolutionary importance, the activities of biting, chewing, and kicking are also well developed manipulation capabilities in many species and may – from the general control perspective above – be seen as related to grasping. It thus appears that the need to control hands may have contributed a major driving force for the required “transition technology” between stage 1 and stage 3 above. If this assumption is correct, research into hands and their control should have a pivotal role for understanding many aspects that form the basis of cognition. In any case, hands belong to our most important interfaces to the world. At a purely physical level, hands support us with an enormously rich repertoire of complex physical interactions with objects in our environment. Before the advent of machines, pretty much everything that was human-made was made by human hands. Although machinery and modern automation have significantly altered this picture, this modern “competition” to our hands has only been successful by building highly specialized devices for every required operation; the superb
generality of our hands (and the control behind them) is still unparalleled even in our most sophisticated robots. It is precisely this generality and flexibility that we have to understand in order to elevate the capabilities of our machines (and in particular, of our robots) from the current level of mere automation to a level that might deserve the term "cognitive interaction". The route to our highly developed manual capabilities was one of co-evolution between hands and brain and is likely to have posed a major challenge to brain evolution. We may witness a similar co-evolution in today's robots: the advent of increasingly sophisticated hand designs pushes the development of algorithms to make adequate use of their capabilities; the experiences gained therefrom enable and reinforce the creation of yet better hand hardware.

Fig. 2. "Hand homunculus" depicting body parts in proportion to the required brain effort for their control

This development puts research questions into our focus that had played only a relatively marginal role before humanoid robot hands were available: how to coordinate the constrained movements of a large number of "interaction surfaces" in order to control the motion of an object in various ways? How to gain and exploit rich contact information during this process? How to explore and learn new physical interaction patterns? How to create "action-based" object representations that are more strongly based on how we can interact with an object instead of how we can passively perceive it? How to recognize "interaction affordances" of other objects in ways that enable us to use them as tools, enhancing the versatility of our hands still further? And, finally, how to make manual actions a natural part of cognitive interaction with the environment, including language and gestures? This (highly incomplete) list of questions is strong evidence that research into manual action has the potential of a "Rosetta Stone" for understanding significant parts of cognition, and that hands can provide us with a "window into the brain" that is very different from, and complementary to, the "windows" we usually focus on.
4 Grasping Lab
In robotics, the availability of increasingly sophisticated robot hands [2] acts as a strong driving force for studying ways to endow robots with enhanced capabilities for manual action. While the Utah-MIT hand [21] was a kind of yardstick design for a long time, the recent decade has seen a surge of developments towards lighter and more flexibly usable hands. The characteristics of some major contenders are summarized in Table 1. Systems like these begin to provide us with "output devices" that reach beyond simulation when trying to test
Table 1. Specifications of some dextrous robot hands (el. = electrical, pn. = pneumatic actuator type)

Model     Fingers  Joints  Active DOFs  Act. type  Ref
Shadow    5        24      20           pn.        [36]
Robonaut  5        22      14           el.        [29]
GIFU-III  5        20      16           el.        [15]
DLR-II    4        18      13           el.        [6]
Utah-MIT  4        16      16           pn.        [21]
Barrett   3        8       4            el.        [37]
Fig. 3. Bimanual system with two Shadow Hands mounted on 7-DOF PA-10 arms for positioning
ideas about the synthesis of manual actions or when aspiring to turn such ideas into practical utility. Since most "natural" hand actions tend to involve bimanual interaction, an ideal setup should comprise a pair of interacting arms. The high effort required to set up such systems means that such platforms remain a scarce resource even today. Among the few existing bimanual systems with advanced hands, perhaps the most widely known platforms are at DLR [28], at NASA [29], and the Dexter system [11] using two non-anthropomorphic Barrett hands. The recently completed Bielefeld research platform is depicted in Fig. 3. Featuring two anthropomorphic Shadow Hands with 20 DOF each, mounted on Mitsubishi PA-10 arms, it comprises a total of 54 independent degrees of freedom. 24 Hall sensors per hand provide accurate joint angle feedback to control the 80
miniature solenoid on-off valves that adjust air in- and outflow of the pneumatically driven "muscle"-like actuators, which transmit their forces via tendons to the fingers. The system is complemented with a 4-DOF mobile camera head for visual monitoring of the workspace. In the final setup, each manipulator will additionally be equipped with 186 tactile sensors distributed over the finger pads. Although still far away from the capabilities of human hands, platforms like these begin to cross the critical threshold beyond which one can begin to study issues of advanced manual action in a robotics setting.
5 Model-Based Approaches to Grasping
A primary task for hands is the grasping of objects. Synthesizing and controlling grasps for articulated robot hands has a long history [3,27]. While a large part of classical manipulator control focuses on the treatment of contact-free motion, grasping is intrinsically concerned with the rich usage and management of complex contact situations. A suggestive idealization for dealing with such situations is to model the interaction between hand and object as a number of point contacts through which the hand imparts forces on the object. This idealization makes it possible to treat a grasp as the determination of a set of suitably positioned contact points on the object plus the assignment of a force to each contact point, subject to the constraints that (i) the geometric positions and the assigned forces balance the total external force and torque acting on the object, (ii) the direction of each contact force lies within the "friction cone" that delimits the directions avoiding slip, and (iii) the chosen locations and forces are realizable for the given hand design. For a long time, this strongly geometry-based formulation of the grasping problem has dominated research in the field. Its mathematical "cleanliness" has been a major attraction, inviting analysis of the existence of "force closure" grasps, which allow the fingers to resist any directional forces or torques exerted on an object as long as their magnitude remains within a specified bound and each contact point is characterized by a known and non-zero friction constant, or of the more restrictive "form closure" grasps, which achieve a similar effect even when the object is arbitrarily "slippery". Other researchers used the paradigm to formulate measures of grasp quality that assess the stability of a grasp in terms of the six-dimensional "volume" spanned by all "wrenches" (i.e., force-torque pairs) that a grasp can resist without losing the object. With such measures, finding a grasp becomes a constrained non-linear optimization problem, requiring a search in the space of contact positions and assigned forces subject to the constraint of kinematic feasibility and with the chosen grasp stability measure as the optimization criterion [4]. Initial attempts to solve this problem had to resort to more or less drastic approximations to make the task computationally feasible, until the discovery that the problem can be formulated as a semi-definite optimization problem, together with the development of good statistical sampling approaches, brought a big thrust in computational efficiency for its solution [5].
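To make the wrench-based grasp quality idea concrete, the following is a minimal sketch (not the algorithm of [4] or [5]) of a force-closure test and the associated "epsilon" quality measure. All inputs are assumed for illustration: contact positions, inward surface normals, and a friction coefficient, with each friction cone discretized into a small number of edge directions.

```python
# Sketch: force closure and epsilon grasp quality via the convex hull of
# contact wrenches. Requires enough non-degenerate contacts for a full-
# dimensional 6-D hull; assumed inputs, not the chapter's implementation.
import numpy as np
from scipy.spatial import ConvexHull

def contact_wrenches(p, n, mu, n_edges=8):
    """Discretize the friction cone at contact point p (inward normal n)
    into n_edges unit force directions; return the resulting 6-D wrenches."""
    n = n / np.linalg.norm(n)
    t1 = np.cross(n, [1.0, 0.0, 0.0])      # build a tangent basis (t1, t2)
    if np.linalg.norm(t1) < 1e-6:
        t1 = np.cross(n, [0.0, 1.0, 0.0])
    t1 /= np.linalg.norm(t1)
    t2 = np.cross(n, t1)
    ws = []
    for k in range(n_edges):
        a = 2.0 * np.pi * k / n_edges
        f = n + mu * (np.cos(a) * t1 + np.sin(a) * t2)  # cone edge direction
        f /= np.linalg.norm(f)
        ws.append(np.hstack([f, np.cross(p, f)]))       # [force, torque]
    return ws

def grasp_quality(points, normals, mu):
    """Return (force_closure, epsilon) for unit-bounded contact forces."""
    ws = [w for p, n in zip(points, normals)
          for w in contact_wrenches(np.asarray(p, float),
                                    np.asarray(n, float), mu)]
    hull = ConvexHull(np.array(ws))
    b = hull.equations[:, -1]   # facets satisfy a.x + b <= 0 in the interior
    return bool(np.all(b < 0)), float(-b.max())
```

Here force closure corresponds to the origin of wrench space lying strictly inside the convex hull of the contact wrenches, and the returned epsilon is the radius of the largest wrench ball centered at the origin that fits inside this hull.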
Despite these success stories, implementations of grasping abilities on real robot hands continue to lag strongly behind the development of the theory. One reason for this is that the above-sketched approaches are based – even within their idealizations – on a significant amount of detailed knowledge about the object geometry and the friction conditions at the contact point locations. However, in reality an accurate object model often is not available, or is only known approximately. This is even more true for the friction conditions at the contact points. A second reason is that the underlying idealization is not really realistic. Unfortunately, real finger contacts depart from an ideal point contact in several important ways: the finger tips are deformable, leading to an extended contact surface that makes it possible to impart not only a force but also a torque at each contact. Moreover, the shape of this contact surface is a function of pressure, direction and the surface shape in the vicinity of the contact. Both aspects have strong effects on the way we grasp objects: while a point contact on an edge or on an object corner would be ill-defined in its effect, the elasticity of our fingers makes us often prefer precisely such "singular" object locations, since they offer us better control of the contact situation.
6 Avoiding the Clockwork Fallacy
Somewhat paradoxically, although the above-mentioned deviations from the point contact model facilitate the formation of good grasps, their inclusion in a proper algorithmic treatment causes a great deal of additional difficulty [3]. While it is still rather straightforward to extend the contact point model to more complex contact types (such as contact surfaces) as long as these remain rigid, a proper treatment of deformable fingertips is still largely beyond what is computationally feasible today. Furthermore, better contact models must also include dynamic friction, which is different from static friction and whose consistent treatment, e.g. for accurate physics-based simulations, is still only partially solved today. When effects that facilitate the solution of a task appear as a major difficulty within the chosen representation framework, we should become wary that the chosen framework is perhaps leading in a wrong direction. The strength of the model-based approach results from a view that sees the world and its embedded processes as a "big clockwork". As soon as all parts of this clockwork are identified and their properties are known, the operation of the clock can be figured out in great detail. But this may be a characteristic of clockworks in the first place, and at least the physics of the last century has provided us with striking examples that the assumption of "clockwork-ness" for the world can meet unexpected limitations – even within the confines of classical mechanics. This does not mean that we wish to argue for new physical phenomena in robot grasping; however, there may be a fundamental information problem: it may forever remain infeasible to know the mechanical interaction parameters of most common objects to a precision sufficient to make "clockwork approaches" work (a somewhat related experience was made in the computer vision
community more than a decade ago, when it turned out that the visual input usually does not provide sufficient information for an accurate 3D reconstruction of the seen object). Evidently, there are also good reasons for adopting "clockwork approaches". One reason is that there are domains where a "clockwork approach" works perfectly, since all the necessary information can be provided. However, an entirely different driving force may result from the nature of our own cognition. Whenever we attempt to solve a problem by conscious analysis, we have to cast it into a format that is communicable. This is an elementary prerequisite for a division of labor within a team and, therefore, deeply ingrained in every activity that we perform in a social context. However, it also seems very important when we just wish to unfold a problem "for ourselves": we have to find some "explicit format" which we can "put before our mental eyes" in order to perform the analysis. We thus may suspect that our strong bias towards "clockwork approaches" does not only stem from their inherent strength in suitable domains; a very deep cause may lie in the nature of our (conscious) thinking and our (conscious) way of communication, which both heavily rely on "mechanistic" representations involving explicit, symbol-like entities and relations among them.
Fig. 4. Common visual concepts, such as the visual concept of a rose, arise only through the participation of a very substantial number of image elements (left: 200 pixels; middle: 1000 pixels; right: 10k pixels)
Remarkably (and perhaps fortunately!), this "explicit mode" of problem solving is not our only mode of operation: many feats of our brain (including vision, language processing, motor control and – alas – grasping) occur in major part at a "prerational level" [9] without giving us any awareness of their underlying "inner mechanisms". When we look at the picture of a rose, we can immediately assemble thousands of pixel elements into the percept of a rose – and we fail to instantiate the percept of a rose when we try to reduce the pixel number to the range of item numbers that is typical for our mode of explicit communication (e.g., a few tens to hundreds). Therefore it is likely that we face the challenge of developing methodologies of problem solving (a term heavily connected with classical AI!) that can avoid the "clockwork fallacy" and reach beyond the constraints of carefully handcrafted representations that optimize human readability in the first place. To address this challenge, we have at our disposal at least three mutually reinforcing strands of development: the "connectionist" type of approaches (and
their modern generalizations that emphasize the concept of creating and shaping dynamical systems); evolutionary algorithms; and, finally, the increasing awareness of the role of embodiment. All three strands of development are characterized by methodologies that create implicit "representations" that can be highly functional, but are not necessarily easy for our human mind to read.
7 A "Topological Approach" to Grasping
Looking for "non-clockwork" approaches to grasping, insights about the characteristics of human grasping behavior can provide useful clues for solutions. There have been a number of taxonomies of human grasps [10,33] based on the gross topology of the hand shape and the contact situation with the grasped object. They converge in a gross subdivision of grasps into a small number of basic grasp types, such as "power grasp", "precision grasp" or "pinch grasp". The importance of the gross hand shape for grasping has also become apparent in behavioral studies of the process of grasping. A major characteristic is an early preshaping of the hand to prepare the final part of the approach phase and the closing of the fingers around the object [7,33]. Imitating this strategy for a robot hand offers an attractive alternative approach to grasping that can avoid the "clockwork fallacy" [19,24,30]: the biology-inspired approach does not rely on any prior selection and optimization of contact points; nor does it require detailed friction information. Instead, it emphasizes a process view [23] in which the essential phases of the grasp are primarily shaped according to situational features that are largely topological in nature and, therefore, very robust.
Fig. 5. Hand preshapes used as initial conditions to generate object grasps depicted in Fig. 6
The basic underlying idea is to view grasping as the creation of a "cage" around the object. To this end, a suitable hand preshape must guarantee that the fingers attain a good initial position for the closure phase, during which the fingers "wrap around" the object. The closure motion of a finger segment stops when its further movement meets any significant mechanical resistance. Ideally, such resistance should be sensed with tactile sensors attached to the digits and to the palm; in our current setup, we work without such sensors and instead monitor the progress of the movement through positional feedback from the finger joint angle sensors, as sketched below.
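As an illustration of this stop criterion, here is a minimal sketch assuming a hypothetical hand interface with per-joint angle readout and position commands (not the actual controller used on the platform):

```python
# Close the fingers until each closing joint meets mechanical resistance,
# detected purely from joint-angle feedback: a joint that no longer makes
# progress despite being commanded to close is assumed to be in contact.
import time

STEP = 0.02    # commanded closing increment per joint (rad), assumed value
STALL = 0.005  # progress below this per cycle counts as "blocked"

def close_fingers(hand, joints, max_cycles=200):
    active = set(joints)
    last = {j: hand.get_angle(j) for j in active}
    for _ in range(max_cycles):
        if not active:
            break                      # all joints have found contact
        for j in active:
            hand.set_angle(j, hand.get_angle(j) + STEP)
        time.sleep(0.02)               # let the pneumatic actuators respond
        for j in list(active):
            now = hand.get_angle(j)
            if abs(now - last[j]) < STALL:
                active.remove(j)       # contact assumed: freeze this joint
            last[j] = now
```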
Conceptually, this approach can be viewed as the preparation of a suitable initial condition (the hand preshape, together with its positioning relative to the object) for a subsequent "attractor dynamics" (the finger closure phase) with termination events (object contact) for the simultaneously occurring constituent movements of the individual fingers. The main determinant of the resulting grasping strategy is the hand preshape (together with its positioning) that defines the initial condition for the closing phase. We have found that a very restricted set of five hand preshapes (all-finger precision grasp, two-finger precision grasp, power grasp, two-finger pinch grasp and three-finger "special" grasp) is already sufficient to achieve successful grasping for a wide range of common objects [30,31]. Grasping requires that each object is assigned to one hand preshape (this is currently not automated; the "correct" preshape for an object is commanded by a human operator making a corresponding hand gesture in front of a camera) and that the robot hand is positioned at a known distance and orientation to the target object. Therefore, object position and orientation have to be determined, e.g., by a vision front-end.
Fig. 6. Example grasps (left) of the Shadow Hand with the algorithm from [30] for a benchmark collection of 21 common household objects (shown on the right)
To evaluate the performance of the resulting grasps, we carried out a benchmark using a representative set of 21 benchmark objects depicted in Fig. 6. While research into grasping dates back more than four decades, there does not yet exist any established benchmark procedure for evaluating grasps and grasp success. The benchmark suite and the procedure (outlined below) to measure grasp success involve only objects that are readily available everywhere and may be a first step towards a more systematic and generally accepted benchmarking procedure for grasp evaluation.
For each object, the robot had to make ten grasp attempts, each of which consisted of grasping and lifting the object. Each attempt that resulted in a successful lifting of the object was counted as a success, otherwise as a failure. Table 2 offers a summary of the measured performance and shows that the majority of objects can be grasped with a high success rate; but some objects (such as the bunch of keys or the pencil) are still too demanding for the described strategy: in the case of the keys, the many movable parts preclude reliable grasping, while the pencil is too slim to allow the fingers to attain a stable "force opposition" configuration when picking it up from the table. Note that in both cases the "clockwork-type" approach would also be highly unlikely to succeed: neither could it remedy the size problem in the case of the pencil, nor would a grasp point determination based on a detailed modeling of the movements of the keys appear feasible.

Table 2. Performance of the "topology-based" grasping strategy for the 21 benchmark objects depicted in Fig. 6. Column "Succ" gives the number of successful grasping trials out of 10 attempts; column "Grasp" indicates the best grasp preshape (with less optimal alternatives in brackets, a sign distinguishing between good ("+") and less good ("-") suboptimal performance). Labels "Force", "Pre" and "III" refer to the finger configuration used (power grasp, precision grasp and three-finger grasp, resp.). "Special" is a specially designed grasp configuration tailored to grasp a "propeller-shaped" object.

Obj  Succ  Grasp
1    10    Force (+III)
2    10    (Special)
3    10    (+III,-Force)
4    10    Force
5    10    Force (+III)
6    10    Force (+III)
7     9    Force (-III)
8     8    III (+Force,-Pre,-II)
9     8    Force
10    9    Force
11    7    Pre
12    6    III
13    7    Force
14    7    III (+Force,-Pre,-II)
15    6    Force (-III)
16    5    III
17    4    Pre
18    3    Pre
19    4    Pre
20    0    III
21    0    Pre
There are also important details to take into account in the described "topology-based" approach. A major factor affecting grasp success turned out to be the proper timing of the finger contacts on the object. Experiments with human subjects have revealed that humans tend to maximize the simultaneity of their finger contacts on the object. This strategy helps to minimize "unwanted" object movements during the finger closure phase. A similar timing could be obtained for the robot grasps by properly adjusting the initial finger positions in the pregrasp. The work in [31] shows how a physics-based simulation of the finger closure phase can be used to achieve an appropriate pregrasp optimization for each object, however, at the price of then requiring at least approximate geometry data for each object. A second
significant factor is the optimization of the thumb opposition. For the sake of brevity, we refer the reader to [31] for the technical details. The resulting grasping strategy also turns out to be capable of generalizing to novel objects from the depicted object categories; its generalization ability is further underscored by its portability to a different hand design. While most grasping algorithms are usually evaluated on only a single hand system, we took the availability of an older, three-fingered hand system as an opportunity to test how well the algorithm works with such a more restricted manipulator. While the overall grasp success rate decreases as a result of the reduced "caging" that can be provided with three fingers (instead of the five on the Shadow Hand), even the three-finger hand was able to successfully grasp the majority of the benchmark objects depicted above.
8 From Grasping towards Manual Intelligence
Grasping an object is only one of a large array of sophisticated capabilities connected with our hands. Once a child has mastered grasping, many other sensori-motor competences come into view: alternating between different grasps according to what an object is used for; coordinating both hands for performing complex operations that are too difficult for a single hand alone; learning the numerous interaction patterns such as guiding, supporting or adjusting when we arrange rigid or non-rigid objects in a purposeful fashion; acquiring important action-based concepts such as that of a container, which unite interaction patterns across mechanically very different types of objects; and mastering tools such as spoons, knives or keys to enable operations that would be difficult or impossible with our hands alone. This list could be continued much further, and each of its entries denotes a highly non-trivial type of interaction pattern which is in most cases rather routine for human hands, but with very few exceptions well beyond the state of the art in current robotics [29]. This is not only due to a lack of sensing or control at the level of existing hand hardware. In fact, we all know that a human can perform an amazing range of "manual" actions even when equipped with nothing more than a rigid mechanical hook. Examples like this make clear that a very essential ingredient for the scope of manual actions is the cognitive machinery that shapes and binds the low-level interaction patterns together. If our robots had only a tiny percentage of this cognition, they could very likely perform marvellous actions with the robot hands that exist – despite all of their shortcomings [1]. However, the immense scope of the required "manual intelligence" becomes clearer when we consider the significant time children need to learn the use of their hands. This time extends over many years, and the involved effort appears comparable to that involved in the acquisition of language. It is an intriguing question to what extent the acquisition of language and of manual capabilities may be interrelated [26] (and we will speculate on that issue below), but the case of sign language demonstrates impressively that the control
of our hands can be at least as sophisticated as language itself. However, most uses of our hands are not connected with the task of explicit communication, but instead are involved in interaction patterns that we usually never need to speak of (and if we have to do so, we find it very hard to verbalize what our hands are doing). Therefore, we may suspect that the nature of our “manual intelligence” is very different from the rational intelligence that was in the focus of classical AI for a long time. Given the insights that the attempts of classical AI to understand intelligence have provided, this difference may be very encouraging: by its strong grounding in physical interaction patterns while at the same time spanning the enormous “semantic spectrum” that reaches from low-level control to tool use and even to emotional, artistic and linguistic expression, “manual intelligence” may be “just the right kind” of intelligence to embark on for a research program devoted to a better understanding of cognition. Such a research perspective is also well in line with the increasing – although not undisputed – appreciation of the important role of embodiment for cognition.
9 Measuring Manual Intelligence
Envisaging manual intelligence as a research topic as well as an important capability for robots, we should at least find some ways to make it measurable. Currently, there exist practically no generally established benchmarking procedures, even if we restrict ourselves to rather well-defined capabilities such as object grasping. A tentative proposal within the EURON initiative is based on a bimanual Barrett hand system and proposes to evaluate grasp success for a number of (artificial) benchmark objects [25]. A different benchmark, employing the set of 21 widely available household objects (shown in Fig. 6), has been suggested in [31] and has been used to compare grasp optimization schemes on two different robot hands [30]. Useful guidance for measuring manual intelligence might come from surgery, where comparing different training strategies with respect to their impact on the acquisition of manual skills in surgeons is an important issue [18]. For instance, manual skills in using a laparoscope have been successfully modelled as temporal force and torque profiles imparted on the instrument [32]. In the study of child development, a widely accepted procedure for measuring the developmental stage of motor skills is the Peabody Motor Development Scale [13]. It has a part specifically focusing on grasping skills, featuring 26 different test tasks, each of which is ranked on a nominal three-point scale. Another 72 tasks measure visuo-motor coordination. While the majority of these tests are probably still too hard for the level of manual intelligence of today's robots, they might become usable in the near future when robot hands can do more than now. Until then, these test designs might provide useful inspiration for how to design manual skill benchmarks for robots, for instance, embracing instruction by demonstration as a natural part of any performance measurement.
10 Manual Intelligence and Language: Some Speculations
If we look at the multitude of research issues that have been the focus of robot manipulation since its inception about forty years ago, we recognize that an overwhelming proportion of this research has addressed relatively low-level issues of hand control. While this work has significantly contributed to the design of better robot hands, research dealing with the realization of manual abilities beyond the basic acts of reaching and grasping has been very scarce at best. Recalling the enormous abilities residing in a simple hook when "driven" with the right amount of cognition, we may have been too obsessed with developing the "right clockworks" at the wrong level to make our hardware "tick". Our hands are very general manipulators. We can arrange our fingers into many specialized "virtual tools" with very different and highly specialized patterns of operation. This makes it very hard to believe that manual skills can be derived from a small set of general principles: it seems that our repository of manual skills contains a large number of "inventions" made through our long experience of hand use. Interestingly, most of these "inventions" seem to be made by all people in a very similar way; however, at least some of the more sophisticated inventions would probably take too long to invent ourselves and are instead taught to us (or we pick them up through observation and imitation of others): tying knots, for instance, or many of the skills that make up the different craftsmanships. This resembles language: a lot of structure does not seem derivable from any deeper logic. It "just" has evolved to serve the purpose of communication, and the multitude of languages demonstrates impressively that many different solutions are possible. If we pursue the analogy with language further, we may find that much of our current research is still focused on making our main "articulatory instrument" usable at all. This is very reminiscent of the "babbling phase" during which a child learns to use its vocal tract to articulate a basic repertoire of syllables. It might be interesting to analyse early human manual actions from such a perspective: it would appear as a very reasonable economy principle if the brain used similar strategies (and/or circuits) to acquire phonetic and "manual" babbling abilities. Such an approach is also very much in line with recent insights about the connection between manual gesture and speech, see e.g. [16,20,8]. If we are willing to take the analogy with language development seriously, the next challenge would be the identification and mastery of larger basic chunks of manual action with a building-block role similar to that of words in language. There has already been some intriguing work aimed at the identification of "basic action units" of human movement [34,17]. This work has shed some light on our understanding of how skillful movements might be generated from a more limited repertoire of basic primitives; at the same time it has enabled the optimization of performance levels in sports by identifying which features in the mental organization of action units are indicative of expert performance and reinforcing those features through correspondingly focused training procedures [35].
A database of human manual actions could become an important cornerstone for setting up an equivalent of a "vocabulary" in language. It would have to represent its entries not only at the trajectory level, but also at more abstract levels, such as in the form of idealized physics-based simulation chunks, together with annotations at the task level. It might play a role for manual action research similar to that of the famous WordNet corpus [12] in linguistic research. In the same manner as "knowing" the vocabulary of a language involves the (very non-trivial!) ability of associating objects, activities and relations with their "names" in the vocabulary, "knowing" a vocabulary of manual actions necessitates the ability of associating actions that "fit" into a given constellation of objects. Recognizing such "affordances" [14] requires recognizing a possibility, calling for a non-trivial extension of the familiar classification paradigm of pattern recognition. Another key aspect is the usage of dictionary entries. Here, once more, language can offer guidance: using a word requires expectations about its effect on the mind of the receiver. Likewise, manual action units will have to become tied to representations of their effects on the involved objects. This level is very different from the (also very non-trivial) control issues to be solved for physically executing the manual action unit, and we have already pointed out that this level might more aptly be seen in analogy to the articulation of the sound patterns of a word. We believe that only such deeply grounded representations of manual action primitives can form the necessary basis for considering longer action sequences ("sentences"). It is well known in linguistics that the words within a sentence are not primarily data, but instead very parsimonious "interaction instruments" to instantiate a desired mental picture in the mind of the receiver. This point has been brought to the fore perhaps most succinctly in recent work that views language from the viewpoint of an "interaction game", using game theory to analyze the utility of different choices for the next interaction step [22]. Since language has evolved to enable talking about actions, it should be of little surprise that there is a close correspondence between sentences about actions and the structure of the actions themselves. If we consider, e.g., the use of a tool such as a knife, we can clearly distinguish a "subject" (the knife), an "object" (the bread) and a "predicate", namely the activity of cutting. From an evolutionary perspective, the ability of hands to assemble structures in the physical domain appears as a very natural basis for an extension into the mental domain, enabling the assembly of mental structures when the interaction of hand action sequences with physical objects is replaced by the interaction of sentences with mental objects in the mind of the listener. Using hands in a purposeful fashion to interact with and shape a wide range of differently structured physical contexts would then resemble the linguistic capability of conducting conversations on a wide range of subjects. At this level, manual intelligence would need to invoke many of the cognitive abilities that we so far primarily connect with language: recognizing agentship and roles, planning ahead, associating meaning with object constellations, and many more. And
although we must always commit ourselves to one single thread of real action, we can also imagine action alternatives before our mental eye, leading to analogies with the linguistic formulation of conditionals.
11 Conclusion
Taking seriously the view that brain-like behavior is primarily about the active shaping of interaction, the study of manual actions, or, more ambitiously, of Manual Intelligence, should bear a great potential to reveal significant insights about brains and the replication of some of their functions in machines. While traditional robotics viewed most parts of the environment as obstacles and was biased towards generating movement strategies that kept the robot away from any objects, research into manual action has to embrace contact and interaction as its main modes of operation. This will shift our approaches for representing objects, situations and actions in a healthy way and spur novel approaches which have interaction at their center, calling for new ways to detect, recognize, decompose and control patterns that are no longer of a purely input nature. This appears to be a good move, since it is likely also to refresh the interaction among our ideas and across the disciplines that are involved in the fascinating subject of manual intelligence.
References 1. St. Amant, R., Wood, A.B.: Tool use for autonomous agents. In: Proc. National Conf. on Artificial Intelligence (AAAI), pp. 184–189 (2005) 2. Bicchi, A.: Hands for dexterous manipulation and robust grasping: a difficult road toward simplicity. IEEE Trans. Robotics Autom. 16(6), 652–662 (2000) 3. Bicchi, A., Kumar, V.: Robotic grasping and contact: a review. In: Proceedings ICRA 2000, pp. 348–353 (2000) 4. Borst, C., Fischer, M., Hirzinger, G.: Calculating hand configurations for precision and pinch grasps. In: Proc. IEEE IROS 2002, pp. 1553–1559 (2002) 5. Borst, C., Fischer, M., Hirzinger, G.: Efficient and precise grasp planning for real world objects. In: Barbagli, F., Prattichizzo, D., Salisbury, K. (eds.) Multi-point Interaction with Real and Virtual Objects. Tracts in Advanced Robotics, vol. 18, pp. 91–111 (2005) 6. Butterfass, J., Fischer, M., Grebenstein, M., Haidacher, S., Hirzinger, G.: Design and experiences with DLR Hand II. In: Proc. World Automation Congress, Sevilla (2004) 7. Castiello, U.: The Neuroscience of Grasping. Nat. Rev. Neurosci. 6, 726–736 (2005) 8. Cook, S.W., Goldin-Meadow, S.: The Role of Gesture in Learning. Do Children Use Their Hands to Change Their Minds? J. Cognition and Development 7(2), 211–232 (2006) 9. Cruse, H., Dean, J., Ritter, H. (eds.): Prerational Intelligence – Adaptive Behavior and Intelligent Systems Without Symbols and Logic. Studies in Cognitive Systems, vol. 1-3. Kluwer Academic Publishers, Dordrecht (2000) 10. Cutkosky, M.R.: On Grasp choice, grasp models and the design of hands for manufacturing tasks. IEEE Trans. Robotics and Automation 5(3), 269–279 (1989)
11. Dexter – Mechanism, Control and Developmental Programming, http://www-robotics.cs.umass.edu/Research/Humanoid/humanoid_index.html 12. Fellbaum, C. (ed.): WordNet – An Electronic Lexical Database. MIT Press, Cambridge (1998) 13. Folio, M.R., Fewell, R.R.: Peabody Developmental Motor Scales PDMS-2. Therapy Skill Builders Publishing (2000) 14. Gibson, J.J.: The ecological approach to visual perception. Houghton Mifflin, Boston (1979) 15. Mouri, T., Kawasaki, H., Yoshikawa, K., Takai, J., Ito, S.: Anthropomorphic Robot Hand: Gifu Hand III. In: Proc. of Int. Conf. ICCAS 2002 (2002) 16. Gentilucci, M., Corballis, M.C.: From manual gesture to speech. A gradual transition. Neurosci. & Biobehav. Reviews 30(7), 949–960 (2006) 17. Guerra-Filho, G., Fermüller, C., Aloimonos, Y.: Discovering a Language for Human Activity. In: Proc. AAAI 2005 Fall Symposium (2005) 18. Hamdorf, J.M., Hall, J.C.: Acquiring surgical skills. British Journal of Surgery (87), 28–37 (2000) 19. Hauck, A., Passig, G., Schenk, T., Sorg, M., Färber, G.: On the performance of a biologically motivated visual control strategy for robotic hand-eye coordination. In: Proc. IROS 2000, vol. 3, pp. 1626–1632 (2000) 20. Hadar, U., Wenkert-Olenik, D., Krauss, R., Soroker, N.: Gesture and the Processing of Speech: Neuropsychological Evidence. Brain and Language 62(1), 107–126 (1998) 21. Jacobsen, C., Iversen, E.K., Knutti, D.F., Johnson, R.T., Biggers, K.B.: Design of the Utah/MIT dexterous hand. In: ICRA Conf. Proceedings, pp. 1520–1532 (1986) 22. Jäger, G.: Applications of Game Theory in Linguistics. Language and Linguistics Compass 2, 1749–1767 (2008) 23. Jeannerod, M.: The timing of natural prehension movements. J. Motor Behavior 16(3), 235–254 (1984) 24. Kragic, D., Christensen, H.I.: Biologically motivated visual servoing and grasping for real world tasks. In: Proceedings of IROS 2003, vol. 4, pp. 3417–3422 (2003) 25. Morales, A.: Experimental benchmarking of grasp reliability (2006), http://www.robot.uji.es/people/morales/experiments/benchmark.html 26. McNeill, D.: Hand and Mind: What Gestures Reveal about Thought. University of Chicago Press (1992) 27. Okamura, A.M., Smaby, N., Cutkosky, M.R.: An overview of dexterous manipulation. In: Proceedings ICRA 2000, pp. 255–262 (2000) 28. Ott, C., Eiberger, O., Friedl, W., Bäuml, B., Hillenbrand, U., Borst, C., Albu-Schäffer, A., Brunner, B., Hirschmüller, H., Kielhöfer, S., Konietschke, R., Suppa, M., Wimböck, T., Zacharias, F., Hirzinger, G.: A Humanoid Two-Arm System for Dexterous Manipulation. In: 6th Humanoid Robots Conf., pp. 276–283 (2006) 29. Rehnmark, F., Bluethmann, W., Mehling, J., Ambrose, R.O., Diftler, M., Chu, M., Necessary, R.: Robonaut: The Short List of Technology Hurdles. Computer 38, 28–37 (2005) 30. Röthling, F., Haschke, R., Steil, J.J., Ritter, H.: Platform Portable Anthropomorphic Grasping with the Bielefeld 20 DOF Shadow and 9 DOF TUM Hand. In: IEEE IROS Conference Proceedings (2007) 31. Röthling, F.: Real Robot Hand Grasping using Simulation-Based Optimisation of Portable Strategies. Dissertation, Faculty of Technology, Bielefeld University (2007)
32. Rosen, J., Hannaford, B., Richards, C.G., Sinanan, M.N.: Markov modeling of minimally invasive surgery based on tool/tissue interaction and force/torque signatures for evaluating surgical skills. IEEE Trans. Biomed. Engineering 48(5), 579–591 (2001) 33. Santello, M., Flanders, M., Soechting, J.F.: Patterns of Hand Motion during Grasping and the Influence of Sensory Guidance. Journal of Neuroscience 22(4), 1426–1435 (2002) 34. Schack, T.: The cognitive architecture of complex movement. Int. J. of Sport and Exercise Psychology 2(4), 403–438 (2004) 35. Schack, T., Mechsner, F.: Representation of motor skills in human long-term memory. Neurosci. Letters 391, 77–81 (2006) 36. Shadow Robot Company, The Shadow Dextrous Hand, http://www.shadow.org.uk/products/newhand.shtml 37. Townsend, W.: The BarrettHand grasper – programmably flexible part handling and assembly. Industrial Robot 27(3), 181–188 (2000)
Learning Actions through Imitation and Exploration: Towards Humanoid Robots That Learn from Humans

David B. Grimes and Rajesh P.N. Rao

University of Washington, Seattle WA 98195, USA
{grimes,rao}@cs.washington.edu
http://www.cs.washington.edu/homes/{grimes,rao}
Abstract. A prerequisite for achieving brain-like intelligence is the ability to rapidly learn new behaviors and actions. A fundamental mechanism for rapid learning in humans is imitation: children routinely learn new skills (e.g., opening a door or tying a shoe lace) by imitating their parents; adults continue to learn by imitating skilled instructors (e.g., in tennis). In this chapter, we propose a probabilistic framework for imitation learning in robots that is inspired by how humans learn from imitation and exploration. Rather than relying on complex (and often brittle) physics-based models, the robot learns a dynamic Bayesian network that captures its dynamics directly in terms of sensor measurements and actions during an imitation-guided exploration phase. After learning, actions are selected based on probabilistic inference in the learned Bayesian network. We present results demonstrating that a 25-degree-of-freedom humanoid robot can learn dynamically stable, full-body imitative motions simply by observing a human demonstrator.
1 Introduction
The dream of achieving brain-like intelligence has been articulated by a number of visionaries in the field of artificial intelligence, going back to Turing [1] and the 1956 Dartmouth AI conference [2]. Two major obstacles that have stymied efforts to create brain-like intelligence in machines have been: (a) lack of mechanisms for rapid learning of new behaviors that allow the machine to adapt to the environment, and (b) lack of the ability to handle uncertainty due to noise, ambiguity in inputs, and incomplete knowledge of the environment. In humans, an important mechanism for rapid learning is imitation. Infants as young as 42 minutes of age have been found to imitate facial acts such as tongue protrusion while older children can perform complicated forms of imitation ranging from learning to manipulate novel objects in particular ways to imitation based on inference of goals from unsuccessful demonstrations (see [3] for a review). In this chapter, we propose imitation as a general-purpose mechanism for rapid learning in robots. We handle the problem of uncertainty by utilizing Bayesian models
that maintain full probability distributions over sensory and motor states; these distributions and their parameters are updated in a Bayesian manner based on sensory feedback. There is growing evidence from psychophysical and neurobiological studies that the brain may rely on Bayesian principles for perception and action [4,5]. Our framework for robotic imitation is inspired by these Bayesian models of brain function and by the ability of humans to learn new skills simply by watching another human. Many existing methods for planning robotic actions require an engineer to explicitly model the complex physics of the robot and its environment. This process can be costly, tedious, error-prone, brittle, and inflexible to changes in the environment or the robot. Consider the problem of teaching a robot a non-trivial motor behavior such as kicking a ball or swinging a racket. A programmer may have to spend a large amount of time deciding on exact motor control sequences for every joint in the robot for a pose sequence that lasts only a few seconds. A much more intuitive and robust approach would be for the robot to learn to generate its own motor commands by simply watching an instructor perform the desired action. In other words, the robot should learn to translate the perceived motion of its instructor into appropriate motor commands for itself. This imitation learning paradigm is intuitive because it is exactly how we humans often learn new behaviors [6]. Learning by imitation, long recognized as a crucial means for skill and general knowledge transfer between humans, has only recently become an active research area in the robotics and machine learning communities. Robotics researchers are becoming increasingly interested in learning by imitation (also known as "learning from demonstration") as an attractive alternative to manually programming robots. Kuniyoshi, Inaba and Inoue termed their approach "learning by watching" [7]. They identified key components necessary for any such imitation learning system, including functional units for segmentation and recognition of human actions and motion, as well as an algorithm for constructing an imitative motor plan for a given robotic platform. In their system, information from these components is integrated symbolically. Using pre-specified motion templates, a robotic arm is able to perform a blocks-world manipulation task by imitation. Researchers have since studied imitation learning in robots using a wide array of techniques from many sub-fields of electrical engineering, mechanical engineering, and computer science. Recent examples of robotic imitation learning include [8,9,10]. Many approaches rely on producing imitative behaviors using nonlinear dynamical systems (e.g., [11]) while others focus on more biologically motivated algorithms (e.g., [12]), or on achieving goal-directed behaviors (e.g., [13,14]). The work of Schaal and Atkeson on "learning from demonstration" [15,16] focused on incorporating imitation into the well-known framework of reinforcement learning. The goal of this work is to bootstrap reinforcement learning algorithms such as Q-Learning [17,18]. Schaal demonstrates a pole-balancing robot which is able to quickly learn a Linear-Quadratic Regulator (LQR) policy by observing another robot demonstrate successful pole balancing. Essentially this works by
estimating LQR parameters during the demonstration. The more recent work of Price [19] on implicit imitation also explored "priming" reinforcement learning with an imitative component. Inverse reinforcement learning [20] and apprenticeship learning [21] have been proposed to learn controllers for complex systems based on observing an expert and learning their reward function. However, the role of this type of expert and that of a human demonstrator must be distinguished. In the former case the expert is directly controlling the artificial system. In the imitation learning paradigm that we explore, the teacher controls only their own body. Despite kinematic similarities between the human and the humanoid robot, the dynamic properties of the robot and the human are very different and must be accounted for in the learning process. Many of the aforementioned approaches do not take uncertainty into account. Uncertainty in imitation arises from many sources, including the internal dynamics of the robot, the robot's interactions with its environment, noisy and incomplete observations of the teacher, etc. Handling uncertainty is especially critical in robotic imitation because executing actions that have high uncertainty during imitation could lead to potentially disastrous consequences. This motivates the probabilistic approach suggested in this chapter. The domain of humanoid robotics is of particular interest in imitation learning [22]. This is due to several factors. The first is the intuitiveness of programming a humanoid robot by demonstration, given its intentional kinematic similarity with a human teacher. Secondly, enabling novel behaviors in humanoid robots is known to
Fig. 1. The Fujitsu HOAP-2 humanoid robot. The Humanoid Open Architecture Platform 2 (HOAP-2) from Fujitsu manifests many engineering advances, particularly its servo accuracy and power-to-weight ratio. The HOAP-2 is capable of such motions as stable walking, kicking a ball, and balancing on one leg. Despite these promising hardware capabilities, the HOAP-2 had only been programmed to perform a handful of hard-coded skills, with development focused largely on more stable, faster walking gaits. Learning-based approaches represent the most promising route to rapid skill acquisition, given the complexity of controlling 25 degrees of freedom at 1000 Hz.
be a difficult research problem due to the very high number of degrees of freedom. Combined with the advent of commercially available humanoid robots such as the HOAP-2 from Fujitsu (shown in Figure 1), imitation-based methodologies have quickly become an important and viable research topic [23]. In this chapter, we describe a Bayesian framework for imitation-based learning in humanoid robots. The proposed method involves learning a predictive model of the robot's dynamics, represented directly in terms of sensor measurements, solely from exploration. A dynamic Bayesian network is used to probabilistically capture the robot's dynamics. Actions are selected based on probabilistic inference in the learned Bayesian network. Although the observation of the teacher is informative, there is a high degree of uncertainty in how the robot can and should imitate. The proposed model accounts for many of these sources of uncertainty, including: noisy and missing kinematic estimates of the teacher, mapping ambiguities between the human and robot kinematic spaces, and, lastly, the large uncertainty due to the effect of physical forces imparted on the robot during imitation. We present results from experiments performed using a Fujitsu HOAP-2 25-degree-of-freedom humanoid robot and the Webots [24] dynamic simulation software. Our results demonstrate that the robot can learn dynamically stable, full-body imitative motions simply by observing a human demonstrator and performing exploratory learning.
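As a deliberately simplified schematic of this idea (not the model developed later in the chapter), one could fit a linear-Gaussian forward model from exploration data and select actions by maximizing the likelihood of reaching a teacher-derived target state:

```python
# Schematic sketch: fit s' ~ N([s; a] W, Q) from exploration rollouts,
# then choose the candidate action whose predicted next state is most
# probable to match the target. All names and shapes are illustrative.
import numpy as np

def fit_forward_model(S, A, S_next):
    """S: (T, ds) states, A: (T, da) actions, S_next: (T, ds) successors."""
    X = np.hstack([S, A])                         # regressors [s; a]
    W, *_ = np.linalg.lstsq(X, S_next, rcond=None)
    resid = S_next - X @ W
    Q = np.atleast_2d(np.cov(resid.T))            # residual covariance
    return W, Q

def select_action(W, Q, s, target, candidates):
    """Greedy one-step choice: minimize Mahalanobis distance to target."""
    Qinv = np.linalg.inv(Q)
    def cost(a):
        d = target - np.hstack([s, a]) @ W        # predicted-state error
        return d @ Qinv @ d
    return min(candidates, key=cost)
```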
1.1 Physics-Based Modeling and Motion Planning
This section presents some fundamental concepts of physics-based modeling and motion planning relevant to the domain of humanoid robotics. This is useful both as a comparison to the learning-based methods presented in this chapter, and for understanding the prior intuition used in constructing the probabilistic dynamic balance model presented in Section 3. With some caveats, one can approximate a humanoid robot as a set of articulated rigid bodies. The principles of dynamics modeling and simulation of such rigid-body systems are well understood and have been applied to a wide variety of robotic systems [25]. Consider a robot with $N$ joints between the $N+1$ rigid bodies $B_0, \ldots, B_i, \ldots, B_N$. The joints are modeled as an open (acyclic) graph structure with the rigid-body node $B_0$ at the root. Often the root node models a stationary "base" rigid body, though clearly in a humanoid robot this is not the case. Let $\pi(i)$ indicate the index of the parent body of the $i$-th node, where indices are chosen such that $\pi(i) < i$. The mass of body $i$ is denoted $m_i$ and its position, the center of mass (CoM), is denoted $C_i$. For simplicity, consider all joints to be revolute. A revolute joint connects bodies about a single point, possibly with multiple axes of rotation (degrees of freedom). For ease of notation, we here adopt the convention due to Featherstone and express equations of motion using six-dimensional spatial vectors [25]. The spatial linear velocity $\upsilon_i$ (of the point $C_i$) and the angular velocity $\omega_i$ of rigid body $i$ are described together in a six-dimensional motion vector:
$$v_i = \begin{bmatrix} \upsilon_i \\ \omega_i \end{bmatrix} \qquad (1)$$
All quantities presented are defined in a global coordinate frame relative to an arbitrary fixed origin. However, in implementations coordinates are typically computed with respect to frames local to each joint and recursively transformed. The spatial acceleration of rigid body $i$ is defined as:

$$a_i = \frac{dv_i}{dt} = \begin{bmatrix} \dot{\upsilon}_i \\ \dot{\omega}_i \end{bmatrix} \qquad (2)$$

Let the joint between body $i$ and its parent $\pi(i)$ have $d_i$ degrees of freedom. The angular position of the joint is then denoted by $\theta_i$, a $d_i \times 1$ vector. Then $\theta$ can be defined as the vector of all joint angles (one entry per rotational degree of freedom):

$$\theta = \begin{bmatrix} \theta_1 \\ \theta_2 \\ \vdots \\ \theta_i \\ \vdots \\ \theta_N \end{bmatrix} \qquad (3)$$

For instance, in the HOAP-2 humanoid robot, $\theta$ is a $25 \times 1$ vector. Similarly, the joint angular velocities and accelerations are denoted $\dot{\theta}$ and $\ddot{\theta}$, respectively. A joint's orientation is described by $H_i$, a $6 \times d_i$ matrix spanning the space of the rotational axes of joint $i$. Using these definitions it is straightforward to solve the forward kinematics problem: given a vector of joint kinematics, compute the velocities and accelerations (and from these the positions) of all rigid bodies. This is described here using the forward recursive method shown in Eqs. 4 and 5:

$$v_i = v_{\pi(i)} + H_i \dot{\theta}_i \qquad (4)$$

$$a_i = a_{\pi(i)} + \dot{H}_i \dot{\theta}_i + H_i \ddot{\theta}_i \qquad (5)$$
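As an illustration only (not the chapter's code), the forward recursion of Eqs. 4 and 5 can be sketched in a few lines, assuming hypothetical inputs: a parent array encoding $\pi(i)$, the joint axis matrices $H_i$ and their time derivatives, and the joint velocities and accelerations, all expressed in a common frame:

```python
# A minimal sketch of the forward kinematics recursion (Eqs. 4 and 5).
import numpy as np

def forward_kinematics(parent, H, Hdot, qd, qdd):
    """parent[i] = pi(i) with parent[i] < i; H[i], Hdot[i]: 6 x d_i arrays;
    qd[i], qdd[i]: d_i-vectors of joint velocity and acceleration."""
    N = len(parent) - 1            # bodies 1..N hang off the root body 0
    v = [np.zeros(6) for _ in range(N + 1)]  # spatial velocities
    a = [np.zeros(6) for _ in range(N + 1)]  # spatial accelerations
    for i in range(1, N + 1):      # parents are always visited first
        p = parent[i]
        v[i] = v[p] + H[i] @ qd[i]                      # Eq. 4
        a[i] = a[p] + Hdot[i] @ qd[i] + H[i] @ qdd[i]   # Eq. 5
    return v, a
```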
The next step is to consider inertia and forces, as is necessary to model and ultimately constrain dynamics. The spatial inertia of each rigid body can be described by a $6 \times 6$ matrix denoted $I_i$. The matrix $I_i$ incorporates both the mass $m_i$ and the moment of inertia matrix $I_i^*$. These quantities must be known or estimated. Forces are also denoted in spatial notation:

$$f = \begin{bmatrix} \tau^* \\ f^* \end{bmatrix} \qquad (6)$$

where $\tau^*$ (a $3 \times 1$ vector), the coupled torque, and $f^*$ (a $3 \times 1$ vector), the linear force, are both expressed as being applied at the CoM point $C_i$ of body $i$.
The combined Newton-Euler equation of motion for rigid body $i$ can be compactly expressed as:

$$f_i + f_i^x = I_i a_i + v_i \times I_i v_i \qquad (7)$$

where $f_i$ is the net internal force (the forces applied through its joints), and $f_i^x$ is the net external force. Here $f_i^x$ incorporates all interaction forces between the robot and the environment, and must be known or estimated. The next step is to compute $f_i^{\pi(i)}$, the force transmitted from the parent $\pi(i)$ of node $i$ (presumably via a controlled actuator). To do so, we subtract the force contributions of all children nodes (denoted $\mu(i)$):
$$f_i^{\pi(i)} = f_i - \sum_{j \in \mu(i)} f_j^i \qquad (8)$$
Equation 8 can be applied to compute the joint forces recursively, starting at the leaf nodes and moving toward the root. Finally, one can extract the components of the forces that are coupled through the joint's DOFs (presumably via an actuator):

$$\tau_i = H_i^T f_i^{\pi(i)} \qquad (9)$$

Equation 9 forms the basis for solving the inverse dynamics problem: given the desired kinematics of the robot, compute the necessary joint torques. In practice, more efficient methods exist for solving the inverse dynamics problem than the recursive Newton-Euler method presented here (for examples see [26,27]).

The relative simplicity of the previous equations obscures the difficulties in their application to solving real-world problems in humanoid robotics. The first is simply the large number of quantities that must be known or accurately estimated. Accurate estimation of the inertia tensor of each rigid-body component is a difficult task and requires either a) detailed knowledge and modeling of the internal composition or b) complex equipment for empirical inertial measurement. A second, and perhaps more difficult, problem is that the formulation above assumes that all external forces $f_i^x$ are known. For a humanoid robot, external forces include the ground reaction force, frictional forces and gravity. Simulation and modeling of collisions and of the contact forces between the feet and the ground represent a complex and open research area. Two major approaches to these problems have been proposed: a) penalty-based methods and b) analytical methods. Penalty methods pose the problem as one of numerical optimization in a stiff spring-mass system which approximates rigid contacts [28]. The second class of methods formulates the contact forces directly, using analytical solutions to simplified contact models [29,30]. Another issue with the application of the physics-based model as outlined here is the assumption that all bodies are completely rigid and that the joints are purely revolute. In practice these assumptions are often not met. Additionally, the very need for these assumptions places design constraints on the hardware which may not be favorable toward the final desired robotic application.
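To make the backward pass of Eqs. 8 and 9 concrete, here is a minimal sketch under assumed inputs (the net body forces from Eq. 7, per-body children lists, and joint axis matrices, all in one common frame), not the chapter's implementation:

```python
# A sketch of the backward force recursion of Eqs. 8 and 9: process bodies
# leaves-first, subtract the forces transmitted to the children, and
# project through the joint axes to obtain the actuated torque components.
import numpy as np

def joint_torques(children, f, H, N):
    """children[i]: list of child indices mu(i); f[i]: net internal force
    on body i from Eq. 7 (I_i a_i + v_i x I_i v_i - f_i^x); H[i]: 6 x d_i."""
    f_joint = [None] * (N + 1)   # f_joint[i] holds f_i^{pi(i)} of Eq. 8
    tau = [None] * (N + 1)
    for i in range(N, 0, -1):    # pi(i) < i guarantees children come first
        fi = f[i] - sum((f_joint[j] for j in children[i]), np.zeros(6))
        f_joint[i] = fi
        tau[i] = H[i].T @ fi     # Eq. 9
    return tau
```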
Many motion planning algorithms for humanoid robots seek to first find a sequence of kinematic states which are statically stable (relative to some criterion for stability) [31]. An early definition of stability in legged robots is due to McGhee and Frank [32]. They defined a kinematic state to be statically stable if the projection of the robot's center of gravity onto the ground plane fell within the area of support. The area of support is defined as the convex hull of contact points between the robot and the environment. Another common requirement is that the joint torques τ_i required to counteract the externally applied forces do not exceed the actuator saturation bounds. The stability criterion can also be defined with respect to the conditions at the contact points. A common example of this type of criterion is the well-known zero moment point (ZMP). For a review of the long history of developments related to ZMP see Vukobratovic [33]. ZMP defines dynamic stability for a robot; that is, an entire motion is defined as stable or not, rather than a single static pose. In order to be dynamically stable under the ZMP criterion, the designated contact surface at a given time (generally a single foot in a gait) must not rotate with respect to the ground plane (although it can slip in purely translational motion). Thus the ZMP is defined as the point relative to the robot at which the single ground reaction force (the sum of the contact force f_i^x terms above) will cancel out all other forces acting on the robot. It is important to note that ZMP does not prescribe a certain control algorithm, and a wide variety of approaches have been developed to apply ZMP to humanoid robots. Many motion planning algorithms are based on controlling ZMP, and typically attempt to move the ZMP to the center of a polygon modeling the contact surface [34,35,36]. ZMP dynamic stability is defined with respect to a model of the contact between robot and environment. However, the use of the term "dynamically balanced" in this chapter is not defined with respect to any specific contact model, and is determined empirically. We define empirical dynamic balance (EDB) as follows: on repeated execution, the motion is EDB if it does not cause both of the robot's feet to lose stationary contact with the ground plane in a way that results in the robot rapidly losing its balance and falling. Because of the complexity involved in relating the joint torques to the ZMP, some researchers instead formulate the balance problem as controlling an inverted pendulum [37]. For purely walking motions, the problem of static and dynamic stability can be approached by restriction to a particular gait model (see for example [38]). However, such models of motion are not applicable in the case of learning arbitrary full-body motions via imitation. Yamane and Nakamura have recently proposed the "Dynamics Filter" [39]. Their work applies a physics-based model to the problem of learning full-body motions via imitation or from a large motion library. Specifically, they derive a set of motion equations which assume "virtual links" between the robot and its environment, such as between the feet and the ground. Through simulation of the contact and collision forces they apply a trial-and-error search for joint torques which meet the dynamics constraints and approximately reproduce the
desired kinematics. While the simulation results appear impressive, the authors do not appear to have realized their technique on a humanoid robot.
1.2 Bayesian Approaches to Uncertainty in Imitation Learning
Bayesian networks provide a sound theoretical approach to incorporating prior, yet uncertain, information. Thus the problem of finding dynamically balanced imitative motions is posed as a problem of performing learning and inference in a Bayesian network. In this chapter we propose new techniques for imitation learning that explicitly handle uncertainty using a learned probabilistic model of actions and their sensory consequences. The proposed framework adapts flexibly to various degrees of physics-based modeling. The specific application to full-body motion learning presented here focuses on a largely nonparametric forward model of the world. However, intuition about the physics of the world plays a part in constructing the parametric constraint and sensor models. The probabilistic forward model of the imitator's internal dynamics is then used to infer appropriate actions for imitation using probabilistic inference in a dynamic Bayesian network (DBN), with observations of the human demonstrator as evidence. Results of the approach are then demonstrated on a 25-degrees-of-freedom humanoid robot.
2 A Dynamic Bayesian Network Model of the Imitation Learning Process
This section presents an imitation learning framework for utilizing rich sensory data obtained by an intelligent agent. The learning process allows the agent to rapidly acquire a desired skill while considering egocentric constraints such as dynamic balance. In this chapter we propose methods for learning from the following two sources of information:

– Demonstrative: prior information from observing a skilled teacher perform the desired behavior,
– Explorative: information gained by the agent in exploring the environment.

In particular, in this chapter we propose an inference-based approach to selecting a set of actions based on two types of probabilistic constraints:

– Matching: observations of the teacher's state during demonstration,
– Egocentric: constraints on the agent's own state.

Dealing with uncertainty is at the heart of learning by imitation and exploration. One source of this uncertainty stems from the fact that observing and estimating the actions of a human demonstrator is an inherently difficult task. Even with complex, multi-view sensors such as commercially available optical motion capture systems, estimates of human kinematics remain uncertain. This uncertainty only increases when using sensing capabilities internal to the robot, such as a monocular camera.
A second source of uncertainty in the observation and matching process is the inter-trial variance of a human repeatedly performing the desired skill. Given multiple examples, this is evident in the variance of the estimated motion, which is often much greater than the expected sensor noise distribution. The probabilistic algorithm we propose is able to utilize estimates of the inter-trial covariance by constructing a search space spanning this variability. Finally, a crucial aspect of uncertainty arises from the need to predict future states of the agent given some potential control values. Even given an accurate deterministic physical model, uncertainty in initial conditions can cause considerable uncertainty in predictions. As an important theme of this chapter is to avoid the need for complex physical modeling, the approach must handle uncertainty both in input conditions and in the unknown model. Note that an alternative hybrid approach recently put forward is to model a complex system using a first-order physics-based model together with a nonparametric regression model that learns the residual "unmodelled" system behavior [40]. To summarize, this chapter proposes a framework for reasoning under uncertainty while exploiting both prior information and available sensory information. This framework extensively leverages probabilistic methods and Bayesian networks, which present a powerful paradigm for analyzing uncertain information.
2.1 The Imitation Process as a Generative Bayesian Model
Each entity (demonstrator and agent¹) in the imitation learning paradigm is modeled by a time-varying state process. The state of an entity consists of relevant quantities that characterize it at a sequence of discrete times. The state process is assumed throughout to be a first-order Markov process. An action sequence represents values selected by the demonstrator or agent which stochastically determine the visited states. Actions are simply a desired state of a motor process (such as a desired servo angle). The entities observe each other via sensors which produce a sequence of observations. Throughout this chapter we assume that both the state and actions of the teacher are hidden; therefore all information must be extracted from observations. However, assuming that the entropy of the conditional likelihood P(obs|state) is relatively low, information about the demonstrator state can be recovered from observations. Such a hidden state variable is commonly termed "partially observable". Whether or not the agent can directly observe its own state depends both on the specific domain and on the definition of the state itself. This chapter assumes the most general situation, in which the agent's state is also partially observable. The first phase of imitation learning is demonstration. In this phase the agent gathers demonstrator observations represented by values of the continuous-valued random variable O_t^d. Thus we define O_t^d to be the sensory reading based on the state of the teacher at each time slice t.
¹ Throughout this chapter, demonstrator is used interchangeably with teacher and instructor. Additionally, agent is interchangeable with imitator and learner.
Fig. 2. Generative imitation learning DBN slice. In the proposed generative model approach, the goal is to infer the posterior distributions over the random variables A_t, which seek to explain the observed data for both demonstrator (O_t^d) and agent (O_t^a). The model also incorporates a probabilistic constraint via the conditional probability distribution P(C_t | S_t^a). The constraint variable is used to select agent actions A_t which satisfy a dynamic balance constraint in a humanoid robot.
Throughout this chapter we assume a known, fixed action sequence length T. Thus the agent obtains the multivariate real-valued vectors $o_0^d, o_1^d, \ldots, o_t^d, \ldots, o_T^d$. The demonstrator state is modeled by the continuous-valued random process S_t^d. The dimensionality of the demonstrator state is determined by the number of degrees of freedom (DOFs) in the model of the instructor. The demonstrator observation model P(O_t^d = o_t^d | s_t^d) is dictated by the imitator's sensory system. A central-Gaussian conditional probability model (CPM) is a reasonable modeling assumption for many systems and can be parameterized by an empirically determined noise variance σ²_{o^d}:

\[ P(O_t^d = o_t^d \mid s_t^d) = \mathcal{N}\!\left(o_t^d;\, s_t^d,\, \sigma^2_{o^d}\right). \tag{10} \]
Finally, demonstrator actions are modeled by the continuous-valued random variables A_t^d. Demonstrator actions model a desired state, thus distinguishing them from the actual attained state S_t^d. The second phase of the proposed imitation learning framework consists of a sequence of learning trials or executions by the agent. During execution, the agent state is represented by the continuous-valued random variable S_t^a. The next section details the agent state representation used in our humanoid robot domain. For now, consider the agent's state as a set of quantities necessary to a) describe a relevant kinematic subspace, and b) represent dynamics variables and constraints. The state space of the variable S_t^a is denoted S^a.
The state of the imitator during execution is guided by a sequence of continuous-valued actions. Our hypothesis is that treating actions as an unknown random variable process affords an efficient and effective solution via the operations of Bayesian inference and learning. Thus we propose the introduction of the random variable A_t^a, which is treated as any other unobserved node in a Bayesian network. The basic problem which we address is to learn a sequence of imitative actions $a_1^a, \ldots, a_t^a, \ldots, a_T^a$ from a sequence of observations of the teacher $o_1^d, \ldots, o_t^d, \ldots, o_T^d$. Due to differences in embodiment between teacher and imitator, the simple mimicry solution $a_t^a = a_t^d$ is not generally applicable. Thus the problem is to find a sequence of actions for the agent which takes into account the agent's different embodiment, and performs the task within the inherent constraints of the imitator. Imitator actions can be thought of as "motor commands" which may include target angular positions, velocities, or even torques. In our experiments with the HOAP-2, actions were target angular positions of all 25 joints. The approach we propose is to probabilistically model the relationship between the actions of the demonstrator and agent. One possible method is to introduce an undirected link between the actions $a_t^a$ and $a_t^d$ at each time slice. Note that this assumes an identity temporal mapping between time slices of the demonstration and imitation. Alternatively, dynamic time warping could be used to formulate an arbitrary temporal mapping. The proposed generative model for imitation learning is shown in Figure 2. A single random variable A_t "generates" both the teacher's state during demonstration and the agent's state during imitation. A unified representation for actions (and states) for the full humanoid robotic motion case will be proposed and examined shortly. The goal of imitation is not merely to mimic the demonstrator's actions, but also to consider egocentric constraints of the agent. Such prior knowledge must be incorporated into the action planning algorithm. Fortunately, in the proposed Bayesian network paradigm it is intuitive and straightforward to add additional nodes which influence action selection. An important example is adding a probabilistic constraint on the agent's state space. Probabilistic constraints on state variables are introduced within the graphical model via the observed random variables C_t. The corresponding agent state constraint model P(C_t = c_t | s_t) represents the likelihood of satisfying a constraint in state s_t. A simple and effective approach is to assume a central-Gaussian density:

\[ P(C_t = c_t \mid s_t) = \mathcal{N}(c_t - s_t;\, 0,\, \Sigma). \tag{11} \]

The variance parameter Σ can be set by hand using domain knowledge, or could be learned using a feedback scheme in which the constraint variance is annealed. Using the DBN slice shown in Figure 2 and the temporal action constraint, we can unroll the final DBN for a specific sequence length T. The complete DBN, consisting of linked parallel DBNs for demonstrator and learner, is shown in Figure 3. This DBN forms the basis for the BABIL algorithm presented in the next section.
Fig. 3. Proposed DBN for imitation learning. A dynamic Bayesian network composed of parallel, linked stochastic processes for teacher and imitator. The two processes are linked via the single action process At which stochastically guides the state sequences.
The links in Figure 3 between A_{t−1} and A_t introduce an a priori constraint on a sequence of actions. A temporal action constraint is represented by the model P(A_t = a_t | a_{t−1}). The choice of the constraint model is domain dependent, but as with the state constraint model, a central-Gaussian CPD of the following form often proves effective at ensuring a "smooth" action sequence by minimizing the difference between consecutive actions:

\[ P(A_t = a_t \mid a_{t-1}) = \mathcal{N}(a_t - a_{t-1};\, 0,\, \Sigma_a). \tag{12} \]
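As a concrete illustration, the three central-Gaussian models of Eqs. 10-12 can be evaluated as log-densities as in the short Python sketch below; the function names and the use of an isotropic observation covariance are illustrative assumptions, not part of the original formulation.

import numpy as np

def gaussian_logpdf(x, mean, cov):
    # Log-density of a multivariate Gaussian N(x; mean, cov).
    d = x.shape[0]
    diff = x - mean
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (d * np.log(2.0 * np.pi) + logdet
                   + diff @ np.linalg.solve(cov, diff))

def log_obs_model(o, s, sigma_od):
    # Eq. 10: P(O_t^d = o | s) = N(o; s, sigma_od^2 I).
    return gaussian_logpdf(o, s, sigma_od ** 2 * np.eye(o.shape[0]))

def log_constraint_model(c, s, Sigma):
    # Eq. 11: P(C_t = c | s) = N(c - s; 0, Sigma).
    return gaussian_logpdf(c - s, np.zeros_like(c), Sigma)

def log_action_smoothness(a_t, a_prev, Sigma_a):
    # Eq. 12: P(A_t = a_t | a_{t-1}) = N(a_t - a_{t-1}; 0, Sigma_a).
    return gaussian_logpdf(a_t - a_prev, np.zeros_like(a_t), Sigma_a)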
As the action sequence A_t is unknown, a major component of the framework is a method for inferring distributions over A_t. In particular, the inferred values of A_t should have high posterior likelihood given the set of observed quantities.
2.2 BABIL Imitation Learning Algorithm
This section presents a novel algorithm for learning via imitation which utilizes the proposed dynamic Bayesian network. The algorithm is referred to as the BABIL (Behavior Acquisition via Bayesian Inference and Learning) imitation learning algorithm. A high-level depiction of the algorithm is shown in Figure 4. The algorithm consists of three distinct phases: 1) observation, 2) initialization, and 3) constrained exploration. The observation phase extracts information from watching the skilled demonstrator. The initialization phase incorporates this information along with prior information about the agent and its constraints. The constrained exploration phase acquires additional information by executing a sequence of imitative trials, which are iteratively refined. The following subsections detail each step of the BABIL algorithm.

Step 1: Observe Demonstrator. The first step of the algorithm is to observe the demonstrator perform the desired skill.
Fig. 4. Overview of proposed BABIL imitation learning algorithm
In passive imitation learning, the agent simply records a stream of raw sensor data during demonstration. A newer paradigm, active imitation learning [14,41], proposes methods for the agent to process sensory data in real time to determine the value of information (VOI). Based on the estimated VOI, the agent can act to maximize the amount of information obtained during observation. For simplicity, this chapter focuses on the passive approach, though the proposed techniques are orthogonal and therefore should generalize. Thus we assume that the agent has recorded a stream of sensor data during demonstration. Typical examples of such sensor data include a monocular video or a set of marker traces of the teacher. The next task is to segment and classify the data into a set of motion classes, each with a number of examples. Here we assume the presence of a known segmentation and class labels. Depending on the sensory modality, various techniques can be used to fully automate this process. For instance, in the marker-based motion capture domain, numerous approaches for segmentation and classification have been proposed (for example see [42,43]).

Step 2: Estimate Kinematics. For each example motion, the next step is to compute a vector-valued representation of the physical configuration of the demonstrator at each discrete time step. The main goal in selecting the representation is the transferability of the information to the agent. In theory, both the dynamics and the kinematics of the human teacher can be estimated. However, given the difficulty of relating human dynamics to agent dynamics, our work focuses only on representing the demonstrator configuration with kinematic quantities. The transferability of dynamics information is difficult for two reasons. First, even if human joint torques were somehow known, their relation to agent torques is complex and requires knowledge of the mass and moments of inertia of both human and robot limbs. Due to these differences, the torques necessary to
obtain a stable posture differ widely between the human demonstrator and the humanoid robot. Second, and more importantly, the transfer of human dynamics assumes a particular form of dynamics representation for the robotic agent, which may be difficult to relate to sensor-based quantities of net force such as a gyroscope or foot pressure sensors. An additional reason for using only demonstrator kinematics is that estimating joint torques in humans for arbitrary motion is a difficult problem in biomechanics [44]. Later we discuss two proposed methods for estimating the kinematics of the human demonstrator: the first based on a marker-based motion capture system, and the second relying solely on monocular vision.

Step 3: Initialize Model. The first step of the initialization phase is to instantiate the dynamic Bayesian network shown in Figure 3. Given the motion length of T discrete steps, the network is unrolled to T slices (plus the initial t = 0 slice). Additionally, the form and parameters of each conditional probability model (CPM) must be initialized. The parameters of the demonstrator observation P(O_t^d | s_t^d), agent observation P(O_t^a | s_t^a), agent state constraint P(C_t | s_t^a), and temporal action constraint P(A_t | a_{t−1}) models are determined using domain knowledge, as previously discussed in Section 2. Estimates of the demonstrator state obtained in Step 2 are used to formulate a low-dimensional latent space L to represent the relevant kinematic subspace. We propose to learn the embedding parameters for the latent space using a Bayesian approach in which covariance estimates from observed human data are combined with prior knowledge about the imitator. Details of the method are presented in the context of learning whole-body humanoid motion in Section 3.2.

Step 4: Infer Bootstrap Actions. During the initial phase of learning, or bootstrapping, the algorithm must select actions with limited information about the agent's forward model. In the fully nonparametric forward model case, this is equivalent to performing inference with the following noninformative uniform CPD:

\[ P(S_t^a = s_t^a \mid s_{t-1}^a, a_t) \propto 1. \tag{13} \]

As this initial forward model is invariant to the specific agent state s_t^a, the actions inferred in the original DBN will be identical to those obtained in the simplified bootstrap DBN in which all agent state and constraint nodes are removed. Note that in the case of a hybrid or parametric forward model, the bootstrapping cycle (Steps 4-6) can be omitted altogether, and constrained exploration (Step 7) can begin directly after initialization (Step 3).

Step 5: Execute Actions. An inferred action plan is executed in open loop by simply issuing a series of control signals in turn. At each step, sensory observations of the agent state are collected and recorded for use in learning a predictive forward model. Additionally, at this step the sensory observations can be evaluated to test a stopping condition. In general, any boolean function of a sequence of sensory observations can be used. In the context of planning stable,
whole-body motions, we propose using a likelihood threshold with respect to the probabilistic constraint model.

Step 6: Learn Predictive Model. The next step is to learn or update the agent's predictive forward model P(S_t^a = s_t^a | s_{t−1}^a, a_t) from the empirical data obtained by executing action plans. Section 2.3 proposes and analyzes methods for forward model learning.

Step 7: Infer Constrained Actions. Using the updated (ideally more accurate) forward model learned in the previous step, a new imitative action sequence is obtained via inference. Section 2.4 proposes a novel method for inference-based action selection.
2.3 Predictive Forward Model Learning
This section proposes methods for learning the predictive forward model directly from empirical data obtained via exploration. In particular, we investigate and propose algorithms for learning a nonparametric forward model via Gaussian Mixture Regression (GMR) [45]. For simplicity of presentation, this section assumes that the state sequence is directly observed. Expectation-Maximization (EM) can be used in a straightforward manner to handle partially observable states, although at the cost of much greater computational demand per action planning iteration. Assuming that the observation model P(O_t = o_t | s_t) has relatively low entropy, an efficient and simple approach is to use maximum likelihood estimation (MLE) once per time step:

\[ \hat{s}_t = \arg\max_{s_t} P(O_t = o_t \mid s_t). \tag{14} \]
Additionally, the superscripts indicating the entity are dropped from the state, action, and observation variables, as the methods are not specific to either the agent or the demonstrator. The well-known Gaussian Mixture Model (GMM) forms the basis of Gaussian Mixture Regression:

\[ p(x \mid \theta) = \sum_k p(k \mid \theta_k)\, p(x \mid k, \theta_k) = \sum_k w_k\, \mathcal{N}(x;\, \mu_k, \Sigma_k). \tag{15} \]
The multivariate random variable X is formed via the concatenation of the n univariate random variables X_1, X_2, …, X_n, such that:

\[ x = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}. \tag{16} \]
The theorem of Gaussian conditioning states that if x ∼ N(μ, Σ) where

\[ \mu = \begin{bmatrix} \mu_1 \\ \mu_2 \\ \vdots \\ \mu_n \end{bmatrix}, \qquad \Sigma = \begin{bmatrix} \Sigma_{11} & \Sigma_{12} & \cdots & \Sigma_{1n} \\ \Sigma_{21} & \Sigma_{22} & \cdots & \Sigma_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ \Sigma_{n1} & \Sigma_{n2} & \cdots & \Sigma_{nn} \end{bmatrix} \tag{17} \]

then the variable X_i is normally distributed given a particular value of X_j = x_j:

\[ p(X_i = x_i \mid x_j) = \mathcal{N}\!\left(\mu_i + \Sigma_{ij}\Sigma_{jj}^{-1}(x_j - \mu_j),\; \Sigma_{ii} - \Sigma_{ij}\Sigma_{jj}^{-1}\Sigma_{ji}\right). \tag{18} \]

Gaussian mixture regression is derived by applying the result of this theorem to Eq. 15:

\[ p(x_i \mid x_j, \theta) = \sum_k w_{kj}(x_j)\, \mathcal{N}\!\left(x_i;\, \mu_{kij}(x_j), \Sigma_{kij}\right). \tag{19} \]
Here, μ_{kij} denotes the mean of the i-th random variable (or dimension) given the value of the j-th random variable, for the k-th component of the mixture model. Likewise, Σ_{kij} denotes the covariance between the variables X_i and X_j in the k-th component. Instead of a fixed weight and mean for each component, each mixture component is weighted by a weighting function dependent on the conditioning variable x_j:

\[ w_{kj}(x) = \frac{w_k\, \mathcal{N}(x;\, \mu_{kj}, \Sigma_{kjj})}{\sum_{k'} w_{k'}\, \mathcal{N}(x;\, \mu_{k'j}, \Sigma_{k'jj})}. \tag{20} \]
Likewise, the mean of the k-th conditioned component of x_i given x_j is a function of x_j:

\[ \mu_{kij}(x) = \mu_{ki} + \Sigma_{kij}\Sigma_{kjj}^{-1}(x - \mu_{kj}). \tag{21} \]

As the goal of using GMR is to learn a model nonparametrically, one does not want to have to select the number of components K a priori. This rules out the common strategy of using the well-known expectation maximization (EM) algorithm for learning a model of the full joint density p(x). Although Bayesian strategies exist [46] for selecting the number of components, as pointed out by Sung et al. [45], a joint density modeling approach rarely yields the best model under a regression loss function. Thus we propose an algorithm similar to the Iterative Pairwise Replace Algorithm (IPRA) [47,45] which simultaneously performs model fitting and selection of the GMR model parameters θ. For an alternate approach to the problem of density learning for regression see Kreutz et al. [48]. First assume that a set of state and action histories has been observed during the M trial histories: $\{[s_1^m, a_1^m, s_2^m, a_2^m, \ldots, a_{T-1}^m, s_T^m]\}_{m=1}^{M}$. A GMR forward model is learned in the joint variable space:

\[ x = \begin{bmatrix} s \\ a \\ s' \end{bmatrix}. \tag{22} \]

Here s′ denotes the resulting state when executing action a in state s. The time subscript can be dropped as the model is time invariant.
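A compact numpy sketch of the GMR conditioning step (Eqs. 18-21) is given below; the function names, the explicit matrix inverse, and the index-list interface are illustrative simplifications rather than a definitive implementation.

import numpy as np

def gaussian_pdf(x, mean, cov):
    # Density of a multivariate Gaussian N(x; mean, cov).
    d = x.shape[0]
    diff = x - mean
    norm = (2.0 * np.pi) ** (-d / 2.0) * np.linalg.det(cov) ** -0.5
    return norm * np.exp(-0.5 * diff @ np.linalg.solve(cov, diff))

def gmr_condition(x_j, weights, means, covs, idx_i, idx_j):
    # Condition a joint GMM on X_j = x_j, returning the mixture over X_i.
    w_cond, mu_cond, cov_cond = [], [], []
    for w, mu, S in zip(weights, means, covs):
        mu_i, mu_j = mu[idx_i], mu[idx_j]
        S_ii = S[np.ix_(idx_i, idx_i)]
        S_ij = S[np.ix_(idx_i, idx_j)]
        S_jj = S[np.ix_(idx_j, idx_j)]
        gain = S_ij @ np.linalg.inv(S_jj)
        mu_cond.append(mu_i + gain @ (x_j - mu_j))        # Eq. 21
        cov_cond.append(S_ii - gain @ S_ij.T)             # Eq. 18 covariance
        w_cond.append(w * gaussian_pdf(x_j, mu_j, S_jj))  # Eq. 20 numerator
    w_cond = np.array(w_cond)
    return w_cond / w_cond.sum(), mu_cond, cov_cond       # Eq. 19 mixture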
The training dataset is then simply represented by the matrix X_tr = [x_1, …, x_L], where L = M(T − 1) is the total number of training vectors. Model learning and selection first constructs a fully nonparametric representation of the training set with K = L isotropic mixture components centered on the data points, μ_k = x_k. The isotropic variance is initialized using a chosen initial bandwidth estimate, Σ_k = σ²I. Kernel weights are initialized uniformly: w_k = 1/L. The initial GMR model is exact at making predictions at points within the training set, but generalizes extremely poorly. Thus, the algorithm seeks to reduce this bias by merging pairs of components and creating a component which generalizes them. An efficient greedy strategy iteratively merges the two most similar components. Similarity is determined by a symmetric metric defined between any two mixture components; the Hellinger distance is used to compute the distance between components i and j:

\[ H\big(w_i \mathcal{N}(x; \theta_i),\, w_j \mathcal{N}(x; \theta_j)\big) = \sqrt{\,w_i + w_j - 2\sqrt{w_i w_j} \int \sqrt{\mathcal{N}(x; \theta_i)\, \mathcal{N}(x; \theta_j)}\; dx}. \tag{23} \]

To perform efficient merging, the minimum spanning tree (MST) of all mixture components (based on the pairwise distances from Eq. 23) is computed using Prim's algorithm [49]. Iteratively, the algorithm merges the closest pair in the minimum spanning tree. Merging continues until only a single Gaussian component is left. Merging each pair of mixtures requires computation of new local mixture parameters (to fit the data covered by each component in the chosen pair). Rather than the "method of moments" (MoM) approach to merging components [45], followed by a later run of expectation maximization to fine-tune the selected model, we found performing local maximum likelihood estimation (LMLE) updates after each merge to be more effective. Unlike MoM merges, which are based only on the two sets of parameters of the mixtures, LMLE updates the merged parameters to fit the local data. In order to effectively perform model selection, the training data is randomly partitioned into two sets: one set of "basis" vectors used for computing the minimum spanning tree, and one of model-selection "hold-out" vectors. In the experiments presented in Section 3, a random fifth of the training data is used for basis vectors. Model selection is performed by selecting the GMR model which best explains the empirical hold-out data. The model selection likelihood is defined as a function of the current GMR model parameters θ as follows:

\[ \mathcal{L}(\theta, X_{tr}) = \sum_{l=1}^{L} \sum_{i=1}^{n} \log p\big(x_{li} \mid x_{l,\{1,\ldots,n\}\setminus i},\, \theta\big). \tag{24} \]

The model of size K which maximizes this criterion is selected as the forward model.
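The integral in Eq. 23 has a closed form: for two Gaussian densities, the integral of the square root of their product is the Bhattacharyya coefficient. The sketch below assumes this reading of the printed formula; the helper names are illustrative.

import numpy as np

def bhattacharyya_coeff(mu1, S1, mu2, S2):
    # Closed-form integral of sqrt(N1(x) * N2(x)) dx for two Gaussians.
    Sbar = 0.5 * (S1 + S2)
    diff = mu1 - mu2
    db = (0.125 * diff @ np.linalg.solve(Sbar, diff)
          + 0.5 * np.log(np.linalg.det(Sbar)
                         / np.sqrt(np.linalg.det(S1) * np.linalg.det(S2))))
    return np.exp(-db)

def hellinger(w1, mu1, S1, w2, mu2, S2):
    # Eq. 23: Hellinger distance between two weighted mixture components.
    bc = bhattacharyya_coeff(mu1, S1, mu2, S2)
    return np.sqrt(max(w1 + w2 - 2.0 * np.sqrt(w1 * w2) * bc, 0.0))

In the merging loop, pairwise values of hellinger along the minimum spanning tree edges determine which two components are merged next.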
2.4 Planning via Inference
This section presents a novel approach to selecting a sequence of imitative actions based on Bayesian inference. The approach selects actions based on their posterior likelihood given a Bayesian network and a set of evidence values. We formally define the evidence set E = {o_1, …, o_T, c_1, …, c_T}. Given a set of evidence, action selection is conceptually very simple: pick actions which have high posterior likelihood. Thus actions are treated just as any other unknown variable for which one desires an accurate point estimate. The dynamic Bayesian network developed in Section 2 models the "validity" of actions (and states) by making them explain the observed data. Thus, the most likely actions are also by definition the most valid (at least relative to the model). One must define precisely what is meant by "actions with high posterior likelihood". One possibility is to consider the maximum a posteriori (MAP) sequence of actions:

\[ a_1^*, \ldots, a_T^* = \arg\max_{a_1, \ldots, a_T} P(A_1 = a_1, \ldots, A_T = a_T \mid E) \tag{25} \]
The MAP formulation, however, is provably intractable. Even for polytrees consisting of discrete variables, computing the MAP solution is NP-complete [50]. Further, it is possible to show NP-hardness of approximating the MAP solution within any constant factor times the problem size. Additionally, these time complexity results do not factor in the continuous, non-linear Gaussian CPDs that will ultimately be used to represent unknown CPDs. Thus it is safe to assume that MAP is an intractable approach. The MAP approach is intractable in the proposed DBNs because it must perform combinatorial optimization over the full sequence of length T while also integrating out hidden variables. An alternative approach is to perform maximization over a single variable at a time while still integrating out hidden variables. In this chapter we refer to this approach as the maximum marginal posterior (MMP) sequence:

\[ \hat{a}_t = \arg\max_{a_t} P(A_t = a_t \mid E) \tag{26} \]

\[ a_1^*, \ldots, a_T^* = \hat{a}_1, \ldots, \hat{a}_T \tag{27} \]
We now present an algorithm for action selection based on the MMP formulation of Eqs. 26-27. In principle, any algorithm for computing the marginal posterior distributions of the action variables could be used. In the case of directed graphs such as DBNs, it is convenient to use Pearl's original notation and algorithm for belief propagation [51]. Belief propagation (BP) was originally restricted to tree-structured graphical models with discrete variables. Recent advances have broadened its applicability to general graph structures [52] and to continuous variable domains in undirected (MRF) graph structures [53]. Here we derive belief propagation for
the continuous-valued, directed case. The directedness of a BP formulation is largely a semantic convenience, as any Bayesian network can be represented as an MRF, or more generally a factor graph [54]. However, belief propagation formulated directly for a Bayesian network is more convenient in our setting, given the natural conditional semantics of the various forward and observation models. The result of performing belief propagation is a set of marginal belief distributions B(x). Marginal beliefs are computed by passing messages along the edges of the graphical model. Messages are in the form of unnormalized (improper) distributions over single variables. With the exception of numerical round-off errors, the lack of normalization does not affect the results. As a final step, marginal beliefs are simply renormalized to proper distributions. In the discrete (finite space) case, messages are easily represented by tabular multinomial distributions. For the case of arbitrary continuous densities, message representation is in itself a difficult problem. To summarize the approach detailed below, messages (and beliefs) are represented using an additive mixture of Gaussian kernels. The belief distribution of a random variable X is defined as the product of two sets of messages:

\[ B(x) = P(X = x \mid E) = \pi(x)\lambda(x) \tag{28} \]
The information from the neighboring parent variables of X is denoted π(x). Likewise, information from child variable nodes is denoted λ(x). Message passing operates according to two simple rules:

– Upward: Par(X)_i = U_i (the i-th parent) passes to X the distribution π_X(u_i),
– Downward: Ch(X)_j = Y_j (the j-th child) passes to X the distribution λ_{Y_j}(x).

For simplicity, a "self message" is introduced such that observed and hidden variables can be treated identically. Thus the variable X sends itself an additional message λ_X(x). For observed variables, λ_X(x) is represented using a Dirac delta distribution about the observed value x̄:

\[ \lambda_X(x) \sim \delta(x - \bar{x}) \tag{29} \]
For unobserved variables, the self-message is defined as an infinite uniform distribution:

\[ \lambda_X(x) \sim 1 \tag{30} \]

The messages from all m children (denoted Y_j) of X and the self message are then multiplied:

\[ \lambda(x) = \lambda_X(x) \prod_{j=1}^{m} \lambda_{Y_j}(x). \tag{31} \]
Messages from parent variables of X are incorporated by integrating the conditional probability of a particular x over all possible values of the parents times the probability of that combination. The likelihood of any particular parent combination is computed using the messages from the parent nodes.
InferConstrainedActionSeq(M, E) → [â_{1:T}]

1. foreach ĉ_t ∈ E:
     λ_{c_t}(s_t) ⇐ ∫ P_C(c_t | s_t) λ′(c_t) dc_t, where λ′(c_t) = δ(ĉ_t, c_t)
2. foreach ô_t ∈ E:
     λ_{o_t}(s_t) ⇐ ∫ P_O(o_t | s_t) λ′(o_t) do_t, where λ′(o_t) = δ(ô_t, o_t)
3. foreach t = 1 : T−1: π_{s_{t+1}}(a_t) ⇐ P_A
4. π(s_1) ⇐ P_S
5. foreach t = 1, 2, …, T−1:
     π(s_t) ⇐ ∫ P_F(s_t | s_{t−1}, a_{t−1}) π_{s_t}(s_{t−1}) π_{s_t}(a_{t−1}) ds_{t−1} da_{t−1}
     π_{s_{t+1}}(s_t) ⇐ π(s_t) λ_{c_t}(s_t) λ_{o_t}(s_t)
6. λ(s_T) ⇐ λ_{c_T}(s_T) λ_{o_T}(s_T)
7. foreach t = T, T−1, …, 2:
     λ(s_{t−1}) ⇐ λ_{s_t}(s_{t−1}) λ_{c_{t−1}}(s_{t−1}) λ_{o_{t−1}}(s_{t−1})
     λ_{s_t}(s_{t−1}) ⇐ ∫ λ(s_t) P_F(s_t | s_{t−1}, a_{t−1}) π_{s_t}(a_{t−1}) ds_t da_{t−1}
8. foreach t = 1, 2, …, T:
     λ_{s_{t+1}}(a_t) ⇐ ∫ λ(s_{t+1}) P_F(s_{t+1} | s_t, a_t) π_{s_{t+1}}(s_t) ds_{t+1} ds_t
     B(a_t) ⇐ π(a_t) λ_{s_{t+1}}(a_t)
     â_t = argmax_{a_t} B(a_t)

Algorithm 1. Action inference algorithm
\[ \pi(x) = \int_{u_{1:n}} P(X = x \mid u_1, \ldots, u_n) \prod_{i=1}^{n} \pi_X(u_i)\; du_{1:n} \tag{32} \]

Messages are updated according to the following two equations:

\[ \lambda_X(u_j) = \int_x \lambda(x) \int_{u_{1:n \setminus j}} P(X = x \mid u_1, \ldots, u_n) \prod_{i \neq j} \pi_X(u_i)\; du_{1:n \setminus j}\; dx \tag{33} \]

\[ \pi_{Y_j}(x) = \pi(x)\, \lambda_X(x) \prod_{i \neq j} \lambda_{Y_i}(x). \tag{34} \]
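Before turning to the nonparametric case, note that in the discrete case these updates reduce to sums and products over tables. The sketch below illustrates Eqs. 31-34 under the simplifying assumption of a single tabular parent, with cpd[u, x] = P(X = x | U = u); all function names are illustrative.

import numpy as np

def lambda_msg(lambda_self, lambda_from_children):
    # Eq. 31: product of the self message and all children messages.
    out = lambda_self.copy()
    for lam in lambda_from_children:
        out = out * lam
    return out

def pi_msg(cpd, pi_from_parent):
    # Eq. 32: sum over the parent of P(x | u) weighted by the parent message.
    return pi_from_parent @ cpd            # shape (|X|,)

def lambda_to_parent(cpd, lam_x):
    # Eq. 33 (single parent): sum over x of P(x | u) * lambda(x).
    return cpd @ lam_x                     # shape (|U|,)

def pi_to_child(j, pi_x, lambda_self, lambda_from_children):
    # Eq. 34: pi times self message times all children messages except j.
    out = pi_x * lambda_self
    for i, lam in enumerate(lambda_from_children):
        if i != j:
            out = out * lam
    return out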
The expressions in Eqs. 31-34 consist of two operations: integration and multiplication. In the discrete case, both multiplication and integration (actually summation) are trivial (linear time complexity in the number of discrete values). However, given nonparametric densities, both operations are the subject of considerable research. Later in this section we propose a novel approach to the integration and multiplication of nonparametric densities. We now propose an algorithm for action inference based on the belief propagation equations of Eqs. 31-34 applied to the BABIL framework DBN. The inputs to the action inference algorithm are the set of evidence E and an instance of the dynamic Bayesian network M = {P_S, P_A, P_F, P_O, P_C}, composed of the prior on the initial agent state, the prior on actions, the learned forward model for the agent, the demonstration observation model, and the probabilistic constraint model, respectively.
In theory, if all messages have been initialized to a noninformative uniform distribution, message passing and updates can happen in any order. However, for efficiency it is useful to consider a particular message passing scheme which explicitly constructs a message passing schedule. In the case of the DBN presented, the schedule has easily interpretable phases. Inference proceeds by first "inverting" the evidence from the observation and constraint variables, yielding the messages λ_{O_t^d}(a_t) and λ_{C_t}(s_t). After message initialization from the prior models P_S and P_A, a forward planning pass computes the forward state messages π_{S_{t+1}^a}(s_t). Similarly, a backward planning sweep produces the messages λ_{S_t^a}(s_{t−1}). The algorithm then combines information from forward and backward messages (via Eq. 33) to compute the final belief distributions over actions. Finally, the maximum marginal posterior action â_t is computed from the belief distribution using the mode-finding algorithm described in [55]. The method for inferring constrained actions is shown in compact form in Algorithm 1.

The belief propagation algorithm used in action selection requires representing messages and beliefs as probability densities. When the conditional probability distributions in the DBN model are not of linear-Gaussian form, nonparametric representations must be used. In nonparametric belief propagation (NBP), methods for representing arbitrary densities in continuous spaces must be studied. This section presents the basis for performing NBP in the case of Gaussian Mixture Regression CPDs. The most common class of methods for density representation is based on a set of additive density kernels. A classical example of this approach is kernel density estimation [56,57]. An arbitrary distribution P(X = x) can be approximated by a finite set of additive kernel functions φ_i(x):

\[ P(X = x) \sim \alpha\Phi(x) = \alpha \sum_{i=1}^{N} w_i \phi_i(x) \tag{35} \]
Each kernel (or "basis") function is weighted by the constant w_i, where w_i ∈ [0, 1] and

\[ \sum_{i=1}^{N} w_i = 1. \tag{36} \]
As messages in BP take the form of unnormalized or "improper" distribution functions, it is necessary to include a normalizing constant α. In particular, the constant α is necessary when normalizing the product of two (or more) densities, as in the fundamental belief equation (Eq. 28). A common choice of basis function is the multivariate Gaussian probability density function (PDF). For a d-dimensional real-valued space, the kernel φ_i is defined as:

\[ \phi_i(x) = \frac{1}{(2\pi)^{d/2}\, |\Sigma_i|^{1/2}} \exp\!\left( -\frac{1}{2} (x - \mu_i)^{\top} \Sigma_i^{-1} (x - \mu_i) \right) \tag{37} \]
where Σ_i is the "bandwidth" or variance of the i-th kernel, and μ_i its "center" or mean. The resulting density representation with Gaussian PDF kernels is of the same form as a Gaussian Mixture Model (GMM):

\[ \Phi(x) = \sum_{i=1}^{N} \frac{w_i}{(2\pi)^{d/2}\, |\Sigma_i|^{1/2}} \exp\!\left( -\frac{1}{2} (x - \mu_i)^{\top} \Sigma_i^{-1} (x - \mu_i) \right). \tag{38} \]
The crucial distinction from a traditional GMM lies in the determination of the values θ = {w_i, μ_i, Σ_i}_{i=1}^{N}. In the case of learning a GMM from empirical data, the Expectation-Maximization (EM) algorithm is used to fit a model with N mixtures (or kernels) [58]. In NBP, the values θ must be derived from application of the belief propagation equations (Eqs. 31-34). The details are thus dependent on the form of the conditional probability distributions incident on a particular variable. The following sections discuss the computation of the θ values for the GP and GMR cases in turn. The motivation behind selecting GMR is that it allows for closed-form evaluation of the integrals found in Eqs. 31-34. Thus it allows efficient inference without the need to resort to Monte Carlo (sample-based) approximations. Belief propagation requires the evaluation of integrals convolving the conditional distribution of one variable x_i given a GMM distribution γ(·; θ′) of another variable x_j:

\[ \int p(x_i \mid x_j, \theta)\, \gamma(x_j; \theta')\; dx_j \tag{39} \]
Fortunately, rearranging the terms in the densities reduces the product of the two GMMs to a third GMM, which is then marginalized with respect to x_j by the integration operator.
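The mechanism is easiest to see for a single linear-Gaussian conditional p(x_i | x_j) = N(x_i; A x_j + b, Q): integrated against a GMM message, each component propagates in closed form. For a GMR CPD the same computation applies per pair of components, since each GMR component is affine-Gaussian in x_j by Eq. 21. The sketch below shows the single-conditional case under these stated assumptions; the function name is illustrative.

import numpy as np

def propagate_gmm(A, b, Q, weights, means, covs):
    # Eq. 39 for p(x_i | x_j) = N(x_i; A x_j + b, Q) and a GMM message
    # gamma(x_j) = sum_k w_k N(x_j; m_k, S_k): the result is again a GMM.
    new_means = [A @ m + b for m in means]
    new_covs = [A @ S @ A.T + Q for S in covs]
    return list(weights), new_means, new_covs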
3 Learning Stable Full-Body Humanoid Motion via Imitation
This section presents an approach to solving the problem of learning full-body motion for a humanoid robot that is empirically dynamically balanced. The approach is based on the dynamic Bayesian network (DBN) shown in Figure 3. In this section we assume that a length-T sequence of human kinematic estimates has been observed. The experiments described here use either a marker-based optical motion capture system or a monocular vision method to obtain estimates of human joint angles through inverse kinematics (IK). In both approaches the model of the human was restricted to have the same degrees of freedom as the Fujitsu HOAP-2 humanoid robot. This affords a trivial mapping (adjusting only for zero position and sign) between the two kinematic spaces.
3.1 Extracting Prior Information from Demonstration
A crucial step in imitation learning is to observe the teacher during demonstration of the desired motion or skill. The related problems of people finding,
figure tracking, and pose estimation for animation represent an active research topic in computer vision and machine learning. This chapter proposes a novel approach to the problem of estimating human motion for subsequent imitation by a humanoid robot. In one set of experiments we use a commercially available, marker-based motion capture setup located at the University of Washington. The system is a Vicon MX Motion Capture System [59], consisting of 12 infrared cameras, a wearable marker suit, an image capture and processing computer, and a graphical interface running on a standard workstation PC. Each camera is carefully calibrated on a steel rig to allow for marker localization via multi-view matching. First, the demonstrator's kinematics were obtained from the retroreflective marker data via inverse kinematics (IK). Kinematic mapping is performed using two kinematic tree models. The first kinematic tree model is constructed to accurately model the limbs and degrees of freedom of the human demonstrator. Human limb length parameters are then numerically optimized using the Vicon IQ software. The IK skeletal model of the human is then restricted to have the same degrees of freedom as the Fujitsu HOAP-2 humanoid robot. For example, the shoulders were replaced with three distinct one-dimensional rotating joints rather than one three-dimensional ball joint. Additionally, limb proportions are adjusted to match the proportions of the humanoid robot limbs. In doing so, however, the scale of the human demonstrator is maintained such that the marker data can be fit without modification. One can think of this approach as matching in marker space rather than in angle space. While this step assumes a high degree of similarity between human and robot kinematics, in practice the limb proportion adjustments were relatively minor between several members of the lab and the HOAP-2 humanoid robot. Joint angles for the humanoid are obtained by solving the inverse kinematics problem with the modified skeleton. Thus the IK method itself generates kinematically corresponding joint angles on the robot for each pose. There are limitations to such a technique (e.g., there may be motions where the robot joints cannot approximate the human pose in a reasonable way), but since we are interested only in classes of human motion that the robot can imitate, this method proved to be a very efficient way to generate large sets of human motion data for robotic imitation.

The biggest downside to using a motion capture rig in the imitation learning scenario is that training can only be performed in a rigid (and expensive) environment. Also, the motion capture system is restrictive because it does not allow the robot to imitate autonomously. In the following we demonstrate initial steps in allowing the robot to use its own vision system to extract the 3D pose of its instructor. This would allow one to "close the loop" of the learning process: an autonomous robot could watch the instructor, estimate kinematics, infer a stable imitation, and execute it. We present a computer vision technique for converting a monocular video sequence of human poses into stabilized robot motor commands for a humanoid robot. The human teacher wears a multi-colored body suit while
performing a desired set of actions. Leveraging information about the colors of the body suit, the system detects the most probable locations of the different body parts and joints in the image. Then, by utilizing the known dimensions of the body suit, a user-specified number of candidate 3D poses are generated for each frame. Using human-to-robot joint correspondences, the estimated 3D poses for each frame are then mapped to corresponding robot motor commands. An initial set of kinematically valid motor commands is generated using an approximate best-path search through the pose candidates for each frame.
3.2 Efficient Kinematic Representation via Dimensionality Reduction
Representing humanoid motion in the full kinematic configuration space is problematic due to the large number of degrees of freedom and the well-known curse of dimensionality. Fortunately, with respect to a wide class of motions (such as walking, kicking, or bowing), the full number of degrees of freedom (25 in the HOAP-2) is highly redundant.
Fig. 5. Latent posture space representation. Using principal components analysis, a high-degree-of-freedom motion (here, a one-legged balance) is embedded in a two-dimensional space. The line shows the sequence of postures represented in the low-dimensional space. For selected points along the trajectory, an image of the robot posture is shown using a purely kinematic simulation. For tractability, inference is performed in this low-dimensional latent space to find a dynamically stable sequence of actions which imitates an observed behavior.
Dimensionality reduction techniques can be profitably used to represent high-dimensional data in compact low-dimensional latent spaces. For simplicity, standard principal components analysis (PCA) is employed, but other non-linear embedding techniques (such as the GPLVM [60]) may be worth investigating for representing wider classes of motion using fewer dimensions [61]. Experimentally, we have found that PCA embedding allows for accurately representing the observed kinematics in compact two- to four-dimensional spaces. An illustrative example showing a one-legged balancing motion embedded in a two-dimensional space is shown in Figure 5. In the framework we propose a linear embedding to reduce the dimensionality based on a) the distribution of kinematic motion over several demonstrations of the desired task, and b) the prior distribution of kinematic correspondence between the robot and human. Both distributions are assumed to be multivariate Gaussian distributions in the full posture space of the robot. The embedding methodology is depicted in Figure 6. The covariance of the empirical task kinematics Σ_o can be estimated from the observed data as follows:

\[ \Sigma_o = E\!\left[ (o_t^d - E[o_t^d])(o_t^d - E[o_t^d])^{\top} \right]. \tag{40} \]

The prior correspondence distribution is parameterized by a domain-dependent covariance matrix Σ_c. In the experiments presented, Σ_c is chosen to be a diagonal matrix with entries σ²_{ii}. These values are chosen to represent the degree to which joint i is allowed to differ between robot and human. For example, it may be more important to precisely reproduce arm movements than ankle joint motions. The linear embedding is chosen such that it optimally represents the sum of the two distributions in a reduced latent space of dimensionality L. This can be achieved by diagonalization of the sum of the covariance matrices. The result is a diagonal matrix D of monotonically non-increasing eigenvalues and the matrix V of corresponding eigenvectors:

\[ \Sigma_o + \Sigma_c = V D V^{-1} \tag{41} \]
The embedding matrix C is formed from the first L eigenvectors in V:

\[ C = [v_1, \ldots, v_L]. \tag{42} \]
The distribution of the human demonstrator's posture o_t^d is modeled as linear-Gaussian, conditioned on the latent action a_t:

\[ o_t^d = C a_t + v_n, \qquad v_n \sim \mathcal{N}(\mu_n,\, \Sigma_n + \Sigma_c). \tag{43} \]
The additive Gaussian variable v_n incorporates uncertainty from both an additive Gaussian noise process, parameterized by μ_n and Σ_n, and the correspondence distribution characterized by Σ_c. In practice, the noise parameters can be estimated using maximum likelihood estimation on calibration data obtained with a calibration rig.
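A short numpy sketch of the embedding construction of Eqs. 40-42 follows; the function name and the row-per-time-step data layout are illustrative assumptions.

import numpy as np

def build_embedding(obs, Sigma_c, L):
    # obs: T x D matrix of demonstrator kinematic estimates o_t^d.
    # Eq. 40: empirical covariance of the observed task kinematics.
    centered = obs - obs.mean(axis=0)
    Sigma_o = centered.T @ centered / obs.shape[0]

    # Eq. 41: diagonalize the combined covariance.
    eigvals, V = np.linalg.eigh(Sigma_o + Sigma_c)
    order = np.argsort(eigvals)[::-1]      # largest eigenvalues first

    # Eq. 42: keep the first L eigenvectors as the embedding matrix C.
    return V[:, order[:L]]

A latent action a_t is then mapped back to a full posture via the mean of Eq. 43, o = C @ a.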
Fig. 6. Graphical model and latent space diagram. Estimated human kinematics are related to the humanoid robot via a Bayesian network and a latent action embedding space L. The latent space is formulated using prior knowledge about kinematic correspondence (Σ_c) and the kinematic variance observed in the teacher's demonstrations (Σ_o). By diagonalization, a linear embedding is constructed from the L principal components of the combined covariance. Humanoid motor commands are then reconstructed by linearly projecting into the humanoid's DOF space.
3.3 Sensor-Based Dynamics Representation
The second major component of the model can be likened to a dynamics constraint. Rather than placing constraints on moments or center of mass, which require complex and precisely tuned physical models, sensors are leveraged which measure quantities closely related to dynamic stability. In this work, observations from a torso gyroscope (g_t) and pressure sensors on the feet (f_t) are utilized. This framework easily generalizes to include other sensors and sources of information, such as motion estimates based on visual information and/or proximity sensors.
3.4 Results of Constrained Exploration
The BABIL algorithm for action selection, model learning, and constrained exploration was applied to the problem of full-body dynamic imitation in a humanoid robot. First, for completeness, we briefly recap the experimental procedure used. For one set of experiments, motion capture data was collected while performing various actions, each with three to five repetitions. Kinematic joint angles were then estimated using inverse kinematics. In a second set of experiments, pose estimates were obtained from monocular images.
Fig. 7. Log likelihood of dynamics configuration. The sum log likelihood (over all time steps) of the probabilistic dynamics model P(c_t | d_t) is shown as a function of the trial number. Note that this likelihood increases dramatically once a valid forward model is learned (after trial 15) and the dynamics are constrained using the probabilistic dynamic balance model. In this example, the robot was learning to imitate the squatting motion shown in Figure 9.
Kinematic data (from either source) is then used to construct the latent joint configuration space via the principal components basis matrix C. The number of principal components L was found empirically, guided by striking a balance between several factors. First, the dimensionality chosen should afford accurate reconstruction of the prior kinematic motion. For all motion classes, the data displayed greater than 99% of its variance along the first four principal components. A second factor in selecting the dimensionality is to allow sufficient representational power for finding a stable motion within the latent space of actions. Finally, for reasons of efficiency, it is desirable to keep the number of latent dimensions to a minimum. We experimented with latent space dimensionalities between two and six, and found four to be a good balance of representational freedom and efficiency.
Fig. 8. Dynamic balance duration over imitation trials. The duration (number of time steps) that the executed imitation was balanced is shown as a function of the trial number. In this example, the robot was learning to imitate the one-legged balance motion discussed in the text and shown in Figure 10. Random exploration was used in trials 1 through 20. Trials 21 through 45 used actions inferred by including contributions from the dynamic constraint distribution. Note that the full motion length is T = 63, which is achieved by the algorithm around the 15th inferred action sequence.
Parametric model variances (such as those in the observation model, the temporal action smoothness model, and the human input kinematic model) were also set empirically, to make sure that the relative values allowed for a compromise between kinematically similar imitations and dynamic stability of the resulting motion. Concatenating the four-dimensional kinematic state and the three-dimensional dynamic state (two gyro directions and a selected foot pressure feature) forms the full state representation s_t. The forward model (of robot kinematics and dynamics) is bootstrapped by first performing random exploration (body babbling) about the instructor's trajectory. From this execution, the actions as well as the maximum likelihood state estimates are added to the data set D and the kernel matrix is updated. Once sufficient data has been collected (here we used 20 trials), an initial forward model is learned. The constraint variables are then added to the evidence set and included in the computation of the beliefs B(a_t) ∝ P(a_t | o_1^d, …, o_T^d, c_1, …, c_T).
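Concretely, and with illustrative variable names, the state vector described above amounts to a simple concatenation:

import numpy as np

def make_state(latent_kin, gyro_xy, foot_feature):
    # s_t = [4-D latent kinematic state; 2 gyro directions; 1 foot pressure feature].
    return np.concatenate([latent_kin, gyro_xy, np.atleast_1d(foot_feature)])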
Fig. 9. Squatting motion learned via imitation. The first row consists of frames from an IK fit to the marker data during observation. The second row shows the result of performing a kinematic imitation in the simulator. The third and fourth rows show the final imitation result obtained by the constrained exploration algorithm, in the simulator, and on the HOAP-2 robot.
Based on these beliefs, the maximum probability actions â_t = argmax_{a_t} B(a_t) are computed and executed. Using this constraint on dynamics, constrained exploration is performed until a stable motion which imitates the human motion is obtained for the HOAP-2. The number of learning trials for the results presented here was limited to 100. All trials in the motion learning phase used the robotics simulator software package Webots [24]. Webots is capable of providing accurate dynamics simulation of the Fujitsu HOAP-2 robot. Its sensor simulation capability was used to also model the necessary gyroscope and foot pressure sensor signals (to which realistic levels of Gaussian noise were added to help avoid overfitting the physics of the simulator). Finally, the learned motion was applied as open-loop commands to the HOAP-2 humanoid robot. Besides the practical application of enabling the new motion on the robot, this is done to make sure the inferred imitative motions were not merely overfitting the simulated physics of Webots. The set of motions obtained via motion capture was: a) squatting, b) a one-legged balance, c) bowing, d) kicking, e) side-stepping, and f) a forward step. The set of motions obtained via monocular vision was: a) "dancing" (waving the arms while shifting weight), and b) a sideways leg lift. Empirical results demonstrate that the BABIL algorithm is able to infer sequences of actions
Fig. 10. One foot balance motion learned via imitation. The first row consists of frames from an IK fit to the marker data during observation. The second row shows the result of performing a kinematic imitation in the simulator. The third and fourth rows show the final imitation result obtained by the constrained exploration algorithm, in the simulator, and on the HOAP-2 robot.
Fig. 11. Side-step motion learned via imitation. The first row consists of frames from a video of the human demonstrator performing the motion in a motion capture suit. The second row shows the result of performing a purely kinematic imitation in the simulator. The third and fourth rows show the final imitation result obtained by our method of constrained exploration, in the simulator, and on the HOAP-2 robot.
which do not cause the robot to lose balance and fall for seven of the eight motions (all except the forward step). This proves to be the case even if all of the bootstrap iterations are empirically unstable. Figure 7 illustrates that the likelihood of the dynamics sensors increases dramatically once the probabilistic dynamic balance constraint is propagated
Fig. 12. Stable full body motion obtained via monocular observation of a human teacher. (a) A “dance” motion in which the instructor waves his arms and shifts his weight accordingly. Note that the robot can be seen imitating this weight shifting behavior. (b) A sideways leg lifting motion similar to what was demonstrated using optical motion capture.
Figure 7 illustrates that the likelihood of the dynamics sensors increases dramatically once the probabilistic dynamic balance constraint is propagated throughout the network. This result indicates that the belief propagation algorithm is able to impose the constraint on the states and, in turn, on the actions. More importantly, the constraints themselves bias the motion toward empirical dynamic stability. Figure 8 demonstrates that the duration for which the robot remains balanced without falling quickly increases to the full motion length of 63 time steps after 20 bootstrap trials and approximately 15 to 20 constrained trials. Side-by-side images of the demonstrator, the simulated robot before and after learning, and the final imitation as performed by the HOAP-2 are shown for selected motions obtained via motion capture in Figures 9, 10, and 11. Motions learned via the onboard monocular camera are shown in Figure 12. With the exception of one motion (the forward step), the BABIL algorithm was able to produce a stabilized imitation in the Webots simulator. In all but one of these stabilized motions, open-loop execution of the inferred imitative motion was found to be empirically dynamically stable. The learned side-step motion was found to be empirically stable in the simulator, but not when applied to the humanoid robot. Note that this result was obtained without calibrating the forward model to the actual robot. Each motion was tested empirically for stability at least twenty times. In these trials, five of the six stabilized motions never lost balance while executing the learned imitative motion. In the case of the one-legged balance learned from motion capture, the robot was dynamically balanced throughout the motion in 16 out of the 20 trials. This suggests that additional learning may be required using the robot itself. Alternatively, this one-legged balance motion may simply be difficult to balance in a purely open-loop manner.
The side-step motion also proved unstable when executed open-loop on the robot. This is very likely due to differences between the frictional forces computed by Webots and those of the real-world surfaces tested: a surface with frictional characteristics similar to the simulated one could not be identified, so the feet would slip when the learned side-step motion was executed on the HOAP-2. Based on these results, it appears that the currently selected dynamics features and constraints are not robust to foot slippage, which indicates that the current model is not yet able to acquire locomotive motions via imitation.
4 Conclusion
Endowing robots with brain-like intelligence requires methods that allow rapid learning of new behaviors in uncertain real-world environments. In this chapter, we have proposed a probabilistic framework that allows a humanoid robot to learn new behaviors from a human teacher through imitation and exploratory learning. Dynamic Bayesian networks are used to learn the robot's dynamics, and imitative actions are selected via probabilistic inference in the learned networks. The results show that a 25-degree-of-freedom humanoid robot can learn stable whole-body motions simply by observing a human demonstrator. The approach we have proposed makes several contributions to robotics research: (1) it suggests a general-purpose approach to "programming" a complex humanoid robot through imitation without relying on cumbersome and potentially error-prone physics-based models; (2) it provides a principled approach to handling uncertainty in real-world environments through the use of dynamic Bayesian models; (3) it circumvents the intractability caused by very high-dimensional state and control spaces (typical in humanoid robotics) via dimensionality reduction techniques; and (4) it introduces nonparametric techniques for tackling the problem of learning and inference with continuous-valued random variables (ubiquitous in robotics). Rapid advances are being made in robotic engineering, machine learning, and probabilistic reasoning. These advances are making it possible to envision a day in the not-too-distant future when robots with human-brain-like intelligence effortlessly interact with us and learn from us. We believe that, like their human counterparts, such robots will rely heavily on the ability to learn through imitation and exploration.
References

1. Turing, A.: Computing machinery and intelligence. Mind 59, 433–460 (1950)
2. McCarthy, J., Minsky, M., Rochester, N., Shannon, C.: A proposal for the Dartmouth summer research project on artificial intelligence (1955)
3. Meltzoff, A.N.: Elements of a developmental theory of imitation. In: The imitative mind: Development, evolution, and brain bases, pp. 19–41. Cambridge University Press, Cambridge (2002)
4. Doya, K., Ishii, S., Pouget, A., Rao, R.P.N. (eds.): Bayesian Brain: Probabilistic Approaches to Neural Coding. MIT Press, Cambridge (2007)
5. Rao, R.P.N., Olshausen, B.A., Lewicki, M.S. (eds.): Probabilistic Models of the Brain: Perception and Neural Function. MIT Press, Cambridge (2002)
6. Rao, R.P.N., Shon, A.P., Meltzoff, A.N.: A Bayesian model of imitation in infants and robots. In: Imitation and Social Learning in Robots, Humans, and Animals. Cambridge University Press, Cambridge (2005)
7. Kuniyoshi, Y., Inaba, M., Inoue, H.: Learning by watching: Extracting reusable task knowledge from visual observation of human performance. IEEE Transactions on Robotics and Automation 10(6), 799–822 (1994)
8. Takahashi, Y., Hikita, K., Asada, M.: Incremental purposive behavior acquisition based on self-interpretation of instructions by coach. In: Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2003), pp. 686–693. IEEE Computer Society Press, Los Alamitos (2003)
9. Schaal, S., Ijspeert, A., Billard, A.: Computational approaches to motor learning by imitation. The Neuroscience of Social Interaction 1(1431), 199–218 (2004)
10. Inamura, T., Toshima, I., Nakamura, Y.: Acquiring motion elements for bidirectional computation of motion recognition and generation. In: Experimental Robotics VIII, pp. 372–381. Springer, Heidelberg (2003)
11. Ijspeert, A.J., Nakanishi, J., Schaal, S.: Trajectory formation for imitation with nonlinear dynamical systems. In: Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2001), pp. 752–757. IEEE Press, Los Alamitos (2001)
12. Billard, A., Mataric, M.: Learning human arm movements by imitation: Evaluation of a biologically-inspired connectionist architecture. Robotics and Autonomous Systems 37(941), 145–160 (2001)
13. Calinon, S., Guenter, F., Billard, A.: On learning, representing and generalizing a task in a humanoid robot. IEEE Transactions on Systems, Man and Cybernetics, Part B. Special issue on robot learning by observation, demonstration and imitation 37(2), 286–298 (2007)
14. Demiris, J., Hayes, G.: A robot controller using learning by imitation. In: Proceedings of the 2nd International Symposium on Intelligent Robotic Systems (IROS 1994). IEEE Press, Los Alamitos (1994)
15. Schaal, S.: Learning from demonstration. In: Mozer, M.C., Jordan, M.I., Petsche, T. (eds.) Advances in Neural Information Processing Systems 9 (NIPS 1996), vol. 9, p. 1040. MIT Press, Cambridge (1997)
16. Atkeson, C.G., Schaal, S.: Robot learning from demonstration. In: Proceedings of the Fourteenth International Conference on Machine Learning (ICML 1997), pp. 12–20 (1997)
17. Watkins, C.: Learning from Delayed Rewards. PhD thesis, Cambridge University (1989)
18. Sutton, R.S., Barto, A.G.: Reinforcement Learning: An Introduction. MIT Press, Cambridge (1998)
19. Price, B.: Accelerating Reinforcement Learning with Imitation. PhD thesis, University of British Columbia (2003)
20. Ng, A.Y., Russell, S.: Algorithms for inverse reinforcement learning. In: Proceedings of the Seventeenth International Conference on Machine Learning (ICML 2000), pp. 663–670 (2000)
21. Abbeel, P., Ng, A.Y.: Exploration and apprenticeship learning in reinforcement learning. In: Proceedings of the Twenty-Second International Conference on Machine Learning (ICML 2005) (2005)
22. Schaal, S.: Is imitation learning the route to humanoid robots? Trends in Cognitive Sciences 3(6), 233–242 (1999)
23. Calinon, S., Guenter, F., Billard, A.: Goal-directed imitation in a humanoid robot. In: Proceedings of the IEEE International Conference on Robotics and Automation (ICRA 2005). IEEE Press, Los Alamitos (2005)
24. Webots: Commercial Mobile Robot Simulation Software, http://www.cyberbotics.com
25. Featherstone, R.: Robot Dynamics Algorithms. Springer, Heidelberg (1987)
26. Luh, J.Y.S., Walker, M.W., Paul, R.P.C.: On-line computational scheme for mechanical manipulators. Journal of Dynamic Systems, Measurement, and Control 102 (1980)
27. Chang, K.S., Khatib, O.: Efficient algorithm for extended operational space inertia matrix. In: Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 1999). IEEE Press, Los Alamitos (1999)
28. Marhefka, D., Orin, D.: Simulation of contact using a nonlinear damping model. In: Proceedings of the IEEE International Conference on Robotics and Automation (ICRA 1996). IEEE Press, Los Alamitos (1996)
29. Lotstedt, P.: Numerical simulation of time-dependent contact friction problems in rigid body mechanics. SIAM Journal on Scientific and Statistical Computing 5(2), 370–393 (1984)
30. Stewart, D., Trinkle, J.: An implicit time-stepping scheme for rigid body dynamics with Coulomb friction. In: Proceedings of the IEEE International Conference on Robotics and Automation (ICRA 2000). IEEE Press, Los Alamitos (2000)
31. Kuffner, J.J., Nishiwaki, K., Kagami, S., Inaba, M., Inoue, H.: Motion planning for humanoid robots under obstacle and dynamic balance constraints. In: Proceedings of the IEEE International Conference on Robotics and Automation (ICRA 2001), pp. 692–698. IEEE Press, Los Alamitos (2001)
32. Frank, A.A., McGhee, R.B.: Some considerations relating to the design of autopilots for legged vehicles. Journal of Terramechanics 6, 23–25 (1969)
33. Vukobratovic, M., Borovac, B.: Zero-moment point – thirty five years of its life. International Journal of Humanoid Robotics 1(1), 157–173 (2004)
34. Park, J., Rhee, Y.: ZMP trajectory generation for reduced trunk motions of biped robots. In: Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 1998). IEEE Press, Los Alamitos (1998)
35. Huang, Q., Kajita, S., Koyachi, N., Kaneko, K., Yokoi, K., Arai, H., Komoriya, K., Tanie, K.: A high stability, smooth walking pattern for a biped robot. In: Proceedings of the IEEE International Conference on Robotics and Automation (ICRA 1999). IEEE Press, Los Alamitos (1999)
36. Kagami, S., Kanehiro, F., Tamiya, Y., Inaba, M., Inoue, H.: AutoBalancer: an online dynamic balance compensation scheme for humanoid robots. In: Proceedings of the International Workshop on Algorithmic Foundations of Robotics, pp. 329–340 (2000)
37. Park, J., Kim, K.: Biped robot walking using gravity-compensated inverted pendulum mode and computed torque control. In: Proceedings of the IEEE International Conference on Robotics and Automation (ICRA 1998). IEEE Press, Los Alamitos (1998)
38. Yamaguchi, J., Takanishi, A., Kato, I.: Development of a biped walking robot compensating for three-axis moment by trunk motion. In: Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 1993), pp. 561–566. IEEE Press, Los Alamitos (1993)
39. Yamane, K., Nakamura, Y.: Dynamics filter – concept and implementation of online motion generator for human figures. IEEE Transactions on Robotics and Automation 19(3), 421–432 (2003)
40. Ko, J., Klein, D., Fox, D., Hahnel, D.: GP-UKF: Unscented Kalman filters with Gaussian process prediction and observation models. In: Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2007). IEEE Press, Los Alamitos (2007)
41. Shon, A.P., Verma, D., Rao, R.P.N.: Active imitation learning. In: Proceedings of the American Association for Artificial Intelligence (AAAI 2007) (2007)
42. Barbic, J., Safonova, A., Pan, J.Y., Faloutsos, C., Hodgins, J.K., Pollard, N.S.: Segmenting motion capture data into distinct behaviors. In: Proceedings of Graphics Interface (GI 2004), University of Waterloo, Waterloo, Ontario, Canada, Canadian Human-Computer Communications Society, pp. 185–194 (2004)
43. Muller, M., Roder, T.: Motion templates for automatic classification and retrieval of motion capture data. In: Proceedings of the 2006 ACM SIGGRAPH/Eurographics Symposium on Computer Animation (SCA 2006), Aire-la-Ville, Switzerland, Eurographics Association, pp. 137–146 (2006)
44. Seth, A., Pandy, M.G.: A nonlinear tracking method of computing net joint torques for human movement. In: Proceedings of the 26th Annual International Conference of the Engineering in Medicine and Biology Society (2004)
45. Sung, H.G.: Gaussian Mixture Regression and Classification. PhD thesis, Rice University (2004)
46. Welling, M., Kurihara, K.: Bayesian K-means as a Maximization-Expectation algorithm. In: Proceedings of the SIAM Conference on Data Mining (2005)
47. Scott, D., Szewczyk, W.: From kernels to mixtures. Technometrics 43(3), 323–335 (2001)
48. Kreutz, M., Reimetz, A.M., Sendhoff, B., Weihs, C., von Seelen, W.: Structure optimization of density estimation models applied to regression problems with dynamic noise. In: Proceedings of the Seventh International Workshop on Artificial Intelligence and Statistics, pp. 237–242. Morgan Kaufmann, San Francisco (1999)
49. Cormen, T.H., Leiserson, C.E., Rivest, R.L., Stein, C.: Introduction to Algorithms. MIT Press, Cambridge (2001)
50. Park, J.D., Darwiche, A.: Complexity results and approximation strategies for MAP explanations. Journal of Artificial Intelligence Research (JAIR) 21, 101–133 (2004)
51. Pearl, J.: Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann, San Francisco (1988)
52. Weiss, Y.: Correctness of local probability propagation in graphical models with loops. Neural Computation 12(1), 1–41 (2000)
53. Sudderth, E.B., Ihler, A.T., Freeman, W.T., Willsky, A.S.: Nonparametric belief propagation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2003), pp. 605–612 (2003)
54. Kschischang, F.R., Frey, B.J., Loeliger, H.A.: Factor graphs and the sum-product algorithm. IEEE Transactions on Information Theory 47(2), 498–519 (2001)
55. Carreira-Perpinan, M.A.: Mode-finding for mixtures of Gaussian distributions. IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI) 22(11), 1318–1323 (2000)
56. Hwang, J., Lay, S., Lippman, A.: Nonparametric multivariate density estimation: a comparative study. IEEE Transactions on Signal Processing 42(10), 2795–2810 (1994)
57. Silverman, B.W.: Density Estimation for Statistics and Data Analysis. Chapman and Hall, Boca Raton (1986)
58. Dempster, A.P., Laird, N.M., Rubin, D.B.: Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, Series B (Methodological) 39(1), 1–38 (1977)
59. Vicon: Vicon MX Motion Capture System, http://www.vicon.com
60. Lawrence, N.D.: Gaussian process latent variable models for visualization of high dimensional data. In: Advances in Neural Information Processing Systems 15 (NIPS 2002). MIT Press, Cambridge (2003)
61. Grochow, K., Martin, S.L., Hertzmann, A., Popovic, Z.: Style-based inverse kinematics. ACM Transactions on Graphics (Proc. SIGGRAPH 2004) (2004)
Towards Learning by Interacting

Britta Wrede, Katharina J. Rohlfing, Marc Hanheide, and Gerhard Sagerer

Bielefeld University, Applied Computer Science and Institute for Cognition and Robotics (CoR-Lab), 33501 Bielefeld, Germany
Abstract. Traditional robotics has treated the question of learning as a one-way process: learning algorithms, especially imitation learning approaches, are based on observing and analysing the environment before carrying out the 'learned' action. In such scenarios the learning situation is restricted to uni-directional communication. However, insights into the process of infant learning increasingly reveal that interaction plays a major role in transferring relevant information to the learner. For example, the interactive situation has the potential to highlight parts of an action by linguistic and nonlinguistic features and thus to guide the attention of the learner to those aspects that the tutor deems relevant given her current state of mind. We argue that learning necessarily needs to be embedded in an interactive situation and that developmental robotics, in order to model engagement in interaction, needs to take the communicative context into account. We further propose that such an approach needs to address three aspects: (1) multi-modal integration at all processing levels, (2) derivation of top-down strategies from bottom-up processes, and (3) integration of single modules into an interactive system in order to facilitate the first two desiderata.
1 Introduction
For many years robotic research has aimed at developing robots that are able to learn from observation of, or interaction with, their environment. In the domain of cognitive robotics there is broad agreement that embodiment is a foundation for any kind of interactive or simply active learning [7,38]. Consequently, many approaches have been motivated by findings from infant development, thus implicitly assuming that their goal is to model human cognition. However, most approaches have either applied offline learning to simply enable online recognition or have treated learning and recognition as rather distinct processes. Other approaches exploit the embodiment of the system by enabling the robot to interact with the environment. However, only rarely has the social and cognitive knowledge of the tutor been taken into consideration, although it is widely accepted that a social learning situation itself comprises particular information that facilitates the learning process and increases its effectiveness. Therefore, we argue in this paper that learning – with a focus on the acquisition of
linguistic as well as manipulation-oriented capabilities – needs to be embedded in an interactive situation, and that a robotic system, in order to develop and learn over time, needs to be able to make use of the interactive situation by exploiting the closed loop with the human tutor. This view on learning entails that we need new system architectures that facilitate multi-modal fusion in a much more integrated way and that allow offline and online processing approaches to be integrated. One important step in this direction is to enable the development of top-down strategies from bottom-up processes at various levels. We are convinced that, in order to develop brain-like intelligence, cognition-inspired learning paradigms are crucial. Learning by interaction goes beyond common supervised or unsupervised strategies by taking into account wider feedback and assessments for the learning processes. We argue that this entails modeling mechanisms that have the power to generalise and thus develop through experience. As one such mechanism we have identified the process of joint attention, which is seen as a necessary capability for facilitating learning. We address this mechanism in Section 3 by reviewing findings on the role of joint attention in parent-infant interactions and approaches to modeling attention in different robot applications. Another mechanism, which is closely related to joint attention but appears to be even more powerful, is the way interaction is managed. This entails, on the one hand, issues of turn-taking, that is, at what point in time each participant can make a contribution. On the other hand, it also relates to feedback mechanisms that signal to the contributing partner that her message has been received, processed and understood – a process commonly referred to as grounding. We present different approaches to grounding in Human-Robot Interaction (HRI) in Section 4 and analyse their capability to scale to a potentially open-ended learning situation. We conclude by discussing the implications that these desiderata of an interactive learning system have for further development and research.
2 Background
Traditionally, the ability to learn has been investigated in terms of information storage and representational formats. Fogel criticizes the metaphor of signal and response that is conveyed this way. Instead of "top-down models of action control" [9], recent research on learning promotes the dynamics of interacting systems and their co-regulation. In this vein, our argument starts from the observation that infants need interaction with caregivers in order to develop. The question of how interaction influences the learning process has concerned many researchers. The phonetic acquisition process, for example, had long been assumed to rely purely on statistical analysis of the auditory input, as specified in the Perceptual Magnet Theory [22]. More recently, however, it has been observed that while very young infants are able to re-learn non-native phonemic contrasts in interactive situations, they will not do so when confronted with the same signals via video in a uni-directional, non-interactive situation [21].
These findings indicate that already very young infants make use of information arising from interactive situations. Kuhl [21] suggests two aspects that might help infants to learn in interactive situations: on the one hand, she argues that in direct interaction the attention and arousal of the infants are higher than in video situations. These factors are known to affect learning in a wide variety of domains [31] and might thus help the infants to learn the non-native contrasts. On the other hand, she argues that the mechanism of joint attention, which is present only in live interactions and not in video settings, might provide the infants with relevant additional information that finally helps them to distinguish between relevant phonemic contrasts. But how, in detail, can joint attention help to learn things that an infant would not learn in a non-interactive situation? And what exactly is the additional information that is provided in interactive situations but not in a video session? One answer is that in an interactive situation the teacher can guide the learner's attention to relevant aspects of the entity that is to be learned and can thus establish joint attention in order to achieve a common ground (or mutual understanding). Most interestingly, strategies to guide attention are highly multi-modal and make use of the temporal relationship between information in the different modalities. There is empirical evidence showing that infants are more likely to attend to a visual event which is highlighted by speech (e.g. [26]). This suggests that in young infants language plays the role of one of many possible perceptual cues. Gogate et al. [10] applied the principle of the intersensory redundancy theory to the early acquisition of communication and investigated the speech signal as a help for preverbal infants to remember what they see. The authors found that moving an object in synchrony with a label facilitated long-term memory for syllable-object relations in infants as young as 7 months. By providing redundant sensory information (movement and label), selective attention was affected [10]. Redundant information can include temporal synchrony, spatial co-location, shared rhythm, tempo, and intensity shifts. Gogate and Bahrick argue for the following reference mechanism in learning words (see also [39]): young infants relate words and objects by detecting intersensory redundancy across the senses in multimodal events. One type of perceptual information can be the sound a person provides when she or he speaks. Temporal synchrony and movement seem to constitute two perceptual cues that are ecologically valid: when observing mothers interacting with their children, Gogate et al. [10] found that mothers use temporal synchrony to highlight novel word-referent relations, i.e., they introduced a novel label to their children and moved the new object synchronously with the new label. According to results obtained in different age groups of children, this phenomenon - observed also cross-culturally [11] - seems to be very prominent in preverbal children. The idea that the presence of a sound signal might help infants to attend to particular units within the action stream was originally proposed and termed "acoustic packaging" by Hirsh-Pasek and Golinkoff [14]. Hollich and colleagues [15] report on literature that considerably supports the notion that infants
appear to rely quite heavily on the acoustic system in the phase of perceptual segmentation and extraction. While this segmentation and categorization is going on, Hirsh-Pasek and Golinkoff [14] argue, children can use this "acoustic packaging" to achieve a linkage between sounds and events (see also [39]). Interaction thus establishes a situation in which joint attention can continuously be established through the extensive use of multi-modal cues, which facilitate learning as specified by the intersensory redundancy hypothesis. However, in order to yield learning effects, bottom-up strategies – as often applied for attentional processes – necessarily need to be combined with top-down processes. For example, parent-infant interaction can be seen as a combination of both, with the infant using mainly bottom-up strategies in order to make sense of a situation, whereas the caregiver provides top-down strategies to guide the infant's attention. Through this interlinking, bottom-up and top-down processes are intimately combined and form a dynamic mechanism that provides increasingly differentiated learning (and attention) strategies from which the infant can select. In order to model such a complex dynamic process on a robotic system, single modules, as developed in current state-of-the-art research, need to be incrementally integrated into a complex system. This is far from trivial: it has been shown that such complex integration tasks necessarily raise the fundamental question of a cognitive architecture and have the potential to reveal the synergetic effects arising from the combination of single capabilities.
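As a purely illustrative aside – not a model proposed in the literature reviewed here – the core computation behind detecting such intersensory redundancy can be reduced to measuring the temporal correlation of cue envelopes. The following self-contained Python sketch scores the synchrony between a spoken label and object motion; the synthetic signals and the zero-lag correlation measure are assumptions made for this example:

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 5.0, 500)    # 5 s at 100 Hz, shared time base

# Synthetic cue envelopes: a spoken label (audio energy burst) and object
# motion energy, once presented in synchrony and once offset by 1.5 s.
label = np.exp(-0.5 * ((t - 2.0) / 0.2) ** 2)
motion_sync = np.exp(-0.5 * ((t - 2.0) / 0.3) ** 2) + 0.05 * rng.standard_normal(t.size)
motion_async = np.exp(-0.5 * ((t - 3.5) / 0.3) ** 2) + 0.05 * rng.standard_normal(t.size)

def synchrony(a, b):
    """Zero-lag correlation of two cue envelopes; near 1 for redundant cues."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float(np.mean(a * b))

print(synchrony(label, motion_sync))    # high: candidate word-object pairing
print(synchrony(label, motion_async))   # low: cues are not packaged together
```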
3 Joint Attention and Learning
In order to answer the question of what is to be learned, we believe that joint attention is particularly relevant. Guided by adult-infant studies, this section discusses why and how joint attention can be considered a cornerstone for learning by interacting in robotics.

3.1 Joint Attention in Parent-Infant Interaction
Whether memorizing a face, learning how to switch on the light, or just perceiving an object on a table – all of this is based on attention processes. It is our attention that reduces the enormous amount of information available to us through our senses, memory and other cognitive processes to a limited amount of salient information. Attention decides which selected information will be further processed by conscious and automatic mental processes [35]. In social learning, which has attracted the interest of scholars not only from developmental psychology and cognitive science but also from robotics (e.g. [5]), attention processes are not only controlled by a single system and its reactions to the environment but are also influenced by interaction partners. In studies with newborns, it has been shown that babies are tuned to faces and will more likely follow geometric patterns that resemble faces than other, similar geometric patterns [18]. This preference is discussed as an adaptive orientation mechanism that ensures that infants will learn from the most relevant social stimuli in their environment [4].
The fact that a social partner can influence attention allows the interacting agents to simultaneously engage with one and the same external thing and form a kind of mental focus [1], which is called joint attention. In the literature, two manifestations of joint attention have been identified: the ability to follow an interlocutor's direction of gaze and the ability to follow an interlocutor's pointing gesture. Interestingly, recent research has suggested that the ability of joint attention might develop from infants' bottom-up skills and preferences. For example, Hood et al. [16] as well as Farroni and her colleagues [8] could show, under specific experimental conditions, that infants as young as 4 months develop a rudimentary ability to follow somebody's gaze. Farroni and her colleagues further specified that infants show this behavior because they are cued by the motion of the pupils rather than by the eye gaze itself. Thus, when there was no apparent movement of the pupils and the averted gaze was presented statically, there was no evidence of 4-month-old infants following the gaze shift. These data reveal that the direction of the eyes per se does not have an effect in directing infants' attention. Motion or perceptual saliency [12] seems to be an important cue for infants, but Farroni and her colleagues [8] suggest that it may become less important over the course of development (see also [32]). For example, while young infants' gaze-following depends on seeing the adult's head movement rather than the final head pose [27], older infants use the static head pose to infer the direction of attention in a top-down manner. For the following of pointing gestures, it has similarly been shown that infants as young as 4.5 months respond to the direction of movement, no matter whether the movement is made by a pointing finger or the back of the hand. Around 6.5 months, however, infants start to be sensitive to the orientation of the finger, which is a crucial component of a pointing gesture [32]. Therefore, it is plausible to argue that early in their development children are attracted by perceptual saliency but later come to attend to social cues [12].
3.2 Modeling Joint Attention in HRI
Models of joint attention in robotics, in contrast, tend to be either purely bottom-up, where the interactive character of joint attention is often neglected, or model-driven. Most approaches only work under highly restrictive constraints and are generally not designed to adapt their attentional behavior to the interaction partner's behavior. Attempts to model human attentional performance in non-interactive situations have been investigated in several implementations of a saliency-based attention model [6,13,17]. In these approaches attention is reduced to a focus point in a visual scene for each time frame. While the results are interesting when compared to human performance [34], they provide little help with respect to the question of how attention can support learning in interactive situations. They do, however, serve as an important prerequisite for modeling joint attention. A brief overview relating the infant development of joint attention to robot attention models is given in [19].
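To indicate what such saliency-based models compute, the following self-contained Python sketch implements a center-surround intensity contrast in the spirit of Itti et al. [17]; the scale pairs and the synthetic test image are arbitrary choices made here for illustration, and a full model would add color and orientation channels as well as normalization:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(2)

# Synthetic grey-level "scene": low-amplitude noise plus one salient blob.
img = 0.1 * rng.random((128, 128))
img[60:70, 60:70] += 1.0

def saliency(img, scale_pairs=((1, 4), (1, 8), (2, 8))):
    """Center-surround intensity contrast, summed over a few scale pairs."""
    s = np.zeros_like(img)
    for center_sigma, surround_sigma in scale_pairs:
        center = gaussian_filter(img, center_sigma)
        surround = gaussian_filter(img, surround_sigma)
        s += np.abs(center - surround)     # local contrast at this scale pair
    return s / s.max()

sal = saliency(img)
focus = np.unravel_index(np.argmax(sal), sal.shape)
print(focus)    # the focus of attention lands on the blob near (64, 64)
```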
Integrating bottom-up strategies for attention into a socially interactive robot system has been shown to yield an active search behavior of the robot with the goal of satisfying internal, top-down needs [2]. However, in this approach the top-down needs were pre-defined, as was their relation to the behaviors and visual perceptions; no learning process based on the bottom-up processing results takes place. Bottom-up strategies for learning joint attention have been successfully demonstrated on a humanoid robot. In [28,29] results have been reported from a robot that learned to associate visual cues of the human partner's head movements with the movements of its own head through a saliency-based reward mechanism. Similarly, a robot can learn to find the face and eyes of a human interaction partner in order to imitate head movement behavior based on bottom-up strategies for vision [33]. These learning algorithms can be interpreted as learning joint attention in an interaction situation solely through bottom-up features. However, again, no processes were foreseen that take advantage of the learned associations in order to establish top-down strategies. More recently, though, results have been reported suggesting that semantically relevant aspects of actions as well as preferences for social cues might be found through bottom-up strategies [30] and might thus serve as a bootstrap mechanism for learning top-down strategies. In total, however, it can be stated that bottom-up (or even mixed) approaches to joint attention do not take verbal interaction, and thus explicitly specified semantics, into account, which would be necessary in order to bootstrap linguistic learning. On the other hand, robots that are able to interact with humans at a higher, linguistic level and make use of multi-modal information tend to be based on almost purely model-driven joint attention. Such approaches follow the strategy of tracking an object of interest (OOI) that has either been specified by a deliberative process or has come into the focus of attention through bottom-up saliency detection [2], and of using it to resolve references in the communication with the user. While these approaches allow meaningful, grounding-based interactions, they provide only limited means of integrating bottom-up strategies for joint attention in order to adapt the attention strategies to new situations and to learn qualitatively new concepts, as would be necessary for, e.g., language learning.
3.3 Conclusion
Thus, while theories of infant development discuss the possibility that higher-level attentional processes, and thus top-down strategies such as joint attention, may develop through bottom-up, saliency-based strategies being applied to meaningful interactive communication, computational models of joint attention tend to focus on one strategy exclusively. The importance of multi-modal information also tends to be neglected in these approaches. However, theories about the interplay of different modalities in learning, such as the intersensory redundancy hypothesis, suggest that it is the synchrony of information in different
modalities that helps to make sense of the enormous amount of information that reaches the infant's brain through its sensory system. This means that attentional processes necessarily need to be able to take multi-modal information into account and make use of it in order to find relevant segments in the signal over time. Yet the modeling of such multi-modal approaches is still in its infancy. The reason is that synchronisation and the handling of different processing rates, as they are common for, e.g., visual vs. audio processing, are non-trivial tasks that need to be solved beforehand. In order to achieve such an integration, an adequate system architecture is needed that enables (1) early as well as late fusion, (2) the building of new concepts and processing strategies, and (3) the feeding back of such top-down information from higher to lower levels. In other words, taking multi-modality seriously means that the architecture needs to be adapted to these requirements.
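The first of these engineering tasks – pairing samples from streams running at different rates on a common time base – is illustrated by the following self-contained Python sketch. The rates, the stand-in signals, and the nearest-neighbour alignment with a skew tolerance are assumptions made here for illustration, not a reference architecture:

```python
import numpy as np

# Two modalities at different rates: vision at 15 Hz, audio features at 100 Hz.
t_vis = np.arange(0.0, 3.0, 1 / 15)
t_aud = np.arange(0.0, 3.0, 1 / 100)
vis = np.sin(2 * np.pi * t_vis)            # stand-in visual feature
aud = np.sin(2 * np.pi * t_aud + 0.1)      # stand-in audio feature

def align(t_ref, t_other, x_other, max_skew=0.05):
    """Pair each reference frame with the nearest sample of the other stream."""
    idx = np.clip(np.searchsorted(t_other, t_ref), 1, len(t_other) - 1)
    idx -= (t_ref - t_other[idx - 1]) < (t_other[idx] - t_ref)  # pick nearer side
    ok = np.abs(t_other[idx] - t_ref) <= max_skew               # skew tolerance
    return x_other[idx], ok

aud_on_vis, valid = align(t_vis, t_aud, aud)
# Co-occurrence statistics can now be computed frame by frame, e.g.:
print(np.corrcoef(vis[valid], aud_on_vis[valid])[0, 1])
```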
4 Interaction Management and Learning
While joint attention is a process that helps to guide the attention of each interaction partner to a common focus, it does not provide a means to communicate about the common object of interest. What is needed, in addition, is a mechanism that facilitates the exchange of information by providing means of feedback and rules of interactivity. One might even argue that such a mechanism is actually a more general kind of joint attention, as it also needs to make sure that both interaction partners are focussing on the same thing while at the same time providing a means for exchanging information about it.
4.1 Interaction Management in Parent-Infant Interaction
A prerequisite for the development of communication is the ability to participate cooperatively in shared discourse [20,25]. Dialogue as a behavioral and communicational form seems to be acquired [20], in the sense that a child has to learn how to time her or his own behavior in order to exchange with a social partner, and how to coordinate communication while having to handle or manipulate objects in parallel. The foundations for these skills seem to be acquired very early and in situations that seemingly have little to do with vocal communication. In a study in which mothers and their newborns were observed during feeding, Kaye [20] analyzed burst-pause patterns and their development. A burst was characterized as several successive sucks of the baby. When a baby paused the feeding, her or his mother seemed to interpret this behavior as flagging and was therefore inclined to stimulate the baby by jiggling it. Interestingly, however, jiggling – as the analysis showed – does not immediately cause another burst. Instead, the crucial signal was the pause after the jiggling, which seemed to let the baby resume the sucking. In the course of the development of the interaction, the mothers' jiggling became shorter. After two weeks, mothers seemed to change their behavior from 'jiggle' to 'jiggle-stop'. Kaye [20] interprets
the results in terms of the origin of dialogue: both the mother and the child learned to anticipate one another's behavior in terms of a symmetry or contingency, meaning that when one of them is doing something, the other will wait. That is the rule of turn-taking, and it is learned because certain interactive sequences (that seem to occur naturally) become more frequent. Infants seem to be sensitive to such contingencies and regularities in behavior. According to Masataka [25], when exposed to contingent stimulation, children are positively affected in their motivation insofar as their performance on a subsequent task was facilitated, whereas their performance was impaired when they had been exposed to non-contingent stimulation beforehand. Young infants enjoy contingent interaction even in the absence of a social partner, e.g. with a mobile [37]. Csibra and Gergely [4] believe that these early interactions serve "an ultimately epistemic function: identifying teachers and teaching situations and practicing this process". In this way, infants are tuned to specific types of social learning by a specific type of communication. In this type of social learning, generalizable knowledge is conveyed that is valid beyond the ongoing situation [4]. The authors base their assumption on further preferences of young infants for tutoring situations. Accordingly, infants prefer not only verbal behavior that is specifically addressed to them (known as Motherese or Child-Directed Speech) but also nonverbal behavior, such as object demonstration, that is directed towards them (known as Motionese).
4.2 Modeling Interaction Management in HRI
Taking these findings into account, one may argue that joint attention serves as a precursor of, and necessary ingredient for, meaningful interaction. In addition to joint attention, the exchange of information requires a mechanism that enables a bi-directional interaction in which information given by one partner is confirmed by the other. This mechanism must not only ensure that both partners are attending to the same object but also that they can exchange information about it. In this sense one might interpret joint attention as a subset, or precursor, of interaction. In HRI such a mechanism is generally modeled as a dialog module which manages interaction in a top-down way. It is often integrated into the system's architecture at the highest, deliberative level, which is closely related to the planning processes. Thus, the techniques used to model interaction tend to be closely intertwined with the system's architecture. Standard approaches to dialog modeling in HRI have for a long time been state-based, with the internal system states augmented by dialog states and each dialog step represented as a new state. In such an approach the dialog is driven by the goal of satisfying the need for information given the current system state. For example, if a piece of information crucial for task execution is missing from an instruction, this is modeled explicitly by a state representing the system state and the dialog state. In this way, each potential interaction has to be modeled a priori, and learning new interaction steps is very difficult if not impossible.
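The brittleness of such state-based modeling is easy to see in a minimal sketch. The following self-contained Python fragment hand-codes a few (state, user act) transitions for a hypothetical fetch task; the states, acts, and replies are invented for illustration. Any input that was not anticipated at design time simply has no transition:

```python
from dataclasses import dataclass

# Every interaction step is a hand-coded (state, user act) -> (state, reply) pair.
TRANSITIONS = {
    ("await_command", "grasp_without_object"): ("ask_object", "Which object?"),
    ("await_command", "grasp_with_object"):    ("execute", "Grasping it."),
    ("ask_object",    "object_named"):         ("execute", "Grasping it."),
}

@dataclass
class DialogManager:
    state: str = "await_command"

    def step(self, user_act: str) -> str:
        key = (self.state, user_act)
        if key not in TRANSITIONS:     # anything not modeled a priori dead-ends
            return "I cannot handle that here."
        self.state, reply = TRANSITIONS[key]
        return reply

dm = DialogManager()
print(dm.step("grasp_without_object"))   # -> "Which object?"
print(dm.step("object_named"))           # -> "Grasping it."
print(dm.step("tell_me_a_joke"))         # unmodeled input -> dead end
```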
In order to generalise over such states and to learn better and new associations between system and dialog states, a range of machine-learning approaches to spoken dialog systems exists [23]. While such approaches are reported to generalise to a certain degree to unseen events, they are nevertheless tied to the existing system states on the one hand, and they require a high-level analysis of the spoken utterances in terms of dialog or speech acts on the other. Learning is thus only possible within a very limited range; new speech acts, for example, cannot be learned. Also, these approaches are limited to uni-modal, speech-based interaction systems such as telephone services – indicating that dealing with more complicated or even dynamic states, as they are common in embodied and situated systems such as robots, might be difficult for this approach. A relatively new approach in HRI is to base interaction management on a grounding process [24]. Grounding [3] is a well-known concept that has been used at length to describe and analyse human-human interaction on the one hand, and has been adapted to human-machine interaction on the other. Basically, grounding describes the interactive process of establishing mutual understanding between two interaction partners. This process can be described in terms of adjacency pairs of utterances, with a so-called presentation initiating such a pair and an acceptance signaling understanding by the listener. If the listener does not understand, he or she will initiate a clarification sequence, which has to be finished before the initial utterance can be answered. While this concept is intuitively clear, it raises several questions with respect to the sequencing of such contributions and the termination of the grounding process. Accordingly, only very few implementations make use of this principle. However, those applications that do use a grounding-based interaction management tend to be more flexible when the system is changed and to allow for much more mixed-initiative interaction [24]. Understanding is thereby often operationalized as meeting an expectation. For example, if the system asks a question it will expect an answer with a specific type of information. If the answer does not contain this type of information, the expectation is not met and the question is thus not grounded.
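The following self-contained Python sketch illustrates this expectation-matching view of grounding; the stack discipline, the act types, and the canned clarification are simplifications of our own, not the implementation described in [24]:

```python
# Pending expectations; the innermost clarification sits on top of the stack.
stack = []

def present(expectation, utterance):
    """A presentation opens an adjacency pair and raises an expectation."""
    print("ROBOT:", utterance)
    stack.append(expectation)

def hear(info_type, utterance):
    """An acceptance grounds the pair; anything else triggers clarification."""
    print("HUMAN:", utterance)
    if stack and info_type == stack[-1]:
        stack.pop()                    # expectation met: contribution grounded
        print("       (grounded, %d pair(s) still open)" % len(stack))
    else:
        # Expectation not met: initiate a clarification sequence; the original
        # presentation stays open until the clarification is resolved.
        print("ROBOT: Sorry - which object did you mean?")

present("object_reference", "Please hand me a cup. Which one do you mean?")
hear("comment", "They are on the table.")   # misses the expectation
hear("object_reference", "The blue one.")   # grounds the open question
```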
4.3 Conclusion
Thus, dialog modeling in HRI is highly model-driven, as it pertains to the higher levels of communication. However, taking findings from parent-infant interaction into account, it becomes obvious that the principle of grounding might also be applicable to lower or even non-linguistic levels of interaction. While current models of grounding need linguistic analyses of the spoken utterances, it should nevertheless be possible to extend this principle to non-linguistic levels of interaction. We argue that grounding may serve as a general mechanism for learning higher-level linguistic interaction through non-linguistic, physical interaction. However, in order for such a system to evolve, three prerequisites need to be met: (1) The mechanism needs to be able to process
multi-modal information in a way that allows the information to be synchronized at different levels, in order to draw conclusions about co-occurring events and thus to segment the input into meaningful units. Multi-modality must therefore not be confined to a single point in the overall system – as is often the case – but has to be available at all processing levels. (2) The mechanism needs to provide a way to develop top-down strategies from bottom-up data analysis, and a mechanism is required that enables bottom-up and top-down processes at the same time. For example, turn-taking and feedback strategies derived from non-verbal interaction need to be able to feed back into the bottom-up processes involved in the processing of verbal input. These two desiderata entail (3) that the overall system has to be highly integrated. By this we mean, on the one hand, that many modules operating at different levels and focussing on different modalities need to work together. On the other hand, it means that a coherent architecture needs to be established that gives all components access to multi-modal information and allows them to feed information back to other processes.
5 Conclusion
We have argued that the interaction situation is a necessary prerequisite for enabling developmental learning. In this context, learning should not merely be seen as the task of assigning symbols to sensorial patterns, but should be modeled as a continuous process driven by social and interactive cues. Hence, we are not advocating any specific method in machine learning but rather calling for a learning paradigm that combines aspects of supervised and reinforcement learning, emphasizing the role of an interaction partner acting as tutor. This view does have an impact on the selection of any particular underlying methodology or algorithm. However, as interaction is inherently a process characterized by multi-modality and mixed top-down and bottom-up schemes, a systemic approach that is functionally highly interlinked is needed. Thus, when taking the interaction situation of a learning scenario into consideration, a complex perception-action system needs to be developed in order to enable learning. From a software engineering and architecture perspective, such close coupling is always considered a particular challenge that needs attention in further research. This is reflected by the growing attention cognitive architectures have received in recent years (e.g. [36]). Furthermore, as we are talking about interactive systems, demands regarding 'responsiveness' and 'liveness' need to be taken into account when selecting particular methods. As interaction is always live and cannot be stored or processed offline, the system needs to be 'responsive', that is, it needs to be able to react within a short enough time span for the user to perceive the interaction as continuing. In order to decompose the general problem of interactive learning, in this paper we have identified two major interaction building blocks, namely joint attention and interaction management, that have the potential to evolve through repeated interaction cycles from basic physical interactions to interactions at a linguistic level. To implement systems that actually learn by interaction, existing models
for attention in computational systems need to grow into comprehensive, systemic approaches fusing bottom-up and top-down mechanisms. They have to exploit multi-modality in their perceptual and expressive capabilities, and they demand a systemic view that allows for close inter-operation of the basic capabilities.
References

1. Baldwin, D.A.: Understanding the link between joint attention and language. In: Moore, C., Dunham, P. (eds.) Joint attention: its origins and role in development, pp. 131–158. Lawrence Erlbaum, Mahwah (1995)
2. Breazeal, C., Scassellati, B.: A context-dependent attention system for a social robot. In: IJCAI 1999: Proc. Int. Joint Conf. on Artificial Intelligence, pp. 1146–1153. Morgan Kaufmann, San Francisco (1999)
3. Clark, H.H.: Arenas of Language Use. University of Chicago Press (1992)
4. Csibra, G., Gergely, G.: Social learning and social cognition: The case for pedagogy. In: Johnson, M., Munakata, Y. (eds.) Processes of Change in Brain and Cognitive Development. Attention and Performance, XXI. Oxford University Press, Oxford (2005)
5. Dautenhahn, K., Nehaniv, C. (eds.): Imitation and social learning in robots, humans and animals: Behavioural, social and communicative dimensions. Cambridge University Press, Cambridge (2007)
6. Driscoll, J., Peters, R., Cave, K.: A visual attention network for a humanoid robot
7. Duffy, B., Joue, G.: Intelligent robots: The question of embodiment. In: Proc. of the Brain-Machine Workshop (2000)
8. Farroni, T., Johnson, M.H., Brockbank, M., Simion, F.: Infants' use of gaze direction to cue attention: The importance of perceived motion. Visual Cognition 7, 705–718 (2000)
9. Fogel, A., Garvey, A.: Alive communication. Infant Behavior and Development 30, 251–257 (2007)
10. Gogate, L.J., Bahrick, L.E.: Intersensory redundancy and 7-month-old infants' memory for arbitrary syllable-object relations. Infancy 2, 219–231 (2001)
11. Gogate, L.J., Prince, C.: Is "multimodal motherese" universal? In: The X International Congress of the International Association for the Study of Child Language (IASCL), Berlin, Germany, July 25-29 (2005)
12. Golinkoff, R.M., Hirsh-Pasek, K.: Baby wordsmith. Current Directions in Psychological Science 15, 30–33 (2006)
13. Hashimoto, S.: Humanoid robots in Waseda University – Hadaly-2 and WABIAN. In: IARP First International Workshop on Humanoid and Human Friendly Robotics, Tsukuba, Japan (1998)
14. Hirsh-Pasek, K., Golinkoff, R.M.: The Origins of Grammar: Evidence from Early Language Comprehension. MIT Press, Cambridge (1996)
15. Hollich, G., Hirsh-Pasek, K., Tucker, M.L., Golinkoff, R.M.: The change is afoot: Emergentist thinking in language acquisition. In: Anderson, C., Finnemann, N.O.E., Christiansen, P.V. (eds.) Downward Causation, pp. 143–178. Aarhus University Press (2000)
16. Hood, B.M., Willen, J.D., Driver, J.: Adult's eyes trigger shifts of visual attention in human infants. Psychological Science 9, 131–134 (1998)
17. Itti, L., Koch, C., Niebur, E.: A model of saliency-based visual attention for rapid scene analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI) 20(11), 1254–1259 (1998)
18. Johnson, M.H., Morton, J.: Biology and Cognitive Development: The case of face recognition. Blackwell Scientific Publications, Oxford (1991)
19. Kaplan, F., Hafner, V.V.: The challenges of joint attention. In: Proceedings of the Fourth International Workshop on Epigenetic Robotics, pp. 67–74 (2004)
20. Kaye, K.: Toward the origin of dialogue. In: Schaffer, H.R. (ed.) Studies in mother-infant interaction, pp. 89–119. Academic Press, London (1977)
21. Kuhl, P.: Is speech learning gated by the social brain? Developmental Science 10(1), 110–120 (2007)
22. Kuhl, P.K.: Human adults and human infants show a 'perceptual magnet effect' for prototypes of speech categories, monkeys do not. Perception & Psychophysics 50, 93–107 (1991)
23. Lemon, O., Pietquin, O.: Machine learning for spoken dialog systems. In: INTERSPEECH 2007, Antwerp, Belgium, pp. 2685–2688 (2007)
24. Li, S.: Multi-modal Interaction Management for a Robot Companion. PhD thesis, Bielefeld University, Bielefeld (2007)
25. Masataka, N.: The onset of language. Cambridge University Press, Cambridge (2003)
26. McCartney, J.S., Panneton, R.: Four-month-olds' discrimination of voice changes in multimodal displays as a function of discrimination protocol. Infancy 7(2), 163–182 (2005)
27. Moore, C., Angelopoulos, M., Bennett, P.: The role of movement in the development of joint visual attention. Infant Behavior and Development 2, 109–129 (1997)
28. Nagai, Y.: Joint attention development in infant-like robot based on head movement imitation. In: Proc. Int. Symposium on Imitation in Animals and Artifacts (AISB 2005), pp. 87–96 (2005)
29. Nagai, Y., Hosoda, K., Asada, M.: Joint attention emerges through bootstrap learning. In: Proc. IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS 2003), pp. 168–173 (2003)
30. Nagai, Y., Rohlfing, K.J.: Computational analysis of motionese: What can infants learn from parental actions? In: Proc. Int. Conf. on Infant Studies (ICIS 2008) (March 2008)
31. Posner, M.I. (ed.): Cognitive neuroscience of attention. Guilford, New York (2004)
32. Rohlfing, K.J.: Cognitive foundations. In: Rickheit, G., Strohner, H. (eds.) Handbook of Applied Linguistics. Communicative competence of the individual, vol. 1. Mouton de Gruyter, Berlin (2008)
33. Scassellati, B.: Imitation and mechanisms of shared attention: A developmental structure for building social skills. In: Proc. Autonomous Agents 1998 Workshop on Agents in Interaction – Acquiring Competence through Imitation, Minneapolis, MN (August 1998)
34. Shic, F., Scassellati, B.: A behavioral analysis of computational models of visual attention. Int. Journal of Computer Vision 73(2), 159–177 (2007)
35. Sternberg, R.J.: Cognitive Psychology. Wadsworth Publishing (2005)
36. Sun, R.: Desiderata for cognitive architectures. Philosophical Psychology 17(3), 341–373 (2004)
37. Watson, J.S.: Smiling, cooing, and "the game". Merrill-Palmer Quarterly 18(4), 323–339 (1972)
38. Ziemke, T.: What's that thing called embodiment? In: Proceedings of the 25th Meeting of the Cognitive Science Society (July/August 2003)
39. Zukow-Goldring, P.: Assisted imitation: Affordances, effectivities, and the mirror system in early language development. In: Arbib, M.A. (ed.) Action To Language via the Mirror Neuron System. Cambridge University Press, Cambridge (2006)
Planning and Moving in Dynamic Environments
A Statistical Machine Learning Approach

Sethu Vijayakumar, Marc Toussaint, Giorgios Petkos, and Matthew Howard
School of Informatics, University of Edinburgh, Edinburgh EH8 9AB, UK
[email protected]
Technical University of Berlin, 10587 Berlin, Germany
[email protected]

Abstract. In this chapter, we develop a new view of problems of movement control and planning from a Machine Learning perspective. In this view, decision making, control, and planning are all considered as an inference or (alternatively) an information processing problem, i.e., a problem of computing a posterior distribution over unknown variables conditioned on the available information (targets, goals, constraints). Further, problems of adaptation and learning are formulated as statistical learning problems that model the dependencies between variables. This approach naturally extends to cases where information is missing, e.g., when the context or load needs to be inferred from interaction, or to the case of apprentice learning where, crucially, latent properties of the observed behavior are learnt rather than the motion being copied directly. With this account, we hope to address the long-standing problem of designing adaptive control and planning systems that can flexibly be coupled to multiple sources of information (be they of purely sensory nature or higher-level modulations such as task and constraint information) and can equally be formulated on any level of abstraction (motor control variables or symbolic representations). Recent advances in Machine Learning provide a coherent framework for these problems.
1 Introduction
It is sometimes asked why one should apply statistical or probabilistic methods in robotics when the environment is to a large degree deterministic. Are classical deterministic methods not perfectly suited? A first reply might point out that any realistic mechanical system also includes noise and that statistical approaches will increase robustness. While this is certainly true, we would emphasize another motivation for statistical or probabilistic approaches: the computational or information processing perspective. The control and preparation of movement, the planning of behavior, and the interaction with the natural environment all, to a large extent, incorporate and exploit many pieces of information about the current state. This information either stems from direct observation (sensors), from internal estimations (like
152
S. Vijayakumar et al.
the position of a tracked but occluded object, or the self-location in an internal map), or from internally induced constraints (such as goals or costs). In the most general terms, decision making or planning involves combining all this information optimally in order to gain some perspective about the consequences of possible decisions. In fact, a most significant characteristic of the central nervous system (CNS) is that many pieces of information are represented in parallel on many different levels of representation. It seems that much "evolutionary effort" was put into developing such representations, including properties such as their topographic organization. Without going into details about how all these pieces of information are processed to produce a unified percept, it generally seems clear that they are coupled in the sense of sustaining some consistency and enabling transformations between each other. The discussion of whether neural processes can faithfully be abstracted to implement basic information processing mechanisms such as inference is beyond the scope and aim of this paper. Machine Learning (a field that in large parts has developed from the early "Neural Information Processing" approaches) has developed methods that abstract exactly this general kind of information processing. Thus, another answer to the introductory question is that the probabilistic and statistical framework allows us to grasp how to represent information, to fuse and process information, to cope with missing information, and to learn statistical dependencies that provide the basis for processing information and performing inference.

Machine Learning approaches explicitly distinguish between separate representations of information (e.g., random variables) and processes of inference. A crucial question here is which representations of information are relevant for a given task, i.e., which (latent) random variables we should introduce. This can be seen in close analogy to the question of which representations are inherent in the brain, i.e., what information is represented by different areas. Within this view, we may distinguish four levels of problems:

– What are the variables (or representations) that are useful for the current situation, tasks or environment?
– How do these depend upon each other?
– What are the current goals, targets or constraints?
– What are their implications on the free variables, such as the immediate actions or motor commands?

Each of these levels requires some methods for:

1. Development/extraction of suitable representations (e.g., expert knowledge or some algorithmic procedure).
2. Regression/modelling techniques to learn local contingencies (models) between coupled variables.
3. Conditioning and biasing of the random variables.
4. Inference.
The outline of this chapter follows these different levels. In Sec. 2, we first reformulate classical control schemes in terms of Bayesian inference, as an example of steps 3 and 4. This approach can nicely be extended to movement planning problems, going beyond classical solutions. In Sec. 3, we first review efficient approaches for statistical learning (step 2) in the context of control. Extending this to the development of 'internal' representations under multiple contexts (in Machine Learning terms, the learning of latent variable models) is tackled next. We will demonstrate a basic example of extracting a latent context variable from data; indeed, this can be thought of as tackling the scenario of missing information, e.g., the explicit mass information of a load to be manipulated. Finally, in Sec. 4, we consider the application of statistical learning in the context of imitation or apprentice learning. From demonstrated movement data, we can learn parameters of the movement prior (the so-called null-space potential). By learning this objective function that is implicit in the observed behavior, we can potentially reproduce the behavior on a functional level, as opposed to merely copying a motor trajectory, and exploit this in order to generalize to novel tasks and previously unseen constraints.
2 Planning and Control
In this section, we will contrast the optimization perspective of movement control and planning with the inference or information processing perspective: technically, by relating an objective function to a corresponding probabilistic model. The reader might object that there is no difference anyway, because any cost function can surely be translated into a likelihood function such that the optimal solution in the first case is the maximum a posteriori (MAP) solution in the second. However, the optimization and the inference views differ conceptually and technically, as we shall see next.

First, the information processing view always computes distributions (posteriors) over the desired variables, whereas the optimization view computes only a single solution at every stage. If only the MAP solution is of interest, this makes no difference. However, when more information becomes accessible after the optimization or computation, the information processing view can integrate this new information into the previous calculations, whereas the optimization view would have to recompute the solution. In natural environments, the incremental and stochastic nature of information over time can thus be exploited better in the inference framework. In more general terms, the inference procedure generates a distribution which is open to being coupled to further information or modulated from a "higher level"; the optimization view, in contrast, generates a single solution for which it is unclear how to couple it to new information, adapt it or modulate it.

Second, the technical tools used for optimization versus inference differ significantly, e.g., quadratic programming versus belief propagation. In practice, either paradigm has associated pros and cons with respect
to computational cost, robustness, and ease of implementation, as we will explore next.

2.1 Redundant Control as an Optimization Problem
It turns out that a very general class of kinematic and dynamic control schemes (e.g., those described in [1]) can be understood as special-case solutions to a simple optimization problem. Consider the cost function

    L = \|x - Jq\|^2_{C^{-1}} + \|q\|^2_W + 2 h^\top q ,    (1)

where \|q\|^2_W = q^\top W q describes the Mahalanobis norm. The minimum of this function can easily be found by taking the derivative:

    \frac{\partial L}{\partial q} = -2 (x - Jq)^\top C^{-1} J + 2 q^\top W + 2 h^\top = 0

    \Rightarrow \quad q = (J^\top C^{-1} J + W)^{-1} (J^\top C^{-1} x - h)    (2)

    \qquad\qquad = W^{-1} J^\top (J W^{-1} J^\top + C)^{-1} x - [I - W^{-1} J^\top (J W^{-1} J^\top + C)^{-1} J] \, W^{-1} h .    (3)
The last equation uses the Woodbury identity

    (J^\top C^{-1} J + W)^{-1} J^\top C^{-1} = W^{-1} J^\top (J W^{-1} J^\top + C)^{-1} ,    (4)
which generally holds for any positive definite C and W. In Machine Learning, this minimization is well known as ridge regression. In the more typical notation, rewritten as \|y - X\beta\|^2_{C^{-1}} + \lambda \|\beta\|^2, we are given an input data matrix X and an output data vector y, with the aim of finding the regressor \beta. The first term of L corresponds to the squared error (under noise covariance C), while the second term is a regularizer (or stabilizer) as introduced by Tikhonov, taking the form of a ridge \lambda \|\beta\|^2. For uniform output covariance C = I and h = 0, (4) tells us that the solution to ridge regression is \beta^* = X^\top (X X^\top + \lambda I)^{-1} y.

In the case of redundant kinematic control (resolved motion rate control), one would like to compute joint velocities \dot q which fulfill a lower-dimensional task constraint \dot x = J \dot q while minimizing the absolute joint velocity \|\dot q\|^2_W and following the negative gradient h = -\nabla H(q). The solution is easily found from (4) in the limit C \to 0 (i.e., imposing zero variance in the task constraint):

    \dot q = J_W \dot x + (I - J_W J) W^{-1} \nabla H(q) ,    (5)

where J_W = W^{-1} J^\top (J W^{-1} J^\top)^{-1} is called the pseudo-inverse of J under the metric W.

In hierarchical motion rate control, one has a number of task variables x_1, .., x_m with fixed priority. That is, the first constraint \dot x_1 = J_1 \dot q needs to be fulfilled exactly, the second constraint \dot x_2 = J_2 \dot q only within the nullspace of the first constraint, and so on. The classical solution is given in [2,3]. It turns out that this scheme
can also be derived from the general solution (4) when taking the cascaded limit C \to 0, where C is in block form and each block approaches zero with a different rate.

Finally, in dynamic control one would like to compute a control signal u which generates an acceleration \ddot q = M^{-1}(u + F) such that a general task constraint b = A \ddot q remains fulfilled, while also minimizing the absolute norm \|u\|^2_W of the control. Reformulating the constraint as b - A M^{-1}(u + F) = 0 and taking the limit C \to 0 (zero variance in the constraint), we can consult (4) and get the solution

    u = J_W (b - J F) - (I - J_W J) W^{-1} h ,  with  J = A M^{-1} ,    (6)
which, for h = 0, is identical to Theorem 1 in [1]. Peters et al. [1] discuss important versions of this control scheme in detail.
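To make the optimization view concrete, the following minimal NumPy sketch (our illustration, not part of the original chapter; all dimensions and values are arbitrary) computes the regularized solution (2) and verifies numerically that the Woodbury form (3) gives the same answer:

```python
import numpy as np

def redundant_control(x, J, C, W, h):
    """Minimizer of L = ||x - Jq||^2_{C^-1} + ||q||^2_W + 2 h^T q, eq. (2)."""
    Cinv = np.linalg.inv(C)
    return np.linalg.solve(J.T @ Cinv @ J + W, J.T @ Cinv @ x - h)

rng = np.random.default_rng(0)
n, d = 5, 2                          # joint and task dimensions (illustrative)
J = rng.standard_normal((d, n))      # task Jacobian
x = rng.standard_normal(d)           # desired task-space value
C = 1e-6 * np.eye(d)                 # small task tolerance (the C -> 0 limit)
W = np.eye(n)                        # joint-space metric
h = rng.standard_normal(n)
q = redundant_control(x, J, C, W, h)

# Woodbury form, eq. (3), yields the same solution:
Winv = np.linalg.inv(W)
K = Winv @ J.T @ np.linalg.inv(J @ Winv @ J.T + C)
q_woodbury = K @ x - (np.eye(n) - K @ J) @ Winv @ h
assert np.allclose(q, q_woodbury)
```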
2.2 Redundant Control as an Inference Problem
There exists a straightforward reinterpretation of the above control schemes in a Bayesian setting. At first sight, it might seem that nothing is gained from this reformulation. However, as we mentioned earlier, the information processing view can make a significant practical and theoretical difference compared to the optimization view. For instance, the next section will derive an extension of the Bayesian control scheme to trajectory planning problems which departs from classical trajectory optimization algorithms.

Exploiting the fact that all terms in the loss function L = \|x - Jq\|^2_{C^{-1}} + \|q\|^2_W are quadratic, we can formulate an equivalent likelihood based on Gaussian random variables as follows. Let q_t be the n-dimensional robot configuration at time t and x_t some d-dimensional task variable. When discretizing time, we are interested in computing an actuator motion \delta q = q_{t+1} - q_t of the robot given a desired task step \delta x = x_{t+1} - x_t. Instead of considering the deterministic first-order approximation \delta x = J \delta q, we assume that, at the particular current state, we have

    P(\delta x \mid \delta q) = N(\delta x, J \delta q, C) ,    (7)
where N(x, a, C) is the Gaussian density function over x with mean a and covariance matrix C. In the limit C \to 0, this corresponds to the deterministic first-order approximation; non-zero C allows for a tolerance w.r.t. fulfilling the task constraint. We compute the posterior over \delta q conditioned on the desired step \delta x,

    P(\delta q \mid \delta x) \propto P(\delta x \mid \delta q) \, P(\delta q) .    (8)

For this we need to specify a prior, which we choose as P(\delta q) = N(\delta q, 0, W^{-1}). Note that here we identify the precision matrix W directly with the regularizer metric W in (1). Using

    P(\delta x \mid \delta q) = N(\delta x, J \delta q, C) = N(J \delta q, \delta x, C) \propto N(\delta q, J^{-1} \delta x, J^{-1} C J^{-\top}) ,    (9)

we derive the posterior over \delta q as

    P(\delta q \mid \delta x) = N(\delta q, A^{-1} a, A^{-1}) ,  with  a = J^\top C^{-1} \delta x ,  A = W + J^\top C^{-1} J .    (10)

Clearly, the MAP motion

    \hat{\delta q} = A^{-1} a = (W + J^\top C^{-1} J)^{-1} J^\top C^{-1} \delta x    (11)
The example of redundant control is perhaps too minimalistic to appreciate the difference of the inference and the optimization view since, essentially, we only have one unconditioned random variable qt+1 . The Bayesian framework (which we call Bayesian Motor Control or BMC) produces a distribution over δq rather than only a single MAP solution – which is admittedly not crucial when looking only one step ahead. However, the full impact of this formulation becomes evident when considering whole motion trajectories, as we will see in the next section. 2.3
2.3 Planning as an Inference Problem
The previous sections considered the classical problem of inverse kinematics within a single time slice. Given the instantaneous desired motions in the task variables \delta x, we computed the instantaneous posterior motion in actuator space \delta q. However, in many circumstances, a preparatory movement in nullspace is advantageous for achieving future task constraints. One may call this a nullspace planning problem, where planning is used to determine the current nullspace movement that best allows future task constraints to be fulfilled.

It is straightforward to extend the Bayesian control scheme to yield Bayes-optimal solutions to this trajectory planning problem: we now have a desired trajectory x(t), rather than only an instantaneous desired movement. Then, we compute the posterior over the full actuator trajectory q(t), rather than only
Fig. 1. Illustration of extended BMC: on m = 3 low-dimensional control variables x_i we have desired trajectories and associated tolerances. Here, x_3 is constrained to be kept in a constant range, while x_1 needs to converge to a target. Extended BMC computes the posterior distribution over movements in posture space conditioned on these constraints.
fwd-bwd iterations k      | ∫|q̇| dt  | E = ∫ Σ_i e_{i,t} dt
½ (reactive controller)   | 13.0124  | 0.0873
1                         | 7.73616  | 0.1366
1½                        | 7.70018  | 0.0071
2                         | 7.68302  | 0.0027
5                         | 7.65795  | 0.0013
10                        | 7.62888  | 0.0012

Fig. 2. (Fig) Dynamic Bayesian Network for extended BMC. (Table) Trajectory length and control errors after different numbers of forward-backward iterations of extended BMC. k = ½ means a single forward pass and corresponds to the reactive forward controller using the single time slice BMC; k = 1 additionally includes a single backward smoothing.
over the instantaneous movement \delta q (refer to Fig. 1). We formalize this in terms of a Dynamic Bayesian Network (DBN), where the computation of the posterior over q(t) leads to a forward-backward inference procedure. Figure 2 shows the DBN for BMC under trajectory constraints. As for the single time slice case, we assume that

    P(q_t \mid q_{t-1}) = N(q_t, q_{t-1}, W^{-1}) ,    P(x_{i,t} \mid q_t) = N(x_{i,t}, \phi_i(q_t), \Sigma_i) .    (12)
We use Belief Propagation (BP) to describe the inference procedure (which could, more classically, be described as an iterative extended Kalman smoother). This simplifies the description of an iterative forward-backward scheme, which yields better results since the local linearizations of the \phi_i make a single forward-backward pass only approximate. We represent the current belief over q_t as a Gaussian \gamma_t(q_t) = N(q_t, c_t, C_t). This belief is factored into three messages: the forward message \alpha_t (from q_{t-1} \to q_t),
    \alpha_t(q_t) = \int_{q_{t-1}} P(q_t \mid q_{t-1}) \, \gamma_{t-1}(q_{t-1}) = N(a_t, A_t) ,   A_t = W^{-1} + C_{t-1} ,   a_t = c_{t-1} ,    (13)

the backward message \beta_t (from q_{t+1} \to q_t),

    \beta_t(q_t) = \int_{q_{t+1}} P(q_{t+1} \mid q_t) \, \gamma_{t+1}(q_{t+1}) = N(b_t, B_t) ,   B_t = W^{-1} + C_{t+1} ,   b_t = c_{t+1} ,    (14)

and the observation message \varrho_t (from x_{1:m,t} \to q_t),

    \varrho_t(q_t) = \prod_{i=1}^m P(x_{i,t} \mid q_t) = N(R_t^{-1} r_t, R_t^{-1}) ,
    r_t = R_t \hat q + \sum_i J_i^\top \Sigma_i^{-1} (x_{i,t} - \phi_i(\hat q)) ,   R_t = \sum_i J_i^\top \Sigma_i^{-1} J_i .    (15)
Here, we used a linearization of each \phi_i at \hat q (see below). We first initialize all \gamma's to be the uniform density (we use precision matrices), and \gamma_1 = N(q_0, 10^{-10}). The inference procedure then iterates, updating

    \gamma_t \leftarrow \alpha_t \, \varrho_t \, \beta_t ,   i.e.,   C_t^{-1} \leftarrow A_t^{-1} + B_t^{-1} + R_t ,   c_t \leftarrow C_t \, [A_t^{-1} a_t + B_t^{-1} b_t + r_t] ,    (16)

where the \alpha's, \beta's, and \varrho's are always recomputed using the above expressions, linearizing at the mean \hat q = c_t^{old} to compute the observation message (or at \hat q = a_t in the first forward iteration). The iterations are repeated, alternating between forward sweeps for t = 2, .., T and backward sweeps for t = T-1, .., 2. Next, we summarize some key properties of this framework:
1. The single time slice BMC is a special case of the extended BMC for T = 2 when not iterating the belief propagation for \gamma_{t=2}. Iterating BP makes a difference since the point \hat q of linearization of the kinematics moves towards the final mean x_2, and thus the result of BP converges to an exact step rather than the usual first-order approximation.
2. Extended BMC allows us to continuously adjust the tightness \Sigma_i^{-1} of the control variables depending on time. For instance, we can impose that a desired control trajectory x_i(t) is to be followed tightly in the early stage of the movement and only loosely in a later stage, or vice versa.
3. Extended BMC solves a generalization of redundant movement planning. Redundant planning implies that a future goal is only determined by an endeffector constraint x_T = x_T^* rather than a state space goal q_T = q_T^*. By adjusting the tightness (precision \Sigma^{-1}) of a control variable to be 0 for t = 1, .., T-1 but tight in the last time step t = T, we can reproduce the classical redundant planning problem. Our approach generalizes this in that we can have an arbitrary number of control variables and can adjust the control tightness, e.g., to introduce intermediate goals or intermediate boundary conditions.
4. Even when we require tightness in the controls over the full trajectory, extended BMC resolves the planning problem in the remaining nullspace.
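To illustrate the message-passing scheme (13)-(16), the following simplified NumPy sketch performs the iterated sweeps for a single task variable (m = 1). It is our reconstruction under stated simplifications, not the authors' implementation; in particular, it only runs repeated forward sweeps (which already use the neighbor beliefs on both sides), whereas the full procedure alternates forward and backward sweeps over all m task variables:

```python
import numpy as np

def extended_bmc_sweep(x_des, phi, jac, Sigma_inv, W_inv, q0, iters=3):
    """Iterated Gaussian message passing over a trajectory, eqs. (13)-(16).
    x_des[t]: desired task value at time t; phi(q): task map; jac(q): its
    Jacobian; Sigma_inv: task precision; W_inv: transition covariance."""
    T, n = len(x_des), len(q0)
    c = np.tile(q0, (T, 1))                   # belief means
    C = np.stack([np.eye(n) * 1e10] * T)      # broad initial covariances
    C[0] = np.eye(n) * 1e-10                  # gamma_1 tight around q0
    for _ in range(iters):
        for t in range(1, T):
            A = W_inv + C[t - 1]              # forward message (13)
            prec = np.linalg.inv(A)
            rhs = np.linalg.solve(A, c[t - 1])
            if t < T - 1:                     # backward message (14)
                B = W_inv + C[t + 1]
                prec = prec + np.linalg.inv(B)
                rhs = rhs + np.linalg.solve(B, c[t + 1])
            qh = c[t]                         # linearization point
            J = jac(qh)                       # observation message (15)
            R = J.T @ Sigma_inv @ J
            r = R @ qh + J.T @ Sigma_inv @ (x_des[t] - phi(qh))
            C[t] = np.linalg.inv(prec + R)    # belief update (16)
            c[t] = C[t] @ (rhs + r)
    return c, C
```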
Fig. 3. Solution to a complex reaching movement under balance and collision constraints. The target for the right finger is illustrated as a black dot. Views from 3 different perspectives.
2.4 Examples for Extended BMC
We now consider the problem illustrated in Fig. 3 of reaching under an obstacle while keeping balance. This time, the desired motion is not defined by reactive dynamical systems but rather by trajectories x_{i,1:T} for each control variable x_i. We defined these to be linear interpolations from the start state to the target with T = 100, while keeping the precisions \nu_{1:3} = (10^3, 10^3, 10^6) constant over time.

Figure 2 (Table) displays the trajectory length and control errors after various numbers of forward-backward iterations. Note that the first line, k = ½ (only one forward pass), corresponds to the reactive application of the single time-slice BMC and thereby represents a classical forward controller. For instance, if we fix the total computational cost to three times that of a simple forward controller (k = 1½ iterations), we find an improvement of 40.8% w.r.t. trajectory length and 91.9% w.r.t. control errors for extended BMC versus the forward controller. The reason for these improvements is that the forward controller chooses an inefficient path which first moves straight towards the obstacle and later needs longer motions to circumvent it. In contrast, the probabilistic smoothing of extended BMC leads to early nullspace movements (leaning to the right) which make the later circumvention of the obstacle more efficient.

2.5 Conclusion
Bayesian methods are often used for sensor processing and fusion. We have investigated how Bayesian methods can also be applied at the motor level, by abstracting planning and constraint satisfaction as an optimal information processing paradigm. We proposed BMC to infer a single posterior body motion from multiple concurrent primitive controllers. Single time-slice BMC includes the standard and the prioritized inverse kinematics as a limit. However, we showed that BMC can lead to significant improvements in terms of trajectory length when compared to a strictly prioritized IK, and can cope with singularity constraints (where rank(J_i) < n_i) and conflicting or infeasible constraints (where the nullspace projection N_i, and thereby the nullspace-projected Jacobian \bar J_i, become singular). Further, BMC allows us to gradually change the required tightness of a constraint over time and to generate compromises between constraints. Extended BMC uses belief propagation to compute the posterior body trajectory conditioned on desired
control trajectories, and can be related to optimal control theory via Todorov's duality [4]. The total computational cost of a single backward sweep is equal to that of a single forward control sweep. We have demonstrated that this type of probabilistic smoothing can yield significant improvements in trajectory length and control errors over a forward controller. In effect, extended BMC generates early nullspace movements which make later motions more efficient. In this paper, we constrained ourselves to Gaussian belief representations, which also implies that there exists a dual quadratic cost minimization formulation. However, using existing inference techniques for other belief representations (e.g., particles, exponential families, or mixtures of Gaussians), BMC can be extended to allow for a much wider range of constraint shapes.

BMC is one piece in a larger endeavor to apply Machine Learning techniques in the realm of robotics. Robotic control and behavior generation crucially depend on integrating information from as many sources as possible (sensors, prior knowledge, knowledge about constraints and future goals) to decide on current actions. The Bayesian framework is an ideal candidate for this. Our goal is to understand the problem of behavior and motion generation as a single coherent inference process in a structured model containing many sources of information and constraints. By integrating BMC into a larger framework such as Grupen's control paradigm [5] and combining it with more complex models of motor primitives [6,7,8], we will advance towards a complete, coherent Bayesian framework for behavior and motion generation.
3 Learning Dynamics under Multiple Contexts
In the previous section, we argued that principal problems of movement generation and planning can be framed as information processing problems, given information about goals, targets or constraints, and assuming knowledge about the dependency of random variables. This relationship between the variables of interest is often the dynamics of the plant or robot, a relation that defines the state transitions for various actuation torques. In many situations, exact analytical derivation of the robot dynamics is not feasible. This can be due either to the complexity of the system or to lacking or inaccurate knowledge of the physical properties of the robot. Moreover, the dynamics of the robot often depend on a varying unobserved external context and exhibit non-stationarity. An example of unobserved external context that results in non-stationary dynamics is the work load of a manipulator: the resultant dynamics of the robot arm change as it manipulates objects with different physical properties, e.g., mass or mass distribution.

Adaptive control and learning methods can be used in cases of non-stationary dynamics. However, if the dynamics switch back and forth, e.g., when manipulating a set of tools for executing various tasks, classic adaptive control methods are inadequate since they unlearn the dynamics of previously experienced contexts and must relearn them when they recur. Furthermore, there may be large errors and instability during the period of readaptation.
Fig. 4. Graphical model representing the (a) forward and (b) inverse model
In the next section, we will look at methods from the domain of statistical machine learning for efficiently acquiring dynamics from movement data; these are then extended to representing, learning and switching between multiple models, each of which is appropriate for a different context.

3.1 Statistical Learning of Forward and Inverse Dynamic Models
Anthropomorphic robotic systems have complex kinematic and dynamic structure, significant non-linearities and hard-to-model non-rigid body dynamics; hence, deriving reliable analytical models of their dynamics can be cumbersome and/or inaccurate. We take the approach of learning dynamics for control from movement data. At time step t, let \Theta_t = (q_t, \dot q_t) be the state of the system (which includes the position and velocity components) and \tau_t the control signal. A deterministic forward model f describes the discrete-time system dynamics as

    \Theta_{t+1} = f(\Theta_t, \tau_t) .    (17)

Learning a forward model f of the dynamics is useful for predicting the behavior of the system. However, for control purposes, an inverse model is needed. The inverse model g maps transitions between states to the control signal that is needed to achieve the transition:

    \tau_t = g(\Theta_t, \Theta_{t+1}) .    (18)
A probabilistic graphical model representation of the forward and inverse model is shown in Fig. 4(a) and Fig. 4(b), respectively. The inverse model of Fig. 4(b) can be used in many control settings, one of the most common being as part of a composite controller. Given a desired trajectory \Theta^*_{1:T}, the composite control law computes the command as

    \tau_t = g(\Theta^*_t, \Theta^*_{t+1}) + K_p (q^*_t - q_t) + K_u (\dot q^*_t - \dot q_t) ,    (19)

where K_p and K_u are gain matrices. This is a combination of a feedforward command that uses the inverse model and a feedback command that takes into account the actual state of the system. The more accurate the inverse model is,
Fig. 5. Robotic platforms: (a) SARCOS dextrous arm (b) DLR LWR III arm
the lower the feedback component of the command will be; i.e., the magnitude of the feedback command can be used as a measure of the accuracy of the inverse model. Furthermore, good predictive models allow us to use low feedback gains, resulting in a highly compliant system without sacrificing the speed and accuracy of the movements.

Typically, in robotic systems with proprioceptive and torque sensing, at each time step t we "observe" a state transition and an applied torque signal, summarized in the triplet (\Theta_t, \Theta_{t+1}, \tau_t); i.e., we have access to the true applied control command (which was generated via composite control). To learn the inverse dynamics, we need a non-linear, online regression technique. We use Locally Weighted Projection Regression (LWPR) [9], an algorithm which is extremely robust and efficient for incremental learning of non-linear models in high dimensions. An LWPR model uses a set of linear models, each of which is accompanied by a locality kernel (usually a Gaussian) that defines the area of validity of the linear model. For an input x, if the output of the k-th local model is written as y_k(x) and the locality kernel activation is w_k(x), the combined prediction of the LWPR model, \hat y, is

    \hat y(x) = \frac{1}{W} \sum_k w_k(x) \, y_k(x) ,   W = \sum_k w_k(x) .    (20)

The parameters of the local linear models and locality kernels are adapted online, and local models are added on an as-needed basis. Furthermore, LWPR provides statistically sound input-dependent confidence bounds on its predictions and employs Partial Least Squares (PLS) to deal with high-dimensional inputs. For more details about LWPR, see [9]; for an efficient implementation, refer to [10]. The role of LWPR in the probabilistic inverse model of Fig. 4 can be summarized in the equation
    P(\tau \mid \Theta_{t+1}, \Theta_t) = N(\phi(\Theta_{t+1}, \Theta_t), \sigma(\Theta_{t+1}, \Theta_t)) ,    (21)

where \phi(\Theta_{t+1}, \Theta_t) is a learned LWPR regression mapping state transitions to torques. Here, we have two options for choosing the variance: (1) we can assume a fixed noise level independent of the context and the input, e.g., a maximum likelihood estimate; or (2) we can use the confidence bounds provided by each LWPR model, which also depend on the current input (\Theta_{t+1}, \Theta_t); this will give higher noise levels in areas where little data has been seen. We will test both cases in our experiments. Please see [9] for more details on LWPR and the input-dependent variance estimate.
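For intuition, here is a toy sketch (ours) of the prediction rule (20) with Gaussian locality kernels; the actual LWPR library [10] additionally adapts the kernels online and uses PLS projections internally:

```python
import numpy as np

def lwpr_predict(x, centers, D, betas):
    """Combined prediction of local linear models, eq. (20).
    centers[k]: kernel center; D: kernel metric (shared here for simplicity);
    betas[k]: local linear coefficients with an intercept term appended."""
    x_aug = np.append(x, 1.0)                                # homogeneous input
    w = np.array([np.exp(-0.5 * (x - c) @ D @ (x - c)) for c in centers])
    y = np.array([beta @ x_aug for beta in betas])           # local predictions
    return (w @ y) / w.sum()                                 # weighted average
```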
Fig. 6. Results on learning stationary dynamics on a 3 DOF simulated robot arm. Left: test error. Middle: contribution of the error-correcting feedback command. Right: tracking error.
Experiments in Learning Dynamics for a Single Context. We verify the ability to learn the inverse model online with LWPR and show that the model can successfully be used for control. We demonstrated this on a simulated 3 DOF robot arm (simulations performed using ODE and OpenGL) as well as on the 7 DOF anthropomorphic SARCOS robot arm (Fig. 5(a)) and, recently, on the DLR LWR III arm (Fig. 5(b)). Here, the statistics are accumulated and shown briefly for the simulated arm. The task of the arm was to follow a simple trajectory planned in joint angle space, consisting of a superposition of sinusoids with different phase shifts for each joint:

    \theta_i^*(t) = a_i \cos(\alpha_i \tfrac{2\pi}{T} t) + b_i \cos(\beta_i \tfrac{2\pi}{T} t) ,    (22)

where T = 4000 is the total length of the target trajectory, a_i, b_i \in [-1, 1] are different amplitudes, and \alpha_i, \beta_i \in \{1, .., 15\} parameterize different frequencies. The trajectory was repeated for 20 iterations: during the first four iterations, pure feedback (PD) control was used to control the arm, while for the next 16 iterations, a composite controller using the inverse model being learned was used. The gains were lowered as training proceeded. The procedure was executed ten times, for ten different contexts, to accumulate statistics. Different contexts are simulated by attaching objects of different mass to the last link of the arm.

Figure 6(left) plots the normalized mean squared error between the torques predicted by the LWPR model and the true torques on the held-out test data; it shows a quick drop as training proceeds and settles at a very low value averaged over all trials. The contribution of the error-correcting feedback command relative to the feedforward command (see Fig. 6(middle)) is low, vouching for the accuracy of the learnt model while it is being used for control. Furthermore, the tracking error (Fig. 6(right)) is very low and improves significantly when we switch to composite control. For detailed statistics on the online dynamics learning of the 7 DOF SARCOS robot arm and tracking results on a figure-eight task, readers are referred to [9].
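A hypothetical reconstruction (ours; all constants illustrative, and g stands in for the learned inverse model) of the target trajectory (22) and the composite command (19):

```python
import numpy as np

T = 4000
rng = np.random.default_rng(1)
a, b = rng.uniform(-1, 1, 3), rng.uniform(-1, 1, 3)           # amplitudes
alpha, beta = rng.integers(1, 16, 3), rng.integers(1, 16, 3)  # frequencies
t = np.arange(T)
theta_star = (a[:, None] * np.cos(np.outer(alpha, 2 * np.pi * t / T)) +
              b[:, None] * np.cos(np.outer(beta, 2 * np.pi * t / T)))

def composite_command(g, state_des, state_des_next, q_des, qd_des, q, qd, Kp, Ku):
    # Feedforward from the inverse model plus PD feedback, eq. (19)
    return g(state_des, state_des_next) + Kp @ (q_des - q) + Ku @ (qd_des - qd)
```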
3.2 Inference of a Discrete Latent Context Variable
The multiple model paradigm copes with the issue of non-stationary dynamics by using a set of models, each of which is specialized to a different context. A schematic of a generic multiple model paradigm is shown in Fig. 7. The observed dynamics of the system are compared to the prediction of each learned model
Fig. 7. Schematic of a multiple model paradigm
to identify the current context. The context estimates are used for selecting the model to use for control and for training. All existing multiple model paradigms roughly follow this same scheme. The main issues that have to be tackled when using multiple discrete models for control are:

1. Inferring the current context, in order to select the appropriate model to use for control.
2. Inferring the current context, in order to select the appropriate model to train with the experienced data.
3. Estimating the appropriate number of models (possibly using a novelty detection mechanism).

Context Estimation. It is appropriate to formulate context estimation in a probabilistic setting, to account for inaccuracies of the learnt models as well as to handle transitions. Apart from dealing with uncertainty and context estimation in a principled way, the probabilistic formulation can be useful for novelty detection: if experienced data is not likely under any of the learned models, it can be classified as novel and used to train a new model. While in most multiple model approaches context estimation is performed by comparing the predictions of a forward model to the dynamic behaviour of the system, in the absence of redundant actuators one can use the inverse model as well, an approach we will follow here. The graphical model in Fig. 8(a) represents a set of inverse models corresponding to a specific number of contexts. The hidden contextual variable c_t is discrete and indexes the different models. The inverse model in this formulation can be written as

    P(\tau \mid \Theta_{t+1}, \Theta_t, c_t = i) = N(\phi^{(i)}(\Theta_{t+1}, \Theta_t), \sigma^{(i)}(\Theta_{t+1}, \Theta_t)) ,    (23)
where \phi^{(i)} is the command predicted by the LWPR model corresponding to the i-th context and \sigma^{(i)} is some estimate of the variance, which again can either be set to a predetermined constant or be based on the input-dependent confidence bounds provided by LWPR. Also, we can either assume knowledge of the prior probability of contexts or assume that different contexts have equal prior probabilities P(c_t). Under this probabilistic formulation, context estimation is simply inferring the posterior of c_t given a state transition and the command that resulted in this transition:

    P(c_t = i \mid \Theta_t, \Theta_{t+1}, \tau_t) \propto P(\tau_t \mid \Theta_t, \Theta_{t+1}, c_t = i) \, P(c_t = i) .    (24)
Context estimates are very sensitive to the accuracy of the inverse models. They can be improved by acknowledging that contexts do not change too frequently: we can introduce a temporal dependency between contexts, P(c_{t+1} | c_t), with a transition probability that reflects our prior belief about the switching frequency, to achieve much more robust context estimation. The graphical model can be reformulated as the Dynamic Bayesian Network shown in Fig. 8(b) to achieve this. Application of standard Hidden Markov Model (HMM) techniques is then straightforward, using (24) as the observation likelihood in the HMM given the hidden state c_t = i. A low transition probability penalizes too-frequent transitions, and filtering, smoothing or Viterbi alignment produces more stable context estimates.
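A minimal sketch of one HMM filtering step over the discrete context (ours; the sticky transition prior below is an assumed example value, not taken from the chapter):

```python
import numpy as np

def filter_context(prev_belief, trans, torque_liks):
    """One step of HMM filtering over the discrete context c_t.
    prev_belief[i] = P(c_{t-1} = i | data so far);
    trans[i, j] = P(c_t = j | c_{t-1} = i);
    torque_liks[i] = P(tau_t | Theta_t, Theta_{t+1}, c_t = i), cf. eq. (24)."""
    predicted = trans.T @ prev_belief      # temporal prior over c_t
    posterior = predicted * torque_liks    # combine with observation likelihood
    return posterior / posterior.sum()

n = 6                                      # number of contexts
trans = np.full((n, n), 0.001 / (n - 1))   # rare, uniform switching
np.fill_diagonal(trans, 0.999)             # strongly prefer staying put
belief = np.full(n, 1.0 / n)
# belief = filter_context(belief, trans, torque_liks)   # applied per time step
```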
Fig. 8. Multiple models and hidden contexts: (a) graphical representation of the hidden latent context within the dynamics; (b) DBN capturing the temporal dependencies between the latent contexts.
Data Separation for Learning. The problem of bootstrapping the context separation from context-unlabeled data is very similar to clustering with a mixture of Gaussians. In fact, the context variable can be interpreted as a latent mixture indicator, and each inverse model contributes a mixture component, giving rise to a mixture model of the form

    P(\tau_t \mid \Theta_t, \Theta_{t+1}) = \sum_i P(\tau_t \mid \Theta_t, \Theta_{t+1}, c_t = i) \, P(c_t = i) .

Mixtures of Gaussians are usually trained using Expectation-Maximization (EM), where initially the data are labeled with random responsibilities (i.e., assigned randomly to the different mixture components). Then every mixture component is trained on its assigned (weighted) data (M-step), and afterwards the responsibilities for each data point are recomputed by setting them proportional to the likelihoods under each mixture component (E-step). Iterating this procedure, each mixture component specializes on different parts of the data, and the responsibilities encode the learned cluster assignments.

We apply the EM algorithm for separating the data and learning the models. In our case, the likelihood of a data triplet (\Theta_t, \Theta_{t+1}, \tau_t) under the i-th inverse model is P(\tau_t \mid \Theta_t, \Theta_{t+1}, c_t = i), which is a Gaussian that can have either fixed variance or variance given by LWPR's confidence bounds. Learning the transition probabilities from a sequence of observations is also straightforward using EM. In particular, the probabilities p(c_t, c_{t+1} \mid \Theta_{1...T}) for t = 1...T-1 need to be calculated (E-step), a straightforward problem in HMM inference, and from these the relative frequencies p(c_{t+1} \mid c_t) can be easily estimated (M-step). As usual, the procedure is iterated: p(c_t, c_{t+1} \mid \Theta_{1...T}) is computed using some values for the transition probabilities (one could initially set all transitions to be equally probable), then p(c_{t+1} \mid c_t) is estimated, then p(c_t, c_{t+1} \mid \Theta_{1...T}) is computed again using the estimated transition probabilities, and so on, until either a maximum number of iterations is reached or some other criterion is met, e.g., the likelihood of the observed data stops increasing. We will see examples of this procedure in the next section.
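A schematic E-step for this procedure might look as follows (our sketch; the M-step, retraining each LWPR model on responsibility-weighted data, is omitted):

```python
import numpy as np

def e_step(likelihoods, prior):
    """Responsibilities for context-unlabeled triplets (Theta_t, Theta_t+1, tau_t).
    likelihoods[t, i] = P(tau_t | Theta_t, Theta_t+1, c_t = i) under model i;
    prior[i] = P(c_t = i). Returns the posterior over contexts per data point."""
    resp = likelihoods * prior
    return resp / resp.sum(axis=1, keepdims=True)
```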
Experiments with Multiple Discrete Models. The context estimation, transition probability learning, and data separation methods suggested in the previous sections were tested on the simulated arm. Here, different contexts are simulated by varying the mass on the last link of the manipulator. Random switches between six contexts were performed in the simulation: at every time step we switch to a random context with probability 0.001 and stay in the current context otherwise. We have two classes of experiments, one without HMM filtering of the contextual variable and one with it. Also, we have two choices for the variance of the observation model: a constant (set to the MSE on the test data) or the more principled confidence bounds provided by LWPR. For the moment, we assume that the correct transition probabilities of the temporal model are known.

The simulation was run for 10 iterations. We performed five different runs of the simulation, switching between six different contexts each time, and we start by examining the accuracy of our probabilistic context estimation methods while not using the context estimates for control (a PD controller was used). The percentage of accurate online context estimates for the four cases, averaged over the five trials, can be seen in Fig. 9(a) (error bars are obtained from the five different trials). Figure 9(b) gives an example of how our best context estimation method, HMM filtering using LWPR's confidence bounds, performs when used for online context estimation and control. The context estimate sometimes lags a few time steps behind a context switch, a natural effect of online filtering (as opposed to retrospective smoothing). The context estimates were then used online for selecting the model that provides the feedforward commands. Figure 9(c) shows the percentage of accurate online context estimates for the four cases, again averaged over the five trials. The performance of online context estimation and control is close to the control performance we achieved for the single context, displayed in Fig. 6.

Furthermore, we investigated the bootstrapping of data separation and model learning. We use the temporal model, as it has been shown to give far superior results for context estimation and control, and thus we also investigate learning of the transition probabilities. Again, when generating the data, we switched between two different contexts with probability 0.001 at each time step; however, we now do not use the correct transition probabilities in either inference (E-step) or learning (M-step). We first collected a batch of context-unlabeled data from 5 cycles through the target trajectory, where the arm was controlled by pure feedback PD control. The EM procedure for data separation and learning of transition probabilities (Sec. 3.2) was then applied. As mentioned before, using the confidence bounds for the noise estimates of the observation (inverse) model from the beginning of the EM procedure does not usually work and makes all models collapse into the same model. Using the maximum likelihood estimates works much better; however, this still does not give very good results. The problem lies in the fact that a local learning method is used: data are separated correctly locally but not globally, across the input space. This is demonstrated in Fig. 10(a). A possibility would be to try to regroup the local models to maximize the smoothness of the learned model; however, we show here that it is possible to
Fig. 9. Online context estimation. (a) Context estimation accuracy using the different estimation methods, without using the context estimates for control. (b) Example of random context switches and their estimates over time, using HMM filtering. (c) Context estimation accuracy when the context estimates are used for control.
achieve perfect data separation by modifying the EM algorithm slightly. First, we note that if we manage to increase the noise only in the areas where there is mixing between local models that actually belong to different contexts, the posterior of the datapoints in that area will switch more slowly and less frequently. This increase of noise can be achieved using LWPR's confidence bounds: the confidence bounds increase when there is a sudden change in the model's prediction. Thus, we run the EM procedure using a maximum likelihood estimate for the inverse model noise until the data is well separated locally, and then switch to using the confidence bounds. This trick has been sufficient to solve the problem in some cases, but not always. If, at the point where we start to use the confidence bounds, there are very large or too many areas that need to be swapped between LWPR models, then the procedure may still get stuck. The way to solve this problem is to also switch from using smoothed estimates in the E-step to using filtered estimates. Together with the increased noise levels on the edges of the areas that need to be swapped, this effectively makes the areas that are not grouped correctly narrower and narrower in each iteration of EM. This modified EM procedure was tested, with perfect data separation always being achieved. Figure 10(b) displays a typical evolution of the data separation; switching to the confidence bounds and the filtered instead of the smoothed estimates happens at iteration 20. The transition probabilities are also estimated during the EM procedure: the estimated probabilities are very close to the actual ones, i.e., 0.999 for staying in the same context and 0.001 for switching.
Fig. 10. (a) The solid lines show the predictions of the inverse models for the first joint on the training data if the models had been trained with perfectly separated data. The dotted lines show the predictions of the models generated by the automated separation procedure. Data separation works locally but not globally. (b) The evolution of the data separation from unlabeled data over iterations of the EM procedure. The first column displays the initial random assignment of datapoints to contexts; the last column displays the correct context for each datapoint. The columns in between display the most likely context for each datapoint according to the currently learned models at several iterations of the EM procedure.
3.3 Augmented Model for Continuous Contexts
The multiple model paradigm has several limitations. First of all, the right number of models needs to be known or estimated; estimating the number of contexts from data alone (using some model selection procedure) is a non-trivial problem. Realistically, novel contexts appear quite often, and to cope with this a novelty detection mechanism is needed. However, even with a very robust novelty detection mechanism, we may end up with a very large number of models, since in most scenarios the number of possible contexts is infinite. Moreover, we would like to generalize between contexts, and most multiple model paradigms do not cope well with this. All these issues can be circumvented if the set of models is replaced with a single model that takes appropriate continuous hidden contextual variables as additional input, i.e., instead of a set of models g_i corresponding to different contexts, a single inverse model G is used:

    \tau_t = G(\Theta_t, \Theta_{t+1}, c_t) .    (25)

Here, c_t is not a discrete variable that indexes different models, but a set of continuous variables that provides information about the context. The probabilistic model of the inverse dynamics is then
    P(\tau \mid \Theta_t, \Theta_{t+1}, c_t) = N(G(\Theta_t, \Theta_{t+1}, c_t), \sigma(\Theta_t, \Theta_{t+1}, c_t)) .    (26)
One possibility for learning the augmented model is to follow the same procedure as in the discrete case, i.e., to apply an EM-like procedure. Using the same temporal dependency formalization, this results in a state space model. Learning in nonlinear state space models is discussed in [11], [12] and [13]. However, the relationship of the contextual variables to the output of the augmented model could be arbitrary, making learning in such a setting a very difficult task. It is therefore imperative to exploit any prior knowledge about the relationship of the inverse model to appropriate contextual variables.

For the case of manipulation of objects with a robot arm (see the schematic of the robot arm link with load in Fig. 11), we can take advantage of the fact that the dynamics of a robot arm are linear in the inertial properties of the links. In other words, the inverse dynamics can be written in the form

    \tau = Y(q, \dot q, \ddot q) \, \pi ,    (27)

where q, \dot q and \ddot q denote joint angles, velocities and accelerations, respectively. This relationship can be derived from the fundamentals of robot dynamics [14,15], as shown in Table 1. This equation splits the dynamics into two terms. If the manipulator has n joints, then Y(q, \dot q, \ddot q) is an n \times 10n matrix that depends on kinematic properties of the arm such as link lengths, directions of the axes of rotation of the joints, and so on. This is a complicated and non-linear function of joint angles, velocities and accelerations. The term \pi is a 10n-dimensional vector containing the inertial parameters of all links of the arm (see Table 1). Now let us examine how this can be used to acquire the augmented model for the scenario of changing loads. The important thing to note is that the kinematics-dependent term Y does not change as different objects are manipulated; only the inertial parameters of the last link of the arm change, i.e., the last 10 elements of the vector \pi.
Fig. 11. Schematic of the load and inertial parameters involved in manipulator dynamics
Table 1. Linearity of the dynamics model in the inertial parameters

If T is the kinetic energy and U the potential energy of the system, and we define the Lagrangian L = T - U, the dynamics of the system are given by

    \frac{d}{dt} \frac{\partial L}{\partial \dot q_i} - \frac{\partial L}{\partial q_i} = \tau_i ,    (28)

where q_1, q_2, ..., q_n is a set of generalized coordinates (here, the joint angles) and \tau_1, \tau_2, ..., \tau_n denote the so-called generalized forces associated with the corresponding joint angles q_i. The generalized force \tau_i is the sum of the joint actuator torque, joint friction torques and other forces acting on the joint (e.g., forces induced by contact with the environment). The total kinetic energy T and the total potential energy U are just the sums of the kinetic and potential energies of all the links of the manipulator, i.e., T = \sum_{j=1}^n T_j and U = \sum_{j=1}^n U_j. The kinetic and potential energy of the j-th link are given by

    T_j = \tfrac{1}{2} m_j \dot p_j^\top \dot p_j + m_j \dot p_j^\top S(\omega_j) l_j + \tfrac{1}{2} \omega_j^\top I_j \omega_j ,   U_j = -m_j g_0^\top p_j - m_j g_0^\top l_j ,    (29)

where m_j is the total mass of link j, p_j is the position vector of the origin of frame j expressed in the base frame, \omega_j is the rotational velocity of link j, S(\omega_j) is a 3 \times 3 skew-symmetric matrix that depends on \omega_j, l_j is the position vector of the center of mass of the link from the origin of the frame of the link, g_0 is the gravity acceleration vector, and I_j is the inertia tensor of link j measured at the origin of the reference frame of the link. Substituting (29) into the Lagrangian and rearranging, we can see that the Lagrangian is linear in the set of inertial parameters

    \pi = [m_1, m_1 l_{1x}, m_1 l_{1y}, m_1 l_{1z}, I_{1xx}, I_{1xy}, ..., m_n, m_n l_{nx}, m_n l_{ny}, m_n l_{nz}, I_{nxx}, ..., I_{nzz}] .    (30)

In short, the Lagrangian can be written in the form

    L = g(q, \dot q) \, \pi .    (31)

Since the inertial parameters in \pi do not depend on time or \dot q, the dynamics equation for joint i is

    \left[ \frac{d}{dt} \frac{\partial g(q, \dot q)}{\partial \dot q_i} - \frac{\partial g(q, \dot q)}{\partial q_i} \right] \pi = \tau_i .    (32)

Thus, the dynamics can be written in the form

    \tau_i = y_i(q, \dot q, \ddot q) \, \pi .    (33)

It is worth noting that we did not take into account the dynamics of the motor attached to each link. In that case, another element per link is added to the vector \pi, giving a total of 11 inertial parameters per joint; for more details see [14]. We ignore the motor dynamics here, but our general arguments can easily be extended to include them.
Fig. 12. Learning the augmented model. The dots are training data for the different contexts. The solid lines are the learned models for each context, the red dotted lines show the interpolation of the augmented model predictions from a set of learned models and the dashed lines show the global augmented model for some new context.
Denoting the 10 inertial parameters of the union of the last link and the manipulated object by \pi_o, and using these as the contextual variables, the augmented model G(\Theta_t, \Theta_{t+1}, c_t) can be written as

    \tau_t = G(\Theta_t, \Theta_{t+1}, c_t) = A(q, \dot q, \ddot q) + B(q, \dot q, \ddot q) \, \pi_o .    (34)

Here, the matrix B(q, \dot q, \ddot q) consists of the last 10 columns of the matrix Y(q, \dot q, \ddot q), and A(q, \dot q, \ddot q) is the n-dimensional vector given by multiplying the array consisting of the first (n-1) \times 10 columns of Y(q, \dot q, \ddot q) with the vector of inertial parameters of the first n-1 links. Note that state transitions have been appropriately replaced by joint angles, velocities and accelerations. This is more compactly written as

    G(\Theta_t, \Theta_{t+1}, c_t) = \tilde Y(q, \dot q, \ddot q) \, \tilde\pi_o = \tau ,    (35)

where \tilde\pi_o denotes the vector [1 \ \pi_o^\top]^\top and \tilde Y(q, \dot q, \ddot q) denotes the n \times 11 matrix [A(q, \dot q, \ddot q) \ B(q, \dot q, \ddot q)].
Acquiring the model essentially means estimating \tilde Y(q, \dot q, \ddot q). If we have an appropriate number of learned models (that is, at least as many as the cardinality of \tilde\pi_o) and the corresponding \pi_o labels, we can simply estimate \tilde Y(q, \dot q, \ddot q) by least squares, thanks to the linearity property. Say we have learned a set of reference models g^1(q, \dot q, \ddot q), g^2(q, \dot q, \ddot q), ..., g^l(q, \dot q, \ddot q), corresponding to the manipulation of objects which result in the last link of the arm having known inertial parameters \pi_o^1, \pi_o^2, ..., \pi_o^l; then one can evaluate \tilde Y(q, \dot q, \ddot q) as

    \tilde Y(q, \dot q, \ddot q) = T(q, \dot q, \ddot q) \, \tilde\Pi^\top (\tilde\Pi \tilde\Pi^\top)^{-1} ,    (36)

where T(q, \dot q, \ddot q) is a matrix with the reference models' predictions as its columns and \tilde\Pi is a matrix whose columns are the reference models' inertial parameter vectors \tilde\pi_o^i.
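In code, the least-squares estimate (36) is a one-liner; the following sketch (ours, with hypothetical shapes) also notes that the pseudoinverse of the reference parameter matrix is input-independent:

```python
import numpy as np

def augmented_model_at(reference_preds, Pi_tilde):
    """Least-squares estimate of Y~ at one state, eq. (36).
    reference_preds: n x l matrix T whose columns are the torque predictions
    of the reference models g^1..g^l at (q, qdot, qddot);
    Pi_tilde: 11 x l matrix whose columns are the vectors [1, pi_o^T]^T."""
    return reference_preds @ Pi_tilde.T @ np.linalg.inv(Pi_tilde @ Pi_tilde.T)

# Pi~^T (Pi~ Pi~^T)^{-1} does not depend on the state, so it can be
# precomputed once; only the reference predictions are re-evaluated:
# P = Pi_tilde.T @ np.linalg.inv(Pi_tilde @ Pi_tilde.T)
# tau = (reference_preds @ P) @ pi_tilde_o   # dynamics for a new context
```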
The augmented model can be used both for control and for context estimation. For control, say we have an estimate of \pi_o at time t; given the desired transition for the next time step, we can easily compute \tilde Y(q^*, \dot q^*, \ddot q^*) and hence the feedforward command. For robust context estimation, we can use temporal dependencies, similar to the principles used in the multiple model scenario. However, since we now have a set of continuous hidden variables, as opposed to a single discrete context variable, the inference is slightly more involved (refer to Table 2).

A schematic of the acquisition of the augmented model is displayed in Fig. 12. The dots are data belonging to different contexts. A model is fit to the data belonging to each of the contexts (the solid lines), and then we can use the predictions of the learned models, together with the known corresponding inertial parameters, to perform a least squares estimate and acquire the augmented inverse model at any point of the input space (the dotted lines). Computing the augmented model at any point of the input space gives the dynamics model of any other context (dashed lines). It is important to note that, to acquire the augmented inverse model, the regression coefficient matrix \tilde Y(q, \dot q, \ddot q) has to be evaluated at all relevant points in the input space. However, this is not as computationally expensive as it might seem at first: all that is needed is to re-evaluate the predictions of the reference inverse models at each point in input space and multiply by the pseudoinverse \tilde\Pi^\top (\tilde\Pi \tilde\Pi^\top)^{-1} of the reference inertial parameters matrix. This pseudoinverse needs to be evaluated only once, and no further matrix inversion is needed to solve the inverse problem at all points in the input space.

The previous discussion implies that, ideally, if we have the prerequisite number of 'labeled' context models (at least eleven independent models), one can deal with the manipulation of any object. In practice, however, since learned dynamic models will not be perfect, and due to the presence of noise in the sensor measurements, a larger number of 'context models' may be necessary to give accurate estimates and control.

Experiments with the Augmented Model. The augmented model proposed for extracting the continuous context/latent variable was empirically evaluated.
Table 2. Inferring the hidden continuous context in the temporal model

In our probabilistic setting, the augmented inverse model is

    \tau_t = G(\Theta_t, \Theta_{t+1}, c_t) = A(\Theta_t, \Theta_{t+1}) + B(\Theta_t, \Theta_{t+1}) \, c_t + \eta ,    (37)

where A(\Theta_t, \Theta_{t+1}) and B(\Theta_t, \Theta_{t+1}) are estimated from the models used for forming the augmented model and \eta = N(0, \Sigma_{obs}); \Sigma_{obs} is estimated from the confidence bounds of the inverse models that form the augmented model. Also, the transition model for the context needs to be defined. Since we believe that the context does not change too often, this is set to

    c_{t+1} = c_t + \zeta ,    (38)

where \zeta = N(0, \Sigma_{tr}) with \Sigma_{tr} set to a very small value. Based on the defined model, we can write down the inference for the temporal Bayesian network using the augmented inverse model. For control, only filtered estimates (a la Kalman filtering) can be used. We want to compute p(c_t | \tau_{1:t+1}, \Theta_{1:t+1}) using the estimate at the previous time step, p(c_{t-1} | \tau_{1:t}, \Theta_{1:t}), and the new evidence \tau_{t+1} and \Theta_{t+1}. The previous estimate is

    p(c_{t-1} \mid \tau_{1:t}, \Theta_{1:t}) = N(\mu_{t-1|t}, \Sigma_{t-1|t}) .    (39)

Estimates for the next time step, p(c_t | \tau_{1:t+1}, \Theta_{1:t+1}), are obtained recursively in two steps. The first is the prediction step, where p(c_t | \tau_{1:t}, \Theta_{1:t}) is computed using the filtered estimate at the previous time step and the transition model p(c_{t+1} | c_t), without taking into account the evidence at time t+1:

    p(c_t \mid \tau_{1:t}, \Theta_{1:t}) = N(\mu_{t|t}, \Sigma_{t|t}) ,    (40)

where \mu_{t|t} = \mu_{t-1|t} and \Sigma_{t|t} = \Sigma_{t-1|t} + \Sigma_{tr}. Then, the filtered estimate modifies the predicted estimate using the observation at time t+1 (the dependency of A and B on the state transition is omitted for compactness):

    p(c_t \mid \tau_{1:t+1}, \Theta_{1:t+1}) = N(\mu_{t|t+1}, \Sigma_{t|t+1}) ,    (41)

where

    \mu_{t|t+1} = \mu_{t|t} + \Sigma_{t|t} B^\top (B \Sigma_{t|t} B^\top + \Sigma_{obs})^{-1} (\tau_{t+1} - A - B \mu_{t|t}) ,    (42)

    \Sigma_{t|t+1} = \Sigma_{t|t} - \Sigma_{t|t} B^\top (B \Sigma_{t|t} B^\top + \Sigma_{obs})^{-1} B \Sigma_{t|t} .    (43)
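One update of this filter, eqs. (40)-(43), can be sketched in a few lines (our illustration; A, B, \Sigma_{obs} would come from the augmented model at the observed state transition):

```python
import numpy as np

def context_filter_step(mu, Sigma, A, B, tau, Sigma_obs, Sigma_tr):
    """Kalman-filter update of the continuous context belief, eqs. (40)-(43).
    mu, Sigma: previous filtered estimate; tau: observed torque."""
    Sigma_pred = Sigma + Sigma_tr                  # prediction step, eq. (40)
    S = B @ Sigma_pred @ B.T + Sigma_obs           # innovation covariance
    K = Sigma_pred @ B.T @ np.linalg.inv(S)        # Kalman gain
    mu_new = mu + K @ (tau - A - B @ mu)           # eq. (42)
    Sigma_new = Sigma_pred - K @ B @ Sigma_pred    # eq. (43)
    return mu_new, Sigma_new
```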
the center of mass of their union ´ln also lies on the y axis (refer Fig. 11). The problem was constrained in a way that, for the three degrees of freedom robot arm of our experiments, only three out of the ten inertial parameters of the last link could be estimated and thus, three contextual variables were needed to describe the augmented model. These inertial parameters are the mass, the mass × the y − position of the center of mass and the moment of inertia around the z axis . This was achieved by choosing the coordinate system attached to the last link such that the center of mass both of the link and the object lie on the y axis
(see Fig. 11). Thus, the mass times the x-position and the mass times the z-position are zero. Furthermore, the off-diagonal elements of the inertia tensor are zero and only the moment of inertia around the z axis has a significant contribution to the dynamics. Unlike before, now both the mass and the shape of the manipulated object change randomly and can take any value in a specific range. Again, we start by not using the context estimates for control, i.e., we apply PD control. We then repeated the same experiments using the context estimates for control, to see if the accuracy of our continuous context estimates is sufficient for motor control. Experiments were executed for both cases, five times, using different reference models for each of the runs. Figure 13(a) shows the estimation accuracy of the three context variables for the no control / control cases. The error measure used is the nMSE on the target variable. We can see that the relative accuracy is not significantly different. Figure 13(b-d) shows a snapshot of the actual and estimated contextual variables. The mass and the y-position of the center of mass times the mass were more accurately estimated than the moment of inertia. For the case where the context estimates were used for control, the average ratio of feedback to composite command was 0.1428, with a standard deviation between the runs of 0.0141.
3.4 Discussion
We have described a method of using a learned set of models for control of a system with non-linear dynamics under continuously varying contexts. In addition, we have refined the multiple model paradigm to simultaneously deal with learning dynamic models, use them for online switching control, and efficiently bootstrap data separation for context-unlabeled data. An important component of this work is the ability to infer the continuous hidden context that captures dynamic properties of the manipulated object, e.g., the mass of the object, as illustrated in the experiments.
4 Imitation Learning of Transition Priors
Recall the basic steps we discussed in the introduction: (1) What are the variables? (2) How do these depend upon each other? (3) What are the current goals and constraints? (4) What posterior does this imply on actions? While Sec. 2 addressed the fourth point of inference, the previous section addressed the second point of learning dependencies between variables from data. Extending this, we have also seen how statistical learning techniques can be used to extract latent or hidden context variables of the problem, which addresses the first point. In this section, we address a problem related to the third point, that of specifying the goals and constraints of the problem. Usually one would consider this to be completely specified externally, e.g., by the engineer. However, often we only have a certain desired behavior vaguely in mind and it becomes
Fig. 13. (a) nMSE of the three contextual variables while using and not using the context estimates for control; (b–d) actual and estimated context variables, along with the values of the context variables of the reference models that were used for deriving the augmented model
rather cumbersome to rigorously define an objective function that will lead to this desired behavior. Recently there has been rising interest in an alternative to human-designed objective functions (alternatively, goals and constraints). The idea is to extract, from observed (teacher's) behavior, appropriate qualities that can be utilized as goal or constraint definitions. In our investigations, we focus on what is classically called the nullspace potential. As we have seen in the section on redundant control, a given task constraint does not completely constrain the state trajectory but leaves some unresolved degrees of freedom. This redundancy is most easily resolved by regularization: either by using the W-pseudoinverse in classical control or, in the Bayesian view, by putting a prior P(q̇) ∝ exp{−½ q̇ᵀ W q̇} on the joint motion. However, more complex motions require better priors acting on joint motion, e.g., to avoid joint limits. As we have seen, classically this is often realized by adding a nullspace movement h = −α W⁻¹ ∇H(q) along the negative gradient of the potential H(q) – which has the Bayesian equivalent of putting a prior P(q̇) ∝ exp{−½ q̇ᵀ W q̇ + H(q)} on the joint motion (and linearizing H(q)
around q). In this section, we discuss learning this potential (aka the transition prior) from observed behavior, and hence being able to extract control policies that generalize over multiple constraints.
4.1 Policies and Potential Functions
A common paradigm for the control of multibody systems such as high-DOF manipulators and humanoid robots is to frame the problem as a constrained optimal control problem [16,17,18]. In this paradigm control tasks are formulated as constraints on the system such that some desired behaviour is achieved. In simple systems such as anthropomorphic arms, these constraints often take the form of constraints on the end-effector. For example, the constraints may require that the effector follows a certain trajectory or applies some given force to an object [19]. In a more generic setting, constraints may take a much wider variety of forms. For example, in walking or reaching, the constraint may be on the center of mass or the tilt of the torso of the walker to prevent over-balancing. Alternatively, in contact control problems such as manipulation or grasping, the constraint may require that effectors (such as fingers) maintain a given position on the surface of an object [20]. Also, in systems designed to be highly competent and adaptive, such as humanoid robots, behaviour may be subject to a wide variety of constraints [21], usually non-linear in actuator space, and often discontinuous. Consider pouring a cup of water from a bottle: initially, constraints will apply to the orientation of the two hands as the water is poured. However, once the bottle is empty, the constraint on the orientation of the bottle can be released, representing a discontinuous switch in the constraints on the policy.
The focus of this section is on modelling control policies subject to generic constraints on motion, with the aim of finding policies that can generalise between different constraints. In general, learning (unconstrained) policies from constrained motion data is a formidable task. This is due to (i) the non-convexity of observations under different constraints, and (ii) the degeneracy in the set of possible policies that could have produced the constrained movements under observation [22]. However, we will show that despite these hard analytical limits, for a certain class of policies it is possible to find a good approximation of the unconstrained policy, given observations under the right conditions. We take advantage of recent work in local dimensionality reduction [23] to show that it is possible to devise a method that (i) given observations under a sufficiently rich set of constraints, reconstructs the fully unconstrained policy; (ii) given observations under an impoverished set of constraints, learns a policy that generalizes well to constraints of a similar class; and (iii) given 'pathological' constraints, will learn a policy that at worst reproduces behaviour subject to the same constraints.
Constrained Policies. Following [24], we consider the learning of autonomous kinematic policies

q̇ = π(q(t), α)    (44)
Fig. 14. Illustration of two apparently different behaviours from the same policy: (a) unconstrained movement; (b) movement constrained such that the fingertip maintains contact with a surface (black box); (c) the unconstrained (red) and constrained (black) policy over two of the joints of the finger
where q is some appropriately chosen state-space, q̇ is the desired change in state, and α is a vector of parameters determining the behaviour of the policy. The goal of direct policy learning is to approximate the policy (44) as closely as possible [24]. It is usually formulated as a supervised learning problem where it is assumed that we have observations of q̇(t), q(t) (often in the form of trajectories), and from these we wish to learn the mapping π. In previous work this has been done by fitting parameterised models in the form of dynamical systems [25,26], non-parametric modelling [18], and probabilistic Bayesian approaches [27,28]. An implicit assumption found in direct policy learning approaches to date is that the data used for training comes from behavioural observations of some unconstrained or consistently constrained policy. By this it is meant that the policy is observed either with no constraints on motion, or, where constraints exist, these are static and consistent over observations. For example, consider the learning of a simple policy to extend a jointed finger. In Fig. 14(a) the finger is unconstrained and the policy simply moves the joints towards the zero (outstretched) position. On the other hand, in Fig. 14(b), an obstacle lies in the path of the finger, so that the finger is constrained to move along the surface of this obstacle. The vector field representation of the two behaviours is shown in Fig. 14(c). In standard approaches to direct policy learning [24,25,26], these two apparently different behaviours would lead to the learning of two separate policies for extending the finger in the two settings. However, the fact that the goals of the two policies are similar ('extend the finger') suggests that the movement in fact stems from the same policy under different constraints. Viewed like this, instead of learning two separate policies we would rather learn a single policy that generalizes over observations under different constraints.
A constrained policy is one for which there are hard restrictions on the movements available to the policy. Mathematically, we say that given a set of constraints

A(q, t) q̇ = 0
(45)
the policy is projected into the nullspace of those constraints:

q̇(t) = N(q, t) π_uc(q(t))
(46)
where N(q, t) ≡ I − A(q, t)†A(q, t) is, in general, a non-linear, time-varying projection operator, A(q, t) is some matrix describing the constraint, A† is the pseudoinverse and I is the identity matrix. Constrained policies (46) are commonly used for control of redundant degrees of freedom (DOFs) in high-dimensional manipulators [16,17,18]; however, the formalism is generic and extends to a wide variety of systems, such as team coordination in mobile robots [29]. In this view, the best policy representation of the movements in Fig. 14 is the unconstrained policy π_uc, since this is the policy that gives maximal information about the behaviour. Given π_uc we can reproduce behaviours such as in Fig. 14(b) simply by applying the same constraints. Furthermore, if we can find a good approximation of π_uc, we can even predict behaviour in situations where novel constraints, unseen in the training data, apply. However, learning the unconstrained policy from observations of constrained movement is a non-trivial task. For example, we may not know exactly what constraints were in force at the time of observation. Furthermore, there are several analytical restrictions on what information we can hope to recover from constrained motion data [22]. Despite this, we can efficiently uncover the unconstrained policy for the important class of conservative policies. In the next section, we characterise these analytical restrictions and show how they can be side-stepped in the case of conservative policies.
Conservative Policies. Learning nullspace policies from constrained motion data is in general a hard problem due to the non-convexity of observations and degeneracy [22]. The non-convexity problem comes from the fact that between observations, or even during the course of a single observation, the constraints may change, resulting in inconsistencies in the training data. For example, consider the policy shown in Fig. 14(c). In any observation, the observed motion vector q̇(t) may come from the set of constrained (black) or unconstrained (red) vectors. At any given point in the state space we may have multiple observations under different constraints, resulting in a set of q̇(t) at that point. In standard supervised learning algorithms this causes problems, since directly training on these observations may result in models that average over the observations. The non-convexity problem, then, is how to reconcile these multiple conflicting observations to give a consistent policy. The second problem is degeneracy in the data. This is the problem that for any given set of observations projected into the nullspace of the constraints,
there may be multiple candidate policies that could have produced that movement. This is due to the fact that the projection matrix projects the policy onto a lower-dimensional manifold, so that motion orthogonal to that manifold is effectively 'zeroed out'. This means that the component of π_uc in this direction is undetermined by the observation. In effect, the problem is ill-posed in the sense that we are not given sufficient information about the unconstrained policy to guarantee that the true policy is learnt. However, in recent work it was shown [19,22] that for the important special case of conservative policies it is possible to use data efficiently to uncover the underlying policy. A conservative policy is a policy that can be described by taking the gradient of a potential function H(q):

π(q) = −∇_q H(q).
(47)
Conservative policies can be thought of as policies that greedily optimise the potential function at every time step [30]. Such policies encode attractor landscapes whose minima correspond to stable attractors; in the finger example, the point q = 0 would correspond to such a minimum. Conservative policies are commonly used in control of redundant DOFs in manipulators [16,17,18].
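As an illustration of Eqs. (44)–(47), the following minimal sketch (ours, with placeholder names; grad_H stands for ∇H) integrates one step of a conservative policy under a linear constraint:

```python
import numpy as np

def nullspace_projector(A):
    """N(q) = I - A^+ A, cf. Eq. (46): motions are restricted to the
    nullspace of the constraint matrix A."""
    return np.eye(A.shape[1]) - np.linalg.pinv(A) @ A

def constrained_step(q, grad_H, A, dt=0.05):
    """One Euler step of the constrained conservative policy,
    q_dot = N(q) pi(q) with pi(q) = -grad H(q), Eqs. (46)-(47)."""
    pi = -grad_H(q)                       # conservative policy, Eq. (47)
    return q + dt * nullspace_projector(A) @ pi
```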
4.2 Learning Nullspace Policies through Local Model Alignment
If the policy under observation is conservative, an elegant solution to the non-convexity and degeneracy problems is to model the policy through its potential function [19,22] rather than modelling it directly. The advantage of this is twofold. Firstly, due to the local linearity of the projection operator N(q, t), the conservative policy (47) remains locally conservative in the lower-dimensional nullspace. We can use numerical line integration to estimate the form of the potential along the trajectories [19,22]. Secondly, the potential function is a scalar function and thus gives a compact representation of the policy. Crucially, this means that the problem of reconciling conflicting n-dimensional vector observations is reduced to finding a function Ĥ(q) whose (1-dimensional) prediction is consistent at any given point q. Next, we propose a method for modelling the potential on a trajectory-wise basis and for consolidating models from multiple trajectories.
Estimating the Potential along Single Trajectories. A method to model the potential along a trajectory is to use an integration scheme such as Euler integration, which involves the first-order approximation

H(q_{t+1}) ≈ H(q_t) − (q_{t+1} − q_t)ᵀ N(q_t) π(q_t)
(48)
Please note that for steps q_t → q_{t+1} that follow the projected policy, (q_{t+1} − q_t) = N(q_t) π(q_t), so we can actually write

H(q_{t+1}) ≈ H(q_t) − ‖q_{t+1} − q_t‖².
(49)
We use this approximation to generate estimates Ĥ(q_i) of the potential along any given trajectory q_1, q_2, ..., q_N in the following way: We set Ĥ_1 = Ĥ(q_1) to an arbitrary value and then iteratively assign Ĥ_{i+1} := Ĥ_i − ‖q_{i+1} − q_i‖² for the remaining points in the trajectory. Note that an arbitrary constant can be added to the potential function without changing the policy. Therefore, 'local' potentials that we estimate along different trajectories need to be aligned in a way that their function values match in intersecting regions. We'll turn to this problem next.
Constructing the Global Potential Function. Let us assume we are given K trajectories Q_k = (q_{k1}, q_{k2}, ..., q_{kN_k}) and corresponding point-wise estimates Ĥ_k = (Ĥ_{k1}, Ĥ_{k2}, ..., Ĥ_{kN_k}) of the potential, as provided by the Euler integration just described. In a first step, we fit a function model f_k(q) of the potential to each tuple (Q_k, Ĥ_k), such that f_k(q_{ki}) ≈ Ĥ_{ki}. To keep things simple, we choose a nearest-neighbour regression model, i.e.,

f_k(q) = Ĥ_{ki*},    i* = argmin_i ‖q − q_{ki}‖².    (50)
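The trajectory-wise estimation of Eq. (49) and the nearest-neighbour model of Eq. (50) amount to only a few lines; a sketch under our own naming (Q is an N × d array of trajectory points):

```python
import numpy as np

def local_potential(Q):
    """Point-wise estimates H_hat along one trajectory, Eq. (49):
    H_hat_1 is arbitrary (here 0) and every projected-policy step lowers
    the potential by the squared step length."""
    H = np.zeros(len(Q))
    for i in range(len(Q) - 1):
        H[i + 1] = H[i] - np.sum((Q[i + 1] - Q[i]) ** 2)
    return H

def nn_model(Q, H):
    """Nearest-neighbour local model f_k of Eq. (50)."""
    def f(q):
        return H[np.argmin(np.sum((Q - q) ** 2, axis=1))]
    return f
```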
Since we wish to combine the models into a global potential function, we need to define some function for weighting the outputs of the different models. For the nearest-neighbour algorithm, we choose to use a Gaussian kernel

w_k(q) = exp{ −(1/2σ²) min_i ‖q − q_{ki}‖² }.    (51)

From these weights we can calculate responsibilities

r_k(q) = w_k(q) / Σ_{i=1}^K w_i(q)    (52)

and a (naive) global prediction f(q) = Σ_{k=1}^K r_k(q) f_k(q) of the potential at q. However, as already stated, the potential is only defined up to an additive constant, and most importantly this constant can vary from one local model to another. This means that we first have to shift the models by adding some offset to their estimates of the potential, such that all local models are in good agreement about the global potential at any number of states q. We follow the methodology of non-linear dimensionality reduction [23] as used to align multiple local PCA models into a common low-dimensional space. In analogy to the PCA-alignment method [23], we augment our local potential models f_k(·) by a scalar offset b_k and solve for the corresponding objective function:

E(b_1 ... b_K) = ½ Σ_{m=1}^M Σ_{j=1}^K Σ_{k=1}^K r_k(q_m) r_j(q_m) ((f_k(q_m) + b_k) − (f_j(q_m) + b_j))²,    (53)

or, in a slightly shorter form,

E(b) = ½ Σ_{m,k,j} r_{km} r_{jm} (f_{km} + b_k − f_{jm} − b_j)².    (54)
Here, m denotes a summation over the complete dataset, that is, over all points from all trajectories (M = Σ_{k=1}^K N_k). Solving the above objective function for the optimal shift b^opt yields the alignment necessary for global learning. For details of the solution for detecting the optimal alignment offset, readers are referred to [31]. Since we restrict ourselves to using simple nearest-neighbour (NN) regression for the local potential models in this paper, the only open parameter of our algorithm is σ², i.e., the kernel parameter used for calculating the responsibilities (51). Too large a choice of this parameter will over-smooth the potential, because the NN regression model basically predicts a locally constant potential, but at the same time trajectories will have relatively high responsibilities even for far-apart points q in state space. On the other hand, a too small value of σ² might lead to weakly connected trajectories: if a particular trajectory does not make any close approach to other trajectories in the set, the quick drop-off of its responsibility implies that it will not contribute to the alignment error (based on pairs of significant responsibility), which in turn implies that its own alignment – the value of its offset – does not matter much. We again refer the reader to [31] for details of detecting and eliminating such outlier trajectories.
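Setting the gradient of Eq. (54) to zero yields a (singular) linear system in the offsets b, singular because the offsets are only determined up to a common constant. The sketch below is our own reading of the stationarity condition, not necessarily the exact procedure of [31]:

```python
import numpy as np

def optimal_offsets(F, R):
    """Solve dE/db = 0 for Eq. (54).

    F -- (M, K) matrix of local predictions, F[m, k] = f_k(q_m)
    R -- (M, K) matrix of responsibilities, rows summing to one
    Returns offsets b (defined only up to a common constant)."""
    s = R.sum(axis=0)              # total responsibility per model
    C = R.T @ R
    g = (R * F).sum(axis=1)        # naive global prediction per point
    u = (R * F).sum(axis=0)
    rhs = R.T @ g - u
    # (diag(s) - C) b = rhs is singular; lstsq returns the
    # minimum-norm solution, fixing the free constant.
    b, *_ = np.linalg.lstsq(np.diag(s) - C, rhs, rcond=None)
    return b
```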
Learning the Global Model. After calculating the optimal offsets b^opt and cleaning the dataset from outliers, we can learn a global model f(q) of the potential using any regression algorithm. Here, we choose Locally Weighted Projection Regression (LWPR) [9] because it has been demonstrated to perform well in cases where the data lies on low-dimensional manifolds in a high-dimensional space, which matches our problem of learning the potential from a set of trajectories. As the training data for LWPR, we use all non-outlier trajectories and their estimated potentials as given by the Euler integration plus their optimal offsets, that is, the input-output tuples

{ (q_{kn}, Ĥ_{kn} + b_k^opt) | k ∈ K, n ∈ {1 ... N_k} },    (55)

where K denotes the set of indices of the non-outlier trajectories. Once we have learned the model f(q) of the potential, we can take derivatives to estimate the unconstrained policy π̂(q) = −∇_q f(q), or use the potential function directly as described in the beginning of Sec. 4.
4.3 Experiments in Direct Policy Learning
To explore the performance of the algorithm, we perform experiments on data from autonomous kinematic control policies [24] applied to three simulated plants, including a physically realistic simulation of the 27-DOF humanoid robot ASIMO [21]. However, to illustrate the key concepts involved, we first discuss results from two simplified problems² controlled according to the same generic framework.
² In fact, even these 'simplified' problems are relevant to constrained policy learning in low-dimensional task space representations, for example in the end-effector space of an arm.
Selection of the Smoothing Parameter. For simplicity, in all our experiments we used the same heuristics for selecting the smoothing parameter σ², matching it to the scale of typical distances in the datasets. In particular, we first calculated the distances between any two trajectories k, j ∈ {1 ... K} in the set as the distances between their closest points,

d_{kj} = min{ ‖q_{kn} − q_{jm}‖² | n, m ∈ {1 ... N} },    (56)

and also the distances to the closest trajectory,

d_k^min = min{ d_{kj} | j ≠ k }.    (57)
We then consider three choices for σ², which we refer to as 'narrow', 'wide' and 'medium':

σ²_nar = median{ d_k^min | k ∈ {1 ... K} },    (58)

σ²_wid = median{ d_{jk} | j, k ∈ {1 ... K}, j ≠ k },    (59)

σ²_med = sqrt( σ²_nar σ²_wid ).    (60)
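These heuristics are straightforward to compute; the following is our own sketch (not the authors' code), with trajectories given as a list of N_k × d arrays:

```python
import numpy as np

def sigma_heuristics(trajectories):
    """Kernel widths 'narrow', 'wide' and 'medium', Eqs. (58)-(60)."""
    K = len(trajectories)
    d = np.full((K, K), np.inf)
    for k in range(K):
        for j in range(K):
            if j != k:
                diff = trajectories[k][:, None, :] - trajectories[j][None, :, :]
                d[k, j] = np.min(np.sum(diff ** 2, axis=-1))   # Eq. (56)
    d_min = d.min(axis=1)                                      # Eq. (57)
    s_nar = np.median(d_min)                                   # Eq. (58)
    s_wid = np.median(d[np.isfinite(d)])                       # Eq. (59)
    return s_nar, s_wid, np.sqrt(s_nar * s_wid)                # Eq. (60)
```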
Toy Example. The toy example consists of a two-dimensional system with a quadratic nullspace potential subject to discontinuously switching task constraints. Specifically, the potential function is given by

H(q) = qᵀ W q    (61)
where W is some square weighting matrix, which we set to 0.05 I. Data was collected by recording trajectories generated by the policy from a start state distribution Q_0. During the trajectories, the policy was subjected to random 1-D constraints

A(q, t) = (α_1, α_2) ≡ α,    (62)

where the α_{1,2} were drawn from a normal distribution, α_i ∼ N(0, 1). The constraints mean that motion is constrained in the direction orthogonal to the vector α in state space. To increase the complexity of the problem, the constraints were randomly switched during trajectories by re-sampling α twice at regular intervals during the trajectory. This switches the direction in which motion is constrained, as can be seen from the sharp turns in the trajectories.
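For illustration, data of this kind can be generated as follows; this is a sketch under our own assumptions about the start-state distribution Q_0 and the step size, which the text does not fully specify:

```python
import numpy as np

def toy_dataset(K=40, N=40, dt=0.1):
    """K trajectories of the policy pi(q) = -grad(q^T W q), W = 0.05 I,
    under random 1-D constraints (Eq. (62)) that are re-sampled twice at
    regular intervals along each trajectory."""
    W = 0.05 * np.eye(2)
    data = []
    for _ in range(K):
        q = np.random.uniform(-2.0, 2.0, size=2)   # start state (assumed Q0)
        traj = [q.copy()]
        for t in range(N - 1):
            if t % (N // 3) == 0:                  # draw / switch constraint
                alpha = np.random.randn(1, 2)      # alpha_i ~ N(0, 1)
                Nproj = np.eye(2) - np.linalg.pinv(alpha) @ alpha
            q = q + dt * Nproj @ (-2.0 * W @ q)    # constrained policy step
            traj.append(q.copy())
        data.append(np.array(traj))
    return data
```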
Fig. 15. Top: (a) toy data trajectories (2-D) and contour of true potential. Estimated potential along the trajectories before (b) and after (c) alignment. Trajectories detected as difficult-to-align 'outliers' are shown by light crosses. Bottom: learnt (d) and true (e) potential function after training on the aligned trajectories.
Figure 15 shows an example of our algorithm at work for a set of K = 40 trajectories of length N = 40 for the toy system. The raw data, as a set of trajectories through the two-dimensional state space, is shown in panel (a), whereas panel (b) additionally depicts the local potential models as estimated from the Euler integration prior to alignment. Each local model has an arbitrary offset against the true potential, so there are inconsistencies between the predictions from each local model. Figure 15(c) shows the trajectories after alignment, already revealing the structure of the parabola. At this point, the outlier detection scheme has identified three trajectories as being weakly connected to the remaining set. In Fig. 15(a), we can see that the outliers are indeed the only trajectories that do not have any intersection with neighboring trajectories. At the 'narrow' length scale determined by the smoothing parameter (58), they are hard to align properly and need to be discarded before learning the global model. Finally, Fig. 15(d) shows the global model f(q) of the potential that was trained on the aligned trajectories, which is clearly a good approximation of the true parabolic potential shown in Fig. 15(e).
For a more thorough evaluation, we repeated this experiment on 100 datasets and evaluated
– the nMSE of the aligned potential, which measures the difference between Ĥ_{kn} + b_k and the true potential H,
– the nMSE of the learnt potential, measuring the difference between f(·) and H(·),
– the normalised unconstrained policy error (UPE), depending on the difference between π̂ = −∇f and π = −∇H,
Table 3. Error and outlier statistics (nMSE over 100 data sets) for the experiment on 2-D toy data

Setup                      σ²       Alignment   Potential   UPE      CPE      Outliers
                                    (nMSE)      (nMSE)      (nMSE)   (nMSE)   discarded (%)
Parabola, K=40, N=40       narrow   0.0039      0.0044      0.0452   0.0211   19.15
                           medium   0.0178      0.0168      0.0798   0.0186    0.25
                           wide     0.3494      0.3091      0.5177   0.0930    0
Sinusoidal, K=40, N=40     narrow   0.0012      0.0022      0.1132   0.0482   52.67
                           medium   0.0634      0.0623      0.1359   0.0343    0.77
                           wide     0.6231      0.5653      0.8397   0.2538    0
Sinusoidal, K=100, N=100   narrow   0.0006      0.0014      0.0550   0.0270   27.80
                           medium   0.0100      0.0097      0.0678   0.0255    0.15
                           wide     0.6228      0.5367      0.6972   0.2304    0
– the normalised constrained policy error (CPE), which is the difference between Nπ̂ and Nπ, and finally
– the percentage of trajectories discarded as outliers.

We did so for our three different choices of σ² given in (58)–(60). We also repeated the experiment using a sinusoidal potential function

H_s(q) = 0.1 sin(q_1) cos(q_2)    (63)
with the same amount of data, and additionally using K = 100 trajectories of length N = 100 for each dataset. Table 3 summarises the results. Firstly, we can see that the 'wide' choice for σ² leads to large error values, which are due to over-smoothing. Using the narrow σ², we retrieve very small errors at the cost of discarding quite a lot of trajectories³, and the medium choice seems to strike a reasonable balance, especially with respect to the UPE and CPE statistics. Secondly, when comparing the results for the parabolic and sinusoidal potentials, we can see that the latter, more complex potential (with multiple sinks) requires much more data. With only 40 trajectories of 40 points each, most of the datasets are too disrupted to learn a reasonable potential model: at the narrow length scale (4th row), on average more than half of the dataset is discarded, while even the medium length scale (5th row) over-smooths the subtleties of the underlying potential. Finally, the constrained policy error (CPE) is always much lower than the UPE, which follows naturally when training on data containing those very movement constraints. Still, with a reasonable amount of data, even the unconstrained policy can be modelled with remarkable accuracy.
³ Please note that we also discard the outliers when evaluating the error statistics – we can hardly expect to observe good performance in regions where the learnt model f(q) has seen no data.
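The error statistics reported here can be computed with short routines. The exact normalisation is not spelled out in the text, so the following is one plausible reading (our sketch, not the authors' evaluation code):

```python
import numpy as np

def nmse(y_true, y_pred):
    """Normalised mean squared error of scalar potential predictions."""
    return np.mean((y_true - y_pred) ** 2) / np.var(y_true)

def policy_nmse(Pi_true, Pi_hat):
    """Normalised error between two (n_points, d) arrays of policy
    vectors; used for the UPE (comparing pi_hat with pi) and, after
    applying the projections N, for the CPE (comparing N pi_hat with
    N pi)."""
    num = np.sum((Pi_true - Pi_hat) ** 2)
    den = np.sum((Pi_true - Pi_true.mean(axis=0)) ** 2)
    return num / den
```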
Table 4. Error and outlier statistics for the three-link arm (TLA) and whole body motion (WBM) controller

Plant, σ²      Alignment   Potential   UPE      CPE      Outliers
               (nMSE)      (nMSE)      (nMSE)   (nMSE)   discarded (%)
TLA, narrow    0.2149      0.2104      0.3908   0.0526   18.07
TLA, medium    0.6029      0.6012      0.6526   0.0386    0
WBM, narrow    0.0007      0.0007      0.0896   0.0816   31.15
WBM, medium    0.0016      0.0016      0.1424   0.0778    0
Three-Link Arm. The two goals of our second set of experiments were (i) to characterise how well the algorithm scales to more complex, realistic constraints, and (ii) to characterise how well the learnt policies generalise over different constraints. For this we used a planar three-link arm (TLA) with revolute joints and unit link lengths. Using the TLA allowed us to set up a much richer variety of constraints, such as constraints on kinematic variables, for example the end-effector (hand) position and orientation. Here we report results for a set of intuitively appealing constraints, namely planar constraints on the hand. This kind of constraint occurs in contact-based behaviour [20], for example in writing, where the hand must maintain contact with a planar surface⁴ such as a table top. Data was collected by recording K = 100 trajectories of length N = 100 from a random distribution of start states. For ease of comparison with the 2-D system, the nullspace policy was chosen to optimise the same quadratic potential (61). The policy was constrained through the matrix

A(q, t) = n̂ᵀ J_hand(q)
(64)
where n̂ is a unit vector normal to the hand-space plane and J_hand(q) is the hand Jacobian. The constraints (64) are highly nonlinear in the joint space in which the policy operates. Finally, to simulate observations under different constraints, the orientation of the hand-space plane was changed for each trajectory by drawing n̂ from a uniform random distribution of two-dimensional unit vectors D_n̂. We then used our algorithm to learn the nullspace potential. The results of learning are shown in Table 4.
Generalising over Unseen Constraints. Our first test was to look at the performance of the algorithm in finding a policy that generalises over unseen constraints. To do this we defined two new 'test' constraints and evaluated the CPE using these constraints in place of the training data constraints. The test constraints chosen were (i) an unseen planar constraint on the hand (i.e., we set n̂ = n̂_test, where n̂_test is drawn from D_n̂), and (ii) constraining the hand orientation during motion. This latter constraint is qualitatively similar to the training data constraints (i.e., a 1-D constraint on the hand) but produces visibly different behaviour, in terms of hand- and joint-space trajectories.
⁴ Such constraints also need not be linear, and can be generalised to any shaped surface.
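For illustration, the constraint (64) can be assembled as follows; this is our own sketch with a placeholder hand_jacobian, where one n̂ is drawn per trajectory:

```python
import numpy as np

def sample_plane_normal():
    """Draw n_hat uniformly from the 2-D unit circle (distribution D_n)."""
    theta = np.random.uniform(0.0, 2.0 * np.pi)
    return np.array([np.cos(theta), np.sin(theta)])

def hand_plane_constraint(q, n_hat, hand_jacobian):
    """A(q) = n_hat^T J_hand(q), Eq. (64): forbids hand motion along the
    normal of the hand-space plane."""
    return (n_hat @ hand_jacobian(q)).reshape(1, -1)  # (1, n_joints) row
```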
Table 5. Constrained policy nMSE for unseen constraints on the three-link arm. Values are mean ± s.d. over 100 data sets.

Constraint                 CPE
Training                   0.0526 ± 0.0192
Unseen hand-space plane    0.0736 ± 0.0492
Hand orientation           0.1053 ± 0.0524
Unconstrained              0.3908 ± 0.2277
Table 5 gives a comparison of the normalised policy error evaluated on the unconstrained policy, the constrained policy, and the policy subject to the two test constraints over 100 data sets. The first thing to note is that the algorithm shows good performance for the CPE evaluated on the training data constraints, indicating a minimum guarantee on performance: the learnt potential will at worst be consistent with the training data. Secondly, the progression of error over the two unseen constraints coincides with the extent to which they are similar to the constraints in the training data. This confirms our intuition that although we can generalise over different constraints, this becomes increasingly difficult as they depart from those observed. Finally, the unconstrained policy error indicates that for some reason the algorithm is having problems finding the fully unconstrained policy in this case. We investigate this issue more closely in the next section.
Unconstrained Policy Error. The reason for the poor performance of the algorithm in predicting the unconstrained policy (cf. Table 5) becomes clear if we analyse the effect of the constraints on the movement of the arm. Fig. 16(a) shows the training data trajectories through the three joints of the arm. It is clear that, owing to the constraints on the arm, the policy no longer reaches the point attractor at q = 0, but instead reaches a line in joint space (shown in black). This 'line attractor' represents the minimum of the potential that can be reached without breaking the constraints. Furthermore, away from this line there are few points where trajectories come close to one another or intersect. This means that the algorithm gets little or no information about how the potential changes in this direction. This is confirmed by comparing how the UPE and CPE change as we move along the attractor, and radially outward from it. To demonstrate this, we evaluated the potential nMSE, UPE and CPE on data contained within different regions of the state space. Firstly, we looked at how the error changed on data points contained between two planes normal to the line at distance d from the point attractor q = 0 (Fig. 16(b), dashed lines), and plotted it with increasing d (Fig. 16(d)). We can see that close to q = 0, the potential nMSE and UPE start low but increase rapidly for large d. On the other hand, the CPE stays relatively constant over the entire set.
Fig. 16. (a) Trajectories in state-space for the TLA subject to random planar constraints on the hand. (b) and (c) show projections onto the first two joints of the arm, and also indicate the line attractor (solid black line). We sampled the nMSE at increasing distances along the line (b) and radially outward from it (c). Plots (d) and (e) depict the cumulative nMSE of the potential H, policy π, and constrained policy (Nπ) as a function of the distance measures from (b) and (c), respectively.
Secondly, we looked at how the errors change as we move radially outward. For this, we evaluated errors on data contained within a cylinder of radius l centred on the line attractor (Fig. 16(c), dashed lines). Fig. 16(e) shows the change in error with increasing radius l. Again the CPE remains constant. This time, however, the potential nMSE and UPE are high even at small l. This indicates that the points at the two ends of the line contribute most of the error. We can therefore say that the seemingly poor performance of our algorithm on this problem is due to the adverse constraints in the training data. The constraints do not permit motion in the direction of the line attractor, so we cannot hope to recover the potential function along that direction. However, the good generalisation of the learnt policy over unseen constraints indicates that the algorithm performs reasonably well despite these adverse conditions.
ASIMO Data. Using a realistic simulation [21] of the humanoid robot ASIMO (refer Fig. 17), we tested the scalability of our approach for learning in high dimensions. We collected data from the nullspace policy subject to a mix of constraints, including random planar constraints (in hand-space) on the two
Fig. 17. (a) The Humanoid ASIMO, (b) front view and (c) top view of a realistic VRML simulation of the robot with full kinematics and dynamics
hands of the robot as in (64), and constraints that fixed the position of the hands in hand-space. The latter occurs in a variety of behaviours; for example, in cooperative or bi-manual manipulation tasks, one of the hands may be constrained to hold the manipulated object in position, while the nullspace policy acts to move the rest of the system into a comfortable posture [21]. Table 4 shows the learning performance of the algorithm subject to these constraints. The potential and the unconstrained policy errors are remarkably good and even out-perform those of the lower-dimensional systems. We attribute this remarkable performance to the constraints on motion being much lower-dimensional than the 27 DOFs of the policy. This means that there is a high chance that many of the trajectories reach the point attractor of the policy, which simplifies the alignment and the learning of the potential.
4.4 Conclusion
In this section, we demonstrated a novel approach to direct learning of conservative policies from constrained motion data. The method is fast and data-efficient, and scales to complex constraints in high-dimensional movement systems. The core ingredient is an algorithm for aligning local models of the potential, which leads to a convex optimisation problem. Ultimately, the ability to learn the nullspace potential depends on the constraints. Given a pathological set of constraints, one can never hope to recover the potential. However, we suggest a paradigm whereby motion data under different constraints can be combined to learn a potential that is consistent with the observations. With a reasonably rich set of constraints, one can recover the nullspace potential with high accuracy, and then use this to generalise and predict behaviour under different constraints.
Acknowledgements. This work was supported by the Microsoft/Royal Academy of Engineering Senior Research Fellowship to SV, the EU FP6 SENSOPAC grant to SV, the EPSRC HONDA CASE studentship to MH, the Greek State PhD scholarship to GP and the DFG Emmy Noether grant to MT.
References

1. Peters, J., Mistry, M., Udwadia, F.E., Cory, R., Nakanishi, J., Schaal, S.: A unifying framework for the control of robotics systems. In: IEEE Int. Conf. on Intelligent Robots and Systems (IROS 2005), pp. 1824–1831 (2005)
2. Nakamura, Y., Hanafusa, H.: Inverse kinematic solutions with singularity robustness for robot manipulator control. Journal of Dynamic Systems, Measurement and Control 108 (1986)
3. Baerlocher, P., Boulic, R.: An inverse kinematic architecture enforcing an arbitrary number of strict priority levels. The Visual Computer (2004)
4. Todorov, E.: Optimal control theory. In: Doya, K. (ed.) Bayesian Brain: Probabilistic Approaches to Neural Coding, pp. 269–298. MIT Press, Cambridge (2006)
5. Platt, R., Fagg, A., Grupen, R.: Nullspace composition of control laws for grasping. In: Proceedings of the IEEE-RSJ Int. Conf. on Intelligent Robots and Systems, Lausanne, Switzerland (2002)
6. Ijspeert, A.J., Nakanishi, J., Schaal, S.: Learning attractor landscapes for learning motor primitives. In: Advances in Neural Information Processing Systems, vol. 15, pp. 1523–1530. MIT Press, Cambridge (2003)
7. Schaal, S., Peters, J., Nakanishi, J., Ijspeert, A.: Control, planning, learning, and imitation with dynamic movement primitives. In: Workshop on Bilateral Paradigms on Humans and Humanoids, IEEE Int. Conf. on Intelligent Robots and Systems, Las Vegas, NV (2003)
8. Nakanishi, J., Morimoto, J., Endo, G., Cheng, G., Schaal, S., Kawato, M.: Learning from demonstration and adaptation of biped locomotion with dynamical movement primitives. In: Workshop on Robot Learning by Demonstration, IEEE Int. Conf. on Intelligent Robots and Systems (2003)
9. Vijayakumar, S., D'Souza, A., Schaal, S.: Incremental online learning in high dimensions. Neural Computation 17, 2602–2634 (2005)
10. Klanke, S., Vijayakumar, S., Schaal, S.: A library for locally weighted projection regression. Journal of Machine Learning Research (2008)
11. Roweis, S., Ghahramani, Z.: Learning nonlinear dynamical systems using the EM algorithm. In: Haykin, S. (ed.) Kalman Filtering and Neural Networks, ch. 6, pp. 175–220. Wiley, Chichester (2001)
12. Briegel, T., Tresp, V.: Fisher scoring and a mixture of modes approach for approximate inference and learning in nonlinear state space models (1999)
13. de Freitas, J., Niranjan, M., Gee, A.: Nonlinear state space estimation with neural networks and the EM algorithm. Technical report (1999)
14. Sciavicco, L., Siciliano, B.: Modelling and Control of Robot Manipulators. Springer, Heidelberg (2000)
15. Craig, J.J.: Introduction to Robotics: Mechanics and Control. Pearson Prentice Hall, London (2005)
16. Liégeois, A.: Automatic supervisory control of the configuration and behavior of multibody mechanisms. IEEE Trans. Systems, Man, and Cybernetics SMC-7, 245–250 (1977)
17. Khatib, O.: A unified approach for motion and force control of robot manipulators: The operational space formulation. IEEE Journal of Robotics and Automation RA-3(1), 43–53 (1987)
18. Peters, J., Mistry, M., Udwadia, F.E., Nakanishi, J., Schaal, S.: A unifying framework for robot control with redundant DOFs. Autonomous Robots Journal 24, 1–12 (2008)
19. Howard, M., Gienger, M., Goerick, C., Vijayakumar, S.: Learning utility surfaces for movement selection. In: IEEE International Conference on Robotics and Biomimetics (ROBIO) (2006)
20. Park, J., Khatib, O.: Contact consistent control framework for humanoid robots. In: Proc. IEEE Int. Conf. on Robotics and Automation (ICRA) (May 2006)
21. Gienger, M., Janssen, H., Goerick, C.: Task-oriented whole body motion for humanoid robots. In: 5th IEEE-RAS International Conference on Humanoid Robots, pp. 238–244 (2005)
22. Howard, M., Vijayakumar, S.: Reconstructing null-space policies subject to dynamic task constraints in redundant manipulators. In: Workshop on Robotics and Mathematics (RoboMat) (September 2007)
23. Verbeek, J.J., Roweis, S.T., Vlassis, N.: Non-linear CCA and PCA by alignment of local models. In: Advances in Neural Information Processing Systems, vol. 16. MIT Press, Cambridge (2004)
24. Schaal, S., Ijspeert, A., Billard, A.: Computational approaches to motor learning by imitation. In: The Neuroscience of Social Interaction, pp. 199–218. Oxford University Press, Oxford (2004)
25. Ijspeert, A.J., Nakanishi, J., Schaal, S.: Learning attractor landscapes for learning motor primitives. In: Becker, S., Thrun, S., Obermayer, K. (eds.) Advances in Neural Information Processing Systems, vol. 15, pp. 1523–1530. MIT Press, Cambridge (2003)
26. Ijspeert, A.J., Nakanishi, J., Schaal, S.: Movement imitation with nonlinear dynamical systems in humanoid robots. In: Proc. IEEE International Conference on Robotics and Automation (ICRA), pp. 1398–1403 (2002)
27. Grimes, D.B., Chalodhorn, R., Rao, R.P.N.: Dynamic imitation in a humanoid robot through nonparametric probabilistic inference. In: Proceedings of Robotics: Science and Systems (RSS 2006). MIT Press, Cambridge (2006)
28. Grimes, D.B., Rashid, D.R., Rao, R.P.N.: Learning nonparametric models for probabilistic imitation. In: Advances in Neural Information Processing Systems (NIPS 2006), vol. 19. MIT Press, Cambridge (2007)
29. Antonelli, G., Arrichiello, F., Chiaverini, S.: The null-space-based behavioral control for soccer-playing mobile robots. In: Proceedings, pp. 1257–1262 (2005)
30. Nakamura, Y.: Advanced Robotics: Redundancy and Optimization. Addison Wesley, Reading (1991)
31. Howard, M., Klanke, S., Vijayakumar, S.: Learning nullspace potentials from constrained motion. In: Proc. IEEE International Conference on Intelligent Robots and Systems (IROS) (2008)
Towards Cognitive Robotics

Christian Goerick

Honda Research Institute Europe GmbH, Carl-Legien-Str. 30, 63073 Offenbach, Germany
[email protected]

Abstract. In this paper we review our research aiming at creating a cognitive humanoid. We describe our understanding of the core elements of a processing architecture for such an artifact. After these conceptual considerations we present our research results in the form of a series of elements and systems that have been researched and created.
1 Introduction
Research about intelligent systems interacting in the real world is gaining momentum due to the recent advances in computing technology and the availability of research platforms like humanoid robots. Some of the most important research issues are architectural concepts for the overall behavior organization of the artifacts. The spectrum spans from mechanisms for action selection in a direct fashion [1] to research targeting the creation of cognitive architectures [2]. One long-term goal of the research presented in this paper is to incrementally create an autonomously behaving system that learns and develops in interaction with a human user as well as based on internal needs and motivations. The other long-term goal is to understand how the human brain works, the only truly intelligent system as of today. Both goals are coupled in a way that is called analysis by synthesis. We would like to create brain-like intelligent systems, hence we have to understand how the brain works. The artifacts we are creating should show what we have understood of the brain so far, and should help formulate the next questions aiming at further understanding the brain. The vehicles for the research considered here are humanoid robots. Their anatomy and embodiment are considered a necessary condition for creating intelligence in an anthropocentric environment. In this paper we report on our current research efforts towards cognitive robotics: the researched elements and the endeavors aiming at a brain-like control architecture for humanoid robots.
2 Towards an Architecture
As stated above, the long term goal of our research is establishing a cognitive architecture for controlling humanoid robots. We are convinced that cognitive
or intelligent performance of artifacts can only be achieved within an architecture orchestrating the individual elements in a phenomenologically coherent way. What those elements exactly are and how they are to be arranged is subject to current research. Nevertheless, it is understood that there is a minimal subset of those elements that has to be addressed. The analysis of biological systems teaches us about those principled elements and their possible role within animals' brains [3]. We consider the following elements to belong to this subset:

– Sensory perception: comprising extero- (vision, audition, tactile) and proprioception (measured internal states like posture).
– Presence or working memory: a short-term representation of behaviorally meaningful percepts or internally generated entities as the basis for external and internal actions. The content of this presence is modulated by top-down attention processes.
– Plastic (i.e., learnable) long-term memory for the storage and retrieval of consolidated persistent entities like objects, words, scenes and mental concepts.
– Elements for prediction and internal simulation, creating expectations about the world for selecting the relevant information from the sensory streams and choosing internally the most effective action from a set of possible alternatives without testing all alternatives externally.
– Basic motor control and coordination means for efficiently controlling systems with a large number of degrees of freedom.
– A basic behavior repertoire building on the motor control level for a more abstract and robust representation of actions.
– A more abstract behavior organization comprising the traditionally separated issues of communication and action, leading to behaviorally rooted definitions of the semantics of language.
– A representation of goals, and processes working on those representations, for organizing meaningful system behavior on mesoscopic time scales above purely reactive sensory actions and below strategically driven decision processes. Those elements will also influence perception in a top-down fashion for focusing on goal-relevant entities in the external world.
– A set of internal drives and motivations for establishing internal forces towards a continuous self-development of the system and controlling the balance between explorative and exploitative actions.
– A value system ("emotions") providing basic guidelines for the limits of autonomous behavior by unconditioned preferences concerning elements and states of the environment.

This subset is far from being complete, but we consider those elements the most pressing research issues that will provide major progress in the field of cognitive robotics. Figure 1 shows a sketch relating the above-mentioned functional entities to each other. Those functional elements can in turn be related to the corresponding areas of the brain. The depicted resulting architecture is called Pisa, the Practical Intelligence Systems Architecture. It represents our current best understanding of an abstract long-term research goal. It is to be understood more
Fig. 1. Pisa: The Practical Intelligence Systems Architecture, showing the major functional elements and relating them to each other. It represents joint work with Frank Joublin and Herbert Janßen dating back to 2004. Please refer to the text for a more detailed description.
in the fashion of a strategic means rather than a concrete goal to be realized according to a master plan derived from the drawing. It is very valuable for the strategic organization of research and incremental systems architectures, because all research activities can easily be put into relation and the crucial communication between researchers evolves naturally. A major issue for most of the elements stated above is learning and adaptation, always in interaction with the real world. We consider this an important issue, because otherwise we may not ask the research questions stated above in the right way. Nevertheless, we take the liberty of ignoring technological issues like on-board computing resources for now, as long as we consider the scaling properties of the researched methods from the beginning. In the following we will focus on selected issues and created systems.
3 Task and Body Oriented Motion Control
The first focus is on movement generation. In Pisa it is located in the bottom-most section called "movements". Before we can research how to learn movements, actions and behaviors, we have to understand how to do the basic control of the body, similar to the spinal cord and the brain stem in biology. Technically, controlling the physical motions of a biped humanoid robot is not a trivial task. The number of degrees of freedom that have to be coordinated is high, the balance of the system has to be maintained under all circumstances, and it is currently possible for such robots to mechanically destroy themselves by commanding positions of the limbs that lead to self-collisions. Humans and animals, on the other hand, effortlessly control their end-effectors for solving tasks without continuously reflecting on the level of joints about their current motions. Additionally, they have some kind of body image representing the anatomy and boundaries of their own body that helps them act in complex tasks without continuously having contacts with their own body or external objects. From the constructive point of view, it seems desirable in a cognitive architecture to be able to cognitively control only the task-relevant parameters and leave the "tedious" details to underlying levels of control. This should include the avoidance of self-collisions, which is in technical systems much more disastrous than in natural ones. As a result of current research we have established a stable layer for motion control with a so-called motion interface that fulfills the requirements stated above and allows cognitive control processes to perform complex control tasks with the humanoid Asimo with minimum effort [4]. In contrast to classical joint-level control, the robot is controlled by a task-level description. The tasks can, e.g., be defined by four separate targets for the two hands, the head gaze direction and a position of the centre of mass projected on the ground, respectively. The corresponding coordinate systems are depicted in Figure 2. The coupling between the tasks and the mapping to the actually controlled joints is performed by a whole body motion controller. This controller implements a redundant control scheme considering all degrees of freedom of the robot simultaneously.
Fig. 2. Kinematics model of Asimo used for the whole body motion control
The commanded tasks usually do not determine all degrees of freedom uniquely. For the remaining degrees of freedom it is possible to state potential functions that model the task-unspecific preferences, such as closeness to rest positions and the avoidance of extreme joint positions close to the physical limits. What do we gain by pursuing such a task description and whole body control in a cognitive architecture? First of all, we have a description of tasks in a more natural way than in the joint space. For example, the right hand is commanded to a position in 3D space with a certain attitude. The necessary joint trajectories are computed automatically online, on-board the robot. Therefore, higher-level processes don't have to care about the details of the robot motion. Additionally, since the whole body including walking is employed for reaching the commanded target, the motion range is extended incrementally. Imagine a 3D position for the right hand is commanded that is not reachable by arm motions alone. The whole body motion controller first induces a leaning motion of the upper body in order to reach the target, and if this does not suffice, Asimo starts walking to finally reach the commanded target. Again, higher-level cognitive tasks still command only the 3D target position of the right hand. During those movements the appearance of the robot motion is naturally relaxed, because the redundant degrees of freedom are "softly" adapted to the requirements given by the hard task constraints. This fact can be envisioned by assuming springs between the segments of the robot: the task command would correspond to a force pulling the respective hand, with the rest of the robot's body adapting to the influence of this force. The walking is conceptually more advanced but can also be treated within the same framework. This kind of natural appearance can help solve acceptance problems with robots. Summarizing, the task space description and the whole body control approach give more freedom to the motion control level and disburden the higher levels of control. Further extensions in the same spirit are commanding task intervals instead of crisp task positions and including self-collision avoidance on the level of motion control [5]. Current research is concerned with strategy selection based on internal simulation,
allowing Asimo to autonomously choose the hand with which to grasp the commanded target. The proposed approach performs interactively with visually specified targets [6], which is in contrast to the state of the art as described in [7,8].
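In a strongly simplified kinematic form, such a redundant whole body control law can be sketched as follows. This is only a schematic illustration with placeholder functions (fwd_kin, jacobian, grad_H), not the controller actually running on Asimo:

```python
import numpy as np

def whole_body_step(q, x_target, fwd_kin, jacobian, grad_H, alpha=0.1):
    """One kinematic control step: track the task-space target with the
    primary term and sink the remaining degrees of freedom into a
    preference potential H (rest postures, joint-limit avoidance)."""
    J = jacobian(q)                        # task Jacobian at current posture
    J_pinv = np.linalg.pinv(J)
    dx = x_target - fwd_kin(q)             # task-space error
    # Primary task + nullspace descent on the preference potential H.
    dq = J_pinv @ dx - (np.eye(len(q)) - J_pinv @ J) @ (alpha * grad_H(q))
    return q + dq
```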
4 Visually and Behaviorally Oriented Learning
In the introduction we have stated that the long-term goal of this work is the creation of a humanoid robot that is equipped with mechanisms for learning and development. We now move the focus away from movement generation towards vision and the generation and exploration of visually oriented behaviors, including mechanisms for learning and development. The concrete goal here is to present an interactively behaving vision system for the humanoid that already comprises both kinds of mechanisms: autonomous developmental mechanisms governing the behavior generation and selection, and interactive learning mechanisms allowing for teaching the system new objects to be recognized online. Regarded separately, both mechanisms already represent a valuable step towards autonomous adaptive systems. But the emphasis is more on the principled combination of both. In contrast to statistical learning, we are here less concerned with representing the variability of the input space than with learning behaviorally relevant external and internal entities. For studying those issues we have created a biologically motivated interactive vision system with adaptive basic behaviors, able to learn and recognize freely presented objects in interaction. The learning is governed by mechanisms building on an internal needs dynamics, based on unspecific and specific rewards, governing and exploring the parameterization of the basic behaviors [9]. On a schematic level, the system can be described as follows (please refer to Figure 4 for a graphical representation). Based on the images from a stereo camera pair, a set of features is computed. Those features comprise the general visual "interestingness" (saliency) S_v of image locations, the most prominent region in the image based on visual motion S_m, and the most prominent region in the image based on closeness S_d to the system.
Fig. 3. Interaction with the stereo camera head. Waving attracts attention; showing objects within the peripersonal space fixes attention on the presented object. The fixated object can be learned interactively and recognized immediately.
Fig. 4. Schematics of the active vision system. See text for detailed description.
Those features comprise the general visual "interestingness" (saliency) Sv of image locations, the most prominent region in the image based on visual motion Sm, and the most prominent region in the image based on closeness Sd to the system. The information is represented as activation maps over the image locations, with high values of the maps corresponding to interesting locations. Those maps are weighted with weights wv, wm, and wd respectively, and added. Based on this combined saliency S and some memory about previous gaze directions, the new gaze direction is determined by means of an integrative peak selection with hysteresis. This simple system exhibits some interesting behaviors. It is a homogeneous control loop that is constantly executed without any structural changes. The behavioral spectrum of the system is spanned by the weighting parameters wv, wm, and wd of the maps. What kinds of behaviors can such a system show? Priming the saliency computation for a certain color like red and having a weight wv greater than zero will yield a system that gazes at locations in its environment with red color. If there is no red color in the current view, it will randomly look around and shift its field of view until it finds a spot with red color. To an external observer it looks like the system is "searching for red color", even though the system was never directly programmed to show a search behavior. Next, imagine an interacting human in the scene approaching the system and trying to raise the attention of the system (see Figure 3). Assigning a value to wm greater than wv will cause the system to look at the spot where the waving is located in the image, i.e., with such a parameter setting the system will look at regions containing visual motion rather than general visual saliency. Nevertheless, since the two channels are superimposed, they support each other in stabilizing the gaze selection. If the user continues to approach the system, the distance based map Sd will have an activation corresponding to the part of the user's body closest to the system. If this contribution receives the highest weight wd, the system will continuously focus on, e.g., the hand of the user.
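In other words, the combined saliency is the weighted sum S = wv Sv + wm Sm + wd Sd. The following is a minimal sketch, with details of our own choosing (array shapes, the decay constant of the gaze memory, the hysteresis bonus), of how such a gaze selection loop could look; it is illustrative and not the system's actual implementation:

```python
import numpy as np

def select_gaze(S_v, S_m, S_d, w, prev_gaze, memory,
                hysteresis=0.2, decay=0.9):
    """One iteration of weighted saliency combination followed by peak
    selection with hysteresis. All maps are 2D arrays of equal shape."""
    S = w["v"] * S_v + w["m"] * S_m + w["d"] * S_d
    S = S - memory                     # inhibit recently visited locations
    if prev_gaze is not None:          # hysteresis: favor the current target
        S[prev_gaze] += hysteresis * S.max()
    gaze = np.unravel_index(np.argmax(S), S.shape)
    memory = decay * memory            # decay the gaze memory over time
    memory[gaze] += 1.0                # and mark the newly selected target
    return gaze, memory
```

Note how the behavioral spectrum is spanned entirely by the weight dictionary w: setting w["m"] greater than w["v"] yields the "look at the waving" behavior described above without any structural change to the loop.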
This behavior corresponds to the biological concept of a peripersonal space around the robot. The psychological concept of the peripersonal space is defined as the space wherein individuals manipulate objects, whereas extrapersonal space, which extends beyond the peripersonal space, is defined as the portion of space relevant for locomotion and orienting [10]. Here, the peripersonal space establishes a very important precondition for any further meaningful interaction: sharing the "attention" between the system and the user. An interacting user can show something to the system, and the system will focus on the shown entity. Based on this capability we have addressed several new scientific concepts. The first one is the online learning of complex objects freely presented to the system. The object within the peripersonal space is segmented from the image and processed by a biologically motivated visual object recognition and learning system [11]. Two different memory stages and speech interaction serve to continuously learn and label objects in real-time, allowing for online correction of errors during learning. Secondly, we have introduced an internal homeostatic control system representing internal drives, which allows the weights of the different maps to be learned. This corresponds to learning the visual interaction behaviors instead of working with hard-coded weights. The third concept is the combination of the previous two: the system learns to interact with the human user in such a way that its internal needs, including the curiosity for learning new objects, are equally satisfied on temporal average. This represents a new quality in interactively learning systems. The implementation is currently limited to controlling the gaze direction of a head, but the concepts are sufficiently general to allow for interactively learned behaviors including manipulators; this is currently being investigated. Research aiming in a similar direction can be found in, e.g., [12]. We would like to point out that the system described above is internally not organized in terms and structures of externally visible behaviors. There is no "tracking" or "interaction" module within the system, even if those terms can be attributed to externally observable behaviors. In our opinion it is crucial to make this distinction, because a system organized in terms of externally observable entities will by definition be confined to the predefined set of behaviors put into the system, and no self-driven cognitive development will be possible. Phrased differently, the mechanisms generating certain functions should clearly be distinguished from the semantics that can be associated with them. In the next section this point will be further elaborated. The research elements presented here are located in the Pisa areas for visual perception, behavior generation and needs.
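The homeostatic weight adaptation mentioned above can be caricatured as follows. This is our own illustrative formulation, not the published model [9]: each drive accumulates a deficit over time, rewards reduce the corresponding deficit, and the map weights are biased toward the behavior serving the currently most deficient need, so that all needs are satisfied equally on temporal average.

```python
import numpy as np

class NeedsDynamics:
    """Toy homeostatic dynamics: deficits grow over time, are reduced by
    rewards, and determine the saliency weights of the basic behaviors."""
    def __init__(self, names, growth=0.01):
        self.names = list(names)                 # e.g. ["v", "m", "d"]
        self.deficit = {n: 0.0 for n in self.names}
        self.growth = growth

    def step(self, rewards):
        """rewards: dict mapping need name -> reward obtained this step."""
        for n in self.names:
            self.deficit[n] = max(self.deficit[n] + self.growth
                                  - rewards.get(n, 0.0), 0.0)
        # softmax over deficits: the most deficient need dominates the
        # weighting of the corresponding behavior
        d = np.array([self.deficit[n] for n in self.names])
        w = np.exp(d) / np.exp(d).sum()
        return dict(zip(self.names, w))
```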
5 Alis: Autonomous Learning and Interacting System
The concrete system considered in this section is called Alis, an acronym for “Autonomous Learning and Interacting System”. It is our current design of an
incremental hierarchical control system for the humanoid robot Asimo comprising several sensing and control elements. Those elements are visual saliency computation and gaze selection, auditory source localization for providing information on the most prominent auditory signals, a visual proto-object based fixation and short term memory of the current visual field of view, the online learning of visual appearances of such proto-objects, and an interaction-oriented control of the humanoid body. The whole system interacts in real-time with users. It builds on and extends incrementally the research presented in the previous two sections, i.e., the motion generation and control as well as the visual behaviors. The corresponding original publication can be found in [13]. The focus is not on single functional elements of the system but rather on its overall organization and key properties of the architecture. We will describe the architecture by means of a conceptual framework that we developed. The clear focus of this framework is to have a general but not arbitrary means for describing incremental architectures, focusing on the hierarchical organization and on the relations and communication between hierarchically arranged units when they are being created layer by layer. We are convinced that researching more complex intelligent systems without such a framework is infeasible. To our knowledge, Alis represents the first system integrated with a full size biped humanoid robot that interacts freely with a human user, including walking and non-preprogrammed whole body motions, in addition to learning and recognizing visually defined object appearances and generating corresponding behaviors. Our architectural concepts point in a similar direction as presented in [14], where a subpart of the mammalian brain has been modeled, by way of example, as a hierarchical architecture. We share the view that such hierarchical organizations are promising for modeling biological brains. We go beyond the arguments presented there by explicitly considering the internal representation and the dependencies in the sensory and behavioral spaces. This is the main difference from classical subsumption-like architectures as summarized in [15]. The approach we pursue is incremental w.r.t. the overall architecture, which goes beyond an incremental local addition of new capabilities within already existing layers. This is the main difference from the state of the art in comprehensive humanoid control architectures including learning as presented in [16], [17,18] and [19]. A similar reasoning applies to the comparison to classical three-layer architectures [20]. The hierarchies we are considering are not fixed to the common categories of deliberation, sequencing and control. In the next subsection (5.1), we will formulate the framework and discuss the biological motivation. Subsequently, we will present the realized system in more detail. In subsection 5.4 we will report on experiments performed in interaction with the system.
5.1 Systematica
We call the framework “Systematica”. It was devised for describing incremental hierarchical control architectures in a homogeneous and abstract way. Here,
Fig. 5. Schematics of Systematica
we will introduce the notation that is necessary for making the points about the concrete system instance presented in this contribution. One future target of our research is the comparative study of different kinds of hierarchical control architectures by means of the presented framework. Each identifiable processing unit or loop n is described by the following features (see Figure 5 for reference):
– it may process independently from all the other units;
– it has an internal process or dynamics Dn;
– its full input space X is spanned by exteroception and proprioception;
– it can create some system-wide publicly accessible representations Rn used by itself and other units within the system. The indices may be extended in order to denote the units that are reading from the representation, e.g., Rn;m,o,... means that representation Rn is read by units m and o;
– it may use a subspace Sn(X) of the complete input space X as well as the representations R1, . . . , Rn−1;
– it can be modulated by top-down information Tm,n for m > n;
– it can send top-down information / modulation Tn,l for n > l;
– it may emit autonomously some behaviors on the behavior space Bn by issuing motor commands Mn with weight / priority Pn and / or by providing top-down modulation Tn,l;
– the value of the priority Pn is not necessarily coupled to the level n; see for example underlying stabilizing processes like balance control;
– a unit n can choose to work solely based on the input space X without other representations Rm≠n;
– the coupling between the units is such that the behavioral space covered by the system is ⊗n Bn, denoting the vector product or direct sum of the individual behavior spaces;
– the behaviors Bn may have different semantics Zj depending on the current situation or context Ci, i.e., the behaviors Bn represent skills or actions from the system's point of view rather than observer dependent quantities;
– the motor commands of different units may be compatible or incompatible. In the case of concurrently commanded incompatible motor commands, a conflict resolution decides based on the priorities;
– all entities describing a unit may be time dependent.
The index n represents the index of creation in an incremental system. Therefore, units with a lower index n cannot observe the representations Rm of units with a higher index m. The combination and conflict resolution is not to be understood as the primary instance for such cases but rather as the last resort. Conflicts and combinations must be treated as major issues between and inside of the units of the architecture, e.g., according to the biological principles of inhibition and disinhibition. The sensory space Sn(X) can be split into several aspects for clearer reference. The aspects that are concerned with the location of the corresponding entity in the world are termed SnL(X), and the features are termed SnF(X). Correspondingly, the behavior space Bn can be split into parts concerned with the potential location of the actions (termed BnL) and the qualitative skills or motions (termed BnS). We use the term behavior in the sense of an externally observable state change of the system. This comprises actions and motions as well as speech and communication. The behavior space BnS is spanned by the effective degrees of freedom or order parameters of the dynamical system Dn of the unit. In a wider sense, it is spanned by the parameters that govern changes in the stereotypical actions controlled by the respective unit. The presented framework allows us to characterize the architecture of such systems with respect to the following issue: find a system decomposition or a procedure to decompose or construct units n consisting of Sn(X), Dn, Bn, Rn, Mn, Pn, Tm,n such that
– an incremental and learning system can be built;
– the system is always able to act, even if the level of performance may vary;
– lower level units n provide representations and decompositions that
• are suited to show a certain behavior at level n,
• are suited to serve as auxiliary decompositions for higher levels m > n, i.e., make the situation treatable for others and provide an "internal platform" so that higher levels can learn to treat the situation.
In our understanding, a necessary condition for achieving the abovementioned system properties is a hierarchical arrangement of the sensory and behavioral subspaces, the representations and the top-down information. Another crucial aspect is the separation of behaviors from the semantics of the behaviors in a certain context. We will discuss this aspect in more detail in subsection 5.3. Due to space limitations we forbear from a further in-depth mathematical definition and treatment of the presented terms. The concrete system presented in subsection 5.3 should elucidate the underlying concepts in a graspable fashion.
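To make the notation concrete, the following is a minimal sketch (our own illustrative rendering, not a normative definition) of a Systematica unit. The class and field names are assumptions chosen for the example; the two constraints it encodes are that a unit n may only read representations Rm of units created earlier (m < n), and that top-down modulation Tn,l flows only from higher to lower units (n > l).

```python
from dataclasses import dataclass, field
from typing import Any, Callable, Dict

@dataclass
class Unit:
    n: int                                   # index of creation
    dynamics: Callable[..., Any]             # internal process D_n
    sensory_subspace: Callable[[Any], Any]   # S_n(X), selected from full X
    priority: float = 0.0                    # P_n of emitted motor commands
    representation: Any = None               # R_n, publicly readable
    top_down_in: Dict[int, Any] = field(default_factory=dict)  # T_{m,n}, m > n

    def read(self, units: Dict[int, "Unit"], m: int) -> Any:
        """Read R_m; only units created earlier are observable."""
        if m >= self.n:
            raise ValueError(f"unit {self.n} cannot observe R_{m}")
        return units[m].representation

    def send_top_down(self, units: Dict[int, "Unit"], l: int, info: Any) -> None:
        """Send top-down modulation T_{n,l}; only to lower units (n > l)."""
        if l >= self.n:
            raise ValueError("T_{n,l} requires n > l")
        units[l].top_down_in[self.n] = info

    def step(self, X: Any, units: Dict[int, "Unit"]) -> Any:
        """One update: process the sensory subspace plus pending top-down
        information, update R_n, and possibly return a motor command M_n
        together with the priority P_n."""
        s = self.sensory_subspace(X)
        return self.dynamics(s, self.top_down_in, self)
```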
5.2 Biological Embedding of Systematica
If the goal is to research brain-like intelligent systems, the creation of a fixed hierarchy with units stacked on top of each other is not sufficient: the interplay of the units is the crucial issue. In the classical subsumption paradigm, the interplay within a hierarchy is modeled as inhibition of sensory signals and motor commands. We argue that a deeper communication between the units is biologically more plausible and beneficial, because it is more efficient in terms of (re-)using already established representations and processes. The biological motivation of a sensory space X that is in principle accessible to all levels of the hierarchy has already been discussed in [14]. The individual subspaces Sn(X) may of course differ. The same applies to the direct access from higher levels of the hierarchy to the motors and actuators, with additional evidence given in [21]. This may not correspond to the predominant signal flows, but it is in some cases necessary for the acquisition of completely new motions. The difference between lower and higher levels is mainly that lower levels act on a coarser level of the sensory signals and do not allow for a fine control of actuators. A very fine analysis of sensory signals and a correspondingly fine control of, e.g., finger motions is handled by cortical and not sub-cortical regions of the brain [22]. What is mainly not addressed in the technical literature is the synergistic interplay of the different levels of the hierarchy. The main issues are the following: a) Underlying control processes in the brain perform a basic stabilization and allow higher areas to modulate those stabilizations according to some goals. This is, e.g., the case for the balance and upright standing of the human body, which is maintained by the brain stem (midbrain, hindbrain and medulla oblongata) [23]. The higher areas in the brain rely on those functional loops. b) Specific structures in the brain maintain representations Rn for their own purposes, but those representations are also observed and used by areas created later in evolution. This is, e.g., the case for the superior colliculus: the target for the next gaze direction is observed by the cortex [24]. A similar reasoning applies to the area AIP, where coarse information about graspable objects is maintained, which is observed by the premotor cortex and used for configuring and setting targets for the motor cortex [25]. c) Lower level structures can autonomously perform certain actions but can be modulated by higher level structures via top-down information (Tn,m). An example is again the superior colliculus. In reptiles it directly controls sensory
based behaviors as the highest level of control. In humans, it can control the gaze direction based on visual and auditory signals if "permitted" by the cortex. If the cortex is damaged, the superior colliculus can take over control again. The presented Systematica serves to organize this kind of incremental design in such a way that the resulting complexity and cross dependencies remain treatable. Compared to so-called cognitively oriented architectures, the approach presented here is decentralized with respect to the processes and representations involved. The incremental direction is here to be understood in a developmental sense, with a number of levels, less as incrementally adding more functionality at already existing levels in the system.
5.3 Alis: Architecture and Elements
Based on Systematica, we will now describe the architecture and elements of Alis. Alis represents an incrementally integrated system including visual and auditory saliency, proto-object based vision and interactive learning, object dependent autonomous behavior generation, whole body motion and self collision avoidance on the humanoid robot Asimo. The elements of the overall architecture are arranged in hierarchical units that produce the overall observable behavior (see Figure 6). The corresponding areas from Pisa are auditory and visual perception, presence, movements and behaviors.
Fig. 6. Schematics of Alis formulated in the framework Systematica. For explanations please refer to subsection 5.3.
The first unit with dynamics D1 is the whole body motion control of the robot, including a basic conflict resolution for different target commands and a self collision avoidance of the robot's body, as described in section 3. It receives the current robot posture as sensory data. The top-down information Tn,1 that can be provided to the unit takes the form of targets for the right and left hand respectively, the head, and the walking. Any other unit can provide such targets. Without top-down information, the robot stands in a rest position with a predefined posture at a predefined position. The posture and the position are controlled, i.e., if the top-down information is switched off, the robot walks back to the predefined home position while compensating for external disturbances. The behavior subspace B1 comprises target reaching motions including the whole body while avoiding self collisions. The subspace B1S is spanned by variables controlling the choice of the respective actuator group: mainly the gaze, the hands and the body's position and orientation in 3D. The subspace B1L comprises the area that is covered by walking and that can be reached by both hands. Many different kinds of semantics Zj can be attributed to those motions, like "pointing", "pushing", "poking" and "approaching". The representation R1 used and provided is a copy of the overall posture of the robot. Unit 1 provides motor commands M1 to the different joints of the robot and establishes the body control level that many other units can incrementally build upon. It unloads much of the tedious control from higher level units. The second unit with D2 comprises a visual saliency computation based on contrast measures for different cues, and a gaze selection, as partially described in section 4. Based on the incoming image, visually salient locations in the current field of view are computed and fixated by providing gaze target positions as top-down information T2,1 to unit 1. The spatial component S2L(X) of the sensory space comprises the field of view covered by the cameras. The representation R2 comprises the saliency maps, their modulations and the corresponding weights. As top-down information Tn,2, the modulations and the corresponding weights can be set. Depending on this information, different kinds of semantics Zj like "visual search", "visual explore" and "fixate" can be attributed to the behavior space B2 emitted by this unit. The subspace B2S is spanned by the weights of the different cues, the time constant of the fixation and the time constant for inhibition of return, as described in [26]. The unit performs an autonomous gaze control that can be modulated by top-down information. It builds on unit 1 in order to employ the whole body for achieving the commanded gaze direction. The unit with D3 computes an auditory localization or saliency map R3. It is provided as top-down information T3,2 to unit 2, where the auditory component is weighted higher than the visual one. The behavior space B3 comprises the fixation of prominent auditory stimuli, which could semantically be interpreted as "fixating a person that is calling the robot". The space is spanned by the weight balancing the auditory versus the visual saliency maps. The sensory space S3F(X) is spanned by binaural time series; the spatial component S3L(X) is the area all around the robot. The corresponding auditory processing is described in [27]. Unit 3 builds on and employs the gaze selection mechanism of unit 2. The combination of units 2 and 3 corresponds to an autonomous gaze selection based on visually and auditorily salient stimuli.
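Expressed in the Unit sketch from subsection 5.1, the coupling between units 1 and 2 just described could be wired as follows. This is illustrative only; all function names and data layouts are assumptions:

```python
# Illustrative wiring of unit 2 (saliency and gaze selection) on top of
# unit 1 (whole body motion control), reusing the Unit sketch from 5.1.

def d1_whole_body(sensors, top_down, self_unit):
    """D1: resolve the currently pending targets into whole body motion
    (details omitted) and publish the posture copy as R1."""
    self_unit.representation = sensors["posture"]       # R1
    return ("joint_motion_towards", list(top_down.values())), self_unit.priority

def d2_saliency(sensors, top_down, self_unit):
    """D2: combine the saliency maps (modulations may arrive as T_{n,2})
    and emit the selected gaze target as top-down information T_{2,1}."""
    self_unit.representation = sensors["maps"]          # R2: maps and weights
    return ("gaze_target", sensors["saliency_peak"]), self_unit.priority

units = {
    1: Unit(n=1, dynamics=d1_whole_body,
            sensory_subspace=lambda X: X["proprioception"]),
    2: Unit(n=2, dynamics=d2_saliency,
            sensory_subspace=lambda X: X["vision"]),
}
units[2].send_top_down(units, 1, ("gaze", (0.4, 0.1, 1.2)))  # T_{2,1}
```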
Unit 4 extracts proto-objects from the current visual scene and performs a temporal stabilization of those in a short term memory (PO-STM). The computation of the proto-objects is purely based on depth and the peripersonal space (see below), i.e., S4L(X) is a range limited subpart of S2L(X). The PO-STM and the information about which proto-object is currently selected and fixated form the representation R4. The top-down information T4,1 provided to unit 1 consists of gaze targets with a higher priority than the visual gaze selection, yielding as behaviors B4 the fixation of proto-objects in the current view. The unit accepts top-down information Tn,4 for deselecting the currently fixated proto-object or for directly selecting a specific proto-object. The concept of the proto-object as we employ it for behavior generation is explained in more detail in [6]. The main difference between the approach described there and this one is the extraction of the proto-objects from the scene. Here we are extracting descriptions of approximately convex three dimensional blobs within a certain distance range from the robot. As introduced in section 4, we call this range the peripersonal space. The combination of units 1-4 autonomously realizes the framework for the interaction with the robot. Seen from the robot's point of view, the "far-field" interaction is governed by the visual and auditory saliency and gaze selection computations. The close-to-the-body or peripersonal interaction is governed by the proto-object fixation. Those processes run continuously without an explicit task and take over control depending on the location of the interaction w.r.t. the robot's body. Unit 5 is based on the incrementally established interaction framework. It performs a visual recognition or interactive learning of the currently fixated proto-object, without controlling the robot itself. The sensory input space S5L(X) is the same as S4L(X); the feature space S5F(X) is the full color image and the corresponding depth map. The unit relies on the representation R4 for extracting the corresponding sub-part of the information from S5(X). The three-dimensional information of the currently fixated proto-object is used to extract the corresponding segment from the high resolution color image space. The segments are classified w.r.t. the object identity O-ID. For newly learned objects, the target identity has to be provided as top-down information Tn,5. The representation R5 is the object identity O-ID of the currently fixated proto-object. The motor commands M5 emitted by the unit are speech labels corresponding to the object identity. The unit described here corresponds mainly to our work described in [9,28] and section 4. The object identity O-ID is the first instance of fixed semantics, since we use user-specified labels like "blue cup" or "toy car". From the incremental architecture point of view, we now have a system that additionally classifies or learns the objects it is currently fixating.
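The depth-based proto-object extraction of unit 4 can be sketched as follows; this is an illustration under assumptions of our own (a dense depth map as input, a fixed peripersonal range, connected-component labeling via scipy), not the implementation used on the robot (see [6]):

```python
import numpy as np
from scipy import ndimage

def extract_proto_objects(depth, near=0.3, far=1.2, min_pixels=200):
    """Extract blob-like proto-objects from a depth map: keep only pixels
    within the peripersonal distance range, label connected regions, and
    return a coarse 3D description (image centroid, mean depth) for each."""
    mask = (depth > near) & (depth < far)     # peripersonal range [m]
    labels, count = ndimage.label(mask)       # connected components
    proto_objects = []
    for k in range(1, count + 1):
        region = labels == k
        if region.sum() < min_pixels:         # discard noise blobs
            continue
        cy, cx = ndimage.center_of_mass(region)
        proto_objects.append({"centroid_px": (cx, cy),
                              "mean_depth": float(depth[region].mean()),
                              "size_px": int(region.sum())})
    return proto_objects
```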
Unit 6 performs an association of the representations R4 and R5, i.e., it maintains an association R6 between the PO-STM and the O-IDs, based on the identifier of the currently selected proto-object. This representation can provide the identity of all classified proto-objects in the current view. Except for the representations, it has no other inputs or outputs. From the incremental point of view, we now have an additional memory of all classified proto-objects in the current view. Unit 7 with D7 builds on the sensory processing and control capabilities of many of the underlying units. It governs the control of the robot's body except for the gaze direction. This is achieved by deriving targets from the proto-object representation R4 and sending them as top-down information T7,1 for the right and the left hand as well as for walking to unit 1. Additional top-down information T7,4 can be sent to the proto-object fixating unit 4 to request the selection of another proto-object. Details of the internal dynamics D7 can be found in [29]. Here, it is based on the evaluation of the current scene as represented by R4 (proto-object short term memory) and R6 (association of object identifiers and proto-object identifiers) and on the top-down information Tn,7 concerning the current assignment. An assignment is an identifier for a global mode of the internal dynamics of unit 7. The first realized assignment (A1) is pointing once, with the most appropriate hand or with both hands, at the fixated and classified proto-object. The second assignment (A2) differs from the first one in that the pointing is continuous and immediate, at the fixated and not yet classified proto-object. Whether the pointing is done using a single hand or both arms depends on the currently (arbitrarily defined) category of the classified object: both-handed pointing for toys, single-handed pointing for non-toys. The definition is currently associated with the labels of the objects. During both assignments, the distance to the currently fixated proto-object can be adjusted autonomously by walking. Additionally, the autonomous selection of a new proto-object is requested from unit 4 via T7,4 if the currently fixated one has been classified successfully two times. This allows for a first autonomous scene exploration. The third assignment (A3) is pointing with each hand at a proto-object, irrespective of the classification result and without walking. The behavioral space spanned by this unit is a subspace or sub-manifold of B1. The semantics of the behaviors are currently fixed by design, like "both-handed pointing at toys". From the incremental design point of view, unit 7 is a thin layer controlling different kinds of interaction semantics for the body, based on the sensory processing and control capabilities provided by the underlying system. The last unit, 8, works on another audio stream S8(X) and processes speech input. The results are currently provided as object labels for the recognizer (T8,5) and as assignments for unit 7 (T8,7). It serves to establish verbal interaction with the user in the current setting. In summary, the presented system consists of several independently defined units that build on each other in an incremental way to yield the combined performance. Due to the incremental nature of the architecture, the units can be implemented, tested and integrated one after the other, which is an important means for dealing with the increasing complexity of the targeted system.
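Since several units may command the same effectors concurrently (for instance, T2,1, T4,1 and T7,1 all produce targets for unit 1), incompatible commands are resolved by priority as a last resort. A minimal sketch of such a resolution step follows, with the per-actuator grouping and the example values being our own assumptions:

```python
def resolve_motor_commands(commands):
    """commands: list of (actuator, command, priority) tuples emitted by
    different units in the same control cycle. Compatible commands (for
    different actuators) pass through; for each actuator, only the
    command with the highest priority wins."""
    winners = {}
    for actuator, command, priority in commands:
        if actuator not in winners or priority > winners[actuator][1]:
            winners[actuator] = (command, priority)
    return {a: cmd for a, (cmd, _) in winners.items()}

# Example: the proto-object fixation (unit 4) overrides the visual
# saliency gaze target (unit 2); the hand command is unaffected.
commands = [
    ("gaze",   {"target": "salient_spot"}, 2.0),   # T_{2,1}
    ("gaze",   {"target": "proto_object"}, 4.0),   # T_{4,1}
    ("r_hand", {"target": "point_at_po"},  7.0),   # T_{7,1}
]
print(resolve_motor_commands(commands))
```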
The described system, except for some parts of unit 1, is implemented in our framework for distributed real-time applications [30] and runs at 10 Hz for the command generation in interaction. The implementation consists of 288 processing components. The workload is distributed across 10 standard CPUs in 6 computers without any further optimization.
5.4 Experiments
Users can freely interact with the running Alis; the behavior of the system is governed mainly by the interaction. Figure 7 shows the measurements of a recorded experiment. The bottommost graph shows the measured minimal distance between the arms, because the self collision of the arms constitutes the highest risk in this experiment. The next higher graph depicts which of the possible top-down feedbacks T4,1 (proto-fixation), T3,2 (auditory saliency) or T2,1 (visual saliency) is controlling the gaze direction. The graph with the label "activity T7,4" shows the occurrence of requests for fixating a new proto-object by the proto-fixation unit 4. The graphs with the labels "L hand activity (T7,1)", "R hand activity (T7,1)" and "leg activity (T7,1)" depict the active control of the respective effector group by unit 7. The topmost graph with the label T8,7 shows the currently valid assignment, namely A1, A2 and A3 in sequence. The following time course is shown in Figure 7. From the beginning until second 32, Asimo is mainly interacting with its environment by gazing at far-distance visual and auditory stimuli. Beginning with second 32, the user presents
Fig. 7. Measurements from the interactive experiment. In the time range from sec. 0 until sec. 32, Alis is mainly driven by saliency based interaction with the world. From sec. 32 until sec. 47 the human is presenting a known object; from sec. 54 until sec. 81 the system is learning an unknown object. From sec. 86 until sec. 118 two objects are presented by the human, sequentially attended and recognized. From sec. 135 on, two objects are presented by the human and continuously pointed at by the robot. Please refer to subsection 5.4 for further explanations.
Fig. 8. Image series from the interactive experiment. From top left row-wise to bottom right. Rest position (sec. 6), saliency based interaction (sec. 21), proto-object fixation (sec. 32), fixation and both handed pointing after recognition (sec. 39), learning of a new object (sec. 76), fixation and pointing to first object of two (sec. 93), fixation and pointing to second object of two (sec. 100), return to rest position (sec. 121), pointing to two proto-objects (sec. 143). For further explanation see subsection 5.4.
an object in the peripersonal space, which is immediately fixated by means of the control of unit 4. At second 37, the object is successfully recognized as a "toy tiger" and pointed at once with both hands, since it belongs to the category "toys". After pointing, the object is still fixated and the distance is adjusted by walking until second 47. After the human terminates the close interaction, Asimo autonomously returns to the rest position. At second 52 the assignment is switched to A2, and starting with second 54 Asimo fixates and continuously points at the presented proto-object. It is unknown and is learned in interaction as "cell phone" until second 81, when Asimo returns to the home position. At second 86 the previously trained "cell phone" is presented together with the "toy tiger". The cell phone is fixated and pointed at, and successfully recognized at second 91. At second 98 it is successfully recognized for the second time, and the fixation of a new proto-object is requested from unit 4 by unit 7 via the activity of T7,4. At second 105 the toy tiger is first misclassified, but subsequently recognized at second 111 and second 117. At second 127 the assignment is changed to A3, and at second 135 Asimo starts pointing at two objects with both hands. The user tries to force a self collision by crossing the arms with the fixated proto-objects until the arms touch each other. This is depicted in the arm distance plot, which
comes close to the limit of a self collision but never reaches it. The self collision is prevented by the continuously running self collision avoidance of unit 1. After the termination of the close interaction, Asimo returns to the rest posture. Figure 8 shows some snapshots from the running experiment. The sequence of the interaction is just an example; the resulting behavior as well as all motions of the robot are computed online and depend on the interaction of the user with the robot.
5.5 Discussion
After the presentation of the conceptual framework (Systematica), the instance (Alis) and the experiments, we would like to point out some of the key features.
– Units run autonomously and in parallel, without explicit synchronization mechanisms. The undirected publication of the representations Rn and the directed top-down information Tn,m establish a data-driven way of synchronization depending on activity.
– The top-down information flow is not restricted to the communication between two adjacent layers but can project from any higher to any lower level.
– Unit 1 provides the basis for higher level units to control the robot's hands, head and step positions, including the avoidance of self collisions. It "unloads" a lot of detailed knowledge about the robot's kinematics from the higher level units. This kind of unloading allows for an easier incremental design and development of the system.
– The space S3L(X) covered by the auditory saliency is the largest one: it includes the space S2L(X) covered by the visual saliency, which again includes the sensory space S4L(X) of the current implementation of the peripersonal space. The arrangement of these spaces and the corresponding behavior spaces serves as the basis for getting and staying in interaction with the system.
– The lower level units are to a large extent free of specific semantics. Higher-level units like 5 and 7 temporarily define the semantics for the lower units.
– The same physical entity can be represented / perceived by different sensory spaces. The proto-object extraction of unit 4 is based on grey value stereo image pairs at a low resolution for extracting the three-dimensional information. The visual recognition of unit 5 is based on a high resolution color image segment. The segment is extracted from this color image based on the information from the currently fixated proto-object, at the time of the classification, not at the time of the extraction of the proto-object. Based on this arrangement, the classifier can easily be combined with the proto-object fixation loop. The feature part of the sensory space of unit 4 is more coarsely resolved than the feature space of unit 5.
– The location part of the behavior space of one unit may dynamically extend the location part of the sensory space of another unit. This is, for example, the case for the peripersonal space S4L(X), which is dynamically extended by unit 7 adjusting the distance.
The presented system already has a certain complexity and shows some important features, but the question of scalability has to be addressed. Alis is already working in the real world in real-time interaction, which covers the aspect of scaling, i.e., bringing a concept to the real world. Asking about the scalability to more complex and prospective behaviors is a crucial point. We are confident that we are on the right track for the following reasons: each of the hierarchical layers individually already performs some meaningful behavior, and some of them additionally serve as building blocks for more complex systems. This is facilitated via the coupling of the units by the publicly observable representations and directed top-down information, which for us is a key issue in successful scaling. A looser argument for now, but subject to current research, is the following: biology seems to have taken a similar route in evolving the brains of animals towards the brains of humans, by phylogenetically adding structures on top of existing structures and perhaps mildly changing the existing structures. The communication between the "older" and the "newer" structures can be seen as providing existing representations and sending top-down information from the "newer" structures to the "older" ones. Does the presented approach scale in the direction of learning and development? We consider the visual object learning as a successful start in this direction. Nevertheless, the step towards learning is currently done only on the perceptive side. The learning on the behavior generation side is not explicitly addressed here, but in section 4 and [31] we showed our approach towards using general developmental principles for the adaptation of reactive behaviors. Transferring this work into the presented architecture would formally require the addition of another unit and some changes in existing ones. This argument is of course made irrespective of the many open scientific questions involved in actually taking this step, because the system considered in [31] is considerably simpler than the one discussed here. Nonetheless, it makes us confident about the scalability of the proposed architecture. Summarizing this section, we have presented the conceptual framework Systematica for describing and designing incremental hierarchical behavior generation systems. A framework like this is crucial for researching more complex intelligent systems. On the one hand, it provides the concepts for handling the growing complexity; on the other hand, it establishes a necessary common language for the collaboration of several researchers. Within this framework we have created the system Alis, integrated with Asimo. Alis allows for the first time the free interaction of a human with a full size biped humanoid, including non-preprogrammed whole body motions, interactive behavior generation, visual recognition and learning.
6 Summary
In this paper we have presented research concerned with elements and systems aiming at embodied brain-like intelligence and cognitive robotics. We started by presenting our guiding model Pisa, followed by sections about movement generation and control as well as visually oriented behavior generation and learning.
The last section contains a view on Systematica and Alis, showing our research in the area of large scale intelligent systems. The work presented here should show that we are researching and creating in an incremental and holistic fashion, leading to a better understanding of natural and artificial brain-like systems.
Acknowledgments
The authors would like to thank Michael Gienger, Herbert Janßen, Hisashi Sugiura, Inna Mikhailova, Bram Bolder, Mark Dunn, Heiko Wersing, Stephan Kirstein, Julian Eggert, Antonello Ceravola, Frank Joublin and Edgar Körner for their contributions, support and advice.
References
1. Pirjanian, P.: Behavior coordination mechanisms – state-of-the-art. Technical Report IRIS-99-375, Institute of Robotics and Intelligent Systems, School of Engineering, University of Southern California (1999)
2. Vernon, D., Metta, G., Sandini, G.: A survey of artificial cognitive systems: Implications for the autonomous development of mental capabilities in computational agents. IEEE Transactions on Evolutionary Computation 11(2), 151–180 (2007)
3. Kandel, E.R., Schwartz, J.H., Jessell, T.M.: Principles of Neural Science, 4th edn. McGraw-Hill, New York (2000)
4. Gienger, M., Janßen, H., Goerick, C.: Task-oriented whole body motion for humanoid robots. In: IEEE/RSJ Int. Conf. on Humanoid Robots (2005)
5. Sugiura, H., Gienger, M., Janßen, H., Goerick, C.: Real-time self collision avoidance for humanoids by means of nullspace criteria and task intervals. In: IEEE-RAS Int. Conf. on Humanoid Robots. IEEE Press, Los Alamitos (2006)
6. Bolder, B., Dunn, M., Gienger, M., Janßen, H., Sugiura, H., Goerick, C.: Visually guided whole body interaction. In: IEEE Int. Conf. on Robotics and Automation (2007)
7. Nishiwaki, K., Kuga, M., Kagami, S., Inaba, M., Inoue, H.: Whole-body cooperative balanced motion generation for reaching. In: IEEE/RSJ Int. Conf. on Humanoid Robots (2004)
8. Sian, N., Yokoi, K., Kajita, S., Tanie, K.: A framework for remote execution of whole body motions for humanoid robots. In: IEEE/RSJ Int. Conf. on Humanoid Robots (2004)
9. Goerick, C., Mikhailova, I., Wersing, H., Kirstein, S.: Biologically motivated visual behaviors for humanoids: Learning to interact and learning in interaction. In: IEEE/RSJ Int. Conf. on Humanoid Robots (2006)
10. Couyoumdjian, A., Di Nocera, F., Ferlazzo, F.: Functional representation of 3D space in endogenous attention shifts. The Quarterly Journal of Experimental Psychology 56A(1), 155–183 (2003)
11. Wersing, H., Kirstein, S., Götting, M., Brandl, H., Dunn, M., Mikhailova, I., Goerick, C., Steil, J.J., Ritter, H., Körner, E.: A biologically motivated system for unconstrained online learning of visual objects. In: Kollias, S.D., Stafylopatis, A., Duch, W., Oja, E. (eds.) ICANN 2006. LNCS, vol. 4132, pp. 508–517. Springer, Heidelberg (2006)
12. Ude, A., Cheng, G.: Object recognition on humanoids with foveated vision. In: IEEE/RSJ Int. Conf. on Humanoid Robots (2004)
13. Goerick, C., Bolder, B., Janßen, H., Gienger, M., Sugiura, H., Dunn, M., Mikhailova, I., Rodemann, T., Wersing, H., Kirstein, S.: Towards incremental hierarchical behavior generation for humanoids. In: Proceedings of the IEEE/RSJ International Conference on Humanoid Robots (Humanoids 2007), Pittsburgh, USA (2007)
14. Prescott, T.J., Redgrave, P., Gurney, K.: Layered control architectures in robots and vertebrates. Adaptive Behavior 7, 99–127 (1999)
15. Pfeifer, R., Scheier, C.: Understanding Intelligence. MIT Press, Cambridge (1999)
16. Brock, O., Fagg, A., Grupen, R., Platt, R., Rosenstein, M., Sweeney, J.: A framework for learning and control in intelligent humanoid robots. International Journal of Humanoid Robotics 2(3) (2005)
17. Arkin, R.C., Fujita, M., Takagi, T., Hasegawa, R.: An ethological and emotional basis for human-robot interaction. Robotics and Autonomous Systems (3-4), 191–201 (2003)
18. Chernova, S., Arkin, R.C.: From deliberative to routine behaviors: A cognitively inspired action-selection mechanism for routine behavior capture. Adaptive Behavior 15(2), 199–216 (2007)
19. Asfour, T., Regenstein, K., Azad, P., Schröder, J., Bierbaum, A., Vahrenkamp, N., Dillmann, R.: ARMAR-III: An integrated humanoid platform for sensory-motor control. In: Proceedings of the IEEE/RSJ International Conference on Humanoid Robots (Humanoids 2006), Genoa, Italy (2006)
20. Gat, E.: On three-layer architectures. In: Artificial Intelligence and Mobile Robots. MIT/AAAI Press (1997)
21. Chouinard, P.A., Paus, T.: The primary motor and premotor areas of the human cerebral cortex. The Neuroscientist 12(2), 143–152 (2006)
22. Swanson, L.W.: Brain Architecture: Understanding the Basic Plan. Oxford University Press, Oxford (2002)
23. Purves, D., Augustine, G.J., Fitzpatrick, D., Hall, W.C., LaMantia, A.-S., McNamara, J.O., Williams, S.M. (eds.): Neuroscience. Sinauer Associates (2004)
24. Sommer, M.A., Wurtz, R.H.: What the brain stem tells the frontal cortex. I. Oculomotor signals sent from superior colliculus to frontal eye field via mediodorsal thalamus. Journal of Neurophysiology 91, 1381–1402 (2004)
25. Battaglia-Mayer, A., Caminiti, R., Lacquaniti, F., Zago, M.: Multiple levels of representation of reaching in the parieto-frontal network. Cerebral Cortex 13, 1009–1022 (2003)
26. Goerick, C., Wersing, H., Mikhailova, I., Dunn, M.: Peripersonal space and object recognition for humanoids. In: Proceedings of the IEEE/RSJ International Conference on Humanoid Robots (Humanoids 2005), Tsukuba, Japan (2005)
27. Rodemann, T., Heckmann, M., Schölling, B., Joublin, F., Goerick, C.: Real-time sound localization with a binaural head-system using a biologically-inspired cue-triple mapping. In: Proceedings of the International Conference on Intelligent Robots & Systems (IROS). IEEE, Los Alamitos (2006)
28. Wersing, H., Kirstein, S., Goetting, M., Brandl, H., Dunn, M., Mikhailova, I., Goerick, C., Steil, J.J., Ritter, H., Körner, E.: Online learning of objects in a biologically motivated visual architecture. International Journal of Neural Systems 17(4), 219–230 (2007)
29. Bergener, T., Bruckhoff, C., Dahm, P., Janßen, H., Joublin, F., Menzner, R., Steinhage, A., von Seelen, W.: Complex behavior by means of dynamical systems for an anthropomorphic robot. Neural Networks (7), 1087–1099 (1999)
30. Ceravola, A., Stein, M., Goerick, C.: Researching and developing a real-time infrastructure for intelligent systems – evolution of an integrated approach. Robotics and Autonomous Systems 56(1), 14–28 (2008)
31. Mikhailova, I., von Seelen, W., Goerick, C.: Usage of general developmental principles for adaptation of reactive behavior. In: Proceedings of the 6th International Workshop on Epigenetic Robotics, Paris, France (2006)
Approaches and Challenges for Cognitive Vision Systems
Julian Eggert and Heiko Wersing
Honda Research Institute Europe GmbH, Carl-Legien-Str. 30, 63073 Offenbach, Germany
Abstract. A cognitive visual system is generally intended to work robustly under varying environmental conditions, adapt to a broad range of unforeseen changes, and even exhibit prospective behavior like systematically anticipating possible visual events. These properties are unquestionably out of reach of currently available solutions. To analyze the reasons underlying this failure, in this paper we develop the idea of a vision system that flexibly controls the order and the accessibility of visual processes during operation. Vision is hereby understood as the dynamic process of selective adaptation of visual parameters and modules as a function of underlying goals or intentions. This perspective requires a specific architectural organization, since vision is then a continuous balance between the sensory stimulation and internally generated information. Furthermore, the consideration of intrinsic resource limitations and their organization by means of an appropriate control substrate become a centerpiece for the creation of truly cognitive vision systems. We outline the main concepts that are required for the development of such systems, and discuss modern approaches to a few selected vision subproblems like image segmentation, item tracking and visual object classification from the perspective of their integration and recruitment into a cognitive vision system.
1 Introduction
1.1 Motivation: The Quest for a Cognitive Vision System
Imagine a complex visual scene, such as a working environment in a factory or a traffic situation, where several objects have to be analyzed and kept in mind for a meaningful, visually-guided way of operation. What happens in the mind of humans when interacting with such a scene is still largely a mystery. A plethora of questions immediately arises on how the brain copes with the large potential complexity of the visual sensory analysis of complex scenes, in particular when they are not static (which is the case in nearly all situations in a real environment, with most exceptions being artificially generated, as when observing a photograph). With potential complexity we denote the combinatorial range of choices that the brain has to deal with for the visual analysis. It has to decide, e.g., on which visual properties to concentrate (dynamic properties like motion-induced displacements
and appearance changes, static properties like characteristic patterns, colors, shadings, textures, and 3D aspects like depth or surface curvature, to name only a few), how to tune the system to these properties (the visual processes that the brain has access to are usually not capable of analyzing a large sensory spectrum in full detail; instead, sensory analysis has to focus on the relevant sensory ranges by a dynamic adaptation process), how to extract single parts and objects from the scene (deciding on what makes up the most relevant aspects of a scene), how to analyze these parts during the time-course of the scene analysis, how to detect and analyze properties that depend on the combined treatment of several parts or objects (like, e.g., relational properties where different parts have to be considered as a conjunction, as is the case when a distance or a relative position of parts is of interest, or when the appearance of two similar objects is analyzed in detail for discrimination), and finally, how to combine the results with each other, resp. how to bootstrap choices made in a particular domain of the visual analysis using results gained in another domain. This list of choices that a human visual system has to perform is of course non-exhaustive and could be extensively continued; but we can already notice the diversity and the complexity of the operations that are involved in such a process. In short, multiple specialized analyses occur during vision, which have to be tuned, adapted, and selectively integrated over time. By contrast, when we speak of vision, we often denote only a particular, isolated aspect, like, e.g., visual object classification, i.e., the attribution of a class or category label to a selected portion of the visual input. With the previous list in mind, we are able to understand that vision is a complex process within which object classification or any other specialized visual analysis is only one minor component among many. The main reason behind the complexity of visual operations is an inherent resource limitation. Now, given that many of the specialized analyses can probably be carried out in parallel, and taking into consideration that the brain is a device with a myriad of elements working concurrently, particularly specialized for parallel processing, why should there be any resource limitation at all? The reason is that the space of possible interactions among several parts and objects in a scene is too large. The combinatorial complexity explodes when visual analyses involve the integration of several cues and objects. In addition, several resources for visual analysis have to be recruited and adapted exclusively for a single visual subtask, rendering them inaccessible for others, as can easily be understood in the case of the eyes, which during gazing concentrate on a particular portion of a scene and even on a particular depth. Even though in cases of higher visual processing that are further away from the sensory periphery the case of exclusive resource allocation is not as evident as for the eyes, the logical considerations are analogous. A case where this becomes evident is for objects defined by conjunctions of visual properties (e.g. form and color), which require attentional focusing for correct recognition, a phenomenon that has been hypothesized to work in analogy to an internal "zoom lens" [19,18], which would allow the preferential but exclusive processing of only one object at a time.
Besides some visual preprocessing steps like the extraction of local edges, patterns or velocities, which can be carried out in parallel over the entire visual field and which serve as a common sensorial
basis, most subsequent visual processing steps suffer from similar resource limitations: they have to be specially tuned to a particular purpose, object or visual task, so that they are exclusively specific, meaning that they cannot be used for the inspection of, e.g., another object at the same time, since this would require the same processing substrate. In other words, they have to be controlled by some higher level instances, and the control strategies of the specialized visual processes have to be orchestrated depending on higher level demands, as provided by a broader knowledge context or information about a task that the system has to perform.
1.2 Access to Visual Memory
A further important factor that introduces a resource constraint is visual memory. Although this is a term with a broad range of meanings, here we denote by it the capacity to retain information about aspects of a visual scene that can be recalled at later moments or used to reinspect parts of the scene. We can retain information about form, visual properties like color, texture, shading and reflectance, as well as positions and positional relations of visual objects. On a scene level, we can recall the experience of a particular scene impression, as well as the overall spatial arrangement and identity of visual objects. In the following, we use the term visual memory in the sense of a working memory for the current visual scenery¹. While it is undisputed that selected results of visual analysis subprocesses are stored in visual memory, there is an ongoing debate about how much information can be stored, what exactly is stored and to which degree of accuracy. The last point refers, e.g., to the dispute over whether the brain attempts a faithful internal reconstruction of the physical world that it inspects through its visual senses. The alternatives are that visual memory may be trying to construct an as-complete-as-possible internal representation of the world, as opposed to being partial and selective, in the sense that it only stores information about specific objects that are of interest at a given moment. Similarly, it is argued that the brain aims at an accurate representation of the true physical causes of a sensory input (e.g., representing the world as an accurate geometric environment with physical objects) vs. representing the world only up to the level of description that suffices for a given task or behavior in a situative context. The tendency is towards a partial and selective representation at a suitable level of description that is adjustable to the situation, with the main arguments supported by change blindness (the fact that changes of visual properties or parts of a scene are not noticed if they are not attended, e.g., [41,6]) and memory capacity measurements (many psychophysical experiments suggest that the capacity of visual short term memory is extremely small, about 4-5 items, see e.g. [45,15], but this refers particularly to a very specific type of iconic memory).
¹ A metaphor suggestive of the type of information that is stored in visual working memory is that of a theatre stage, as introduced by [5], containing a context, a scenario, actors and objects, in addition to spotlights that highlight parts of the scene.
We will briefly return to this topic in section 5; the important bottom line here is that visual memory constitutes a resource bottleneck for visual processing. Why should this be so? If we regard visual memory as more than a mere buffer for storing n memorized iconic items, but rather as a sketchpad where information from the specialized visual analyses can converge, then it is the substrate where selective, object- and situation-specific integration of information occurs. As argued in the last section, such an integration most likely involves selective tuning of the underlying visual subprocesses, under consideration of resource constraints like exclusive recruitment and competition. Specific integration would therefore involve an active, selective choice of the system to inspect certain visual properties, based on the knowledge that the visual memory is able to provide. At the same time, such a selective choice would imply that an order for accessing visual subprocesses in terms of prioritization schemes is imposed (some visual objects or attributes are identified as being important to inspect before others), which in turn implies that the proper sequentialization has to be taken care of. Such a control scheme would require a large amount of prior knowledge about the visual subprocesses themselves, i.e., it would require knowledge of the system about its own sensory apparatus and its limitations. The picture that emerges is that of a generalized visual memory working as the central executive for the control instances responsible for an active acquisition of visual information. It would be a visuospatial sketchpad (see [7] for an early proposal of a visuospatial sketchpad, albeit quite different from the specific one proposed here) where visual events and measurements are annotated, where hypotheses about causes relating visual events are created (eventually leading to notions of rudimentary objects or interaction elements), corroborated and refuted, where the entire visual presence (the knowledge about the current visual situation) is kept up-to-date, and from which the processes for the underlying visual analyses are controlled. At the same time, such a sketchpad would be the ideal candidate for the integration of, and coupling with, information from other senses.
1.3 Overview
In this paper, we will put forward the idea that the combinatorial complexity of controlling several visual processes should be at the center of considerations when trying to understand a cognitive vision system. We will term this the "control view of cognitive vision", and, in the following, mean this view when we speak about "cognitive vision", if not explicitly denoted otherwise (the opinions of different authors on what cognitive vision is differ substantially in the literature). This is a view that differs considerably from most standard approaches to vision (although other work in comparable directions exists, see e.g. [9,35]), and that has a number of deducible consequences that we should focus on. First of all, we can ask which high-level representational framework best allows for a vision system operating under the described conditions. Second, we can ask which low-level visual sensory subprocesses are a prerequisite for such a system, especially when
operation in a sufficiently complex visual environment is demanded. One of the many pitfalls of modern vision systems is that the need for control processes does not become apparent if the conditions are too restrained or specific. Third, we can ask what particular characteristics visual subprocesses should have when they operate in a cognitive vision system as described. How do they specialize to enable control, and how general should the results be that they deliver to other parts of the system? Fourth, we can ask what mediates the control processes. This question interacts tightly with the quest to understand one of the key ingredients of the power of visual processing in the human brain: visual attention. Fifth, we have to ask about the intrinsic properties of the control process (or, rather, the plural control processes) itself. What type of convergence of visual information does it need; where, how (and when!) is this information represented; what does the control process optimize; how does it subdivide and delegate control to visual subprocesses and tune them accordingly; and how does it finally evaluate and integrate the results of a visual analysis? For the most part, these questions have rarely been approached by current vision research (at least under the perspective of an integrated cognitive vision system), and scientific research is not able to give a conclusive answer to any of them at this point (in particular, the last question is central since it codetermines the previous ones; i.e., we need an understanding of the nature of the control processes in vision to be able to design and better understand visual subprocesses dealing with specialized analyses). Nevertheless, they provide a starting point for a paradigm shift in the research on cognitive vision systems. In the following sections, we will proceed step by step to develop the idea of a control-based cognitive vision system. In a first section, we will give a brief account of current paradigms in vision research, shortly reviewing the main characteristics of the different approaches and trying to position the control view of visual processing within these ideas. We will see that the control view describes a regime that is not covered by the two prominent (admittedly extreme) paradigms of current vision, the cognitivist and the emergent; rather, it can be identified as a third paradigm that poses important questions in its own right that are not explicitly covered otherwise. In addition, we will explain how the control view is related to many open themes recurring in vision research, such as active vision, grounding, anchoring, binding, visual anticipation and prediction. In a second, more extensive section, we zoom in, by way of example, on a few specialized visual processing "subsystems" that current vision research has identified as key ingredients of a general vision architecture. We will review the properties of such systems, concentrating on them from the perspective of the control view of cognitive vision. These subsystems represent, on the one hand, the fixed basic structure, since it determines which visual properties the system can in principle analyze; on the other hand, they have to be sufficiently flexible to be recruited and tuned by control processes. Being an open topic, a short discussion about the type of representation required for cognitive vision and the interaction of this representation with the visual subprocesses concludes the paper.
2 Challenges for Cognitive Vision Systems
In the broadest sense, the term cognitive vision is used for the current state of research dealing with vision systems that incorporate a rich internal state representing accumulated visual information, and which operate flexibly even under unforeseen changes of the visual environment. It has been introduced to separate these from earlier vision systems, which were often tailored to specific visual contexts and exhibited little robustness and adaptivity.
2.1 The Range from Cognitivist to Emergent Vision Systems
The two main paradigms in cognitive vision are the cognitivist and the emergent systems approaches. The cognitivist approaches assume that the target of such systems is the faithful reconstruction, by means of an appropriate explicit representation, of the external world from visual data. The representation is centered around the requirement that it should describe the objective external world [49] as accurately as possible, including its geometrical and physical properties (already Marr stated that the goal of a computer vision system should be a "description of the three-dimensional world in terms of surfaces and objects present and their physical properties and spatial relationships" [32]). The process to achieve this is an abstraction chain, which starts at the perceptual level,
Fig. 1. A typical architecture of a cognitivist vision system, with a cascade of representation layers ranging from preprocessing over low-level features, combined features, visual objects and a geometrical scene description up to high-level scene interpretation and a behavior layer. The cascade serves to construct increasingly amodal abstractions that describe the objective external world. At the highest level, this is used to achieve intelligent behavior. Details of the representation have to be foreseen by a human designer.
abstracts from there using appropriate symbol sets, and reasons symbolically with the gained representations in order to achieve intelligent behavior. For the cognitivist approach, the main job of a cognitive vision system is to provide the symbolic representation, which can then be operated upon with more general logical frameworks. Since all vision systems operating in the real world have to start at a quantitative level based on noisy and uncertain data, modern cognitivist systems are turning towards subsymbolic preprocessing levels based on probabilistic, machine learning, or connectionist techniques. Figure 1 shows an example of a cognitivist system. We can see a cascade of stages that extracts increasingly complex components of a scene, ranging from signal to symbolic representations through several layers of processing. Each stage involves computations that abstract and generalize on the preceding stage. At the final stage, the representation is used to reason about high-level concepts such as spatial object configurations, and to generate behavior. The information flow between the layers may be bidirectional, expressing that there can be a modulatory influence from higher levels to improve the performance of lower-level processing stages. A major critique of cognitivist approaches is that they heavily depend on the designer to find a representation that is suited to the solution of a visual problem, and that the representational structures gained from human idealization exhibit a bias that is detrimental. In addition, purely symbolic representations and rule-based reasoning on such representations have proven insufficient to capture the variability of real-world sensory perception. Probabilistic and learning frameworks are being proposed as alternatives to this problem [36], relaxing the demand for an explicit representation and adapting a system's structure to empirically provided constraints. The second paradigm is the emergent systems view. This view emphasizes that the system is embedded in a cognitive agent whose capabilities are determined by, and have been developed in, interaction with an environment. The agent operates within the environment and constructs its representation of the world as a result of this operation [33]. This is enabled by a continuous and real-time interaction of the system with the environment, and leads to systems that can cope well with the specific environmental conditions and the variability of the system-environment interaction. Figure 2 shows a sketch of an emergent vision system. The agent preprocesses the visual data and then passes it on to a flexible structure where the proper representations should emerge during system-environment interaction and codetermination. The emphasis here is on the coupling between the system and the environment through the behaviors of the agent. In the emergent paradigm, the purpose of vision is simply to provide the appropriate sensory data to enable sensible actions. The richness of the developed vision system is to a large extent determined by the richness of the action interface. The work of the designer is to choose the developmental structure, the complexity of the environment (which may vary over time) and the action interface.
Fig. 2. The emergent systems view. Input from visual sensors is preprocessed and passed to a self-organized representation, from which behaviors are selected and acted out in the environment. The system is embedded into an environment and the representations self-organize by continuous interaction with the environment. No symbolic or human-designed representations are necessary; the expectation is that cognitive phenomena arise implicitly from the developmental process.
Emergent vision systems are usually implemented using parallel, real-time and distributed architectures. Dynamical systems models provide characteristics that allow, in principle, the development of cognitive visual functions, achieved by means of consistent self-organization. The vision system as such is a sort of black box, which adjusts its dynamics so as to achieve a desired behavior. The representation of the visual information is implicit, in the sense that there are no a-priori identifiable states or nodes representing entities of the visual world such as objects. In addition, no symbolic (especially human-designed) representations are required. Cognitive phenomena like visual memory should emerge from the developmental process; moreover, identifying these phenomena is a matter of interpretation of the system's dynamics by an external observer. The accuracy and completeness of the visual representation of an emergent vision system is optimally adapted to its interaction repertoire, meaning that it is just sufficient to enable the system to do certain things. In practice, although systems could be developed that exhibit surprisingly non-trivial behaviors that would otherwise require considerable design effort (see e.g. [21] and [48] for a review), it remains to be shown that emergent visual systems can develop higher-order cognitive capabilities. Solutions evolved by emergent systems tend to specialize to a particular visual context (which often is representationally poor, or at least does not require perceptually more abstract representations) and have problems scaling up and generalizing to other domains. It is again the burden of the designer, this time not to choose the detailed
representational structure, but the teaching signals, the environmental richness and the necessary learning mechanisms. Furthermore, the capability to generalize is often already given by an appropriate preprocessing stage, which also has to be provided by the designer.
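To make the contrast between the two paradigms concrete, the following minimal sketch (our own illustration; all stage names are hypothetical) shows a cognitivist-style cascade as in figure 1, i.e., a fixed chain of designer-specified abstraction stages. An emergent system would instead replace the hand-designed middle stages by a single adaptive mapping whose structure is shaped by interaction with the environment:

from typing import Callable, List

Stage = Callable[[object], object]

def make_cognitivist_pipeline(stages: List[Stage]) -> Stage:
    # Compose designer-specified abstraction stages into one cascade;
    # each stage abstracts and generalizes on the output of the preceding one.
    def pipeline(sensor_data):
        representation = sensor_data
        for stage in stages:
            representation = stage(representation)
        return representation
    return pipeline

# Hypothetical stages standing in for the layers of figure 1.
preprocess         = lambda img: {"pixels": img}
low_level_features = lambda r: {**r, "edges": "edge map of %s" % r["pixels"]}
visual_objects     = lambda r: {**r, "objects": ["obj1", "obj2"]}
scene_description  = lambda r: {**r, "scene": "obj1 left of obj2"}

see = make_cognitivist_pipeline(
    [preprocess, low_level_features, visual_objects, scene_description])
print(see("camera frame")["scene"])   # -> obj1 left of obj2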
2.2 Grounding and Binding
In a symbolic cognitivist approach, an internal representation of the world is gained from sensory signals by increasing abstractions that could allow for a decoupling from the system's perceptual apparatus (they become amodal [8]). If this were so in a rigorous sense, representational parts of the system could be isolated that have no relation at all to the external world. The question then arises how these parts can have semantic content in their own right, i.e., a meaning that refers to real-world entities and behaviors. In a more relaxed consideration, one could say that representations at higher processing levels of a cognitivist system lose the information about the original sensory objects that created them. This is the so-called symbol grounding problem. There is a very close analogy to a second hypothetical problem of cognitive science, the binding problem. Interestingly, this second problem is usually motivated by the connectionist and neural network background and not by cognitivist approaches. It denotes the loss of reference to the originally constituting cues as one moves along a processing hierarchy. An often cited example is given by a configuration of two stimuli, each defined by a particular conjunction of two different cues (e.g. form and color, with stimulus 1 having a T-form and red color, and stimulus 2 being a green cross). If form and color are processed independently of each other and generalize over position, all the information that arrives at a higher stage is that there are two forms and two colors present. The "binding" of the two constituting cues to an object is then lost, making it impossible to retrieve the information of which colors and forms correspond to each other. In a sense, this is the same problem as symbol grounding, only that we do not consider the relation between external sensory signals and internal symbolic representations, but between internal representations at different levels of abstraction. In the brain, binding losses indeed seem to occur when inspecting scenes with several objects defined by conjunctions of cues, leading to conjunction errors [46], meaning that people make mistakes when forced to determine which form corresponds to which color for each object. These errors always appear in combination with attentional overload, i.e., when there are not sufficient attentional resources that can be devoted to each object, either because too many objects are present or because presentation times are too short.
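The binding loss in the two-stimulus example can be made concrete in a few lines of code; the toy feature pathways below are our own illustrative construction, not a model from the cited literature:

# Toy scene: two stimuli, each a conjunction of form and color at a position.
scene = [
    {"pos": (2, 3), "form": "T",     "color": "red"},
    {"pos": (7, 1), "form": "cross", "color": "green"},
]

# Two independent, position-invariant pathways: each generalizes over
# position and reports only *which* features are present, as sets.
def form_pathway(scene):
    return {stim["form"] for stim in scene}    # -> {"T", "cross"}

def color_pathway(scene):
    return {stim["color"] for stim in scene}   # -> {"red", "green"}

forms, colors = form_pathway(scene), color_pathway(scene)
# At the higher stage only the sets survive. Both pairings,
# ("red T", "green cross") and ("green T", "red cross"), are equally
# consistent with (forms, colors): the binding is lost.
print(forms, colors)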
2.3 Anchoring and FINSTs
Both the symbol grounding and the binding problem are rather artificial, idealized constructions, as we will argue in the following. They are based on the assumption that, even when looking at a representational result in isolation,
one should get an automatic reference to the original objects or lower-level representations that generated it. By automatic it is meant that this reference is a passive, or straightforwardly deducible, property. Proposals on how to do this exist, at least for the binding problem. They introduce mechanisms which code the references between representational items by additional states, which can be evaluated when accessing the representations. These states serve to relate (i.e., they "bind") the different representations with each other, or representations with parts of the original sensory input. One proposal is to use time-labeling mechanisms, e.g. by the phase of an oscillatory activity that synchronizes through appropriate internal system dynamics. Although this seems reasonable at first glance, it is still a passive property, in the sense that it should be automatically present when some access to a part of the internal representation is required. Moreover, the capacity of such mechanisms is severely limited, since they can work only on a very small subset of the representational items at a time. The important point here is that binding should not be considered a representational property, but rather an effortful process. Effortful means here that it requires active and selective focusing on a small subset of representational items, because it allocates exclusive processing resources in a vision system. As such, this process has to be controlled from within a larger knowledge context, including both context information about the visual scene and specific information about the visual subprocesses that mediate the binding. Terms that appear in the literature in a similar context are grounded cognition [8] and anchoring [14]. Grounded cognition emphasizes the role of modal (perception-specific) representations, internal simulations, imagery and situated action for binding internal representations to external objects. Anchoring is the "process of creating and maintaining the correspondence between symbols and percepts that refer to the same physical objects" [14], and is presumed to be a necessary component in any physically embedded system that uses a symbolic internal representation. For the special domain of simultaneous multi-object tracking and attentional selection, FIngers of INSTantiation (FINSTs, [38]) have been proposed to solve the anchoring task in early vision. It is argued that the process of incrementally constructing perceptual representations, solving the binding problem as well as grounding perceptual representations in experience, arises from the capacity to select and keep track of a small number of sensory items. These items are identified as having a particular, consistent and enduring identity that can be maintained during the tracking process despite considerable changes in their properties. In a sense, FINST objects have been described as mental "rubber bands" between internal representations and external objects. Extending the ideas of a cognitive vision system from section 1, we consider FINSTs to be but one emanation of a deeper concept that emphasizes the process and control aspects of establishing temporary, selective correspondences between more abstract, higher-level representations and the input at the sensory periphery.
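As an illustration of the anchoring idea (our own sketch, not an implementation from the FINST literature; names and thresholds are hypothetical), a small fixed pool of anchor slots can maintain the correspondence between a symbol and a moving percept, surviving appearance changes as long as spatial continuity holds:

import math

MAX_FINSTS = 4  # small, fixed pool, cf. the capacity limits discussed above

class Anchor:
    # Maintains the correspondence between one symbol and one percept.
    def __init__(self, symbol, position):
        self.symbol = symbol
        self.position = position

    def update(self, detections, max_jump=2.0):
        # Re-identify the percept by spatial continuity, not by appearance:
        # the anchored item may change color or form and still stay anchored.
        nearest = min(detections,
                      key=lambda d: math.dist(d["pos"], self.position),
                      default=None)
        if nearest and math.dist(nearest["pos"], self.position) <= max_jump:
            self.position = nearest["pos"]
            return True
        return False  # anchor lost; the slot can be reassigned

anchors = [Anchor("item-A", (0.0, 0.0))]
frame = [{"pos": (0.5, 0.2), "color": "green"}]   # item moved and changed color
assert len(anchors) <= MAX_FINSTS
print(anchors[0].update(frame), anchors[0].position)  # True (0.5, 0.2)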
2.4 Anticipation and Prediction
What is the deeper purpose of anchoring? On one side, anchoring and its manifestations, e.g. as proposed by FINSTs or as experienced during tracking, represent a behaviorally useful capability in their own right: anchoring keeps the attended item in focus so that it can be easily reaccessed when needed. It also stabilizes the sensory input, so that some variabilities in the appearance change are compensated for, as can be seen in a straightforward way in the case of translational (in)variance during object tracking (this is particularly evident when combined with the overt behavior of gazing at an object during smooth pursuit). But the more important point of anchoring is that it requires an active internal anticipation of the stimuli as they are expected in the near future. A good anticipation is necessary because it narrows down the search range and therefore decreases the processing resources needed for reaccessing the item in subsequent timesteps (here we close the loop to the resource limitation arguments of section 1). Anticipation is a hard generalization task for a cognitive vision system, because visual stimuli are highly variable for two different reasons: First, the items themselves as they appear in the physical world (e.g. non-rigid objects), together with their projection by the sensory apparatus (e.g. 3D onto 2D, causing view changes as an object rotates), are highly variable; and second, nearly every behavior can have a severe effect on the sensory input (which is obvious for direct interaction with objects like manual manipulation, but consider also indirect effects like egomotion of the system changing the visual appearance of an object). If a cognitive system wants to generalize its anticipatory capabilities, it has to be able to separate the two sources of variability. This has deep consequences: It means that the system has to acquire knowledge about the process of its own sensory apparatus on one side and about the external causes of sensory inputs on the other side. (This is reminiscent of the cognitivist idea of building an accurate and detailed model of the physical world, see section 2.1. In the current argumentation, however, we (1) emphasize the role of the dynamic anticipatory and control process instead of concentrating on the structure and content of the internal world representation; (2) make no claims about the degree of accuracy, so that variably coarse descriptions of the physical causes may already be sufficient for reasonably good anticipation, depending on the demands of a task; and (3) regard the modal knowledge of the system about its own sensory capabilities as crucial, whereas purely cognitivist approaches prescind from this, targeting only an amodal, abstract representation of the outer physical world.) Process knowledge about its own sensory apparatus will allow such a system to discount or compensate for changes caused by its own behavior as well as by internal adaptation and modulation, leading to a more robust and stable analysis of sensory objects. Knowledge about the external causes leads to more stable and generalizable representations of the world, since many visual changes are caused by the system's own sensory behavior and not by changes in the state of the world objects themselves (as in the case when objects are static but an observer moves around them). The two types of knowledge and their interaction during active sensory processing can be schematized in a perceptual cycle as shown in figure 3.
Fig. 3. Prediction as a fundamental process for active acquisition of visual information in a cognitive visual system. Shown is a prediction-measurement-update loop that makes use of two types of knowledge: (1) the prediction model, which expresses how the state of a sensory item changes over time, representing knowledge about the external causes of sensory inputs; and (2) the measurement model, which comprises knowledge about a system's own sensory processes, anticipating the expected sensory input for a given hypothetical state of a sensory item. In the control view of cognitive vision systems, several prediction-measurement-update loops interact, comprising process information about sensory items and the active coupling between internal representations and external sensory measurements.
The knowledge about a sensory item, resp. the external causes of a sensory input, is represented in the item state (bottom left of figure 3) and a prediction model that indicates how the state is expected to change in the future (bottom). The knowledge about the system's own sensory processes is comprised in the measurement model, which is applied to the sensory input (from top) to estimate how likely a sensory measurement is under the assumption of a predicted item state. This likelihood is used in an update step to adjust the item's state. In the control view of cognitive vision, perceptual cycles with prediction-measurement-update steps constitute a central element, both for low-level visual processes (for example in a tracking context) and for operations far from the sensory periphery, working on more indirect representations.
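The structure of figure 3 can be condensed into a generic loop skeleton; this is our own minimal sketch (the model interfaces are hypothetical), with a probabilistic concretization following in section 3.2:

def perceptual_cycle(state, prediction_model, measurement_model,
                     sensor_stream, update):
    # Generic prediction-measurement-update loop of figure 3.
    for sensory_input in sensor_stream:
        # 1. Prediction: knowledge about external causes propagates the state.
        predicted = prediction_model(state)
        # 2. Measurement: knowledge about the own sensory apparatus scores
        #    how well the input matches the predicted state.
        likelihood = measurement_model(predicted, sensory_input)
        # 3. Update: adjust the item state according to the evidence.
        state = update(predicted, likelihood, sensory_input)
        yield state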
2.5 A Modern Account of Active Vision Systems
In this paper we argue that the selective choice of sensory information and the corresponding tuning of sensory parameters, i.e., the effective focusing of visual
processing resources, is a necessary property of any artificial or biological vision system with a claim to some minimal generality and flexibility. Such focusing capabilities account for much of the flexibility of biological vision systems which, depending on the (visual) task in question, the context of already acquired as well as prior information, and the available processing resources, may deploy their resources very differently even for identical sensory input (the classical attentional phenomena are one notorious example of the focusing of visual resources during operation). The idea is that different visual tasks, motivated by internal goals, trigger different visual processes, and that these processes have to be organized in a systematic way because there is simply not enough capacity otherwise. Such a system would therefore continuously have to modulate and adapt itself, organize the co-occurrence or temporal order of visual operations, and monitor their success. The processes referred to here are mainly internal operations, such as the selective enhancement of competition, the dynamic adjustment of filter parameters, or the concentration on special feature channels like edges, motion, color, etc. The means by which this could occur is attention, combining top-down signals that provide expectations and measurement resp. confirmation requests with bottom-up signals that provide measurements coming from the sensors. The task-dependence of internal organization processes in such a vision system is a view shared with behaviorist paradigms, which concentrate on "visual abilities which are tied to specific behaviors and which access the scene directly without intervening representations". One of them is active vision (see e.g. [2]), a term that is used for systems that control the image acquisition process, e.g. by actively modulating camera parameters like gaze direction, focus or vergence in a task-dependent manner. Along a similar line, purposive vision [1] regards vision processes always in combination with the context of some tasks that should be fulfilled. Common to both active and purposive vision approaches is that they have concentrated on overt behaviors and actions that are directly observable from outside, and on how visual information can be extracted that supports particular behaviors. In contrast, in the framework put forward in this paper, a proper minimal representational structure on which the control and modulation processes can operate is crucial. Visual cognition is understood as any goal-driven mediation between an internal representation and the incoming sensory stimulation. The mediating control processes serve to gather visual information that could potentially be used for guiding overt behaviors (without necessarily being tied to those behaviors). In fact, we interpret any internal modulation and attentional focusing as a virtual action, in principle not different from overt actions. (We even explicitly disregard overt actions like gaze or head orienting in the following argumentation, since we think that the more interesting aspects of visual cognition appear without the need to concentrate on the hardware specificities of sensory devices.)
The basic assumption is that, from a task-driven perspective, there is simply not enough processing capacity to cover all the different ways to operate on the visual input in a hard-wired manner, so that a vision system has to flexibly organize its internal visual processing during operation, and this organization has to be controlled by the (visual) intentions of the system. The tasks and intentions we mention here are supposed to be of intermediate level, but still relatively close to the sensory domain, like e.g. "concentrate on an interesting moving object in the visual scene and keep its coordinates up-to-date", "compare the feature composition of two objects" or "track an object, using motion segmentation to separate it from the background". So what is a cognitive vision system intended to do operationally? It should:

– Establish temporary or semi-continuous links from internal representations of sensory events to the incoming sensory information ("anchoring").
– Use this possibility actively when different / additional / not yet analyzed data is required or information has to be renewed.
– Work with the stored information to establish relations, discover regularities and analogies, i.e., explore and learn about the visual sensory world.
– Use the gained world knowledge to control active processes for the acquisition of visual information.

The establishment of temporary or semi-continuous links between internal representations and sensory information occurs by means of prediction-measurement-update loops as introduced in section 2.4. Ideally, the granularity of the information represented in the prediction-measurement-update loops need not be defined a priori, but may be developed in a self-organized way by visual investigation, resulting in the right level of abstraction and detail. In any case, it is assumed that several loops exist, that they interact, and that they even organize in a hierarchical manner. One example is given by the multiple adaptation processes at different abstraction levels occurring during vision, such as:

– Local modulation processes adapting to optimal sensory ranges, as e.g. given by local filter contrast adaptation and contour completion processes.
– Prediction-measurement-update loops of elementary sensory states of visual items, such as the position update of an item that is being tracked.
– Higher-level prediction-measurement-update loops dealing with the engagement and loss of lower-level loops, such as the processes of finding suitable sensory items, engaging in a tracking loop, evaluating the success (e.g. detecting when a prediction fails), reacquiring the item when it is lost, and finding an alternative item if necessary.
– Serialization processes: since the prediction-measurement-update loops of different visual events compete for resources, higher-level visual tasks requiring several of them have to be organized temporally, e.g. establishing which precede others.

Figure 4 shows a scheme of the necessary structures for a cognitive visual system as they are proposed in this paper. On the lowest level, after some general-purpose preprocessing that is independent of the items in visual memory,
Fig. 4. Scheme of the necessary structures for a cognitive visual system with a focus on virtual visual actions and their control, spanning visual input, visual subprocesses, control structures, active visual short-term memory, visual memory (scene presence), sensory memory of other modalities, and long-term memory. White nodes denote control instances which mediate between sensory item representations and visual processes at all levels of abstraction. Sensory items denote state information which can be updated by temporarily activating prediction-measurement-update loops, as occurs on a rapid timescale for the items in active visual short-term memory. The basic control framework as well as the visual subprocesses are assumed to be given, whereas the memory content and its representational structure can be developed in an emergent way.
multiple visual subprocesses in the form of prediction-measurement-update loops apply. The recruitment and modulation of these loops is organized by control structures (indicated by white nodes) acting as proxies between them and visual memory. The loops work largely in an object-specific mode, e.g. specialized to search for and find sensory objects with predefined visual properties, or to segment a visual region starting from an already known position and approximate size. Nodes indicate "representational compounds", comprising both states of sensory items and process knowledge on how to link the state with active visual subprocesses in a prediction-measurement-update loop, coupling the state with sensory perception. Different levels of abstraction of memory items are indicated by boxes, ranging from short-term, purely visual to long-term, cross-modal memory. The main difference between them lies in the type of integration into the sensory control processes, reflecting a control hierarchy rather than a representation hierarchy as in the cognitivist paradigm of figure 1. As an example, active visual short-term memory is composed of those visual items in memory that are engaged in a short-term prediction-measurement-update loop, anchoring them temporarily but continuously to sensory events. In our view, this is a buffer of a small subset of task-relevant items which are dynamically selected by a responsible control structure (in figure 4, the white node in the "scene presence" frame) from a larger number of items that make up the visual
scene memory. The items in active visual short-term memory provide predictions and modulation priors to visual subprocess control structures (indicated by the arrows with white heads), which steer the subprocesses on demand and update the information of the items. Other information from sensory memory can provide high-level modulation priors, as indicated in figure 4 by the additional arrow from top-left to a subprocess control structure. However, control processes mediating between memory and the active acquisition and update of information are assumed to work not only at the sensory periphery, but also between higher levels of representation. A cognitive vision system as proposed here is therefore composed of an intertwined hierarchy of representations providing item and control information, together with a corresponding control dynamics of the processes necessary for the active acquisition of information at any representational level. Control structures of this kind could in principle self-organize in an emergent way (see the emergent systems view in section 2.1), but are very hard to develop systematically in a self-organized fashion. Therefore, we propose to provide a framework for the control structures and their representations, but not for the representation of the sensory items themselves, which could develop incrementally and autonomously during interaction. Only at the lowest sensory level, at the interface to some well-defined visual subprocesses, would we predefine the basic visual sensory events.
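A minimal sketch of the engagement logic just described (again our own illustration; names, capacity value and interfaces are hypothetical): a control structure selects a small subset of scene-memory items into active visual short-term memory, runs one prediction-measurement-update cycle per engaged item, and releases items whose loops fail:

ACTIVE_CAPACITY = 4  # cf. the capacity limits of visual short-term memory

def control_step(scene_memory, active_stm, engage_loop, relevance):
    # One step of the control structure mediating between scene memory
    # and the active, loop-coupled short-term buffer.
    # Release items whose prediction-measurement-update loop has failed
    # (e.g. the prediction no longer matches any measurement).
    for item in list(active_stm):
        if not engage_loop(item):           # run one PMU cycle; False = lost
            active_stm.remove(item)
            scene_memory.append(item)       # keep the item as passive knowledge
    # Fill free slots with the currently most task-relevant items.
    candidates = sorted((i for i in scene_memory if i not in active_stm),
                        key=relevance, reverse=True)
    while len(active_stm) < ACTIVE_CAPACITY and candidates:
        item = candidates.pop(0)
        scene_memory.remove(item)
        active_stm.append(item)
    return active_stm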
3 Ingredients of a Cognitive Vision System
In the following, we present some concrete descriptions of visual subprocesses that would be needed by a cognitive vision system. As suggested in the introduction, we are interested in the control aspects of such subprocesses, highlighting the multiple control loops that arise when they are operated in combination with other subprocesses and in conjunction with visual memory representations. The particular choice of subprocesses is not intended to be unique or complete; rather, it is motivated by their suitability to be recruited into a general-purpose control framework for cognitive vision.
3.1 Preprocessing Example: Segmentation
One very important visual subprocess that precedes several other visual operations is image segmentation. In the following, we understand by image segmentation the segregation of the 2D visual input space into two regions: one characterizing the (generally more specific) region of interest corresponding to an object or a part of the scene, and the other (generally unspecific) corresponding to the rest, i.e., the "background". We describe one image segmentation method of choice (there are numerous) that we are using for building a cognitive vision system, again with the focus on understanding it in relation to a superordinate control instance. The segmentation occurs by means of level-set methods [37,34,57,11,28], which separate all image pixels into two disjoint regions [37] by favoring homogeneous
image properties for pixels within the same region and dissimilar image properties for pixels belonging to different regions. The level-set formalism describes the region properties using an energy functional that implicitly contains the region description. Minimizing the energy functional leads to the segmentation of the image. The formulation of the energy functional dates back to, e.g., Mumford and Shah [34] and to Zhu and Yuille [57]. Later on, the functionals were reformulated and minimized using the level-set framework, e.g. by [11]. Among all segmentation algorithms from computer vision, level-set methods provide perhaps the closest link with the biologically motivated, connectionist models as represented e.g. by [24]. Similar to neural models, level-set methods work on a grid of nodes located in image/retinotopic space, interpreting the grid as having local connectivity, and using local rules for the propagation of activity in the grid. Time is included explicitly into the model by a formulation of the dynamics of the node activity. Furthermore, external influences from other sources (feedback from other areas, inclusion of prior knowledge) can be readily integrated on a node-per-node basis, which makes level sets appealing for integration into biologically motivated system frameworks. Level-set methods are front propagation methods. Starting with an initial contour, a figure-background segregation task is solved by iteratively moving the contour according to the solution of a partial differential equation (PDE). The PDE often originates from the minimization of an energy functional [34,57]. Compared to "active contours" (snakes) [27], which also constitute front propagation methods and explicitly represent a contour by supporting points, level-set methods represent contours implicitly by a level-set function that is defined over the complete image plane. The contour is defined as an iso-level of the level-set function, i.e., the contour is the set of all locations where the level-set function has a specific value. This value is commonly chosen to be zero, so that the inside and outside regions can easily be determined by the Heaviside function H(x) (with H(x) = 1 for x > 0 and H(x) = 0 for x ≤ 0). A level-set function φ : Ω → R is used to divide the image plane Ω into two disjoint regions, Ω1 (background) and Ω2 (object), where φ(x) > 0 if x ∈ Ω1 and φ(x) < 0 if x ∈ Ω2. A functional of the level-set function φ can be formulated that incorporates the following constraints:

– Segmentation constraint: the data within each region Ωi should be as similar as possible to the corresponding region descriptor ρi.
– Smoothness constraint: the length of the contour separating the regions Ωi should be as short as possible.

This leads to the expression

E(\phi) = \nu \int_{\Omega} |\nabla H(\phi)| \, dx \; - \; \sum_{i=1}^{2} \int_{\Omega} \chi_i(\phi) \, \log p_i \, dx \qquad (1)
with the Heaviside function H(φ), and χ1 = H(φ) and χ2 = 1 − H(φ); note that φ, the χi and the pi are functions of the image position x. That is, the χi act as region masks, since χi = 1 for x ∈ Ωi and 0 otherwise.
The first term acts as a smoothness term that favors few large regions as well as smooth region boundaries, whereas the second term contains the assignment probabilities p1(x) and p2(x) that a pixel at position x belongs to the outer and inner regions Ω1 and Ω2, respectively, favoring a unique region assignment. Additional terms can easily be appended, expressing e.g. prior knowledge about the expected form of the region. Minimization of this functional with respect to the level-set function φ using gradient descent leads to

\frac{\partial \phi}{\partial t} = \delta(\phi) \left[ \nu \, \mathrm{div}\!\left( \frac{\nabla \phi}{|\nabla \phi|} \right) + \log \frac{p_1}{p_2} \right] \qquad (2)

A region descriptor ρi(f) that depends on the image feature vector f serves to describe the characteristic properties of the outer vs. the inner region. Examples are statistical averages, variances or histograms. The assignment probabilities pi(x) for each image position are calculated from the image feature vector via pi(x) := ρi(f(x)). The parameters of the region descriptor ρi(f) are gained in a separate step using the measured feature vectors f(x) at all positions x ∈ Ωi of a region i. This occurs alternatingly, updating in a first step the level-set function, characterizing the segmented region, and in a second step the region descriptors of the inner and outer regions. In [16,51], probabilistic and histogram-based region descriptors are combined with level-set methods for an application in a multicue setting, as required for general-purpose segmentation tasks (see below). Figure 5 shows a block diagram of a more generalized segmentation framework. A visual input is first preprocessed by analyzing different but arbitrary visual cues and properties, like colors, textures and Gabor structures at different orientations (more sophisticated cues can also be incorporated, like disparity from binocular vision, or motion estimates from a sequence of images). The only prerequisite is that they all operate in the same spatial image space. For each target that should be segmented, the cues are combined with their proper region descriptors to get the assignment probability maps. These are fed into a recurrent dynamics as described above to minimize the level-set functional. In a postprocessing step, the segmentation result can be used for other purposes, like image classification, or the extraction of statistics resp. new region descriptors for the gained region. The link with the control view of cognitive vision systems appears when we regard the numerous possibilities to control and tune the segmentation process using prior knowledge to specialize it to a given task. Figure 5 indicates these control influences by the vertical arrows from the top. First of all, the multicue preprocessing demands a cue adaptation and selection process, since some cues provide correlated or, alternatively, irrelevant information. Second, specific region descriptors can be provided from previously gathered prior knowledge, e.g. about an object's color, texture, etc. Third, the recurrent level-set dynamics can incorporate explicit information about a region's expected form, spatial preferences, and dynamic compensations (e.g. if during the segmentation process the visual input changes systematically, so that it can be predicted). Sitting on top of this, modules have to decide on all these control alternatives, deciding
Fig. 5. Processing and control flow in a generalized segmentation process, spanning multicue preprocessing with dynamic cue weighting, region descriptors and region assignment, a recurrent dynamics with initialization, compensation, form priors and constraints, the segmentation result, and high-level priors and control. The proper segmentation process occurs in a recurrent network dynamics indicated by the circular arrow. Local control loops involve the adjustment of the dynamic cue weightings as well as the compound of region assignment, recurrent dynamics and segmentation result evaluation. Furthermore, higher-level contexts requiring visual memory are involved in controlling the cue adaptation and selection process, the region descriptors, the initialization, as well as other prior information on the item that should be segmented (arrows with white heads). All these processes are assumed to be covered by corresponding perceptual prediction-measurement-update cycles.
in the first place that there may be a candidate area of the scene which may be worthwhile to look at, or that more detailed information is required about an object with assumed properties that should be highlighted by the segmentation process. Similarly, modules have to judge the success of the segmentation, detecting a failure and reengaging in the segmentation process if necessary. The segmentation process itself (i.e., the candidate locations for segmentation, the prior assumptions for the segmentation tuning, the results and all the control issues) would have to be captured by an appropriate representation at the visual working memory level, as was suggested in sections 1.2 and 2.5.
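To make the segmentation subprocess concrete, here is a minimal numerical sketch of the alternating minimization of Eqs. (1) and (2), in the spirit of a Chan-Vese-style two-region level set with Gaussian region descriptors; this is our own illustrative implementation, and all parameter values are arbitrary:

import numpy as np

def curvature(phi, tiny=1e-8):
    # div(grad(phi)/|grad(phi)|) via central differences.
    gy, gx = np.gradient(phi)              # derivatives along axis 0 (y) and axis 1 (x)
    norm = np.sqrt(gx ** 2 + gy ** 2) + tiny
    dy_ny = np.gradient(gy / norm)[0]      # d/dy of the normalized y-component
    dx_nx = np.gradient(gx / norm)[1]      # d/dx of the normalized x-component
    return dx_nx + dy_ny

def segment(image, steps=200, nu=0.2, dt=0.2, eps=1.0):
    # Two-region segmentation by gradient descent on the functional (1),
    # with Gaussian region descriptors rho_i (mean and variance per region).
    h, w = image.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # Initial contour: a circle; phi > 0 outside (Omega_1), phi < 0 inside (Omega_2).
    phi = np.sqrt((yy - h / 2.0) ** 2 + (xx - w / 2.0) ** 2) - min(h, w) / 4.0
    for _ in range(steps):
        log_p = []
        for mask in (phi > 0, phi <= 0):   # Omega_1 (background), Omega_2 (object)
            mu = image[mask].mean()
            var = image[mask].var() + 1e-6
            log_p.append(-0.5 * np.log(2.0 * np.pi * var)
                         - (image - mu) ** 2 / (2.0 * var))
        log_p1, log_p2 = log_p
        # Smoothed delta function delta(phi) restricts updates to the contour zone.
        delta = eps / (np.pi * (eps ** 2 + phi ** 2))
        # Gradient descent step, Eq. (2).
        phi = phi + dt * delta * (nu * curvature(phi) + log_p1 - log_p2)
    return phi <= 0  # object mask

# Usage on a synthetic image: a bright square on a dark, noisy background.
rng = np.random.default_rng(0)
img = rng.normal(0.0, 0.1, (64, 64))
img[20:44, 20:44] += 1.0
print(segment(img).sum(), "pixels assigned to the object region")

The alternation described in the text is visible in the loop body: the region descriptors (mu, var) are re-estimated from the current region masks, and the level-set function is then updated using the resulting assignment log-probabilities.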
3.2 Multicue Tracking
As suggested in the first and second sections, tracking objects or other parts of a scene is a fundamental capability of a cognitive visual system, serving to temporally establish and maintain a link between an internal representation and sensory
measurements originating from external causes. In short, tracking is a constrained feature and property search, dedicated to an object that can be described by specific but rather arbitrary visual features (e.g. a visual pattern or statistical properties like certain cue combinations), together with an iterative estimation of dynamic object properties like its position, velocity and visual changes. Humans are generally able to track arbitrary objects even if they have not seen or learned them before (i.e., without long-term memory); they can start tracking immediately from any region with characteristic properties. In addition, objects can be tracked even if their visual configurations change considerably (even if these changes can sometimes not be reported [47]); it seems to be sufficient if certain dynamic assumptions are fulfilled (in easy cases, smoothness and continuity in one of the cues that make up the tracked object suffice). Even better, humans can track several objects simultaneously, although this capacity is limited to a handful of objects [39], a number reminiscent of the capacity limits of visual short-term memory from section 1.2. Taken together, visual target tracking is remarkably robust and flexible, being able to deal with all sorts of target property changes and dynamics, uncertainties in the measurement, and even periods of occlusion. For a cognitive vision system, we aim at a similarly flexible visual object tracking process, with the purpose of locking and maintaining attention on an object or a part of a visual scene for a short time period. It should be able to deal with varying visual conditions as well as asynchronous, nonregular updates at low frame rates. In addition, under varying visual conditions no single cue will be sufficiently robust to provide reliable tracking information over time, so that we have to use multiple cues for the tracking process (with a preprocessing as described for the multicue segmentation process in section 3.1). The idea is that if the cues are sufficiently complementary, there will always be at least one which can provide a tracking signal that can be exploited. Under varying visual conditions, the reliability of the cues varies and some cues undergo signal jumps, but some of the remaining cue channels exhibit predictable signals that can be used for tracking. After cue preprocessing, the fundamental problem that a tracking system has to solve is that of iterative, dynamic target state estimation. This means that it has to continuously estimate the state of a dynamic system using a series of measurements gained from an observable that can be put in relation to the state. Fortunately, this type of problem has been extensively studied in the domain of dynamic nonlinear filtering, see e.g. [42] for a review. For noisy states and measurements, the dynamic filtering problem can be formulated as an optimal recursive Bayesian estimator. Well-known estimators are e.g. the Kalman filter used for linear Gaussian problems (and its variants), but also techniques for the approximate numerical handling of the estimation problem, as given e.g. by the family of particle filter approaches (see e.g. [4] for an overview). For the Bayesian estimator, one attempts to construct for each timestep the posterior probability density function (pdf) of the state, taking into consideration the whole series of past measurements. The pdf contains the
complete solution to the estimation problem (in a statistical sense), which means that from it we can extract any relevant statistical estimate of the state. During tracking, an estimate of the state is calculated every time a new measurement Zt is received. This means that the filter is applied sequentially every time a new measurement becomes available, hopefully converging over time towards the solution. At every time step, only the most recent measurements are used to refine the estimate, so that the computational cost remains within acceptable bounds. The posterior of the state given all past measurements reads

\rho(X_t \mid Z_t, \ldots, Z_1) \qquad (3)
with the present state Xt and the measurements Zt, ..., Z1 for all discrete past timesteps t, t − 1, ..., 1, including t. Let us start from timestep t − 1. We assume that the last posterior

\rho(X_{t-1} \mid Z_{t-1}, \ldots, Z_1) \qquad (4)
is known. The target is now to estimate the new, present posterior Eq. (3) by taking into account

– some additional knowledge about how the state X evolves over time from t − 1 to t,
– knowledge about the measurement that is expected at time t if the system is in a state X, and
– the real, new measurement Zt taken at time t.

These points are expressed formally in two stages of the filtering process, usually termed the prediction and update stages. The prediction stage uses the knowledge about the system's state development over time to predict the expected posterior for timestep t, i.e., it propagates the posterior from one timestep to the next without consideration of the new measurement. This type of prediction is usually coupled with uncertainty, so that it will generally spread and broaden the pdf. On the contrary, the update step uses the measurement Zt to confirm and narrow down the prediction. The two steps are then combined via the Bayes theorem, the prediction corresponding to the Bayesian prior and the measurement to the Bayesian likelihood used for adjusting the prior when extra information is available. All this is a probabilistic concretization of the prediction-measurement-update steps as introduced in section 2.4. Using knowledge about how the state X evolves over time from t − 1 to t means, in a probabilistic sense, knowing

\rho(X_t \mid X_{t-1}) \qquad (5)
if we restrict ourselves to a Markovian process of order one. Note that there is no dependency on the measurements/observables here, since we assume the measurement not to have any impact on the state itself. Then, Eq. (5) can be used to get (see e.g. [42])

\rho(X_t \mid Z_{t-1}, \ldots, Z_1) = \int \rho(X_t \mid X_{t-1}) \, \rho(X_{t-1} \mid Z_{t-1}, \ldots, Z_1) \, dX_{t-1} \qquad (6)
which is the expected posterior for time t, taking into consideration all past measurements Zt−1, ..., Z1 but not yet including the most up-to-date measurement Zt. Similarly, using knowledge about the expected measurement for time t means knowing

\rho(Z_t \mid X_t, Z_{t-1}, \ldots, Z_1) \qquad (7)

Bayes' theorem then gives us

\rho(X_t \mid Z_t, \ldots, Z_1) \;\propto\; \underbrace{\rho(Z_t \mid X_t, Z_{t-1}, \ldots, Z_1)}_{\text{measurement likelihood}} \;\; \underbrace{\rho(X_t \mid Z_{t-1}, \ldots, Z_1)}_{\text{predictive prior}} \qquad (8)
which combines the two equations Eqs. (6) and (7) to get the estimate of the new, updated posterior. (The proportionality indicates that all the pdf's always have to be normalized.) Ideally, Zt is the complete multicue measurement. In practice, it is often assumed that the measurements are independent for each cue, so that the formalism applies to each cue likelihood independently and the results can afterwards be combined. The probabilistic approach then automatically decreases the weight of the contributions of more "uncertain" (in the sense of noisy, fluctuating) cues. A probabilistic multicue tracking method that is robust against sudden changes in single cues is presented by Eggert et al. in [17]. A nice property of the fully probabilistic approach is that it takes multiple simultaneous hypotheses into consideration. This implies that testing the different hypotheses is cheap; this does not hold for more specialized scenarios, where a dedicated machinery has to be specialized and adapted in order to test each single hypothesis. The probabilistic framework for tracking is therefore subject to severe resource constraints, as stated in section 1, this time in terms of prediction range. In practice, the probabilistic approach only works for simple predictive models and has to be extended by further non-probabilistic adaptation loops. Figure 6 shows the block diagram of tracking from a more general perspective. After the preprocessing of multiple cues, knowledge about the particular target is incorporated, e.g. as a multicue template or another indication of visual properties, similar to the region descriptors from section 3.1. This gives us the target-specific measurements which are used for the state estimation. Control from high-level processing can be exerted at the level of the multicue preprocessing step, similarly to the segmentation case. Second, the target descriptor has to be provided and adjusted depending on an object's appearance change. Third, the state adjustment relies on a predictive estimation that can be influenced by other visual subprocesses, e.g. including context knowledge about preferred location, velocity, rotation, etc. Fourth, the dynamic state prediction model may be subject to change (consider as an example the case of a ball that is being tracked, rolling on a table surface and then reaching the border, falling down and bouncing away). Fifth, scene context information is crucial for the measurement part of the estimation, since object occlusions could be bridged
Fig. 6. Processing and control flow in a generalized item tracker, spanning multicue preprocessing with dynamic cue weighting, target descriptors, measurement and measurement model (occlusions), likelihood, and state estimation with state priors and a prediction model (from context), under high-level priors and control. The core of a tracker is a probabilistic formulation of the prediction-measurement-update loop (circular arrow in the state estimation). To cope with a changing visual appearance of the target, an adaptation loop spans the measurement, likelihood and state estimation modules, adjusting the target descriptors. Higher-level priors may also influence the target descriptors, as well as the measurement and prediction models. On top of all, an item finding, engagement, tracking, evaluation, and release or reengagement loop actively binds items from visual short-term memory to the tracking process; see fig. 4 and section 2.5.
by changing the state adjustment process if knowledge about occluder objects is available, in this way "explaining" situations in which the tracking fails. Context information is also necessary for the case of arising correlations between different object dynamics (e.g. as present in a hand-object coordination scene during grasping), which can be captured by a modification of the prediction models (the prediction models of the interacting objects become entangled). Finally, higher-level modules have to either start the tracking engagement by presenting object hypotheses and starting conditions, finish the engagement when tracking fails, or organize reengagement if the object should be kept in the processing focus of the system. As argued before, we postulate that a suitable representation in visual working memory has to be established that has access to the multiple adjustment loops and serves to control the tracking processes. This representation would then couple to other processes that demand a sustained focus on an object or the estimation of an object's parameters as delivered by the tracking process, as well
as to superordinate processes that organize the distribution of tracking resources over hypothetical objects of interest and the creation and destruction of tracking subprocesses. It remains to be stated that an entire plethora of visual tracking approaches exists, depending on the one hand on the types of representations that are used for representing the objects, and on the other hand on the complexity of the appearance changes to be captured. For technical systems, many trackers work in constrained environments, with e.g. high input frame rates (resulting in very simple or nearly linear appearance changes, as assumed by KLT or mean-shift based differential tracking systems, see e.g. [31,44,13]) or a stationary background against which changes can be easily extracted. Here, we wanted to highlight control issues in trackers that work with large appearance changes, low frame rates, asynchronous update or measurement, and sporadic and selective tracker engagement, controlled for a dedicated visual task within a specific visual scene and visual item memory context.
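To make the estimator of Eqs. (5)-(8) concrete, here is a compact particle-filter sketch of one prediction-measurement-update cycle; this is our own illustration, and the random-walk prediction model and Gaussian measurement model are arbitrary stand-ins for the context-dependent models discussed above:

import numpy as np

rng = np.random.default_rng(1)

def particle_filter_step(particles, weights, measurement,
                         process_noise=0.5, meas_noise=1.0):
    # One prediction-measurement-update cycle, Eqs. (5)-(8).
    # particles: (N, 2) position hypotheses X_t; weights: (N,) posterior weights.
    # Prediction, Eqs. (5)-(6): sample from rho(X_t | X_{t-1});
    # here a random walk stands in for the prediction model.
    particles = particles + rng.normal(0.0, process_noise, particles.shape)
    # Measurement likelihood, Eq. (7): Gaussian measurement model
    # rho(Z_t | X_t) around the predicted positions.
    d2 = np.sum((particles - measurement) ** 2, axis=1)
    likelihood = np.exp(-0.5 * d2 / meas_noise ** 2)
    # Update, Eq. (8): posterior ~ likelihood * predictive prior, renormalized.
    weights = weights * likelihood
    weights /= weights.sum()
    # Resample to avoid degeneracy (all weight concentrating on few particles).
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

# Usage: track a target drifting towards the upper right.
N = 500
particles = rng.normal(0.0, 2.0, (N, 2))
weights = np.full(N, 1.0 / N)
for t in range(20):
    z = np.array([0.3 * t, 0.2 * t]) + rng.normal(0.0, 1.0, 2)
    particles, weights = particle_filter_step(particles, weights, z)
print("state estimate:", particles.mean(axis=0))  # close to the true target position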
4 Learning in Cognitive Vision Systems
The human visual system is not fixed and static over time, but rather a flexible and adaptive system that is subject to a wide variety of learning mechanisms. Learning occurs at different time scales, ranging from minutes up to life-long time spans, as required to obtain proficiency in specialized, visually dominated tasks. Flexible and autonomous learning is also a key ability that distinguishes human visual capabilities from current state-of-the-art technological solutions in machine vision and object recognition. In the following, we first describe our approach to the main general principles that underlie the buildup of a task-driven, behaviorally relevant visual knowledge representation. We then concentrate on the two issues of perceptual learning, operating on rather short time scales, and already realized concepts of integrated visual attention and object category learning.
4.1 Self-referential Buildup of Visual Knowledge Representations
The learning processes starting from the initial perceptual capabilities of an infant after birth up to the later development of specialized visual proficiency are an example of a complex knowledge acquisition process that requires the coordinated interaction of several areas of the brain. As an approach to understanding this, the concept of self-referential learning [29] has been proposed, emphasizing the autonomous and active character of any brain-like knowledge acquisition process and the prevalence of task-driven and behaviorally relevant learning. Both aspects together ensure the consistent buildup of a visual knowledge representation that is useful for a living being that has to survive in a dynamically changing and unpredictable environment. Any higher-level biological cognitive system faces the challenge that its development is to a large part determined by the interaction with its surroundings.
Here the feedback from the environment rarely provides explicit teaching signals of the quality assumed in the supervised learning paradigm of neural networks or machine learning. The system rather has to rely on its own internal dynamics in determining the buildup of meaningful visual categories and in evaluating their success in the interaction with the environment. Körner and Matsumoto [29] have emphasized the importance of a subjective stance towards this acquisition process, defining a "self" that is determined by a value system and that guides the learning process through emotions and resulting attentional biases. This value system strongly relates to phylogenetically older substructures of the brain, where especially the limbic system plays a crucial role. An important aspect of self-reference is that the already acquired representational system strongly constrains the perception, evaluation, and thus value-driven acquisition of new knowledge. This filtering of new knowledge based on existing representations ensures the overall consistency of newly stored information in relation to the prior knowledge. The reference to a value system for guiding the acquisition of meaningful representations provides a direct link to the importance of behavior and task-related concepts for the learning of visual representations. According to this approach, the formation of visual categories is generally done in reference to a task and behavioral constraints. This provides a strong coupling between action and perception, a key ability of biological intelligent systems that has proven notoriously difficult to achieve in technical systems. A good example of such a representation in the field of vision are the action-related representations of manipulable man-made objects that have been found in the dorsal brain areas [12].
4.2 Perceptual Learning
Perceptual learning has been defined as an acquired change to a perceptual system that improves its ability to respond to the environment [23]. The corresponding time spans range from minutes up to days and weeks, and the effects of perceptual learning can be quite long-lasting. This type of learning adaptation has been contrasted with cognitive learning processes in that it applies to perceptual or pre-attentive processes that are beyond conscious access. In an attempt to categorize different mechanisms of perceptual learning, Goldstone [23] has distinguished the following mechanisms:
– Attentional weighting modifies the relevance of visual features in a particular task context
– Imprinting introduces new, specialized features in a perceptual situation
– Differentiation separates previously indistinguishable features
– Unitization merges separate features into greater units to ease perception of such compounds
Perceptual learning is generally assumed to modify the early stages of cognition and thus operates prior to high-level reasoning. The perceptual effect can, however, deeply influence higher areas by influencing the feature representations
that are the basis of higher-level concepts. This low-level property is highlighted by the limited generality of this form of learning. Training on simple visual discriminations often does not transfer to different eyes, to different spatial locations, or to different tasks involving the same stimuli [20]. Although perceptual learning is a ubiquitous phenomenon in biological vision systems, almost all current computer vision systems lack this basic capability. The reason is that no general concept is available that could deal with the resulting plasticity in such a system. A simple example is an object classifier that is trained by supervised learning on the output of a feature detection stage. Most current classification models assume a static feature representation and cannot handle an incremental and dynamic input stage. In a recent contribution, Wersing et al. [53] have investigated a model of coupled learning between the "what" and "where" pathways for the bootstrapping of the representations for localizing and classifying objects. This establishes first steps towards modular cognitive vision systems where parallel learning in different modules can occur without destabilizing the robustness of the system.
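To illustrate the incremental flavor of such learning, the sketch below shows a minimal prototype-based classifier that absorbs new labeled views online, without a separate training phase. It is loosely in the spirit of the vector-quantization-based incremental architectures cited in this section, but the insertion rule, threshold, and distance metric are invented for the example.

```python
import numpy as np

class IncrementalPrototypeClassifier:
    """Minimal online learner: stores labeled feature prototypes and inserts
    a new prototype whenever an example is not yet represented well enough.
    The threshold and metric are illustrative choices only."""

    def __init__(self, insertion_threshold=1.0):
        self.prototypes, self.labels = [], []
        self.threshold = insertion_threshold

    def classify(self, feature):
        if not self.prototypes:
            return None, np.inf
        d = [np.linalg.norm(feature - p) for p in self.prototypes]
        i = int(np.argmin(d))
        return self.labels[i], d[i]

    def learn(self, feature, label):
        predicted, distance = self.classify(feature)
        # Insert a new prototype only for novel or misclassified views,
        # so the representation grows incrementally with experience.
        if predicted != label or distance > self.threshold:
            self.prototypes.append(np.asarray(feature, dtype=float))
            self.labels.append(label)

clf = IncrementalPrototypeClassifier()
clf.learn(np.array([0.0, 0.1]), "cup")        # first view of a cup
clf.learn(np.array([5.0, 4.9]), "phone")      # first view of a phone
print(clf.classify(np.array([0.2, 0.0]))[0])  # -> "cup"
```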
4.3 Visual Category Learning and Object Attention
Main problems. The processes involved in visual categorization are generally considered to lie more on the high-level or cognitive side of perception. Nevertheless it is obvious that sensing and learning of object classes depend strongly on phenomena of attention, expectation, and task-driven utility. In creating a visual system with an autonomous strategy for learning visual concepts, the following questions have to be answered:
– What and where do we have to learn?
– When do we have to learn?
The first question is related to the ability of a learning visual system to attend to particular parts of the scene that offer some saliency, which can be both bottom-up and top-down driven. In general, an autonomously learning system requires an active strategy for selecting elements within a visual scene that are both interesting and can be robustly separated from the distracting surroundings. It should be one of the main targets of a cognitive vision system to relax the strong segmentation constraints that are currently necessary for many computer vision approaches to supervised learning of object categories. Segmentation in this framework should rather be a form of attention that is mainly top-down driven by the prior knowledge of the system. In the human visual system there exists a clear functional differentiation between the processing of object identity ("what") and object positions ("where") [56]. The second question is related to the temporal coherence and stability of the learning process. An autonomously learning system cannot rely on an explicit teacher signal that triggers the start and end of a learning phase. It rather needs intrinsic signals that characterize points in time where learning is feasible, based on an internally driven detection of learning success. Prediction is one of the main
concepts that can be used to estimate the feasibility of learning in a particular scene context. For prediction of the sensory input, it is necessary to produce generative models that are capable of reproducing the relevant visual structures of real objects. We are here, however, not mainly concerned with the temporal aspect of prediction, but with prediction in the sense of the ability of the system to effectively represent an externally changing input using its internal representations. This is normally referred to as deriving a generative model for the considered stimulus domain. To make this autonomous learning feasible, a priori information on relevant structural constituents of objects can be useful.
Related Work. The questions of attention-based learning and object isolation in real-world scenes have been investigated in a number of recent contributions. Shams & von der Malsburg [43] considered the autonomous learning of visual shape primitives in an artificially generated setting with rendered scenes containing geon components. Using a correlation measure based on Gabor jet feature representations, they manage to derive simple constituents of the scene. The scaling to more complex real-world scenes, however, was not yet considered. Williams & Titsias [54] have proposed a greedy learning approach for multiple objects in images using statistical learning in a generative model setup. Their approach is based on a predefined sequence of learning steps: first the background is learned, then the first object, and subsequently more objects. The representation is based on a mask and a transformable template. A limitation is that the method can only register a single pose of a particular object. In a similar Bayesian generative framework, Winn & Jojic [55] use their LOCUS model for the learning of object classes with unsupervised segmentation. Additionally, they can handle stronger appearance variation among the members of the learned class, e.g. in color and texture. Walther et al. [50] investigate the usage of bottom-up saliency for determining candidate objects in an unsupervised way in outdoor and indoor scenes. For each frame of a video sequence such candidate objects are determined offline and represented using the SIFT feature approach developed by Lowe [30]. Matching objects are determined between pairs of frames and compared to a human labeling of the objects in the scene. The saliency-based segmentation improves the matching performance, and the system is robust with regard to scaling and translation, but not very good at representing 3D rotation and multiple poses of objects. An interesting approach to supervised online learning for object recognition was proposed by Bekel et al. [10]. Their VPL classifier consists of feature extraction based on vector quantization and PCA, and supervised classification using a local linear map architecture. They use bottom-up saliency coupled with pointing gestures in a table setting to isolate objects in the scene. The segmentation method is similar to the one in [50]. Arsenio [3] uses an active perception model for object learning that relies on motion-based segmentation, sometimes even induced by robot actions. The object representation is based on hashing techniques that offer fast processing, but only limited representational and discriminatory capacity.
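Returning to the second question above (when to learn), one minimal computational reading is that learning is enabled only when an intrinsic signal, here the recent prediction error of the system's internal generative model, indicates that the current scene is represented well and stably. The following sketch is illustrative only; the gating rule, window, and thresholds are assumptions, not a mechanism proposed by the authors.

```python
import numpy as np

def learning_feasible(prediction_errors, window=5, threshold=0.2):
    """Intrinsic gating signal for 'when to learn': true only if the internal
    model's prediction error has been low and stable over the last `window`
    time steps (an illustrative criterion)."""
    recent = np.asarray(prediction_errors[-window:])
    return (len(recent) == window
            and recent.mean() < threshold
            and recent.std() < threshold / 2)

# Prediction error drops as the internal model captures the scene; learning
# is triggered only once the error is both low and stable.
errors = [0.9, 0.7, 0.4, 0.15, 0.12, 0.11, 0.10, 0.12]
for t in range(len(errors)):
    if learning_feasible(errors[:t + 1]):
        print(f"t={t}: prediction stable, consolidate current object view")
```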
[Fig. 7 is a block diagram; its modules include image acquisition (RGB, stereo depth), attention and gaze control with top-down feedback, motion and contrast cues, face and skin-based peripersonal object hypotheses, attention selection, sensory memory, shape feature maps, object memory, temporal integration, and speech input/output.]
Fig. 7. Overview of the visual online learning architecture by Wersing et al. The system combines feature-based bottom-up saliency with top-down expectations on faces to be learned. Objects and faces are then incrementally learned in a unified shape feature map representation using short-term and long-term memory.
Itti [25] develops a general theory of attention as Bayesian surprise. In this approach, surprise is quantified by measuring the difference between the posterior and prior beliefs of the observer. Attention for Online Object Learning. Wersing et al. [52] have presented a biologically motivated architecture for the online learning of objects and people in direct interaction with a human teacher. Their system combines a flexible neural object recognition architecture with an attention system for gaze control, and a speech understanding and synthesis system for intuitive interaction. A high level of interactivity is achieved by avoiding an artificial separation into training and testing phases, which is still the state of the art for most current trainable object recognition architectures. They do this by using an incremental
learning approach that consists of a two-stage memory architecture of a context-dependent working or sensory memory and a persistent object memory that can also be trained online. They use a stereo camera head mounted on a pan-tilt unit that delivers a left and right image pair for visual input (see Fig. 7). The gaze is controlled by an attention system using bottom-up cues like edge/color/intensity contrast, motion, and depth, presented in more detail in [22]. Additionally, top-down information on face targets to be followed is provided as a peaked map at the detected face position. Each cue is represented as a retinotopic activation or saliency map. A simple addition of the different cues is used, where clear priorities are induced by weighting the cues in the following sequence: contrast < motion < depth < face. This simple model enables a quite complex interaction with the system to guide the attention appropriately. The default state of the gaze selection system is an exploratory gazing around that focuses on strong color and intensity contrasts. Moving objects attract more attention. An even stronger cue is generated by bringing an object into the peripersonal space, that is, the near-range space in front of the camera that corresponds to the manipulation space of a humanoid robot [22]. However, the weaker contrast cues also contribute and stabilize the attention. The strongest cue is the presence of a detected face, generating a strong task-specific attention peak at the detected position. To trigger online learning and recognition, two object hypotheses are computed in parallel. Firstly, objects are learned and recognized if they are presented within the peripersonal space. The object is attended as long as it resides within this interaction space. Secondly, using skin color segmentation, candidate region segments are classified according to their face similarity. An accepted face region is then selected and processed using the same online learning and recognition pathway as for objects. The attention is retracted from the face if no valid face-like segment was detected near the image center for two input frames. The system of Wersing et al. is capable of learning and robust recognition of several objects and face categories, as was shown in [52]. The interaction between the attention system and the object learning, however, is manually designed and not dynamic with regard to the selected feature modalities like stereo or face shapes. The implementation of dynamic mechanisms and learning principles also for this part of the system will be an important future step to ensure stronger autonomy of such online learning visual systems.
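The additive cue combination just described can be made concrete in a few lines. The sketch below is illustrative: the text specifies only the priority ordering contrast < motion < depth < face, so the particular numeric weights, map sizes, and values are invented for the example.

```python
import numpy as np

def combined_saliency(contrast, motion, depth, face,
                      weights=(1.0, 2.0, 4.0, 8.0)):
    """Additive combination of retinotopic cue maps, with weights chosen to
    induce the priority ordering contrast < motion < depth < face.
    Only the ordering of the weights matters for this illustration."""
    return sum(w * m for w, m in zip(weights, (contrast, motion, depth, face)))

def next_gaze_target(saliency):
    # Gaze selection: fixate the most salient retinotopic position.
    return np.unravel_index(np.argmax(saliency), saliency.shape)

# Illustrative maps in [0, 1]: weak contrast everywhere, no motion or depth
# cue, and a detected face at position (10, 20) that dominates the rest.
rng = np.random.default_rng(1)
contrast = rng.random((32, 32)) * 0.5
motion = np.zeros((32, 32))
depth = np.zeros((32, 32))
face = np.zeros((32, 32))
face[10, 20] = 1.0
print(next_gaze_target(combined_saliency(contrast, motion, depth, face)))
# -> (10, 20): the face cue wins over the weaker contrast cues
```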
5 Conclusions
During the last years, considerable progress has been made on single visual processes, as is also the case for the examples presented here: segmentation, tracking, and appearance-based object classification and learning. Nevertheless, in a real-world scenario these processes have to be constrained by sensible assumptions to make the problems tractable. It is likely that no general-purpose
solution exists for any of them without severe constraints, a dilemma that may be shared with biological vision systems. This means that we are confronted with the principled problem that a number of visual subprocesses have to be organized, constrained, adapted, and arbitrated dynamically, even for simple, brief visual tasks. As a consequence, visual subprocesses have to be approached and designed in a substantially different way than in classical computer vision. This paper presents a proposal on how the organization of visual subprocesses could be achieved. Where should the information about how to constrain and adapt visual subprocesses come from in the first place? In essence, visual and long-term memory can store a large amount of specific priors and context knowledge, which may be recalled to tune processes to particular scenarios, object categories, and objects. The main role of the control processes is to bring together different types of internal knowledge (long-term assumptions about the world and its items, short-term scene and object memory, and process knowledge about the system's own internal adaptation processes, limitations, and capabilities) to actively steer the acquisition of information. Because of limited processing resources, this occurs selectively and on demand. It is about anchoring in the broadest sense, but with dynamically changing configurations of the memory representations, which are bound to sensory events only when required for a visual task. So far we have not discussed concrete realizations of the memory structure itself. What would be a minimal set of representational properties of a memory that is capable of serving as a knowledge basis for control processes, and that can be temporarily entangled with selected sensory measurements? How feasible is the idea of a visual memory that gathers information about items, objects, and scene properties? The experimental evidence about people's inability to detect severe visual changes does not seem to support the idea of a persistent dedicated visual memory. It rather suggests that "visual representations may be sparse and volatile, providing no cumulative record of the items in a scene" [6]. However, most of the studies do not take special care of attention, so it may be that the visual system does build a cumulative record of all attended stimuli and still misses all changes involving items that were not attended. Here we reencounter the resource limitation argument, both in terms of memory access and in terms of a bottleneck in attentional resources, since attended items require exclusive resource allocation. Visual memory may therefore store just what is necessary and what was accessible with the limited access resources to visual subprocesses, making the control processes that decide where to focus the visual resources even more important. This applies both to visual short-term memory and to consolidation processes during visual exploration, as introduced in Section 4.3: what, where, and when to learn. Visual scene representations must therefore provide a substrate on which these issues can be taken into account. Such a representation is selectively impoverished (accumulating only sparse and incomplete visual information) and volatile (referring to short-term visual memory with its limited and temporary anchoring capabilities to
sensory events), but it has to provide interfaces to control structures and control processes and to the different types of information extracted by the different visual subprocesses and modalities. First attempts to couple a sparse, relational visual memory with a simple visual system are presented in [26,40]. A principled approach that integrates memory, in the form of priors, contextual and process information, with dedicated control structures that tune visual subprocesses remains, however, an open yet fundamental research topic for cognitive vision.
References
1. Aloimonos, J.: Purposive and qualitative active vision. In: Proc. 10th Int. Conf. Patt. Recog., June 1990, pp. 345–360 (1990)
2. Aloimonos, Y.: Active vision revisited. In: Active Perception (1993)
3. Arsenio, A.: Developmental learning on a humanoid robot. In: Proc. Int. Joint Conf. Neur. Netw. 2004, Budapest, pp. 3167–3172 (2004)
4. Arulampalam, S., Maskell, S., Gordon, N., Clapp, T.: A Tutorial on Particle Filters for On-line Non-linear/Non-Gaussian Bayesian Tracking. IEEE Trans. Signal Processing, 100–107 (2001)
5. Baars, B.J.: Metaphors of consciousness and attention in the brain. Trends in Neuroscience 21(2), 58–62 (1998)
6. Becker, M., Pashler, H.: Volatile visual representations: Failing to detect changes in recently processed information. Psychonomic Bulletin and Review 9, 744–750 (2002)
7. Baddeley, A.D., Hitch, G.J.: Working memory. In: Bower, G.A. (ed.) Recent Advances in Learning and Motivation, vol. 8, p. 47. Academic Press, New York (1974)
8. Barsalou, L.W.: Grounded cognition. Annu. Rev. Psychol. 59, 617–645 (2008)
9. Bauckhage, C., Wachsmuth, S., Hanheide, M., Wrede, S., Sagerer, G., Heidemann, G., Ritter, H.: The visual active memory perspective on integrated recognition systems. Image and Vision Computing 26(1) (2008)
10. Bekel, H., Bax, I., Heidemann, G., Ritter, H.: Adaptive computer vision: Online learning for object recognition. In: German Pattern Recognition Symposium, pp. 447–454 (2004)
11. Chan, T., Vese, L.: Active contours without edges. IEEE Transactions on Image Processing 10(2), 266–277 (2001)
12. Chao, L.L., Martin, A.: Representation of manipulable man-made objects in the dorsal stream. Neuroimage 12(4), 478–484 (2000)
13. Comaniciu, D., Meer, P.: Mean shift analysis and applications. In: International Conference on Computer Vision, pp. 1197–1203 (1999)
14. Coradeschi, S., Saffiotti, A.: An introduction to the anchoring problem. Robotics and Autonomous Systems 43(2-3), 85–96 (2003)
15. Cowan, N.: The magical number 4 in short-term memory: A reconsideration of mental storage capacity. Behavioral and Brain Sciences 24(1), 87–185 (2001)
16. Weiler, D., Eggert, J., Willert, V., Koerner, E.: A probabilistic method for motion pattern segmentation. In: Proceedings of the IJCNN 2007 (2007)
17. Eggert, J., Einecke, N., Koerner, E.: Tracking in a temporally varying context. In: Tsujino, H., Fujimura, K., Sendhoff, B. (eds.) Proceedings of the 3rd HRI International Workshop on Advances in Computational Intelligence, Honda Research Institute, Wako, Japan (2005)
18. Eriksen, C.W.: Attentional search of the visual field. In: David, B. (ed.) International Conference on Visual Search, 4 John St., London, WC1N 2ET, pp. 3–19. Taylor and Francis Ltd, Abingdon (1988)
19. Eriksen, C.W., St. James, J.D.: Visual attention within and around the field of focal attention: A zoom lens model. Percept. Psychophys. 40, 225–240 (1986)
20. Fahle, M., Morgan, M.: No transfer of perceptual learning between similar stimuli in the same retinal position. Current Biology 6, 292–297 (1996)
21. Metta, G., Sandini, G., Konczak, J.: A developmental approach to visually-guided reaching in artificial systems. Neural Networks 12(10), 1413–1427 (1999)
22. Goerick, C., Wersing, H., Mikhailova, I., Dunn, M.: Peripersonal space and object recognition for humanoids. In: Proceedings of the IEEE/RSJ International Conference on Humanoid Robots (Humanoids 2005), Tsukuba, Japan (2005)
23. Goldstone, R.L.: Perceptual learning. Annual Review of Psychology 49, 585–612 (1998)
24. Grossberg, S., Hong, S.: A neural model of surface perception: Lightness, anchoring, and filling-in. Spatial Vision 19(2-4), 263–321 (2006)
25. Itti, L.: Models of bottom-up attention and saliency. In: Itti, L., Rees, G., Tsotsos, J.K. (eds.) Neurobiology of Attention, pp. 576–582. Elsevier, San Diego (2005)
26. Eggert, J., Rebhan, S., Koerner, E.: First steps towards an intentional vision system. In: Proceedings of the International Conference on Computer Vision Systems (ICVS 2007) (2007)
27. Kass, M., Witkin, A., Terzopoulos, D.: Snakes: Active contour models. International Journal of Computer Vision 1(4), 321–331 (1988)
28. Kim, J., Fisher, J.W., Yezzi, A.J., Çetin, M., Willsky, A.S.: Nonparametric methods for image segmentation using information theory and curve evolution. In: International Conference on Image Processing, Rochester, New York, September 2002, vol. 3, pp. 797–800 (2002)
29. Körner, E., Matsumoto, G.: Cortical architecture and self-referential control for brain-like computation. IEEE Engineering in Medicine and Biology 21(5), 121–133 (2002)
30. Lowe, D.G.: Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision 60(2), 91–110 (2004)
31. Lucas, B.D., Kanade, T.: An iterative image registration technique with an application to stereo vision. In: International Joint Conference on Artificial Intelligence, pp. 674–679 (1981)
32. Marr, D.: Vision. Freeman, San Francisco (1982)
33. Maturana, H., Varela, F.: The Tree of Knowledge - The Biological Roots of Human Understanding. New Science Library, Boston (1987)
34. Mumford, D., Shah, J.: Optimal approximation by piecewise smooth functions and associated variational problems. Commun. Pure Appl. Math. 42, 577–685 (1989)
35. Navalpakkam, V., Itti, L.: An integrated model of top-down and bottom-up attention for optimizing detection speed. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. II: 2049–2056 (2006)
36. Neumann, B., Möller, R.: On scene interpretation with description logics. Image and Vision Computing 26(1), 82–101 (2008)
37. Osher, S., Sethian, J.A.: Fronts propagating with curvature-dependent speed: Algorithms based on Hamilton-Jacobi formulations. J. Comput. Phys. 79, 12–49 (1988)
38. Pylyshyn, Z.W.: The role of location indexes in spatial perception: A sketch of the FINST spatial index model. Cognition 32(1), 65–97 (1989)
39. Pylyshyn, Z.W., Storm, R.W.: Tracking multiple independent targets: Evidence for a parallel tracking mechanism. Spatial Vision 3, 179–197 (1988)
40. Rebhan, S., Röhrbein, F., Eggert, J., Körner, E.: Attention modulation using short- and long-term knowledge. In: Gasteratos, A., Vincze, M., Tsotsos, J.K. (eds.) ICVS 2008. LNCS, vol. 5008, pp. 151–160. Springer, Heidelberg (2008)
41. Rensink, R.A., O'Regan, J.K., Clark, J.J.: To see or not to see: The need for attention to perceive changes in scenes. Psychological Science 8(5), 368–373 (1997)
42. Ristic, B., Arulampalam, S., Gordon, N.: Beyond the Kalman Filter. Artech House (2004)
43. Shams, L., von der Malsburg, C.: Acquisition of visual shape primitives. Vision Research 42(17), 2105–2122 (2002)
44. Shi, J., Tomasi, C.: Good features to track. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR 1994), Seattle (June 1994)
45. Sperling, G.: The information available in brief visual presentations. Psychological Monographs: General and Applied 74(11), 1–30 (1960)
46. Treisman, A., Schmidt, H.: Illusory conjunctions in the perception of objects. Cognitive Psychology 14, 107–141 (1982)
47. Triesch, J., Ballard, D.H., Hayhoe, M.M., Sullivan, B.T.: What you see is what you need. Journal of Vision 3(1), 86–94 (2003)
48. van Gelder, T., Port, R.F.: It's about time: An overview of the dynamical approach to cognition. In: Port, R.F., van Gelder, T. (eds.) Mind as Motion: Explorations in the Dynamics of Cognition, Bradford Books, MA, pp. 1–43. MIT Press, Cambridge (1995)
49. Vernon, D.: Cognitive vision: The case for embodied perception. Image and Vision Computing 26, 127–140 (2006)
50. Walther, D., Rutishauser, U., Koch, C., Perona, P.: Selective visual attention enables learning and recognition of multiple objects in cluttered scenes. Computer Vision and Image Understanding 100(1-2), 41–63 (2005)
51. Weiler, D., Eggert, J.: Segmentation using level-sets and histograms. In: Ishikawa, M., Doya, K., Miyamoto, H., Yamakawa, T. (eds.) ICONIP 2007, Part II. LNCS, vol. 4985, pp. 963–972. Springer, Heidelberg (2008)
52. Wersing, H., Kirstein, S., Götting, M., Brandl, H., Dunn, M., Mikhailova, I., Goerick, C., Steil, J.J., Ritter, H., Körner, E.: Online learning of objects and faces in an integrated biologically motivated architecture. In: Proc. ICVS, Bielefeld (2007)
53. Wersing, H., Kirstein, S., Schneiders, B., Bauer-Wersing, U., Körner, E.: Online learning for bootstrapping of object recognition and localization in a biologically motivated architecture. In: Gasteratos, A., Vincze, M., Tsotsos, J.K. (eds.) ICVS 2008. LNCS, vol. 5008, pp. 383–392. Springer, Heidelberg (2008)
54. Williams, C.K.I., Titsias, M.K.: Greedy learning of multiple objects in images using robust statistics and factorial learning. Neural Computation 16(5), 1039–1062 (2004)
55. Winn, J., Jojic, N.: LOCUS: Learning object classes with unsupervised segmentation. In: ICCV 2005, pp. I: 756–763 (2005)
56. Zeki, S.: Localization and globalization in conscious vision. Annual Review of Neuroscience 24, 57–86 (2001)
57. Zhu, S.C., Yuille, A.L.: Region competition: Unifying snakes, region growing, and Bayes/MDL for multiband image segmentation. PAMI 18(9), 884–900 (1996)
Some Requirements for Human-Like Robots: Why the Recent Over-Emphasis on Embodiment Has Held Up Progress
Aaron Sloman
School of Computer Science, University of Birmingham, B15 2TT, UK
[email protected] http://www.cs.bham.ac.uk/∼axs/ Abstract. Some issues concerning requirements for architectures, mechanisms, ontologies and forms of representation in intelligent human-like or animal-like robots are discussed. The tautology that a robot that acts and perceives in the world must be embodied is often combined with false premises, such as the premiss that a particular type of body is a requirement for intelligence, or for human intelligence, or the premiss that all cognition is concerned with sensorimotor interactions, or the premiss that all cognition is implemented in dynamical systems closely coupled with sensors and effectors. It is time to step back and ask what robotic research in the past decade has been ignoring. I shall try to identify some major research gaps by a combination of assembling requirements that have been largely ignored and design ideas that have not been investigated – partly because at present it is too difficult to make significant progress on those problems with physical robots, as too many different problems need to be solved simultaneously. In particular, the importance of studying some abstract features of the environment about which the animal or robot has to learn (extending ideas of J.J.Gibson) has not been widely appreciated.
1 Introduction
I shall discuss a collection of competences of humans and other animals that appear to develop over time under the control of both the environment and successive layers of learning capabilities that build on previously learned capabilities. In the process I shall point out some important limitations in current ideas about embodiment and dynamical systems. To set the scene, I use a well known and influential paper by Rodney Brooks (1990), which expresses very clearly both some views that are still widely held but that I think are seriously mistaken, and some important truths. However, some of those influenced by Brooks have been more mistaken than he was. I also believe he has changed his position since writing it. There is certainly some truth in the claim made by Brooks and others that early work on symbolic AI ignored important issues relating to the design of robots able to act in the world continuously in real time (also stressed in [32,3,34]). However it is usually forgotten that in early work on AI, including the SRI robot Shakey (see http://www.sri.com/about/timeline/shakey.html) and the Edinburgh robot Freddy [1], the available computing
power was so limited (e.g. with CPU speeds measured in at most kilocycles per second and computer memories in at most kilobytes) as to rule out any real-time interaction. For example, it could take 20 minutes for an AI visual system to find the upper rim of a teacup. This meant that the only option for AI researchers with long term goals was to adopt an extremely slow and artificial sense-think-act cycle for their experiments. Unfortunately, despite its flaws, noted below, that cyclic 3-stage model has persisted with variations (e.g. sense-compute-act) even in some systems that reject symbolic AI methods. Each new AI fashion criticises previous fashions for failing to achieve their predicted successes, alleging that they used the wrong mechanisms. This is a serious error. The main reason why the predictions were wrong was that the problems were not understood, not that the wrong mechanisms were adopted. The problems of replicating human and animal competences, especially the problems of seeing and interacting with a structured 3-D environment, are very complex in ways that we are only now beginning to understand. Had the problems been understood in the early days of AI, the over-optimistic predictions would not have been made. None of the tools and forms of representation so far proposed could have enabled the targets of AI to be met on the predicted time-scales. Moreover, none of them is adequate to the long term task, since, as various researchers have pointed out (e.g. Minsky in [20]), human-like robots will need information-processing architectures that combine different mechanisms running concurrently, doing different sorts of tasks, using different forms of information, processed in different ways. (I use the word "representation" to refer to whatever is used to encode information: it could be some physical structure or process, or a structure or process in a virtual machine; it could be transient or enduring; and it may be used for a single function or for many functions. See also http://www.cs.bham.ac.uk/research/projects/cogaff/misc/whats-information.html) But before we can achieve that, we shall need to do a deeper and more complete requirements analysis in order to understand the problems that need to be solved.
2 The Seduction of Embodiment
One of the fashions that has immense plausibility for beginners (especially those not trained in philosophy), and that has influenced large numbers of researchers over the last decade or two, has a number of aspects. It emphasises the fact that animals are embodied; claims that animal intelligence depends on the detailed morphology, including the sensors and effectors available; denies the need for symbolic representations and mechanisms for doing planning or logical reasoning; relies heavily on information being constantly available in the environment; emphasises the importance of the brain as a dynamical system linked to the environment through sensors and motors; and in some cases makes heavy use of statistical learning and probabilistic control mechanisms. A good example is [18]. In working systems that allow machines to manipulate internal states with semantic contents, it is common within this fashion to stress the importance of "symbol grounding" [11], i.e. the derivation of all semantic content from the
contents of sensor signals and the patterns of association between sensory and motor signals. The idea of symbol grounding is closely related to the old philosophical theory of concept empiricism, discredited long ago by the work of Kant and finally buried by 20th century philosophers of science considering the role of theoretical terms in science (e.g. "electron", "gene", "valence", etc.) that are primarily defined by their roles in explanatory theories (as explained in the tutorial at http://www.cs.bham.ac.uk/research/projects/cogaff/talks/#models). In his 1990 paper, Brooks presented the key ideas of so-called "nouvelle AI", emphasising embodiment and sensory-motor interactions with the environment, including the proposal to dispense with symbolic representations, for instance in perceiving, planning, reasoning and learning, because "... the world is its own best model. It is always exactly up to date. It always contains every detail there is to be known. The trick is to sense it appropriately and often enough. ... The constructed system eventually has to express all its goals and desires as physical action, and must extract all its knowledge from physical sensors. ... We explore a research methodology which emphasises ongoing physical interaction with the environment as the primary source of constraint on the design of intelligent systems." The claim was that once this point was understood and systems were built using a hierarchy of layers of sensorimotor control, everything would turn out to be "rather simple once the essence of being and reacting are available". Part of the argument was that it took far longer for evolution to reach the stage just before the first mammals evolved than to achieve all the subsequent developments. From this, it was inferred that once we had produced insect-like systems everything else would be "rather simple", a view parodied in the title of Kirsch (1991). Of course, Brooks was not the only defender of the emphasis on embodiment and the rejection of symbolic AI architectures and mechanisms. (His paper seems to me to be ambivalent between two positions: (a) wholesale rejection of symbolic AI as irrelevant, and (b) recommending that research on symbolic methods be temporarily shelved until the techniques of nouvelle AI are well developed. Whatever his real views, many readers and followers in many countries interpreted Brooks' work as demonstrating that symbolic AI is totally irrelevant, and one result of that is that many students who learn about AI are now taught nothing about AI techniques for planning, parsing, symbolic reasoning, logical inference, or learning and reasoning by analogy, though they may learn a lot of mathematics related to probabilistic representation and reasoning. Compare this comment by Cliff in [6]: "Most work in AI prior to the mid-1980's is largely of historical interest, and will be discussed no further.") It is not clear that Brooks adopted the "symbol-grounding" approach and the closely related assumption that all a robot's knowledge must be expressed in terms of sensorimotor contingencies, though other supporters of nouvelle AI did that explicitly or implicitly, e.g. by restricting the learning of robots to the discovery of statistical patterns relating sensory and motor signals. Many of them now present robots
performing tasks where much use is made of the morphology of the robot to reduce the software sophistication required. An extreme example of this approach, one that I have not yet seen presented but that would illustrate the ideas well, is the claim that a tennis ball is an elegant design for a kind of robot for going down helter-skelters. The extensions required for climbing to the top of the helter-skelter unaided will take a little longer.
2.1 Morphology Can Be Important, for Some Tasks
I am not denying that the choice of physical morphology can considerably simplify software design in some contexts. One of the oldest examples is the use of a "compliant wrist" to simplify problems of grasping and manipulation of 3-D objects, e.g. in [7]. Some of the recent work on embodied robots has been very impressive. One of the most advanced is the BigDog robot designed by Marc Raibert and colleagues at Boston Dynamics (shown in online videos at http://www.bostondynamics.com/), capable of being a load-bearer to help humans in difficult and varied terrain. The Honda Asimo, a humanoid robot, is able to perform a number of non-trivial actions (illustrated at http://world.honda.com/ASIMO/). Brooks' company iRobot makes and sells many robots (see http://www.irobot.com/sp.cfm?pageid=109). The Shadow robot hand (http://www.shadowrobot.com/hand/videos.shtml) has been used in demonstrations showing how the physical properties of the hand contribute to control functions. There is no evidence, however, that robots based solely on the principles of nouvelle AI will be able to perform all the demanding tasks for which human-like robots are often expected to be needed, including, for example, being a helpful companion to a disabled or blind person. Being able to select a suitable place to cross a busy road requires a vision system that can tell where the nearby pedestrian crossings are, select one, and plan a route to get from the current location to the required point on the other side of the road via the crossing. This functionality should cope with foreign towns where traffic lights and pedestrian crossings have different visual features. All of that could be avoided by having roadsides wired with hidden cables guiding robots to crossing points, but a robot that required such guidance would not be a human-like robot. Apart from the cameras and the motors for moving them, the parts of the body of the robot (in contrast with information about parts of the body) are not needed for the process of using visual perception and prior knowledge to work out a sensible route, although the whole body is of course relevant to carrying out the plan. To be fair to Brooks, since early 2002 I have heard him publicly acknowledge in conference presentations and discussions that the ideas of nouvelle AI are insufficient, since a mixture of old and new AI techniques is needed for progress in robotics. Some early strident proponents of embodiment and of how much you can get "for free" by suitable physical design of robots have also recently adopted a more muted tone, while still failing to mention that intelligent robots may have to plan future actions, reason about unobserved events, or represent mental
states of others, e.g. [24]. The issues are discussed in more detail in the special issue of Cognitive Systems Research on embodiment [54].
3 Fallacies in Nouvelle AI
Unfortunately there remain many believers in nouvelle AI, whether they use that label or not, writing books and papers, giving presentations, and teaching young and impressionable students. So I shall try to explain the two main fallacies, and present an alternative synthesis. The first fallacy is in the use of the evolutionary argument, and the second amounts to ignoring many of the important cognitive competences of humans and some other animals, including primates and some nest-building birds such as corvids, competences that arise from the complex and diverse structure of the environment.
3.1 The Argument from Evolutionary Timescales
The time required for evolution is not an indication of what would be hard for us. Evolution started from a very primitive state and, among other things, had to create mechanisms for creating mechanisms. It could turn out that the most difficult problem solved by evolution was using the physics and chemistry available on earth to produce any sort of organism that could grow itself, maintain and repair itself (including obtaining nutrients and disposing of waste materials), and in addition reproduce itself (including its behaviours) using mechanisms that support evolutionary adaptation. Mechanisms supporting autopoiesis are required for all biological organisms except perhaps the very simplest. The robots produced by "nouvelle AI" do very little of that. They are preconstructed and typically move around rather pointlessly (possibly doing something useful as a side-effect), or achieve arbitrary goals imposed from outside, as opposed to doing what is required to grow, maintain and reproduce themselves: a goal for robotics that is unlikely to be achieved in the foreseeable future except where quite complex components are provided ready made by human engineers. There are some impressive demonstrations of very narrow competences based on the sensorimotor approach, such as hands that will close round objects of a variety of shapes with which they come into contact, though nothing yet like a robot that can use such a hand to assemble toys from meccano parts, guided by a picture (like the crane illustrated at http://www.cs.bham.ac.uk/research/projects/cosy/photos/crane/).
3.2 The Failure to Identify Deliberative and Metacognitive Requirements
More importantly, the fact that some task is very hard, and that it provides the foundation for animal competences, does not imply that everything else is unimportant and easy to replicate. It is one thing to build an elephant that can walk and maintain its balance on a hard flat surface. It is quite another to build one that can learn how to negotiate
different sorts of terrain, including walking up a slippery muddy slope; can learn to distinguish edible and nutritious matter from other things and how to get to and consume such food (including pushing down a tree to get at the leaves at the top); and in addition can learn the layout of a large terrain and remember where to go at different times of the year, along with learning about the social hierarchy of its group and how to anticipate or interpret the actions of other individuals. The immediate environment may be a good source of directly sensed information about itself, but it is not a source of immediately available information about remote locations, or about consequences of actions that have not yet been performed, or about dispositional qualities of materials in the environment that need to be represented in a theory-based conceptual framework. Elephants may not play chess, solve algebra problems, prove topological theorems, or design jetliners, but nothing follows about what they and other animals can do, or how what they do might be related to human capabilities they lack. They are still far more intelligent than current robots. In the course of evolution, changing physical environments and changing physical designs (and consequent physical needs and competences) in a species can produce both new opportunities and new design requirements if those opportunities are to be made use of. In particular, some niches require symbolic planning capabilities while others do not [41]. Some animals have to be able to work out how to achieve tasks such as nest building, picking berries, or dismembering a carcass after a kill to get at the edible parts, using complex independently movable manipulators. In general that can involve thinking two or more steps ahead in order to choose appropriate actions, sometimes comparing alternative branching futures – unless the required actions have been pre-compiled by evolution or learnt through trial-and-error searching, both potentially very slow processes compared with planning. For some species in some environments, neither evolution nor prior learning can provide a solution for every situation. There is strong evidence that at least one species of spider can plan a new route in advance of following it: the portia spider works out a route to its prey and then follows it even when it can no longer see the prey, making detours if necessary and avoiding branches that would not lead to the prey [50]; Tarsitano states: "By visual inspection, they can select, before setting out, which detour routes do and do not lead to prey, and successfully perform a detour with no further visual contact with the prey". For humans, tasks such as building shelters, tracking down prey, weaving cloth, attacking enemies, preparing defences against attack, designing and building weapons to help with hunting or warfare, and in recent times building many kinds of vehicles, buildings, and machines for making machines, all extend the information processing requirements – specifically requirements concerned with representing and choosing between future possible actions and products. A robot with similar abilities would need, along with sensory and motor competences, an architecture supporting "fully deliberative" competences, as explained in [40]. More human-like robots would need to be able to develop an interest in various branches of mathematics, including geometry, topology, and transfinite set
theory, and would explore ways of proving theorems in those domains, as discussed in [44]. Some would become interested in philosophical questions, or enjoy composing or reading novels. The notion that intelligence involves constant interaction with the environment ignores most of human nature. I'll return to the implications of this in discussing the varieties of dynamical systems required. Simpler organisms may cope with "proto-deliberative" reactive mechanisms, in which a combination of sensed state and current goals can activate alternative responses, triggering a competitive mechanism to make a selection, and act accordingly. Some researchers confusingly use the label "deliberative" for this. So I use "fully deliberative" to label the ability to construct several representations of branching futures, represent and compare their relative strengths and weaknesses, and then select one as a plan and use it to generate behaviours, while being prepared to learn from mistaken decisions of that sort. Few other animals can do all this, but human-like robots will need to, as explained in [40], though how such competences are implemented is an open question.
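Purely to make the contrast with proto-deliberative reaction concrete, the toy sketch below enumerates branching futures, compares them, and commits to one as a plan. The actions, effects, horizon, and utility function are all invented for the example; nothing here is claimed as a model of how such competences are, or should be, implemented.

```python
from itertools import product

# Toy "fully deliberative" episode: construct branching futures two steps
# ahead, evaluate each, and commit to the best sequence as a plan.
ACTIONS = {"push": 2, "pull": -1, "wait": 0}   # action -> effect on the state

def evaluate(state, goal=5):
    return -abs(goal - state)                  # closer to the goal is better

def deliberate(state, horizon=2):
    futures = []
    for plan in product(ACTIONS, repeat=horizon):   # all branching futures
        s = state
        for action in plan:
            s += ACTIONS[action]
        futures.append((evaluate(s), plan))
    score, best = max(futures)                 # compare alternatives, pick one
    return best, score

print(deliberate(state=0))   # -> (('push', 'push'), -1)
```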
4 Limitations of Symbolic AI
It was indeed unwise for early AI researchers to assume that after building their systems with planning and problem-solving competences everything else would be easy and that human-like robots could be achieved in a few decades. In contrast, Turing's prediction in 1950 that a high proportion of the general population could be fooled by a text-based simulation program for about five minutes by the end of the century was far more modest (and, I think, turned out true). Nevertheless, the fact that symbolic AI systems do not suffice does not mean that such systems are not required for human-like intelligence, as Brooks was obviously aware. He wrote (in section 6.1): "Thus the two approaches appear somewhat complementary. It is worth addressing the question of whether more power may be gotten by combining the two approaches. However, we will not pursue that question further here." This paper offers reasons why the two approaches need to be combined, in contrast with many of the followers of Brooks' approach who believe that symbolic AI has failed and can safely be ignored (e.g. Cliff, quoted above). Of course, I am not claiming that only the specific symbolic algorithms developed in the 1960s should be included. The early planning and reasoning systems worked in simple contexts but were not sufficient for planning future actions in all cases, since they did not take account of the amount of uncertainty and scope for error in perceptual information in many (though not all) contexts, and they did not do justice to the cases where the consequences of performing specific actions could not always be relied on, e.g. because of slippage, unreliable components, effects of wear and tear on mechanisms, and the possibility of external (animate and inanimate) sources of interference with actions. However, the designers of Shakey and Freddy were not stupid and did allow for the possibility
of actions not producing intended results. For example, the PLANEX module was available in Shakey to cope with such failures by re-planning, though that is not always a good solution.
5 Meta-semantic and Exosomatic Ontologies
Another point missed by some of the nouvelle AI community is that where there are other intelligent agents in the environment whose actions need to be understood, and sometimes obstructed or assisted, or where individuals need to monitor and improve their own thinking and planning procedures, as in Sussman's HACKER [49], there is a requirement to represent things that themselves represent other things: i.e., meta-semantic competences are required. This is not the place to go into the details of that requirement, which includes coping with referential opacity. Some comments on meta-semantic competences and the requirements for supporting them can be found in [45]. Warneken [53] demonstrated behaviours in pre-verbal children and in chimpanzees that indicate some sort of meta-semantic competence, insofar as the experimental subjects are spontaneously able to identify the goals of others and perform actions to achieve them. The ability to represent mental states and processes of information manipulation in other individuals is just one example among many of the requirement to acquire, manipulate and use information about things in the environment that cannot be sensed. For example, most of the properties of matter in the environment, including rigidity, impenetrability, compressibility, various kinds of flexibility, various kinds of elasticity, fragility, solubility, edibility, and many more, are not directly sensed but have to be inferred from other things, including how objects composed of such matter respond to various kinds of actions, e.g. lifting, pushing, prodding, twisting, and attempts to break or tear. It is commonly thought that colours are properties of light hitting the retina, whereas many colour illusions refute that, and support the notion that what we see as colours are properties of surfaces indirectly inferred from sensory information [25]. Animals that make use of such properties of matter in decision-making must have the ability to acquire ontologies that go beyond patterns found in sensorimotor signals. That is, somatic ontologies may suffice for insect-like robots [18] but will not suffice for more sophisticated animals (such as orangutans that make intelligent use of compliance in arboreal supports) and robots: exosomatic ontologies are needed [41]. Even within the domain of processes that can be sensed, such as visual and haptic sensing of movements of hands, it is arguable that amodal exosomatic ontologies (e.g. referring to structures and processes involving 3-D surfaces and their interactions, rather than the sensory signals produced by such things or the motor signals that can modify them) are far more useful and economical for acquiring and using knowledge about a wide variety of states and processes, including ones that have not previously been encountered, e.g. perceiving a grasping process viewed from a new angle. For example, using an amodal exosomatic ontology allows generalisations from actions performed using one hand to similar actions performed using the other hand, or using two hands, or
even actions performed by other agents (a process often "explained" by almost magical properties of so-called mirror neurones). But that kind of exosomatic ontology would be ruled out by those whose nouvelle AI includes a commitment to symbol-grounding theory. Even the great mathematician Poincaré [26] apparently fell into the trap of assuming that all concepts must be grounded in sensorimotor patterns: "But every one knows that this perception of the third dimension reduces to a sense of the effort of accommodation which must be made, and to a sense of the convergence of the two eyes, that must take place in order to perceive an object distinctly. These are muscular sensations quite different from the visual sensations which have given us the concept of the two first dimensions." I suspect he would have modified his views if he had been involved in designing robots that can perceive and reason about 3-D scenes. The points just made are closely related to Gibson's (1979) theory of perception as providing information about positive and negative affordances for action in the environment. However, Gibson's work was just the beginning, as will be explained below in Section 11.
6 Developments Required in a Human-Like Robot
The growth of competences in a human, from infancy onwards, includes many different sorts of development, such as the following, all of which appear to be required in a human-like robot – though what mechanisms could produce this combination is still not known.
– Early development includes learning to calibrate and control sensors and effectors. Recalibration, or adaptation, continues throughout life as the body grows, as new goals and new skills impose new requirements, as the environment changes, and as impairments caused by disease or injury produce new constraints.
– After learning to control some movements and acquiring related perceptual competences, infants can use that competence to perform experiments on both the physical environment and nearby older humans, at first using only genetically determined exploratory processes rather than conscious rational exploration [39,5].
– After new concepts have been acquired they can be used to formulate new theories, in addition to new plans, goals, etc. This is most obvious in the adult developments of scientists and engineers, but I suggest much child development should be viewed in a similar way.
– New forms of representation can be used to formulate new conceptual building blocks, and to formulate and test new plans, goals, hypotheses, etc., sometimes producing new forms of reasoning, problem solving and learning using those representations. This happens explicitly and overtly to adults (e.g. learning mathematics, physics, genetics, musical theory) and seems to be required to explain some qualitative changes of competence in infancy and childhood.
– Later, learning to observe and modulate learning processes starts, e.g. attempting actions that test and challenge current competences. This seems initially to use genetically determined meta-cognitive mechanisms and competences, later enhanced by cultural influences including games, sporting activities and academic learning.
– Higher-level meta-cognitive mechanisms can later begin to monitor, debug and modulate the initial forms, in ways that depend on how the initial competences fare in coping with the environment in increasingly complex ways.
– Representing the individual's own information-processing or the information-processing of others requires meta-semantic competence, which in turn requires architectural extensions that allow "encapsulated" representations to be used (e.g. representing the individual's possible future beliefs and goals, or the beliefs and goals of others). Encapsulation is required to prevent hypothetical descriptions being used by mechanisms that use the owner's beliefs, goals, etc.
– All of the above can help drive the development of linguistic and other communicative competences using external, shared languages.
– Being able to communicate with others makes it possible to learn quickly what other individuals previously learnt more slowly and with greater difficulty, such as discovering a new way to make something.
– Layered learning processes start in infancy and, in humans, can continue throughout adult life, extending the architecture as well as modifying and extending contents of parts of the architecture.

That list does not refer to results of esoteric laboratory experiments, but mostly summarises examples of human development that are fairly obvious in a reflective culture. Many people are already attempting to explain them. However, the competences are usually studied separately, in different disciplines, or in different research groups that do not work together, for instance groups studying language learning and groups studying perceptual development, or groups attempting to build computer models of such development. So people who attempt to build working AI models or robots normally consider only a small subset of the competences shown by humans and other animals, and different researchers focus on different subsets. However, it is not obvious that models or explanations that work for a narrowly focused set of tasks can be extended to form part of a more general system: as explained in [38], systems that "scale up" (i.e. do not consume explosively increasing amounts of time or memory as problem complexity grows) do not always "scale out" (i.e. many of them cannot be combined with other competences in an integrated system).
7 Morphology and Development
It is sometimes thought that body morphology determines cognitive development. This may be true in the very early stages of human development, but as new layers of competence are added, they depend more and more on the nature of the environment, including other intelligent agents (friendly and unfriendly conspecifics, prey, predators, and in future also robots), and less and less on the specific sensors and
effectors of the learner. That is in part because many aspects of the environment cannot be directly sensed or acted on, no matter what the morphology, including: (a) past, future and spatially remote events, (b) the "hidden" properties of matter that are only discovered through scientific investigation, and (c) hypothetical future processes considered when making plans or predictions. Consequently, more abstract features of the environment can be learnt about and thought about by individuals with very different body morphologies and sensors. This is presumably why "thalidomide babies" who were born without limbs or with stunted limbs, and children born blind or deaf, or with motor control problems, manage to develop normal human minds by the time they are adults. Another illustration is the way different robots with different sensors and motors that provide information about distance to nearby rigid surfaces (e.g. using sonar, laser range-finders, or vision) can use similar abstract competences to build maps of their environment and localise themselves in relation to the maps, using SLAM (Simultaneous Localisation And Mapping) techniques. People who emphasise strong relations between cognition and embodiment often forget that humans are capable of overcoming many physical obstacles produced by genetic deformity, illness, or accidents causing serious damage.

There is no unique route from birth to the mind of an adult human. Instead, as clinical developmental psychologists know,11 there are many different possible sequences of achievement of the same competences: they are only partially ordered.12 The fact that later stages of human cognitive development are reached by multiple routes, depending on the circumstances and constraints of individual learners, is contrary to the opinions of many politicians who attempt to define educational policies that assume a fixed developmental sequence. There is not a totally ordered sequence of transitions, only a partial ordering, with many options that depend on variations in both external circumstances and individual impairments.

The fact that serious physical deformity does not prevent development of normal human vision and other aspects of normal adult human cognition may depend on the fact that the individual's brain has many structures that evolved to meet the needs of ancestors with the full range of modes of interaction with the environment: perhaps all the impaired individuals are using genetically specified forms of representation and theory-construction mechanisms that evolved to meet the requirements of humans with a full complement of normal limbs, digits, tongue, lips, and senses. The morphology of one's ancestors may be more important in determining one's overall cognitive potential than one's own morphology. Depending on the impairment, more or less help may be required from gifted teachers who know what needs to be learnt. A gifted robot designer needs some of the characteristics of a gifted teacher: teaching is a form of programming – programming a very sophisticated, largely autonomous machine.

11 I am grateful to Dr. Gill Harris, consultant at Birmingham Children's Hospital, for this observation.
12 Famous examples of abilities to engage in rich and deep intellectual communication with other humans despite very serious physical disabilities include Helen Keller and Stephen Hawking.
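The sensor-independence of such map-building is easy to illustrate. The fragment below is a hypothetical sketch of mine, not from the text, and shows only the mapping half (localisation is assumed given): any device that yields (bearing, range) readings to nearby surfaces, whether sonar, laser range-finder, or a vision front-end, can drive the same abstract occupancy-grid update.

import math

def update_occupancy(grid, pose, readings, cell=0.5):
    # One morphology-neutral mapping step: each (bearing, range) reading is
    # converted to a world-frame grid cell, and evidence of a surface there
    # is recorded, whatever kind of sensor produced the reading.
    x, y, heading = pose
    for bearing, rng in readings:
        gx = int((x + rng * math.cos(heading + bearing)) / cell)
        gy = int((y + rng * math.sin(heading + bearing)) / cell)
        grid[(gx, gy)] = grid.get((gx, gy), 0) + 1
    return grid

grid = {}
# The call is identical whether the readings came from sonar or a laser scanner.
update_occupancy(grid, (0.0, 0.0, 0.0), [(0.0, 2.0), (0.5, 2.2)])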
8 Requirements for Models and Explanations
Analysis of combinations of different sorts of competence, including perceiving, reasoning, planning, controlling actions, developing new ontologies, playing, exploring, seeking explanations, and interacting socially, provides very demanding requirements to be met by human-like robots that develop through interacting with a rich and complex 3-D world. There are closely related and equally demanding requirements for an explanatory theory of how humans (and similar animals) do what they do.

There is some overlap between the themes of nouvelle AI and the viewpoint presented here. Both emphasise the importance of studying the environment when designing robots that learn to interact with it. This is also emphasised by McCarthy [19],13 and Neisser [22].14 This view is echoed in the work I have been doing with biologist Jackie Chappell [39,5,46]. However, the complexities of the environment are often grossly underestimated, even by researchers who emphasise embodiment. The precise features of the environment that are relevant to different sorts of organisms in different sorts of contexts are often far from obvious, as J.J. Gibson's work on affordances shows, for example. There is also an old and subtle point, made by Strawson in [48] and discussed in [33], that in order to be able to refer to spatially remote, or past, or future particulars, a referring individual needs to be embedded in a web of causal links connecting the individual's time and place with the referent or its context. That web of relationships may be needed to eliminate ambiguities of reference that cannot be removed simply by adding more descriptive content to the specification of some particular.

Despite the overlap between the position taken here and nouvelle AI, there is a very large difference of emphasis between the nouvelle AI claim that most of the information about the environment is easily available through use of sensors, provided it is sensed "appropriately and often enough", or through sensorimotor interaction patterns, and my claim that learning about the environment involves developing deep explanatory theories using an ontology that extends beyond the reach of sensorimotor ontologies and beyond ontologies meeting the requirements of symbol-grounding theory (i.e. the requirements of concept empiricism). That is obvious for the ontologies of theoretical science, but it also applies to ontologies learnt much earlier, about different kinds of "stuff", about internal states of complex systems, about remote places and times, and about social and mental phenomena.

13 "Instead of building babies as Cartesian philosophers taking nothing but their sensations for granted, evolution produced babies with innate prejudices that correspond to facts about the world and babies' positions in it. Learning starts from these prejudices." Footnote 2 adds: "There is a complication. Appropriate experience is often required for the genetically determined structures to develop properly, and much of this experience can be regarded as learning."
14 "We may have been lavishing too much effort on hypothetical models of the mind and not enough on analyzing the environment that the mind has been shaped to meet."
Both the engineer's robot-building tasks and the scientist's explanation-building tasks need to take account of the distinctive features of 3-D environments in which objects of very varied structure, made of many different kinds of materials with different properties, can interact, including objects manipulated by humans, animals or robots, where the manipulation includes assembling and disassembling structures of varying complexity and varying modes of composition, and some objects with their own externally unobservable mental states and processes. In particular, the variety of materials of which objects can be composed, the properties of those materials, and the consequences of juxtapositions and interactions involving those materials need to be expressed using theoretical, exosomatic concepts, as remarked in Section 5 and discussed further in Section 12.2 below. The evolutionary niches associated with such environments posed combinations of problems for our biological predecessors that need to be understood if we wish to understand the products of evolution.
9 Requirements for Human Visual Processing
In humans, the speed at which vision works when a person goes round a corner, or comes out of a railway station or airport in a new town, or watches TV documentaries about places never visited, gives an indication of some of the processing requirements. An informal demonstration of the speed with which we can process a series of unrelated photographs, presented at a rate of approximately one per second, and extract quite abstract information about them unprompted, is available online at http://www.cs.bham.ac.uk/research/projects/cogaff/misc/multipic-challenge.pdf
No known mechanism comes anywhere near explaining how that is possible, especially at the speed with which we do it. Reflection on a wide range of phenomena to be explained, including the speed and sophistication with which human vision copes with complex scenes presented without warning, has led to a hypothesised architecture which, at one level of description, consists of a very complex dynamical system composed of a network of dynamical systems of different sorts, operating concurrently on different time scales, that grows itself. That can be contrasted with the very much simpler kinds of dynamical system that have so far been investigated in biologically inspired robotics. The next section illustrates crudely some of the differences between commonly considered dynamical systems and the kind of complexity that I suggest exists in virtual machines that run on human brains and the brains of some other animals, and will need to be replicated in human-like robots.

9.1 Two Extreme Kinds of Dynamical System
Figure 1 depicts the kind of dynamical system assumed by some of those who attempt to build "biologically inspired" robots. The key feature is very close coupling between internal states and the environment, resulting from close coupling between the internal dynamical systems and the sensory and motor signals of the animal or machine.
Fig. 1. Dynamical systems as often conceived of: closely coupled through sensors and effectors with an environment
Similar assumptions are found among some psychologists and neuroscientists, e.g. [2]. That can be contrasted with the kind of complexity depicted crudely in Fig. 2, which I suggest is needed to explain visual and other capabilities in humans and some other species, and will be required for human-like robots: a very large collection of dynamical systems in virtual machines running concurrently on a brain is built up over time as a result of interaction with the environment (including toys and other artifacts, other animals, conspecifics and teachers). The requirements for this sort of self-extending architecture are specified in more detail in [43]. Many of the details were hypothesised to explain aspects of the speed and diversity of human vision revealed in the informal experiment mentioned above.
Fig. 2. A more complex kind of dynamical system, composed of sub-systems of different sorts, concerned with different levels of abstraction, some continuous and some discrete, with sub-systems grown at different stages of development, some of them referring to relatively remote or inaccessible parts of the environment. At any time many of the systems are dormant, but can be rapidly reactivated.
Other aspects are related to human abilities to think, plan, or converse while performing physical actions, and also to monitor and notice features of their own physical behaviour and their thought processes, while producing them. (Not everything can be monitored concurrently, however, on pain of infinite regress.) A human designer's ability to imagine construction of a type of object never previously encountered may use many different sorts of dynamical systems operating concurrently, even if the designer is lying flat on his back with his eyes shut, and cannot perform the assembly himself, but will need to use sophisticated machines that do not yet exist!

Such a system must depend on a large amount of previously acquired information about image and scene fragments and compatibility constraints between and within levels. The learnt information would need to be compiled into a large network of "specialist" multistable dynamical systems, most of which are dormant at any time, but which can very rapidly be triggered into activity through new sensory inputs activating fast global constraint-propagation mechanisms that "switch on" dormant sub-systems able to collaborate to build bridges between the sensory data and any available current high-level knowledge or expectations (e.g. that a busy street scene, or a beach scene, is around the corner), partly constrained by current goals, and, in the case of vision, partly in registration with the optic array. Once a global collection of subsystems with no detected inconsistencies15 has been activated and settled down to compatible sub-states, further changes in sensory or motor processes can produce corresponding gradual changes in the "visual interpretation", for example when something in the perceived environment moves, or when the perceiver moves. Motion of the perceiver typically produces massive coordinated changes throughout the optic array, which can help to disambiguate previous visual information in addition to providing new information about the changed location.

Topics for further investigation include: how to build dynamical systems that can very rapidly configure themselves under control of static information, both sensory and previously acquired, and then allow themselves to hand over control to changing sensory signals; how those dynamical systems are constructed over many years; how the constraint-propagation works; how the processes are influenced by current expectations, interests and goals; how the resulting percepts continue to be driven by changing sensory data; and how all of that is used in controlling and planning actions, and producing further learning. These are questions for future research.

The structural changes in percepts are both global and rapid, suggesting that they are changes in virtual machines that have only a loose relationship to underlying physical machines, whose global structures cannot change rapidly, though coordinated local changes (e.g. changes in synaptic weights) can happen rapidly. However, that leaves the problem of how large numbers of local changes are coordinated so as to produce the required global changes at the virtual machine level. Available techniques for investigating and modelling neural structures and processes may have to wait until the theory is much further developed at the level of virtual machines, so as to guide the empirical neuroscience. Compare trying to work out what a modern computer is doing by studying electronic details, without being told anything about its operating system or the detailed functions that it performs.

15 Pictures of impossible objects or scenes, e.g. Escher's drawings, show that in humans percepts do not need to be globally consistent.
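The "switching on" of dormant specialists by constraint propagation can be caricatured in a few lines. The following is a deliberately crude sketch of mine (all subsystem names invented), not a model from this paper: activation starts from subsystems whose trigger patterns occur in the sensory input, then spreads through collaboration links to a fixed point, leaving everything else dormant.

class SubSystem:
    def __init__(self, name, trigger, collaborators=()):
        self.name = name
        self.trigger = trigger               # sensory features that wake it directly
        self.collaborators = collaborators   # subsystems whose activity wakes it
        self.active = False

def propagate(subsystems, sensory_features):
    # Activate every subsystem whose trigger occurs in the input, then spread
    # activation along collaboration links until nothing further changes.
    active = {s.name for s in subsystems if s.trigger & sensory_features}
    changed = True
    while changed:
        changed = False
        for s in subsystems:
            if s.name not in active and active & set(s.collaborators):
                active.add(s.name)
                changed = True
    for s in subsystems:
        s.active = s.name in active
    return active

systems = [SubSystem("edge-groups", {"edges"}),
           SubSystem("3-D surfaces", set(), ["edge-groups"]),
           SubSystem("street-scene schema", set(), ["3-D surfaces"])]
print(propagate(systems, {"edges", "motion"}))

What the sketch leaves out is exactly what the listed research questions ask for: how the network grows over years, and how current goals and expectations bias which bridges between data and knowledge get built.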
10 The Variety of Possible Architectures
There are different ways of formulating explanatory designs, at varying levels of abstraction (discussed in [37]). Alternative designs need to be analysed and compared against sets of possible requirements. Previous sections present requirements for a kind of self-extending, multi-functional, virtual machine information-processing architecture to explain a variety of human capabilities – a collection of requirements that no current AI systems or current neural theories (at least none known to the author) address.

10.1 Dynamical Systems for Symbolic AI
Those who reject symbolic AI often fail to realise that symbolic mechanisms may be needed for the more abstract, more loosely coupled, more discrete, dynamical systems, for example the ones further to the right in Figure 2, that enable you to learn ancient history, do algebra in your head, discuss philosophy, make plans for conference travel, read music, or read this paper. However, existing symbolic AI systems are clearly nowhere near human competence except in very narrow domains, e.g. playing certain board games (chess, but not Go), certain mathematical competences (e.g. in packages like Mathematica, Matlab and others), and some kinds of compiler optimisation. Much of the technology now being used all over the world on the internet is more or less closely related to work done in symbolic AI, including rule-based expert systems in past decades (whose techniques have often been reinvented, because they are so useful). Inductive logic programming (ILP) techniques have contributed to molecular biology research on folding of protein structures [14]. Generalising and combining such specialised competences to match human flexibility is not trivial, but something like that will need to be done in combination with results of nouvelle AI. It is not obvious which non-human animals have competences that require sub-systems that are only loosely coupled to current sensory and motor signals, or de-coupled from them, for example the ability to think about past events, wonder about possible futures, form hypotheses about what is going on out of sight, plan journeys to remote locations, or think about the contents of other minds or their own minds. Of course not all the errors lie in the nouvelle AI tradition: as Brooks points out, people who think symbolic AI will suffice for everything often fail to attend to the kinds of intelligence required for controlling continuous actions in a 3-D structured environment, including maintaining balance while pushing a
broom, drawing a picture with a pencil, or playing a violin [32]. Many such activities require both sorts of mechanism operating concurrently, along with additional monitoring, evaluating, and learning (e.g. debugging) mechanisms. The fact that humans, other animals, and robots are embodied can lead, and has led, some researchers in the nouvelle AI tradition to assume, mistakenly, that only the first type of dynamical system (Fig. 1) is required, because they forget that some embodied systems can think about past, future, distant places, games of chess, what elephants can't do, transfinite ordinals, and how to design intelligent systems. Reading and writing novels and history books that refer to physical environments does not require sensorimotor interaction with the environment referred to, but something far more abstract and symbolic.

The different roles of dynamical systems operating concurrently at different levels of abstraction need to be accommodated in any theory of motivation and affect, since motives, evaluations, preferences, and the like can exist at many levels and operate on different timescales.16 It should be clear that the "traditional" sense→think/decide→act loop (presented in several AI textbooks) is much too restrictive to accommodate the requirements presented here.

16 A critique of some shallow theories of emotion is presented in "Do machines, natural or artificial, really need emotions?" http://www.cs.bham.ac.uk/research/projects/cogaff/talks/#cafe04. See also "What are emotion theories about?" (AAAI Spring Symposium 2004): http://www.cs.bham.ac.uk/research/projects/cogaff/04.html#200403

10.2 Types of Architecture
We can crudely decompose the variety of sub-processes that occur in biological organisms along the two dimensions shown in Figure 3. One dimension, related to what Nilsson called "Towers" in Chapter 25 of [23], is concerned with whether the processes are (1) perceptual/sensory, (2) central, or (3) concerned with effectors/actions. The other dimension, related to what Nilsson called "Layers", depends on whether the processes are based on (1) reactive (often evolutionarily old) mechanisms, (2) deliberative mechanisms (with the ability to consider and compare branching sets of possibilities), or (3) meta-management mechanisms (concerned with self-monitoring or control, or using meta-semantic competences in relation to other agents). Note that not everything reactive is closely coupled with the environment. Reactive processes can produce internal state-changes, such as activation of new motives, alteration of priorities, and triggering of learning processes.

Superimposing the tower and layer dimensions produces a 3 by 3 grid of types of sub-system, as illustrated in Figure 3. The grid, as explained in [37], is only an approximation – more sub-divisions are to be found in nature than this suggests (in both dimensions, but especially the vertical dimension). That is in part because biological evolution can produce only discrete changes, not continuous changes. The discrete changes vary in size and significance: e.g. duplication is often more dramatic than modification, and can be the start of a major new development. Further sub-divisions include different functions, different forms of representation, different types of learning, different mechanisms, different connections between modules, and different nature-nurture tradeoffs.
[Figure 3 shows a 3×3 grid. Columns: Perception, Central Processing, Action. Rows: Meta-management (reflective processes) (newest); Deliberative reasoning ("what if" mechanisms) (older); Reactive mechanisms (oldest).]
Fig. 3. The CogAff schema specifying, to a first approximation, types of (concurrently active) components that can occur in an architecture, with links between many of them
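As a data structure, the schema itself is tiny; designs differ in which cells they populate and how the cells are linked. The sketch below is only the taxonomy of Figure 3 written out in code, with invented example components; it is not part of the schema itself.

from itertools import product

TOWERS = ("perception", "central processing", "action")
LAYERS = ("reactive", "deliberative", "meta-management")

# The CogAff schema as a 3x3 grid of slots for concurrently active components.
cogaff_grid = {(layer, tower): [] for layer, tower in product(LAYERS, TOWERS)}

# A purely reactive insect-like design populates only the bottom row:
cogaff_grid[("reactive", "perception")].append("optic-flow detector")
cogaff_grid[("reactive", "action")].append("gait controller")

# Human-like designs also fill the upper rows:
cogaff_grid[("deliberative", "central processing")].append("planner")
cogaff_grid[("meta-management", "central processing")].append("self-monitor")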
A type of reactive mechanism that might have evolved after others is an "alarm" mechanism, as indicated in Figure 4, that gets inputs from various parts of the architecture, has outputs that influence many parts of the architecture, and can detect patterns that require rapid global reorganisation of processing (e.g. freezing, fleeing, ducking out of sight, pouncing on prey, reconsideration of current plans, etc.). Copying and modification of early versions of such alarm mechanisms might later have led to development of more sophisticated meta-management mechanisms for monitoring and control of processing. Such speculative theories can only be tested in the long term, to see whether they lead to progressive or degenerative research programmes, using the terminology of [17].

A special case of the CogAff schema that we have been developing for some years is the H-CogAff (Human-CogAff) architecture specification described in [36] and depicted crudely in Figure 4. The H-CogAff architecture overlaps with Minsky's ideas in [21].17 There is a lot more detail regarding these ideas in presentations and papers of the Birmingham CogAff and CoSy projects.18

17 Minsky's focus seems to be mainly on how to model or replicate certain aspects of human intelligence, whereas I am interested in attempting to understand the space of possible designs and niches explored by evolution and its relation to future possible artificial systems.
18 http://www.cs.bham.ac.uk/research/projects/cogaff/ http://www.cs.bham.ac.uk/research/projects/cogaff/talks/ http://www.cs.bham.ac.uk/research/projects/cosy/papers/
10.3 Layered Perception and Action
A point that can cause puzzlement is the suggestion that the central parts of the architecture can be directly connected to higher levels in the perceptual and action subsystems.
[Figure 4 depicts the H-CogAff architecture: a perception hierarchy and an action hierarchy flanking three central layers – reactive processes, deliberative processes (planning, deciding, "what if" reasoning) and meta-management (reflective) processes – together with alarms, long-term associative memory, motive activation, variable-threshold attention filters and personae, all coupled to the environment.]
Fig. 4. The H-CogAff architecture – an instance of the CogAff schema that seems to be required to account for many human competences. This diagram should be viewed as a special case of Fig 2 rotated 90 degrees counter-clockwise, though many details are still unknown. (The diagram is intended only to be suggestive, not self-explanatory.)
In this context "directly" does not mean physically directly, but something more abstract. The CogAff schema, and the H-CogAff instance, are both concerned with virtual machine architectures rather than physical architectures. (Virtual machines were also emphasised in Section 9.1.) The different levels inside the perception and action sub-systems concurrently process information at different levels of abstraction closely related to what is going on in the environment, and partly in registration with representations of the environment (e.g. the optic array (Gibson), rather than what's on the retina or in V1, which are merely devices for sampling optic arrays). This is related to gaps in current machine vision.

Most people find it obvious that the perceptual mechanisms involved in understanding speech, or hearing music, have to operate at different levels of abstraction in parallel, in a mixture of top-down and bottom-up processing, e.g. in the case of speech, processing low-level acoustic features, phonemes, morphemes, phrases, intonation contours, etc. As explained below,19 the same is true of vision, with different intermediate results of visual processing using ontologies at different levels of abstraction (not merely part-whole hierarchies), loosely in registration with the optic array. A working demonstration program was developed in the mid-70s, but funding was not continued, partly because referees were convinced (following Marr) that vision systems work purely bottom-up and that the rich information in natural scenes makes sophisticated visual architectures unnecessary.

19 And in Chapter 9 of [31]: http://www.cs.bham.ac.uk/research/projects/cogaff/crp/chap9.html
Trespassers will will be prosecuted

Fig. 5. Anyone who sees this as a familiar phrase, without noticing anything wrong with it, is not aware of what is in the low-level sensory data
One way to convince yourself that vision operates at different levels of abstraction is to look at well-known spontaneously changing ambiguous images, such as the Necker cube, the old/young woman, the duck-rabbit, and others,20 and try to describe what changes. When they "flip" so that you are seeing something different, nothing changes in the image or in the 2-D experience: so what is seen as different is something that requires an ontology distinct from image contents. Likewise, if you see a happy and a sad face, there may be geometric differences (e.g. direction of curvature of the mouth), but happiness and sadness are not geometric properties. Figure 5 is a variant of a familiar example where some people fail to see anything wrong with the phrase in the box despite staring at it for some time, indicating that what they are conscious of is a high-level percept, even though it is easy to establish that lower-level information was taken in (as some of them discover if they cover the figure and try to make a copy).

Many actions also require multi-level concurrent control, at different levels of abstraction. This is obvious for speech, nearly as obvious for performance of music, and perhaps less obvious for walking. But the way you walk can simultaneously involve low-level control of motion and balance, higher-level control of direction towards your target (e.g. a door), and perhaps more abstract expressions of mood or attitude (e.g. walking with a swagger) – a form of social communication with people known to be watching. Those different influences on the behaviours will come from different layers of the internal architecture performing different tasks concurrently, using very different forms of representation, and the mechanisms in the action "tower" will need to combine all those influences in producing the final low-level output.

Arnold Trehub's 1991 book [51] (now online) presents closely related ideas in his "retinoid" system. Trehub proposed that the main perceptual data-structures representing what is in the environment at different levels of abstraction and different distances from the viewer are not in registration with the contents of the primary visual cortex (or the retina), because that mapping is constantly changing under the influence of saccades, and vestibular and other information about how the eyes and head are moving in 3-D space, whereas the information about contents of the immediate environment should not change so fast.
20 E.g. available here: http://lookmind.com/illusions.php and here: http://www.planetperplex.com/en/ambiguous_images.html
The architecture diagrams presented here cannot express all that complexity, and are merely intended to challenge some "obvious" but mistaken views of perception and action (often expressed in architecture diagrams showing perception and action as small boxes), which may be correct only for a subset of organisms, e.g. most invertebrates, though the Portia spider, mentioned in Section 3.2, seems to have the ability to perceive potential routes.

An ecosystem, or even the whole biosphere, can be thought of as a complex virtual machine with multiple concurrent threads, producing multiple feedback loops of many kinds (including not just scalar feedback but structured feedback, e.g. more like a sentence than a force or voltage). Individual organisms in the ecosystem, insofar as they have designs described here, will also include multiple virtual machines. All these virtual machines are ultimately implemented in physics and chemistry, but their states and processes and causal interactions are not describable in the language of physics and chemistry. We don't yet know enough about what variety of virtual machines can be implemented on the basis of physical principles.
11 Perception of Actual and Possible Processes

11.1 Beyond Gibson
Work on an EU-funded cognitive robotics project, CoSy (www.cognitivesystems.org), begun in 2004, included analysis of requirements for a robot capable of manipulating 3-D objects, e.g. grasping them, moving them, and constructing assemblages, possibly while other things were happening. That analysis, reported in [47], revealed (a) the need for representing scene objects with parts and relationships (as everyone already knew), (b) the need for several ontological layers in scene structures (as in [31, Ch 9]), (c) the need to represent "multi-strand relationships", because not only whole objects but also parts of different objects are related in various ways (e.g. a corner of one object touching an edge of another), (d) the need to represent "multi-strand processes", because when things move the multi-strand relationships change, e.g. with metrical, topological, causal, functional, continuous, and discrete changes occurring concurrently, and (e) the need to be able to represent possible processes, and constraints on possible processes [35,42].
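Points (c) and (d) can be given a minimal data-structure reading. The sketch below is a hypothetical illustration of mine, not the CoSy representation: a state is a bundle of relations between parts of different objects, and a multi-strand process shows up as several strands changing across states.

from dataclasses import dataclass, field

@dataclass(frozen=True)
class Part:
    name: str                 # e.g. "corner of block A"

@dataclass(frozen=True)
class Relation:
    kind: str                 # "touches", "above", "supports", ...
    a: Part
    b: Part

@dataclass
class MultiStrandState:
    # A scene state as relations between parts, not just whole objects.
    relations: list = field(default_factory=list)

def diff(before, after):
    # A multi-strand process: several strands may start and cease concurrently.
    started = [r for r in after.relations if r not in before.relations]
    ceased = [r for r in before.relations if r not in after.relations]
    return started, ceased

corner, edge = Part("corner of block A"), Part("edge of block B")
s0 = MultiStrandState([Relation("touches", corner, edge)])
s1 = MultiStrandState([Relation("above", corner, edge)])
print(diff(s0, s1))   # one strand started ("above"), one ceased ("touches")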
11.2 Proto-Affordances
I call such possibilities, and the constraints restricting them, "proto-affordances", because they are the substratum of affordances, and more general: unlike J.J. Gibson's affordances, proto-affordances are not restricted to processes the perceiver can produce, or processes relevant to the perceiver's possible goals. Moreover, proto-affordances can be combined in various ways because processes in the environment can be combined, including serial composition, parallel composition of non-interacting processes, and, most importantly, parallel composition of juxtaposed processes involving interacting objects, for instance one rotating gear
wheel causing another to rotate, a hand catching a ball, or two moving objects colliding. So complex proto-affordances, involving possibilities of and constraints on possible processes, can be formed by synthesising simpler proto-affordances. It is not clear how many animal species can use this possibility. It requires the use of an exosomatic ontology, not a sensorimotor ontology, since moving interacting objects (e.g. boulders rolling down a hill, bumping into trees and other boulders) can exist without being perceived or acted on.
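The compositional claim can be sketched directly. The code below is my illustration, not a mechanism proposed in the text: a proto-affordance pairs a possible process with constraints on it, and serial and parallel composition merge constraint sets, with juxtaposed interacting processes contributing extra constraints of their own.

from dataclasses import dataclass

@dataclass(frozen=True)
class ProtoAffordance:
    # A possible process in the environment, plus constraints restricting it;
    # not tied to any perceiver's goals or action repertoire.
    process: str
    constraints: frozenset

def serial(p, q):
    # q can follow p; the composite inherits both constraint sets.
    return ProtoAffordance(p.process + "; then " + q.process,
                           p.constraints | q.constraints)

def parallel(p, q, interaction=None):
    # Concurrent composition; juxtaposed interacting processes may add new
    # constraints that neither component has alone.
    extra = frozenset([interaction]) if interaction else frozenset()
    return ProtoAffordance(p.process + " || " + q.process,
                           p.constraints | q.constraints | extra)

gear1 = ProtoAffordance("gear 1 rotates", frozenset({"gear 1 rigid"}))
gear2 = ProtoAffordance("gear 2 rotates", frozenset({"gear 2 rigid"}))
meshed = parallel(gear1, gear2, "rotation senses opposed")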
11.3 Epistemic Affordances
Gibson came very close to discussing what could be called epistemic affordances, concerned not with opportunities and obstacles for possible processes in the environment or actions of the perceiver, but with the availability and limitations of information potentially usable by the perceiver. Intelligence often involves discovering which actions or other processes can usefully change epistemic affordances, e.g. removing uncertainty about a part of the environment by moving to a new viewpoint, or moving an occluding object – another example of composition of processes: a physical process involving motion and a more abstract process of changing available information.
11.4 Affordances in the Environment
Not all perceived changes are produced or could be produced by the perceiving agent. Moreover, seeing that a process is possible, e.g. an apple dropping off a tree, or that there are constraints on possibilities, e.g. because a table is below the apple, does not require the process to be one that the perceiver desires to produce. So perception of proto-affordances and perception of processes in the environment make it possible to take account of far more than one's own actions, their consequences and their constraints. One important consequence is perception of vicarious affordances, affordances for others, e.g. collaborators, prey, predators and offspring learning how to interact with the environment, who may sometimes need to be helped, rescued, or warned about risks. As discussed in Section 5, some of those competences require an exosomatic form of representation of processes, one that is not tied to the agent's sensorimotor contingencies. That possibility is ignored by researchers who focus only on sensorimotor learning and representation, and base all semantics on "symbol-grounding".
11.5 Representing Actual, Possible and Impossible Processes
The ability to perceive a multi-strand process requires the ability to have internal representations of the various concurrently changing relationships. Some will be continuous changes, including those needed for servo-control of actions, while others may be discrete changes that occur as topological relations change or goals or preferences become satisfied or violated. Such process-perception could include concurrent changes occurring in different concurrently active dynamical systems depicted in Figure 2. How processes can be represented in perceivers,
and how the representation differs when the process is actually being perceived and when it is merely being thought about or reasoned about remain open questions. The fact that we can perceive and think about processes that we cannot produce ourselves (e.g. the sky darkening, or a flock of birds swarming) rules out theories that assume that the perception of processes uses motor subsystems that could produce those processes. Perceiving possibilities of and constraints on processes in the environment requires additional forms of representation and mechanisms for manipulating them without being driven by sensory input, since there is no sensory input from a process that does not exist yet. An important research topic is how an animal or robot can represent proto-affordances and their composition, using the results in planning, generating and controlling behaviour. This ability develops both in the life of an individual and across generations in a culture.

It seems that mechanisms closely related to those used for perceiving multi-strand processes can also be used to predict outcomes of possible processes that are not currently occurring (e.g. when planning), and to explain how perceived situations came about. Both may use a partial simulation of the processes (compare [10,29]), though simulation is not enough in general: it may be necessary to save and compare "snapshots" of portions of the processes, for example in order to think about pros and cons of alternative future actions. The mechanisms for reasoning about processes and affordances seem also to be closely related to the development of mathematical competences in humans, discussed below in Section 12.2.21

21 Examples are given in the discussion of fully deliberative competences in [40] and in this discussion paper on predicting changes in action affordances and epistemic affordances: http://www.cs.bham.ac.uk/research/projects/cosy/papers/#dp0702
12 How Evolution Produced Mathematicians?

12.1 The Importance of Kinds of Matter
Some predictions need to go beyond geometry and topology. A child, or robot, learning about kinds of process that can occur in the environment needs to be able to extend indefinitely the ontologies she uses, and not merely by defining new concepts in terms of old ones: there are also substantive ontology extensions, refuting symbol-grounding theory, as explained above. Concepts of "kinds of stuff" are examples. Whereas many perceived processes involve objects that preserve all their metrical relationships, there are also many deviations from such rigidity, and concepts of different kinds of matter are required to explain those deviations. For example, string and wire are flexible, but wire retains its shape after being deformed. An elastic band returns to its original length after being stretched, but does not restore its shape after bending. Some kinds of stuff easily separate into chunks if pulled, e.g. mud, porridge, plasticine and paper. A subset of those allow restoration to a single object if separated parts are pressed back together. A subset of those will retain a weakness at the joint. There are also objects that are marks on other objects, like lines on paper, and there are some objects that can be used to produce such marks, like pencils and crayons. Marks produced in different ways and on different materials can have similar structures. Lines drawn in sand with a finger or a stick are partly similar to lines drawn with a surface marker, and partly different: they can serve similar diagrammatic or symbolic functions, without depending in the same way on colour-contrasts to be perceived.

As demonstrated by the Sauvys [28], children, and presumably future robots, can learn to play with and explore strings, elastic bands, pieces of wire, marks on paper and movable objects in such a way as to learn about many different sorts of process patterns. Some of those are concerned with rigid motions, some not. Many of their examples use patterns in non-rigid motions that can lead to development of topological concepts.
12.2 From Empirical to Mathematical Truths
Consideration of a space of niches and a space of designs for different sorts of animals and different sorts of machines reveals nature/nurture tradeoffs, and indicates hard problems that AI researchers, psychologists and neuroscientists need to address. It was suggested in [44] that a robot or animal that learns through play and exploration in a complex, changing 3-D environment needs certain competences (e.g. the ability to perceive and reason about both action affordances and epistemic affordances, and to discover invariant non-empirical features of spatial processes) that also underlie human abilities to do mathematics (including geometry, topology, number theory and set theory).

A child may learn that going round a house in one direction enables doors and windows to be seen in a certain temporal order, whereas going round in the opposite direction reverses that order. (This requires perceptual memory capabilities that need further discussion.) That empirical discovery may later be recognized to allow no exceptions (if the building does not change), so that it can be totally relied on in future: it is seen not to be empirical. This is not a case of using observed correlations to raise a probability to a maximal value, but something deeper: discovering an invariant intrinsic to a class of processes. Another example is discovering at first empirically that counting a set of objects in different orders always produces the same result, and then later understanding that there cannot be exceptions to that (if no objects are added, removed, merged or split).

Impossibilities may also be discovered at first empirically and then found to be necessary. A child may use a rubber band and some pins to make various shapes, e.g. a square, a triangle, a star, etc., and then discover the minimum number of pins required to make each particular shape, e.g. a six-pointed star, or an outline capital H. How can the child be certain that the number found is the smallest? At first it may just be an empirical discovery, which remains open to refutation. But later it is seen to be a necessary truth. How?22
22 In http://www.cs.bham.ac.uk/research/projects/cogaff/talks/#mkm08 and [44] I suggest that there are many "toddler theorems", and have begun to collect examples.
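The order-invariance of counting can be written so that its necessity is visible; the following compact formulation is mine, not Sloman's. A complete, repetition-free count of a collection S is a bijection from an initial segment of the naturals to S, so two counts of the same unchanged S compose into a bijection between initial segments, which forces the totals to agree:

\begin{align*}
  f : \{1,\dots,m\} \xrightarrow{\ \sim\ } S, \qquad
  g : \{1,\dots,n\} \xrightarrow{\ \sim\ } S
  &\quad\text{(two complete counts of the same $S$)}\\
  \Longrightarrow\quad g^{-1}\circ f : \{1,\dots,m\} \xrightarrow{\ \sim\ } \{1,\dots,n\}
  &\quad\text{(a composite of bijections is a bijection)}\\
  \Longrightarrow\quad m = n.
\end{align*}

The conclusion rests on what counting is, not on accumulated evidence, which is why no exception is possible while S gains or loses no members.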
12.3 It's Not a Matter of Probabilities
The ability to reason about what is and is not possible is required for reasoning about affordances, and is also at the basis of mathematical competence. This points to a need for a form of learning that is very different from the heavily Bayesian (probabilistic/statistics-based) forms of learning that currently attract much attention. A possible initial mechanism for this would be to allow some features of what has been learnt empirically to trigger a change in the way structures or processes in the environment are represented – e.g. a change from lots of sensorimotor conditional probabilities to representing enduring 3-D objects moving around in a locally Euclidean space. That form of representation of processes will have strong implications for what is and is not possible. If distinctions between kinds of material are included in the representations (e.g. some things are impenetrable, others not; some are rigid, others not), then properties of matter can play a role in some of the reasoning. For example, if one end of a rigid rod is rotated in a plane then the far end must move in a circular arc. If one of two meshed gear wheels made of rigid impenetrable material is rotated, the other must rotate in the opposite direction.

It is often thought that there are only two ways a young child or animal can discover useful affordances, namely either by empirical trial and error, or by learning from what someone else does (through imitation or instruction). However, our discussion shows that there is a third way, namely by working out the consequences of combining spatial processes in advance of their occurrence. This point seems to be missed by many developmental psychologists, even the excellent [8]. I suspect that further work will show that such learning processes vindicate the claim of Immanuel Kant (1781) (against David Hume) that mathematical knowledge is both non-empirical and non-analytic [30]. However, the representational and architectural requirements for a robot that can make such discoveries are still an open research question. (Piaget's theory of "formal operations" was an attempt to address similar problems, but he lacked the conceptual tools required to design working systems.)

A caveat: the claim that some discoveries turn out to be non-empirical implies neither that the knowledge is innate, nor that the discovery was somehow pre-programmed genetically (compare [27]). Further, the processes are not infallible: mistakes can occur, and such discoveries may have "bugs", as Lakatos [16] demonstrated using the history of Euler's theorem about polyhedra – though that is sometimes wrongly taken to imply that mathematical knowledge is empirical in the same way as knowledge about the physical properties of matter. Lakatos used the label "quasi-empirical". The discovery of bugs in proofs, and good ways to deal with them, is an important feature of mathematical learning. This can be triggered by perceiving counter-examples in the environment, but often it is sufficient merely to think of a possible counter-example. That is not enough to refute an empirical theory, since what is thinkable need not be possible in practice. Noticing real or apparent exceptions to a mathematical generalisation, whether perceived or imagined, can lead a learner to investigate properties that distinguish different sub-cases.
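The rod and gear examples can be made explicit; the short reconstruction below is mine, not taken from the text. For a rigid rod pivoted at a point p, rigidity just means the distance from p to the far end x(t) is a constant r, so every position of the far end lies on the circle of radius r about p. For two meshed rigid, impenetrable gear wheels, the teeth in contact must share a tangential speed:

\[
  \lVert x(t) - p \rVert = r \ \text{ for all } t,
  \qquad\text{and}\qquad
  \omega_1 r_1 = -\,\omega_2 r_2
  \;\Longrightarrow\;
  \omega_2 = -\frac{r_1}{r_2}\,\omega_1 ,
\]

where r_1, r_2 are the gear radii and omega_1, omega_2 the angular velocities; the minus sign expresses the opposite sense of rotation, forced by rigidity and impenetrability rather than discovered by repeated trials.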
Viewing human mathematical competence as a side-effect of evolutionary and developmental processes meeting biological needs can shed new light both on old philosophical problems about the nature of mathematical knowledge and on problems in developmental psychology and education, especially mathematical education. It also helps us understand requirements for human-like robots.
13 Conclusion
AI and Cognitive Science still have much to do before the visual and other cognitive mechanisms of robots can match the achievements of a typical human child, or even a nest-building bird, young ape, or hunting mammal. There have been important advances in the design of physically embodied behaviour-based robots (some of it inspired by the work of Brooks and his colleagues), and there have been important advances in mechanisms and applications of symbolic AI. Achieving the longer-term scientific and engineering goals of AI will require the two approaches to be brought together, along with a more general synthesis of research in different sub-fields of intelligence.

What is important about embodiment (e.g. what drove the most significant evolutionary developments in primate and bird cognition) is not the specific morphology of the learner, but the need to be able to perceive and interact with 3-D structures and processes (including manipulating, assembling and disassembling 3-D structures) and the need to be able to think about spatially located events, processes and entities in the past, in remote spatial regions, and in the future. For many of the points made by J.J. Gibson, all that is important is that we can see and can change our viewpoint and viewing direction, which is also true of many animals with very different shapes, e.g. birds and lions. In many cases there is no interaction or manipulation (e.g. we cannot interact with long-dead ancestors, but we can think about them), and when interaction or manipulation does occur, it does not seem to be essential for human development whether it uses mouth, hands, feet, or some tool or prosthetic device, for what we learn about is not primarily our own sensorimotor processes but what can happen in our environment. This is also why, as mentioned in Section 7, children with various physical deformities and disabilities (e.g. thalidomide babies) can grow up able to communicate about the same environment as those without the disabilities, though the process of learning is different in its details. In contrast, much of the work on embodied cognition in robots has focused on the terribly narrow problem of learning about sensorimotor relationships in robots with very specific morphologies. (There are some exceptions.)

If we are to make significant progress in developing robots that understand our world, it will be necessary for AI researchers to abandon factional disputes, and stop claiming that there is one key mechanism that is required, and instead study in far more detail than hitherto both the features of the environment that impose demands on animal and robot competences and the variety of ways in which biological evolution, including human evolution, has responded to those demands. In particular, it is important to understand that it is not easy to
determine empirically what biological mechanisms are and how they work (e.g. brain mechanisms); instead, by working from carefully observed competences towards mechanisms, we may come up with important ideas generating questions to guide the researchers who attempt to study the mechanisms directly. After all, symbolic computation was a human competence long before AI began, and was the original inspiration for symbolic AI, so there must be biological mechanisms that make it possible (even if nobody knows what they are), and Brooks was right to suggest that a synthesis of complementary approaches might be essential to progress in the long run.
Acknowledgements

I would like to thank Bernhard Sendhoff and Olaf Sporns for useful critical comments on an earlier draft. Many of the ideas reported in this paper were developed as part of the requirements analysis activities in the EU-funded CoSy robotics project, http://www.cognitivesystems.org, especially in discussions with Jeremy Wyatt. Jackie Chappell helped with biological examples and nature-nurture tradeoffs.
References 1. Ambler, A.P., Barrow, H.G., Brown, C.M., Burstall, R.M., Popplestone, R.J.: A Versatile Computer-Controlled Assembly System. In: Proc. Third Int. Joint Conf. on AI, Stanford, California, pp. 298–307 (1973) 2. Berthoz, A.: The Brain’s sense of movement. Perspectives in Cognitive Science. Harvard University Press, London (2000) 3. Braitenberg, V.: Vehicles: Experiments in Synthetic Psychology. The MIT Press, Cambridge (1984) 4. Brooks, R.A.: Elephants Don’t Play Chess. Robotics and Autonomous Systems 6, 3–15 (1990), http://people.csail.mit.edu/brooks/papers/elephants.pdf 5. Chappell, J., Sloman, A.: Natural and artificial meta-configured altricial information-processing systems. International Journal of Unconventional Computing 3(3), 211–239 (2007), http://www.cs.bham.ac.uk/research/projects/cosy/papers/#tr0609 6. Cliff, D.: Biologically-Inspired Computing Approaches to Cognitive Systems: a partial tour of the literature. Technical Report HPL-2003-11, Hewlett-Packard Labs, Bristol, UK (2003), http://www.hpl.hp.com/techreports/2003/HPL-2003-11.html 7. Cutkosky, M.R., Jourdain, J.M., Wright, P.K.: Testing and Control of a Compliant Wrist. Technical Report CMU-RI-TR-84-04, Robotics Institute, Carnegie Mellon University, Pittsburgh, PA (March 1984), http://www.ri.cmu.edu/pubs/pub_73.html 8. Gibson, E.J., Pick, A.D.: An Ecological Approach to Perceptual Learning and Development. Oxford University Press, New York (2000) 9. Gibson, J.J.: The Ecological Approach to Visual Perception. Houghton Mifflin, Boston (1979)
10. Grush, R.: The emulation theory of representation: Motor control, imagery, and perception. Behavioral and Brain Sciences 27, 377–442 (2004) 11. Harnad, S.: The Symbol Grounding Problem. Physica D 42, 335–346 (1990) 12. Jablonka, E., Lamb, M.J.: Evolution in Four Dimensions: Genetic, Epigenetic, Behavioral, and Symbolic Variation in the History of Life. MIT Press, Cambridge (2005) 13. Kant, I.: Critique of Pure Reason. Macmillan, Basingstoke (1781); Translated by Norman Kemp Smith (1929) 14. King, R.D., Clark, D.A., Shirazi, J., Sternberg, M.J.: Inductive logic programming used to discover topological constraints in protein structures. In: Proc. International Conference on Intelligent Systems for Molecular Biology, pp. 219–226 (1994) 15. Kirsh, D.: Today the earwig, tomorrow man? Artificial Intelligence 47(1), 161–184 (1991), http://adrenaline.ucsd.edu/kirsh/articles/earwig/earwig-cleaned.html 16. Lakatos, I.: Proofs and Refutations. Cambridge University Press, Cambridge (1976) 17. Lakatos, I.: The methodology of scientific research programmes. In: Worrall, J., Currie, G. (eds.) Philosophical papers, vol. I. Cambridge University Press, Cambridge (1980) 18. Lungarella, M., Sporns, O.: Mapping information flow in sensorimotor networks. PLoS Computational Biology 2(10:e144) (2006) 10.1371/journal.pcbi.0020144 19. McCarthy, J.: The Well Designed Child (1996), http://www-formal.stanford.edu/jmc/child1.html 20. Minsky, M.L.: The Society of Mind. William Heinemann Ltd., London (1987) 21. Minsky, M.L.: The Emotion Machine. Pantheon, New York (2006) 22. Neisser, U.: Cognition and Reality. W. H. Freeman, San Francisco (1976) 23. Nilsson, N.J.: Artificial Intelligence: A New Synthesis. Morgan Kaufmann, San Francisco (1998) 24. Pfeifer, R., Iida, F., Gomez, G.: Designing intelligent robots - on the implications of embodiment. Journal of Robotics Society of Japan 24(07), 9–16 (2006), http://www.robotcub.org/misc/review3/07_Pfeifer_Iida_Gomez_RSJ.pdf 25. Philipona, D.L., O'Regan, J.K.: Color naming, unique hues, and hue cancellation predicted from singularities in reflection properties. Visual Neuroscience 23(3-4), 331–339 (2006) 26. Poincaré, H.: Science and hypothesis. W. Scott, London (1905), http://www.archive.org/details/scienceandhypoth00poinuoft 27. Rips, L.J., Bloomfield, A., Asmuth, J.: From Numerical Concepts to Concepts of Number. The Behavioral and Brain Sciences (in press) 28. Sauvy, J., Sauvy, S.: The Child's Discovery of Space: From hopscotch to mazes – an introduction to intuitive topology. Penguin Education, Harmondsworth (1974); Translated from the French by Pam Wells 29. Shanahan, M.P.: A cognitive architecture that combines internal simulation with a global workspace. Consciousness and Cognition 15, 157–176 (2006) 30. Sloman, A.: 'Necessary', 'A Priori' and 'Analytic'. Analysis 26(1), 12–16 (1965), http://www.cs.bham.ac.uk/research/projects/cogaff/07.html#701 31. Sloman, A.: The Computer Revolution in Philosophy. Harvester Press (and Humanities Press), Hassocks (1978), http://www.cs.bham.ac.uk/research/cogaff/crp 32. Sloman, A.: Image interpretation: The way ahead? In: Braddick, O.J., Sleigh, A.C. (eds.) Physical and Biological Processing of Images (Proceedings of an international symposium organised by The Rank Prize Funds, London, 1982), pp. 380–401. Springer, Berlin (1982), http://www.cs.bham.ac.uk/research/projects/cogaff/06.html#0604
33. Sloman, A.: What enables a machine to understand? In: Proc. 9th IJCAI, Los Angeles, pp. 995–1001 (1985) 34. Sloman, A.: On designing a visual system (towards a gibsonian computational model of vision). Journal of Experimental and Theoretical AI 1(4), 289–337 (1989), http://www.cs.bham.ac.uk/research/projects/cogaff/81-95.html#7 35. Sloman, A.: Actual possibilities. In: Aiello, L.C., Shapiro, S.C. (eds.) Principles of Knowledge Representation and Reasoning: Proceedings of the Fifth International Conference (KR 1996), Boston, MA, pp. 627–638. Morgan Kaufmann, San Francisco (1996) 36. Sloman, A.: The Cognition and Affect Project: Architectures, ArchitectureSchemas, And The New Science of Mind. Technical report, School of Computer Science, University of Birmingham (2003) (revised, August 2008), http://www.cs.bham.ac.uk/research/projects/cogaff/03.html#200307 37. Sloman, A.: Cross-Disciplinary Reflections: Philosophical Robotics. Research Note: Draft chapter for a book on the CoSy project COSY-TR-0806, School of Computer Science, University of Birmingham (2008), http://www.cs.bham.ac.uk/research/projects/cosy/papers/#tr0806 38. Sloman, A.: Putting the Pieces Together Again. In: Sun, R. (ed.) Cambridge Handbook on Computational Psychology, ch. 26, pp. 684–709. Cambridge University Press, New York (2008), http://www.cs.bham.ac.uk/research/projects/cogaff/07.html#710 39. Sloman, A., Chappell, J.: The Altricial-Precocial Spectrum for Robots. In: Proceedings IJCAI 2005, Edinburgh, pp. 1187–1192. IJCAI (2005), http://www.cs.bham.ac.uk/research/cogaff/05.html#200502 40. Sloman, A.: Requirements for a Fully Deliberative Architecture (Or component of an architecture). Research Note COSY-DP-0604, School of Computer Science, University of Birmingham, Birmingham, UK (May 2006), http://www.cs.bham.ac.uk/research/projects/cosy/papers/#dp0604 41. Sloman, A.: Diversity of Developmental Trajectories in Natural and Artificial Intelligence. In: Morrison, C.T., Oates, T.T. (eds.) Computational Approaches to Representation Change during Learning and Development. AAAI Fall Symposium 2007, Technical Report FS-07-03, pp. 70–79. AAAI Press, Menlo Park (2007), http://www.cs.bham.ac.uk/research/projects/cosy/papers/#tr0704 42. Sloman, A.: Architectural and representational requirements for seeing processes and affordances. In: Computational Modelling in Behavioural Neuroscience: Closing the gap between neurophysiology and behaviour. Psychology Press, London (2008), http://www.cs.bham.ac.uk/research/projects/cosy/papers#tr0801 43. Sloman, A.: Architectural and representational requirements for seeing processes, proto-affordances and affordances. Research paper COSY-TR-0801a, School of Computer Science, University of Birmingham, UK, Also presented at Dagstuhl workshop on Logic and Probability for Scene Interpretation (March 2008), http://www.cs.bham.ac.uk/research/projects/cosy/papers#tr0801a 44. Sloman, A.: Kantian Philosophy of Mathematics and Young Robots. In: Autexier, S., Campbell, J., Rubio, J., Sorge, V., Suzuki, M., Wiedijk, F. (eds.) AISC 2008, Calculemus 2008, and MKM 2008. LNCS, vol. 5144, pp. 558–573. Springer, Heidelberg (2008), http://www.cs.bham.ac.uk/research/projects/cosy/papers#tr0802 45. Sloman, A.: Varieties of Meta-cognition in Natural and Artificial Systems. In: Cox, M.T., Raja, A. (eds.) Workshop on Metareasoning, AAAI 2008 Conference, pp. 12–20. AAAI Press, Menlo Park (2008), http://www.cs.bham.ac.uk/research/projects/cosy/papers/#tr0803
Some Requirements for Human-Like Robots
277
46. Sloman, A., Chappell, J.: Computational Cognitive Epigenetics (Commentary on [12]). Behavioral and Brain Sciences 30(4), 375–376 (2007), http://www.cs.bham.ac.uk/research/projects/cosy/papers/#tr0703 47. Sloman, A., Cosy-partners.: CoSy deliverable DR.2.1 Requirements study for representations. Technical Report COSY-TR-0507, The University of Birmingham, UK (2005), http://www.cs.bham.ac.uk/research/projects/cosy/papers/#tr0507 48. Strawson, P.F.: Individuals: An essay in descriptive metaphysics. Methuen, London (1959) 49. Sussman, G.J.: A computational model of skill acquisition. American Elsevier, Amsterdam (1975) 50. Tarsitano, M.: Route selection by a jumping spider (Portia labiata) during the locomotory phase of a detour. Animal Behaviour 72(6), 1437–1442 (2006), http://dx.doi.org/10.1016/j.anbehav.2006.05.007 51. Trehub, A.: The Cognitive Brain. MIT Press, Cambridge (1991), http://www.people.umass.edu/trehub/ 52. Turing, A.M.: Computing machinery and intelligence. Mind 59, 433–460 (1950); reprinted in Feigenbaum, E.A., Feldman, J. (eds.): Computers and Thought, pp. 11–35. McGraw-Hill, New York (1963) 53. Warneken, F., Tomasello, M.: Altruistic helping in human infants and young chimpanzees. Science, 1301–1303, March 3 (2006) doi:10.1126/science.1121448 54. Ziemke, T.: Situated and Embodied Cognition. Cognitive Systems Research 3(3) (2002) (Editor’s introduction to special issue)
Co-evolution of Rewards and Meta-parameters in Embodied Evolution Stefan Elfwing, Eiji Uchibe, and Kenji Doya Neural Computation Unit, Okinawa Institute of Science and Technology 12-22 Suzaki, Uruma, Okinawa 904-2234, Japan {elfwing,uchibe,doya}@oist.jp
Abstract. Embodied evolution is a methodology for evolutionary robotics that mimics the distributed, asynchronous, and autonomous properties of biological evolution. The evaluation, selection, and reproduction are carried out by cooperation and competition of the robots, without any need for human intervention. An embodied evolution framework is therefore well suited to study the adaptive learning mechanisms for artificial agents that share the same fundamental constraints as biological agents: self-preservation and self-reproduction. In this paper we propose a framework for performing embodied evolution with a limited number of robots, by utilizing time-sharing in subpopulations of virtual agents. Within this framework, we explore the combination of within-generation learning of basic survival behaviors by reinforcement learning, and evolutionary adaptations over the generations of the basic behavior selection policy, the reward functions, and meta-parameters for reinforcement learning. We apply a biologically inspired selection scheme, in which there is no explicit communication of the individuals' fitness information. The individuals can only reproduce offspring by mating, a pair-wise exchange of genotypes, and the probability that an individual reproduces offspring in its own subpopulation depends on the individual's "health", i.e., its energy level, at the mating occasion. We validate the proposed method by comparing it with evolution using standard centralized selection, in simulation, and by transferring the obtained solutions to hardware using two real robots.
Keywords: Embodied Evolution, Evolutionary Robotics, Reinforcement Learning, Meta-learning, Shaping Rewards, Meta-parameters.
1 Introduction
Embodied evolution [21] is a methodology for evolutionary robotics [11] that mimics the distributed, asynchronous, and autonomous properties of biological evolution. The robots reproduce offspring by mating, i.e., a pairwise exchange of genotypes, and the probability that a robot reproduces offspring is regulated by the robot's performance on the task. The evaluation, selection, and reproduction are carried out by cooperation and competition of the robots.
In this paper we propose a framework for performing embodied evolution with a limited number of robots. In the original embodied evolution formulation, one robot corresponds to one individual in the population. This may be the ideal case, but it makes the methodology inapplicable for most evolutionary computation tasks, because of the large number of robots required for an appropriate population size. Given a limited amount of robot hardware, we instead consider a subpopulation of "virtual agents" inside each robot and let each of them, in turn, come to life and try to mate by time-sharing. This study has been performed within the Cyber Rodent project [3], and the Cyber Rodent robotic platform was developed for embodied evolution purposes. The main objective of the Cyber Rodent project is to understand the origins of our reward and affective systems by building artificial agents that share the same intrinsic constraints as natural agents: self-preservation and self-reproduction. The Cyber Rodent robot can recharge from external battery packs in the environment and exchange genotypes with other robots through its infrared communication port. The objective of this study is to explore the combination of within-generation learning of basic behaviors by reinforcement learning and evolutionary adaptations, over the generations, of parameters that modulate the learning of the behaviors. Our method consists of three parts: 1) a general embodied evolution framework, where each robot contains a subpopulation of virtual agents that are evaluated by time-sharing on the survival task; 2) a two-layered control architecture, where a top-layer neural network selects learning modules, corresponding to basic survival behaviors, according to the current environmental state and the virtual agent's internal energy level; and 3) the reinforcement learning algorithm Sarsa, which is used for learning the basic behaviors during the lifetimes of the virtual agents. The genotypes of the virtual agents consist of the weights of the top-layer neural network controller, and parameters that modulate the learning ability of the learning modules. The learning ability is evolutionarily optimized by tuning additional reward signals, implemented as potential-based shaping rewards, and global meta-parameters shared by all learning modules, such as the learning rate, α, the discount factor of future rewards, γ, the trace-decay rate, λ, and the temperature, τ, controlling the trade-off between exploration and exploitation in the action selection. We apply an implicit and biologically inspired selection scheme, where differential selection is achieved because virtual agents that reproduce more offspring have a higher probability of transferring their offspring to the next generation. In our selection scheme there is no explicit representation or communication of the individuals' fitness information. Instead, a virtual agent can only reproduce offspring by mating with virtual agents controlling other robots in the environment. In addition, the probability that a virtual agent reproduces offspring in its own subpopulation depends on the virtual agent's "health", i.e., its internal energy level, at the mating occasion. There have been few studies conducted in the field of embodied evolution apart from the pioneering work by Watson et al. [21]. They used 8 small mobile
robots to evolve a very simple neural network controller for a phototaxis task. The controller had two input nodes: one binary input node, indicating which of the two light sensors was receiving more light, and one bias node. The input nodes were fully connected to two output motor neurons controlling the speed of the wheels, giving four integer weights in total. In their experiments, mating was not a directed behavior. Instead, an agent broadcast its genotype according to a predefined scheme, and the other robots within the communication range could then pick up the genotype. Differential selection was achieved because more successful agents broadcast their genotypes more often and were less inclined to accept genotypes broadcast by other agents. Nehmzow [8] used embodied evolution to evolve three different sensory-motor behaviors, using two small mobile robots. In the experiments, the two robots first evaluated their current behavioral strategies, and after a fixed amount of time the robots initiated a robot-seeking behavior. The robots then performed an exchange of genetic material via IR-communication, and genetic operations were applied according to the fitness values. Each robot stored two strings: the currently active string and the best solution so far. If the genetic algorithm did not produce an improved result, then the best solution was used in the next generation. Shaping rewards are a frequently used way of accelerating reinforcement learning by providing an additional and richer reward signal. In this study we have used potential-based shaping rewards as defined by Ng et al. [10], which have several advantages from our perspective. Potential-based shaping rewards have a sound theoretical basis, guaranteeing convergence to the optimal policy for the original problem without shaping rewards. Their formulation does not add any extra parameters to the system, and it is only dependent on the states of the agent, which makes it easy to approximate the potential function for the shaping rewards with standard function approximation methods used in reinforcement learning. From a biological point of view, the evolutionary optimization of shaping rewards can be seen as a form of the Baldwin effect [1,6,19,14,15], i.e., indirect genetic assimilation of learned traits. In this context the shaping reward represents an agent's innate learning ability and the learning process represents the lifetime adaptation of the agent's behavior. The main difference between our approach and the standard application of the Baldwin effect in evolutionary computation is that the genetic adaptations do not directly affect the agents' behaviors, i.e., the genotypes do not code the policies of the behaviors explicitly. Instead, the genetic adaptations, in the form of gradually improving shaping rewards, increase the agents' learning abilities by accelerating the learning process. In this regard this study is related to the approach used by Floreano and Mondada [5], and by Urzelai and Floreano [20], for evolving learning rules in neural controllers. Instead of evolving the weights of the neural controller directly, they evolved the learning rules and learning rates of the synaptic weights of the neural controller. Each weight of the neural network was changed continuously during an individual's lifetime, according to one of four genetically determined Hebbian learning rules, and the change of the weight was computed as a function of the
activations of the presynaptic and postsynaptic units. An obvious difference between these approaches is algorithmic. In the evolution of learning rules, the weights of the neural controller are modified by Hebbian learning and the learning has no explicit goal of its own. In our approach the learning is accomplished by a specific reinforcement learning algorithm and each learning module has its own goal defined by a reward function. However, on a more abstract level the two approaches are similar, as demonstrated in the work by Niv et al. [9] and by Ruppin [13], who showed that optimal learning rules for a reinforcement learning agent can be evolved in a general Hebbian learning framework.
2 Background

2.1 Basics of Reinforcement Learning
Reinforcement learning [18] is a computational approach to learning from interaction with the environment. An agent learns a policy, a state-to-action mapping, based on scalar reward signals received from the environment. At time t the agent observes the current state s_t. The agent selects an action, a_t, according to the current stochastic policy, π(s_t, a_t) = P(a_t | s_t), i.e., the probability of selecting an action, a_t, given the current state, s_t. The environment makes a state transition from s_t to s_{t+1}, according to the state transition probability P(s_{t+1} | s_t, a_t), and the agent receives a scalar reward r_t. The goal is to learn a policy, π, that maximizes the cumulative discounted future reward. The value of a state, s_t, the state-value function, V^π, under policy π is the solution to the Bellman equation, defined as

V^\pi(s_t) = \sum_{a_t} \pi(s_t, a_t) \sum_{s_{t+1}} P(s_{t+1} | s_t, a_t) \left[ R(s_{t+1}, s_t, a_t) + \gamma V^\pi(s_{t+1}) \right],   (1)

where R(s_{t+1}, s_t, a_t) is the reward for taking action a_t in state s_t, resulting in the new state s_{t+1}, and γ is the discount factor of future rewards. Similarly, the value of selecting action a_t in state s_t, the action-value function Q^π, is defined as

Q^\pi(s_t, a_t) = \sum_{s_{t+1}} P(s_{t+1} | s_t, a_t) \left[ R(s_{t+1}, s_t, a_t) + \gamma \sum_{a_{t+1}} \pi(s_{t+1}, a_{t+1}) Q^\pi(s_{t+1}, a_{t+1}) \right].   (2)
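For a small, fully specified MDP, (1) can be solved by iterative policy evaluation. The following Python sketch is purely illustrative; the array shapes and names are our own assumptions, not part of the study:

import numpy as np

def policy_evaluation(P, R, pi, gamma=0.9, tol=1e-8):
    """Iteratively solve the Bellman equation (1) for V^pi.

    P[s, a, s2] -- transition probabilities P(s2 | s, a)
    R[s, a, s2] -- rewards R(s2, s, a)
    pi[s, a]    -- stochastic policy pi(s, a) = P(a | s)
    """
    V = np.zeros(P.shape[0])
    while True:
        # Backup: V(s) = sum_a pi(s,a) sum_s2 P(s2|s,a) [R + gamma V(s2)]
        V_new = (pi[:, :, None] * P * (R + gamma * V)).sum(axis=(1, 2))
        if np.max(np.abs(V_new - V)) < tol:
            return V_new
        V = V_new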
2.2 Sarsa(λ) with Tile Coding
In this study, we use the Sarsa reinforcement learning algorithm [12,17]. Sarsa is an on-policy reinforcement learning algorithm, which learns an estimate of the action-value function, Q^π, while the agent follows policy π. In the basic one-step version of Sarsa, the Q-value for the current state, s_t, and action, a_t, is updated as
Q^\pi(s_t, a_t) \leftarrow Q^\pi(s_t, a_t) + \alpha \left[ r_t + \gamma Q^\pi(s_{t+1}, a_{t+1}) - Q^\pi(s_t, a_t) \right],   (3)
where α is the learning rate.

Eligibility traces are a basic mechanism for increasing the efficiency of reinforcement learning algorithms. For action-value-based algorithms, each state-action pair is associated with a memory, the eligibility trace, e_t(s, a). The temporal-difference error (TD-error), δ_t = r_t + γQ(s_{t+1}, a_{t+1}) − Q(s_t, a_t), is propagated back along the trajectory of state-action pairs leading to the current state, s_t, and action, a_t, decaying by γλ per time step, where λ is the trace-decay rate. The eligibility-traces version of Sarsa is called Sarsa(λ), and it updates the Q-values according to

Q^\pi(s, a) \leftarrow Q^\pi(s, a) + \alpha \delta_t e_t(s, a) \quad \text{for all } s, a.   (4)

The implementation of the eligibility traces used in this study, as recommended by [16], is called replacing traces with optional clearing, and is defined as

e_t(s, a) = \begin{cases} 1 & \text{if } s = s_t \text{ and } a = a_t; \\ 0 & \text{if } s = s_t \text{ and } a \neq a_t; \\ \gamma \lambda e_{t-1}(s, a) & \text{if } s \neq s_t, \end{cases} \quad \text{for all } s, a.   (5)

The optional clearing (the second line) sets the traces for all non-selected actions from the revisited state to 0.

To handle the continuous state information received by the robots' sensors, we use tile coding [17] to approximate the Q-values. Tile coding represents the value of a continuous variable as a large binary feature vector with many 0s and a few 1s. The idea is to partition the state space multiple times, where the partitions are called tilings and the elements of the tilings are called tiles. For each state exactly one tile is active in each tiling, corresponding to the 1s in the binary feature vector. The computation of the action-values is therefore very simple: the components of the parameter vector Θ representing the approximated Q-values are summed over the non-zero features, I_s, in state s, i.e., Q(s, a) = \sum_{i \in I_s} \Theta(i, a).

For Sarsa the policy, π, is derived from the action values. In this study, we use softmax action selection, where the actions are ranked and weighted according to their action values. The most common softmax method uses a Boltzmann distribution and selects an action a with a probability of

P(a | s) = \frac{e^{Q(s,a)/\tau}}{\sum_b e^{Q(s,b)/\tau}},   (6)
where the positive meta-parameter τ is called the temperature.
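Putting (3)–(6) together, a single Sarsa(λ) step with tile coding might be sketched as follows. This is a hedged illustration under assumed array shapes (Θ and e stored as n_features × n_actions arrays), not the authors' implementation:

import numpy as np

def softmax_action(theta, active_tiles, tau, rng):
    """Boltzmann action selection (6) over tile-coded Q-values."""
    q = theta[active_tiles].sum(axis=0)      # Q(s,a) = sum_{i in I_s} Theta(i,a)
    prefs = np.exp((q - q.max()) / tau)      # max subtracted for numerical stability
    return rng.choice(len(q), p=prefs / prefs.sum())

def sarsa_lambda_step(theta, e, tiles_t, a_t, r_t, tiles_t1, a_t1,
                      alpha, gamma, lam):
    """One Sarsa(lambda) update, (3)-(5), for a linear tile-coding
    approximator with replacing traces and optional clearing."""
    delta = (r_t + gamma * theta[tiles_t1, a_t1].sum()
             - theta[tiles_t, a_t].sum())    # TD-error
    e *= gamma * lam                         # decay all traces by gamma*lambda
    e[tiles_t, :] = 0.0                      # optional clearing: zero non-selected actions
    e[tiles_t, a_t] = 1.0                    # replacing trace for the taken action
    theta += alpha * delta * e               # update all parameters, cf. (4)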
2.3 Potential-Based Shaping Rewards
Shaping rewards are a popular technique for improving the performance of reinforcement learning algorithms. The agent is provided knowledge in the form of
an additional reward signal that guides the agent to states with large rewards and/or away from states with small rewards. However, if the shaping is not carefully designed, the agent can be trapped in a sub-optimal behavior, i.e., the learning converges to a solution that is optimal in the presence of the shaping rewards, but sub-optimal for the original problem. Ng et al. [10] proved that any policy optimal for a Markov decision process (MDP) with potential-based shaping rewards, where the shaping rewards depend only on the difference of a function of successive states, will also be optimal for the MDP without shaping rewards. They define a potential function Φ(·) over states, where the shaping reward, F(s_t, s_{t+1}), for the transition from state s_t to s_{t+1} is defined as

F(s_t, s_{t+1}) = \gamma \Phi(s_{t+1}) - \Phi(s_t),   (7)
where γ is the discount factor of future rewards. The shaping rewards are added to the original reward function for every state transition, changing the update of the Q-values in Sarsa (3) into

Q^\pi(s_t, a_t) \leftarrow Q^\pi(s_t, a_t) + \alpha \left[ r_t + \gamma \left( Q^\pi(s_{t+1}, a_{t+1}) + \Phi(s_{t+1}) \right) - Q^\pi(s_t, a_t) - \Phi(s_t) \right].   (8)

In this study, we use normalized Gaussian radial basis networks to approximate the potential functions, Φ(·), of the shaping rewards, defined as

\Phi(s) = \sum_i v_i \phi_i(s), \qquad \phi_i(s) = \frac{e^{-|s - c_i|^2 / 2\sigma_i^2}}{\sum_j e^{-|s - c_j|^2 / 2\sigma_j^2}},   (9)
where c_i is the center position of φ_i with width σ_i. In this study, the weights, v_i, of the radial basis networks are coded as real-valued genes and tuned by the evolutionary process.
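As a minimal sketch (the array names are illustrative assumptions), the potential (9) and the shaping reward (7) could be computed as:

import numpy as np

def potential(s, v, c, sigma):
    """Normalized Gaussian RBF network Phi(s), as in (9).
    c: (n_basis, state_dim) centers, sigma: (n_basis,) widths,
    v: (n_basis,) evolved weights."""
    g = np.exp(-np.sum((s - c) ** 2, axis=1) / (2.0 * sigma ** 2))
    return np.dot(v, g / g.sum())

def shaping_reward(s_t, s_t1, gamma, v, c, sigma):
    """Potential-based shaping reward F(s_t, s_{t+1}) from (7),
    added to the global reward r_t in the shaped update (8)."""
    return gamma * potential(s_t1, v, c, sigma) - potential(s_t, v, c, sigma)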
2.4 Meta-parameters
The performance of reinforcement learning algorithms depends critically on a few meta-parameters that directly influence the learning updates or the exploration of the environment [2]. In this study, we use four meta-parameters that are coded as real-valued genes and are optimized by the evolution (a small configuration sketch follows the list):

– α (0 ≤ α ≤ 1), the learning rate in the updates of the Q-values (3). For larger values of α, the estimated target of the learning update, r_t + γQ(s_{t+1}, a_{t+1}), is weighted more strongly in the computation of the updated Q-value, but the learning also becomes more unstable due to stochastic variations in the state transitions. If α = 0, the Q-values are not updated at all. If α = 1, the current Q-value is replaced by the estimated learning target. In convergence proofs for reinforcement learning algorithms, a standard assumption is either that α is sufficiently small or that α decreases over time.

– γ (0 ≤ γ ≤ 1), the discount factor of future rewards, see (1) and (2). γ determines the value of future rewards at the current state, and thereby how farsighted the agent is. If γ is zero, the agent only tries to maximize the immediate rewards, r. As γ approaches one, future rewards are more strongly weighted in the calculation of the expected accumulated discounted reward.

– λ (0 ≤ λ ≤ 1), the trace-decay rate that controls the exponential decay of the eligibility traces (5). If λ = 0, all traces are zero except for the trace corresponding to the current state and action, e_t(s_t, a_t), and the update of the Q-values is equal to the one-step version of the learning (see (3) for Sarsa learning). For increasing values of λ, but still λ < 1, the Q-values of preceding state-action pairs are changed more, but more temporally distant state-action pairs are changed less, since their traces have decayed for more time steps. If λ = 1, the traces are only decayed by γ per time step.

– τ (τ > 0), the temperature that controls the trade-off between exploration and exploitation in softmax action selection (6). Higher temperatures decrease the differences between the action selection probabilities and make the selection more stochastic. Lower temperatures increase the differences between the action selection probabilities and make the selection less stochastic. In the limit τ → 0, the action selection becomes deterministic and the agent always selects the greedy action.
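For illustration only, the four meta-parameters could be carried in the genotype as a small structure; the clipping of mutated genes back into their valid ranges is our assumption, since the text does not specify how the range constraints are enforced:

import random
from dataclasses import dataclass

@dataclass
class MetaParams:
    alpha: float  # learning rate, 0 <= alpha <= 1
    gamma: float  # discount factor, 0 <= gamma <= 1
    lam: float    # trace-decay rate, 0 <= lambda <= 1
    tau: float    # softmax temperature, tau > 0

    def mutate(self, p=0.1, sd=0.1):
        """Mutate each gene with probability p by Gaussian noise N(0, sd^2),
        then clip it back into its valid range (assumed behavior)."""
        for name in ("alpha", "gamma", "lam", "tau"):
            if random.random() < p:
                value = getattr(self, name) + random.gauss(0.0, sd)
                if name == "tau":
                    value = max(value, 1e-6)           # keep tau strictly positive
                else:
                    value = min(max(value, 0.0), 1.0)  # clip to [0, 1]
                setattr(self, name, value)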
3 Cyber Rodent Robot
The Cyber Rodent robot platform (Fig. 1) was developed for the Cyber Rodent project [3]. The main objective of the Cyber Rodent project is to study adaptive mechanisms of artificial agents under the same fundamental constraints as biological agents, namely self-preservation and self-reproduction. The Cyber Rodent is a rat-like mobile robot, 22 cm in length and 1.75 kg in weight. The robot has a variety of sensors, including a wide-angle C-MOS camera, an infrared range sensor, seven infrared proximity sensors, gyros, and accelerometers. It has two wheels and a maximum speed of 1.3 ms−1, and a magnetic jaw that latches onto battery packs. It also has a speaker, two microphones, a three-color LED for audio-visual communication, and an infrared communication port. The field of view of the visual system is approximately [−75°, 75°], and within the angle range [−45°, 45°] the robot can detect batteries (blue LED) and the green tail LED of another robot up to approximately 1.2 m, and the red face of another robot up to 0.8 m. Outside this range the detection capability decreases rapidly, e.g., for the angles ±75° the robot can only detect the batteries, tail LEDs, and faces up to approximately 0.2 m.
3.1 Simulation Environment
In the evolutionary experiments, we used a simulation environment that was developed to mimic the properties of the real Cyber Rodent robot (see the depiction of the survival task in Fig. 2). In the experiment, we used four robots that can detect each other by the green tail LEDs, and by the red faces in the front of the robots, where the infrared ports for genotype exchanges are
Fig. 1. The hardware environment with two Cyber Rodent robots and 6 external battery packs
located. The gray circles with blue centers represent the 8 battery packs used for recharging the virtual agents' internal batteries. The two robots at the bottom of the environment have performed a successful mating and their LEDs are therefore turned off. After a successful capture of a battery pack, the battery pack was moved to a new, randomly selected position in the environment. The dimensions of the environment were set to 2.5 m × 2.5 m, and the simulated vision system used the estimated properties described above. In the simulator, Gaussian noise was added to the angle and distance state information received from the simulated visual system, with zero mean and a standard deviation of 1° for all angle states, and a standard deviation of 2 cm for all distance states. The infrared port for the exchange of genotypes is located slightly to the right of the center in the front of the Cyber Rodent. In the simulator, the maximum range of the infrared communication was set to 1 m and the angle range was set to [−30°, 30°] (calculated from the position of the infrared port). For a mating to be successful, at least one of the robots had to initiate the infrared communication and both of the robots had to be within each other's mating range, both before and after they executed their actions.
4 Method
The proposed method consists of three, largely independent, parts: 1) an embodied evolution framework that utilizes time-sharing for the evaluation of subpopulations of virtual agents inside each robot; 2) a two-layered control architecture, where a top-layer neural network controller selects behaviors, i.e., learning modules, according to the current state; and 3) learning modules that learn basic behaviors using the reinforcement learning algorithm Sarsa(λ). The optimization of the virtual agents' overall behaviors occurs on two levels: within-generation learning of the basic behaviors by reinforcement learning, and evolutionary adaptations of the genotypes of the virtual agents over the generations.
The genotype of a virtual agent controls the selection of basic behaviors, by optimizing the weights of the neural network controller, and modulates the learning of the basic behaviors by optimizing the reward function, implemented as potential-based shaping rewards, and the global meta-parameters, such as the learning rate, α, the discount factor of future rewards, γ, the trace-decay rate, λ, and the temperature, τ, controlling the randomness of action selection. For a detailed description of the presented models see [4].
4.1 Embodied Evolution Framework
Figure 2 shows an illustration of the proposed embodied evolution framework. In our scheme there is a population of robots, N_sub, each of which contains a subpopulation of N_va virtual agents. The virtual agents are evaluated, in random order, by time-sharing, i.e., taking control over the robot for a limited number of time steps, T_ts. In each generation, each virtual agent performs N_ts time-sharings, which gives a total lifetime of N_ts × T_ts time steps. The genotypes of the virtual agents consist of parameters that either control their behaviors directly or modulate the learning of their behaviors. In each time step, the embodied evolution framework invokes the control architecture to execute an action, update the learning parameters (in the case of lifetime learning of the behaviors), and update the energy level of the virtual agent that controls the robot. If a virtual agent performs a successful mating, i.e., the virtual agent exchanges genotypes with a virtual agent controlling another robot, then the virtual agent will reproduce two offspring in its own subpopulation with a certain reproductive probability, e.g., depending on its energy level. The two offspring are created by applying genetic operations, such as crossover and mutation, to the virtual agent's own genotype and the genotype received from the mating partner. The two offspring are then appended to the list of offspring reproduced in the current generation, O. Hereafter, a successful mating that also reproduces offspring is denoted a reproductive mating. Note that our mating scheme is asymmetrical. If two virtual agents perform a successful mating, either both reproduce offspring, only one of them reproduces offspring, or neither of them reproduces offspring, depending on whether each of the two virtual agents satisfies the reproduction condition. If the energy of a virtual agent is depleted, then the virtual agent is killed and it will not be evaluated for any more time steps. After all virtual agents in a subpopulation have been evaluated for a full lifetime or died a premature death, a new population is created by randomly selecting N_va offspring from the list of offspring, O. If the number of reproduced offspring is less than N_va, then the remaining part of the new population is created by adding randomly created genotypes. An important feature of our scheme is that there is no explicit computation of the fitness of the individuals. Differential selection is instead achieved because virtual agents that perform more reproductive matings have a higher probability of transferring their offspring to the next generation. In the experiments, we used 4 robots (N_sub) with 20 virtual agents (N_va) inside each robot. The number of time-sharings (N_ts) was set to 3, and each time-sharing lasted for 400 time steps (T_ts), giving a total lifetime of 1200 time
Fig. 2. Overview of the embodied evolution framework. Each robot contains a subpopulation of virtual agents of size Nva . The virtual agents are evaluated by time-sharing for the survival task, i.e., taking control over the robot for a limited period of time. If, during the evaluation, a virtual agent runs out of energy, then the agent dies and it is removed from its subpopulation. If a virtual agent performs a successful mating, i.e., an exchange of genotypes with a virtual agent controlling another robot, then the virtual agent will reproduce two offspring in its own subpopulation with a certain reproduction probability. The offspring are created by applying genetic operations to the virtual agent’s own genotype and the genotype received from the mating partner. The two offspring are then inserted in the list of offspring, O, of the subpopulation containing the virtual agent. When all virtual agents in a subpopulation have either been evaluated for a full lifetime or died a premature death, a new population is created by randomly selecting Nva offspring from the list of offspring.
steps. Each virtual agent had an internal battery with a maximum capacity, E_max, of 500 energy units, which was initialized to 400 energy units in each generation. In each time step, the energy of a virtual agent controlling a robot was decreased by 1 energy unit, and if the robot captured a battery pack the energy level was increased by 100 energy units. We defined the reproductive probability, i.e., the probability that a virtual agent would reproduce offspring in its own subpopulation, as a linear function of the virtual agent's internal energy level, E, at the time of mating. The probability, p_rep, that a mating was reproductive was defined as

p_{rep} = \frac{E}{E_{max}}.   (10)
For the reproduction of offspring, we applied one-point crossover with a probability of 1. Each gene was then mutated with a probability of 0.1, which added a value drawn from a Gaussian distribution with zero mean and a standard deviation of 0.1.
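For illustration, the reproduction step (the reproductive probability (10), one-point crossover, and Gaussian mutation) could be sketched as follows; treating genotypes as flat lists of real-valued genes is our reading rather than a detail given in the text:

import random

def maybe_reproduce(own_genes, partner_genes, energy, e_max, offspring_list):
    """After a successful mating, reproduce two offspring with
    probability p_rep = E / E_max, as in (10)."""
    if random.random() >= energy / e_max:
        return  # mating was successful but not reproductive
    point = random.randrange(1, len(own_genes))        # one-point crossover (prob. 1)
    child1 = own_genes[:point] + partner_genes[point:]
    child2 = partner_genes[:point] + own_genes[point:]
    for child in (child1, child2):
        # mutate each gene with probability 0.1 by N(0, 0.1^2) noise
        offspring_list.append([g + random.gauss(0.0, 0.1)
                               if random.random() < 0.1 else g
                               for g in child])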
4.2 Control Architecture
Figure 3 shows an overview of the two-layered control architecture used in the experiments. The selection of reinforcement learning modules is controlled by a top-layer linear feed-forward neural network. In each time step, the neural network selects one of the two reinforcement learning modules, foraging or mating. The selected module then executes an action based on its learned behavior. It also updates its learning parameters, based on the global reward function (predefined and equal for all modules), the meta-parameters (shared by all modules and genetically determined), and its shaping reward function (unique for each module and genetically determined).
Fig. 3. Overview of the two-layered control architecture used in the experiments. The top-layer linear feed-forward neural network uses the n-dimensional state space as input. In each time step, the neural controller selects one of the two reinforcement learning modules, foraging or mating. The selected module then executes an action based on its learned behavior. The learning is modulated by the predefined global reward function, the shaping rewards, unique for each module, and the meta-parameters, shared by all modules.
In the experiments, the input state had five dimensions: 1) a constant bias of 1; 2) the normalized internal energy level, i.e., the virtual agent's current energy level divided by the maximum energy level; 3) the normalized distance to the closest battery pack; 4) the normalized distance to the closest tail LED; and 5) the normalized distance to the closest face of another robot. If a target (battery pack, tail LED, or face) was not visible, then the corresponding input was set to −1.
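As a sketch of the top-layer selection rule, assuming (per Fig. 3) that a non-positive weighted sum selects foraging and a positive sum selects mating:

def select_module(weights, state):
    """Top-layer linear controller: the weighted sum of the 5-dimensional
    state input decides which learning module controls the robot."""
    activation = sum(w * x for w, x in zip(weights, state))
    return "foraging" if activation <= 0 else "mating"

# Hypothetical example state: bias, energy level, and normalized distances
# to the closest battery, tail LED, and face (-1 = not visible).
state = [1.0, 0.8, 0.3, -1.0, 0.5]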
4.3 Reinforcement Learning Modules
In the experiments, we used two reinforcement learning modules: mating, for learning to exchange genotypes with another robot, and foraging, for learning to capture external battery packs in the environment to charge the virtual agent's internal battery. To learn the two behaviors, we used Sarsa(λ) with tile coding, replacing traces with optional clearing, and potential-based shaping rewards, see Sect. 2. The time step of the learning was set to 240 ms. The state was defined by the angle and distance to the closest battery pack, tail LED, or face of another robot. The state information was normalized to the interval [0, 1]. In addition, each module used 3 extra discrete states for the situations where the target (tail LED and face for mating, and battery pack for foraging) was not visible: 1) if a target was visible at an angle < 0° in the last 3 time steps; 2) if a target was visible at an angle > 0° in the last 3 time steps; and 3) if a target had not been visible for more than 3 time steps. The virtual agents received a global reward, r_t, of +1 for a successful battery capture, +1 for a successful mating, and 0 for all other state transitions. The tile coding consisted of 3 tilings, offset by a constant amount in each state dimension. In each tiling, the angle dimension was divided into 11 equidistant tiles and the distance dimension was divided into 4 equidistant tiles. In addition, each module used 3 tiles representing the 3 states in which the target was not visible. The Q-values were initialized to 0 for all states and actions at the beginning of each generation. The state of the foraging behavior had 2 dimensions: 1) the angle to the closest battery pack, and 2) the distance to the closest battery pack. The foraging behavior could execute 5 actions, pairs of velocities (mm/s) of the left and the right wheel: {(167, −167), (250, 167), (250, 250), (167, 250), (−167, 167)}. For the foraging behavior, the potential function of the shaping rewards was parameterized by 15 parameters: 12 for the states in which a battery was visible (4 basis functions in the angle dimension and 3 basis functions in the distance dimension), and 3 parameters corresponding to the states in which no battery was visible. The state of the mating behavior had 3 dimensions: 1) a discrete state that was set to 1 if the face of another robot was visible, and otherwise to 2; 2) if a face was visible, the angle to the closest face, otherwise the angle to the closest tail LED; and 3) if a face was visible, the distance to the closest face, otherwise the distance to the closest tail LED. The mating behavior could execute 7 actions, pairs of velocities (mm/s) of the left and the right wheel: {(0, 0), (167, −167), (250, 167), (250, 250), (167, 250), (−167, 167), (−167, −167)}. The first action, (0, 0), was defined as the mating action. If a virtual agent executed the mating action, then it would initiate a mating. If another robot was within the range of the infrared communication, then it would always answer the mating invitation, even if the virtual agent controlling the other robot was executing the foraging behavior. If both robots were within each other's mating range, both before and after they had executed their actions, then the mating was considered successful, and the virtual agents controlling the mating robots would exchange genotypes. After a successful mating, a virtual agent could not execute the mating behavior again until it had captured a battery
pack, or until 50 time steps had passed. During this time the tail LED was turned off. For the mating behavior, the potential function of the shaping rewards was parameterized by 27 parameters: 12 for the states in which the face of another robot was visible (4 basis functions in the angle dimension and 3 basis functions in the distance dimension), 12 for the states in which only the tail LED of another robot was visible, and 3 parameters corresponding to the states in which no other robot was visible.
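To make the state representation concrete, here is a hedged sketch of how the active tiles for the foraging module's 2-dimensional state (angle and distance, both normalized to [0, 1]) could be computed with 3 offset tilings of 11 × 4 tiles; the specific offset scheme is an assumption, since the text only states that the tilings are offset by a constant amount:

def active_tiles(angle, distance, n_tilings=3, n_angle=11, n_dist=4):
    """Return one active tile index per tiling for a normalized
    (angle, distance) state, mirroring the tile coding of Sect. 4.3."""
    tiles = []
    tiles_per_tiling = n_angle * n_dist
    for k in range(n_tilings):
        offset = k / (n_tilings * max(n_angle, n_dist))  # constant per-tiling offset
        a = min(int((angle + offset) * n_angle), n_angle - 1)
        d = min(int((distance + offset) * n_dist), n_dist - 1)
        tiles.append(k * tiles_per_tiling + a * n_dist + d)
    return tiles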
5 Simulation Results
To validate the effectiveness of our proposed method, we compared the result of our proposed selection scheme with experiments in which we used standard centralized selection. In the centralized scheme, there were no lists of offspring reproduced during the execution of the survival task. Instead, after all virtual agents in all subpopulations had been evaluated for a full lifetime (3 time-sharings of 400 time steps), the new population was created by elitism and tournament selection. The fitness was computed as the number of reproductive matings, and the fittest 10% of the individuals were transferred directly to the new population (elitism). The remaining 90% of the new population was then selected by tournament selection with a tournament size of two, i.e., two individuals were randomly chosen from the old population and the fitter of the two was then transferred to the new population. The genotypes in the new population were randomly paired and one-point crossover was applied with a probability of 1, and each gene was mutated with a probability of 0.1 (identical to the genetic operations in our proposed framework). The genotypes in the new population were then randomly placed in the four subpopulations. Figures 4 and 5 show the overall performance of the evolutionary experiments. The presented results are the average performance over 20 simulation runs of all individuals in the four subpopulations, for both our proposed embodied evolution framework and the evolution with centralized selection. For the number of reproductive matings (Fig. 4), the average results are very similar. The performance increases steadily from approximately 0.3 reproductive matings in the first generation to approximately 7 reproductive matings after 100 generations. The performance then slowly increases to reach the final level of approximately 9 reproductive matings after 300 generations. The only notable difference between the two selection schemes is that the performance of our proposed framework increases more rapidly in the first 100 generations. The constant dotted black line with a value of 0.5 indicates the number of reproductive matings that is needed for a sustainable population, i.e., for each virtual agent to reproduce on average at least one offspring. Our proposed method reaches this threshold after approximately 5 generations, while the evolution with the centralized selection scheme needs approximately 10 generations to reach this performance level. The explanation of this difference is most likely that, in our proposed method, if the number of reproduced offspring is less than the number of virtual agents in the subpopulations, N_va, then random genotypes are added to the new subpopulation. In the centralized selection scheme, random genotypes are only created at
Fig. 4. The average number of reproductive matings over 20 simulation runs for the proposed embodied evolution framework (black lines) and the centralized selection scheme (gray lines), with standard deviations (dashed lines)
Average number of battery captures
80 70 60 50 40 30 20 10 Centralized selection Proposed method
0 −10 0
100
200
300
400
Generation
Fig. 5. The average number of battery captures over 20 simulation runs for the proposed embodied evolution framework (black lines) and the centralized selection scheme (gray lines), with standard deviations (dashed lines)
the initialization of the genotypes, which means that there is less variation in the genotypes for the genetic operations to work on in the beginning of the evolution. The average number of batteries captured per virtual agent (Fig. 5) increases rapidly to about 42 batteries, with a large variance in the populations, and it then stays at this level for the remaining time of the evolution. Our proposed method reaches this stable level after about 45 generations and the evolution with centralized selection reaches the stable level after about 75 generations. These results are very encouraging and show that our proposed selection scheme, which limits the number of possible mating partners a virtual agent can reproduce offspring with, does not reduce the evolutionary performance. At least, the results give strong evidence that the settings used in the experiments,
four robots and 3 time-sharings per virtual agent, do not have a negative effect on the evolutionary performance. For the remainder of this section we present only the results for the proposed embodied evolution framework.
5.1 Obtained Neural Network Selection Policy
To be able to visualize the obtained selection policy of the neural network controller (see Fig. 6), we divide the input space, x, into five distinct state types:

– S1: A tail LED is visible (bottom left panels).
– S2: A battery and a tail LED are visible (top left panels).
– S3: A battery is visible (top middle panels).
– S4: A battery and a face of another robot are visible (top right panels).
– S5: A face of another robot is visible (bottom right panels).
For the state types in which the face of another robot is visible (S4 and S5), we assume that the tail LED of the same robot is visible and that the angle to the tail LED is the same as for the face. Figure 6 shows the average selection policy for generations 10, 40, and 400, as energy thresholds for mating, E_m, in each distinct state type. If the current energy level is less than or equal to the threshold value, then the virtual agent selects foraging; otherwise the virtual agent selects mating. In the figure, dark blue means that the agent always selects mating and dark red means that the agent always selects foraging. Note that since the virtual agents on average captured a battery every 30th time step, the energy levels very rarely fell below, say, half the maximum energy level, 250 units, during the experiments. The selection policy of the neural controller converges relatively quickly. After about 40 generations the average obtained policy is very similar to the final obtained solution, which corresponds to the time when the average number of captured batteries reaches a stable level (see Fig. 5). The obtained average module selection policy can be summarized as follows:

– If a face of another robot is visible, S4 and S5 (the two right panels), then select mating.

– Else if only a battery is visible, S3 (top middle panel), then select foraging. The only exception is if the energy level is very high, above approximately 495 energy units (not visible in the figure). This means that in the first few time steps after a virtual agent has recharged fully, it will try to look for a mating partner, by selecting the mating module, even if there is a battery pack close at hand and no other robot is visible.

– Else if a tail LED is visible (the two left panels), and if the energy level is high, then wait for the potential mating partner to turn around by selecting mating. A virtual agent is more likely (lower energy thresholds) to wait for the mating partner for shorter distances to the tail LED and shorter distances to the battery pack. For example, for the states in which a tail LED and a battery pack are visible, S2 (top left panel), the energy thresholds are 337,
Fig. 6. The evolution of the neural network module selection policy, shown as the average energy threshold for mating, E_m, in the 5 different state types, S_i, at generations 10, 40, and 400. The y-axis corresponds to the distance to the battery in cm, the positive x-axis to the distance to the face in cm, and the negative x-axis to the distance to the tail LED in cm. In each state type a virtual agent selects the foraging module if the current energy level is less than or equal to the threshold value, and the mating module if the current energy level is greater than the threshold value.
400, and 490 energy units for the distances 0, 40, and 120 cm to both targets. For the states in which only the tail LED is visible, S1 (bottom left panel), the energy threshold is shifted upwards: 440, 471, and 490 energy units for the distances 0, 40, and 120 cm to the tail LED.
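Since the controller is linear, the energy threshold E_m plotted in Fig. 6 can be recovered by solving w·x = 0 for the energy input with the other inputs held fixed. A sketch, assuming (per Sect. 4.2) that index 1 of the input vector is the normalized energy level:

def energy_threshold(weights, state, e_max=500.0, energy_idx=1):
    """Energy level at which the linear controller's output crosses zero,
    i.e., the module switching threshold for a fixed sensory context.
    Whether mating lies above or below the crossing depends on the
    sign of the energy weight."""
    w_e = weights[energy_idx]
    if w_e == 0:
        return None  # selection does not depend on energy in this state
    rest = sum(w * x for i, (w, x) in enumerate(zip(weights, state))
               if i != energy_idx)
    return -rest / w_e * e_max  # de-normalize back to energy units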
5.2 Obtained Shaping Rewards
Figure 7 shows the average potential function of the shaping rewards, Φ(·), for the foraging module in generations 50, 100, 200, and 400. The obtained potential function seems intuitively correct: higher potential values for shorter distances and smaller absolute angles to the battery pack. This shape is already approximately obtained after 50 generations, although this is difficult to see in the figure, because the difference between the largest average potential (a_batt ≈ −12°,
d_batt = 0 cm) and the smallest average potential (a_batt = −75°, d_batt = 120 cm) is only 0.07. The amplitude of the potential function is gradually increasing throughout the evolutionary process, and in the last generation the difference between the maximum average potential (a_batt ≈ 0°, d_batt = 0 cm) and the minimum average potential (a_batt = ±75°, d_batt = 120 cm) is 0.385.
Fig. 7. The average potential function, Φ(·), for the foraging module in generations 50, 100, 200, and 400. The y-axis corresponds to the distance to the battery in cm and the x-axis to the angle to the battery.
For the potential functions of the shaping rewards for the mating module there are variations in the obtained solutions, especially in the potential function for the states in which only a tail LED is visible. In the last generation, the average number of matings ranges from 8.1 ± 4.6 to 12.4 ± 5.1 in the 20 simulation runs. We, therefore, only present the average results of the best simulation run (see Fig. 8). The right column shows the potential functions for the states in which the face of another robot is visible, and the left column shows the potential functions for the states in which only the tail LED is visible. The dashed black lines in the right panels represent the angle limits of the infrared communication for the exchange of genotypes. The curvatures of the lines are caused by the fact that
Fig. 8. The average potential functions, Φ(·), for the mating module in generations 50, 100, 200, and 400 in the best simulation run. The right column shows the potential functions for the states in which the face of another robot is visible; the y-axis corresponds to the distance to the face in cm and the x-axis to the angle to the face. The left column shows the potential functions for the states in which only the tail LED is visible; the y-axis corresponds to the distance to the tail LED in cm and the x-axis to the angle to the tail LED.
the infrared port is located slightly ahead and to the right of the camera. For the states in which the face is visible, the average obtained potential function in the last generation gives higher potential for longer distances and negative angles to the face of the other robot. The angles of the maximum potentials range from approximately −10° to −32° for the minimum distance (0 cm) and the maximum distance (80 cm), respectively. That the evolution assigns higher potentials to negative angles is explained by the fact that the infrared port for the exchange of genotypes is located to the right of the camera system. The preference for longer distances to the face is probably explained by the fact that, for longer distances between the two robots, it is less likely that one of the virtual agents will move its robot outside the angle range of the infrared communication of the mating partner. A successful mating requires successful cooperation between two virtual agents. A potential benefit of assigning the highest potentials to the maximum
distance is, therefore, that it will move the robot out of the visible distance range of the mating partner's face if the mating partner executes a bad mating policy. During the first half of the evolution (the top three right panels) the amplitude of the potential function is gradually increasing and the shape of the function is almost opposite to the obtained potential function for foraging, giving higher potentials for longer distances and smaller absolute angles. During the last half of the evolution (bottom right panel) the potential function becomes less dependent on the distance to the face. In the last generation, for distances between 10 cm and 70 cm, the states with the highest potentials correspond roughly to the states within the angle range of the infrared communication. For the states in which only the tail LED is visible (left panels), it is more difficult to interpret the obtained potential function in the last generation. Positive angles and longer distances correspond to higher potentials, with the highest potentials between approximately 80 cm and 120 cm. Not surprisingly, the obtained potential functions always prefer states in which the face is visible. The potential values are in the ranges of −0.31 to 0.05 and 0.14 to 0.44, when only the tail LED is visible and when the face is visible, respectively. During the first half of the evolution (the top three left panels) the potential function seems to change rather randomly. In generations 50 and 200, there is also a considerable overlap between the values of the potential function for the states in which the face is visible and the states in which only the tail LED is visible.
5.3 Obtained Meta-parameters
Figures 9 and 10 show the average values of the four meta-parameters. An interesting result is the evolution of τ, controlling the trade-off between exploration and exploitation in the softmax selection (black line in Fig. 9). τ very quickly drops to approximately 0 (in median, τ is exactly zero after generation 194). This means that the action selection tends to be equal to the greedy action selection strategy, always selecting the action corresponding to the largest Q-value. This result indicates that in the presence of sufficiently good shaping rewards, additional exploration in the form of a stochastic policy is unnecessary and actually decreases the learning performance. This result is consistent with the theoretical analyses performed by Laud and DeJong [7], which show that shaping rewards can reduce the amount of exploration required for efficient learning. They analyzed shaping in terms of a reward horizon, which is a measure of the number of decisions the agent must take before experiencing accurate reward feedback. For example, if the agent only receives a delayed reward at the goal state, then the reward horizon is equal to the task length. If the potential function of the shaping rewards is equal to the optimal value function, V*, then the reward horizon is equal to 1. However, it is surprising that the τ-value tends to be zero so early on in the evolutionary process, when the potential functions of the shaping rewards have not converged. Note that τ = 0 does not mean that the action selection is deterministic in the early stages of the learning. For each state, the action selection becomes deterministic only after a virtual agent has experienced positive reward feedback. The Q-values are uniformly initialized to
Fig. 9. Evolution of the average α- and τ -values over the 20 simulation runs, with standard deviations (dashed lines)
Fig. 10. Evolution of the average γ- and λ-values over the 20 simulation runs, with standard deviations (dashed lines)
0 in each generation, which means that before the agent receives positive reward feedback it will select a random action among the actions that have not been tested. The results for the evolution of α (gray line in Fig. 9) and γ (gray line in Fig. 10) are similar. The two meta-parameters reach relatively stable levels of 0.24 ± 0.09 for α and 0.94 ± 0.08 for γ, after about 150 generations. The average value of λ (black line in Fig. 10) decreases gradually throughout the evolutionary process, from approximately 0.5 in the first generation to 0.3 in the last generation. The variance in the λ-values is very large and remains relatively constant during the last half of the evolution, with an approximate standard deviation of 0.21. A possible explanation for the gradual decrease in the average λ-value is that it corresponds to the gradually more optimized shaping rewards. More optimized shaping rewards suggest that the shaping rewards, γΦ(s_{t+1}) − Φ(s_t), adapt to become more equal to the discounted difference in optimal state-values, V*(s), between successive states, γV*(s_{t+1}) − V*(s_t). This
means that the agent receives more accurate reward feedback in each state transition, which reduces the need to propagate the received reward feedback to preceding state-action pairs. This hypothesis has support in the experimental data. Figure 11 shows the negative correlation (r = −0.68 and p = 0.0009) in the last generation between the average λ-value and the average number of matings, used as an estimate of the efficiency of the obtained shaping rewards for mating in each simulation run. The circles correspond to the average obtained values in each simulation run and the line corresponds to the least-squares linear fit of the data.
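The reported statistics can be reproduced from the per-run averages with standard tools; a sketch using SciPy (the input arrays stand in for the 20 per-run values, which are not listed in the text):

import numpy as np
from scipy import stats

def lambda_mating_correlation(avg_lambda, avg_matings):
    """Pearson correlation (r, p) and least-squares line relating the
    average trace-decay rate to the average number of matings across runs."""
    r, p = stats.pearsonr(avg_lambda, avg_matings)
    slope, intercept = np.polyfit(avg_lambda, avg_matings, deg=1)
    return r, p, slope, intercept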
Fig. 11. Correlation in the last generation between the average λ-value and the average number of matings, used as an estimate of the efficiency of the obtained shaping rewards for mating in each simulation run. The circles correspond to the average obtained values in the last generation in each simulation run and the line corresponds to the least-squares linear fit of the data.
6 Hardware Results
To validate the performance of the genotypes that were evolutionarily obtained in simulation, we performed hardware experiments using two robots, a smaller environment (1.25 m × 1.75 m), and 6 battery packs (see Fig. 1). In our experience, it is very difficult to evolve the virtual agents from scratch when there are only two subpopulations. We, therefore, first evolved the individuals for 100 generations in simulation, under the same conditions as in the previously described experiments. Two of the subpopulations were then moved to the smaller simulation environment and evolved for 100 additional generations. The best performing individuals in the two subpopulations in the last generation were then transferred to the real Cyber Rodent robots. The main difference between the hardware and simulation environments is that there is more uncertainty in the distance states corresponding to the tail LED and the face of the other robot in the hardware setting. In the idealized case in the simulator, the estimated distances to the LED and face are independent
Fig. 12. The average number of matings, with and without the obtained shaping rewards, in the hardware and the simulation environments
Fig. 13. The average number of battery captures for the two robots (CR1 and CR2), with and without the obtained shaping rewards, in the hardware and the simulation environments
of the relative angle of the other robot. However, in the hardware, where we used blob detection to estimate the distances, the extracted blob size corresponding to the face decreases with larger relative angles to the other robot. The camera system located in the front of the Cyber Rodent can limit the view of the tail LED, and thereby decrease the extracted blob size corresponding to the tail LED. Another difference is that it is not possible to move a battery pack to a new random position after a successful battery capture in the hardware setting. To prevent the robots from capturing the same battery pack repeatedly, we forced the robots to move away from a captured battery pack. After a successful battery capture, the robot was first moved backwards a small distance and then rotated approximately 180° in a random direction. In the hardware experiments, we tested the two evolutionarily obtained solutions for a lifetime, 1200 time steps (approximately 5 minutes in real time). To
investigate the importance of the shaping rewards, we also performed experiments without the obtained shaping rewards included in the update of the Q-values. To be able to validate the hardware results, we also performed simulation experiments under the same conditions. The results from these experiments are summarized in Figs. 12 and 13. The hardware results are averages over 10 experiments and the simulation results are averages over 100 experiments. Overall, the experimental results are very encouraging. The learning performance is similar in the simulation environment and the hardware environment. With shaping rewards, the robots perform 6.1 ± 3.1 successful matings in hardware and 6.3 ± 3.4 successful matings in simulation. Without shaping rewards, the robots perform 1.6 ± 2.5 successful matings in hardware and 2.1 ± 2.4 successful matings in simulation. The only exception is that one of the robots, CR1, captures significantly more batteries in the hardware environment. The main cause for this result is probably differences in the camera properties and in the calibration of the parameters for the blob detection, which made it more difficult for CR1 to detect the tail LED and the face of the other robot, compared to CR2. A minor cause of the difference in the number of captured batteries between the two robots in the hardware environment is probably differences in the obtained neural network selection policy and meta-parameters, because CR1 also captured more batteries, both with and without shaping rewards, in the simulation environment. The results also show the great impact of the evolutionarily optimized shaping rewards. With shaping rewards, the robots capture almost twice as many battery packs and perform three times as many successful matings.
7 Discussion
In this paper we propose an embodied evolution framework that utilizes time-sharing of subpopulations of virtual agents inside a limited number of robots. Within this framework we combine within-generation learning of basic behaviors, by reinforcement learning, and evolutionary adaptation of the parameters that modulate the learning. A top-layer neural network controller selects basic behaviors according to the current environmental state and the virtual agent’s internal energy level. The basic behaviors are learned by the reinforcement learning algorithm Sarsa(λ), and the learning is evolutionarily optimized by tuning an additional reward signal, implemented as potential-based shaping rewards, and the global meta-parameters, such as the learning rate, α, the discount factor of future rewards, γ, the trace-decay rate, λ, and the temperature, τ, controlling the randomness of the action selection. We apply a biologically inspired selection scheme in which there is no explicit communication of the fitness information. The virtual agents can only reproduce offspring by mating with virtual agents controlling other robots in the environment, and the probability that a virtual agent will reproduce offspring in its subpopulation depends on the virtual agent’s internal energy level at the mating occasion. Very encouragingly, the simulation results showed that the evolutionary performance of the proposed method was as good as the performance of evolution
with standard centralized selection. The notable difference was that the evolutionary performance increased more rapidly in the initial 100 generations for the proposed method. The experimental results show that the shaping rewards can reduce the amount of required exploration. The temperature τ, which controls the trade-off between exploration and exploitation in softmax action selection, tends to become zero very early in the evolutionary process, making the action selection greedy. There is also a highly significant negative correlation between the trace-decay rate λ and the obtained mating performance, which indicates that more optimized shaping rewards reduce the need to propagate reward feedback to preceding state-action pairs. We also transferred the best evolutionarily obtained individuals to a hardware environment, using two real robots. The performance in the hardware environment was similar to the performance in simulation under the same environmental conditions. An interesting result is that there are variations in the obtained shaping rewards for the mating behavior in different simulation runs. Since the shaping rewards gradually improve throughout the evolution, this suggests a feedback dynamic between the within-generation learning of the mating behavior and the evolutionary adaptation of the learning ability for mating behavior, which will be explored in subsequent work. If this is related to Baldwinian evolution, then the result of the within-generation learning of the mating behavior should influence the shape of the potential function of the shaping rewards in subsequent generations. One of the main points of the original embodied evolution methodology, as formulated by [21], was that it should be applied to a colony of physical robots. A natural extension of this study is, therefore, to implement our proposed framework in hardware. One obstacle to achieving this goal is that it requires the virtual agents to keep the real robots “alive” for an extended amount of time, by tuning the time the robots recharge from the external battery packs according to the robots’ internal energy levels. Technical limitations make monitoring of the robots’ internal battery levels difficult to achieve in the current hardware platform. We are now developing the second-generation hardware platform. One of the design objectives of the new platform is that the robots should have the ability to monitor their internal battery levels, and, thereby, the possibility to create a sustainable population of physical robots. We will also produce considerably more second-generation robots than the four Cyber Rodent robots that the project currently has in its possession. This will most likely reduce the number of virtual agents required in the subpopulations to achieve efficient evolution, and, thereby, reduce the total evolution time. Another obstacle to the implementation of the framework in hardware is the relatively long time needed for evaluating the learning performance of the basic behaviors. Therefore, we need to investigate the settings of the parameters in our framework required to achieve efficient evolution, such as the number of virtual agents in each subpopulation, the number of time-sharings, and the length of each time-sharing.
References

1. Baldwin, J.: A New Factor in Evolution. American Naturalist 30, 441–451 (1896)
2. Doya, K.: Metalearning and Neuromodulation. Neural Networks 15, 4 (2002)
3. Doya, K., Uchibe, E.: The Cyber Rodent Project: Exploration of Adaptive Mechanisms for Self-Preservation and Self-Reproduction. Adaptive Behavior 13(2), 149–160 (2005)
4. Elfwing, S., Uchibe, E., Doya, K., Christensen, H.I.: Darwinian Embodied Evolution of the Learning Ability for Survival. Adaptive Behavior (in press)
5. Floreano, D., Mondada, F.: Evolution of Plastic Neurocontrollers for Situated Agents. In: International Conference on Simulation of Adaptive Behavior, pp. 401–410 (1996)
6. Hinton, G., Nowlan, S.: How Learning can Guide Evolution. Complex Systems 1, 495–502 (1987)
7. Laud, A., DeJong, G.: The Influence of Reward on the Speed of Reinforcement Learning: An Analysis of Shaping. In: International Conference on Machine Learning, pp. 440–447 (2003)
8. Nehmzow, U.: Physically Embedded Genetic Algorithm Learning in Multi-Robot Scenarios: The PEGA Algorithm. In: International Workshop on Epigenetic Robotics (2002)
9. Niv, Y., Joel, D., Meilijson, I., Ruppin, E.: Evolution of Reinforcement Learning in Uncertain Environments: A Simple Explanation for Complex Foraging Behaviors. Adaptive Behavior 10(1), 5–24 (2002)
10. Ng, A., Harada, D., Russell, S.: Policy Invariance under Reward Transformations: Theory and Application to Reward Shaping. In: International Conference on Machine Learning, pp. 278–287 (1999)
11. Nolfi, S., Floreano, D.: Evolutionary Robotics. MIT Press, Cambridge (2000)
12. Rummery, G.A., Niranjan, M.: On-line Q-learning using Connectionist Systems. Technical Report CUED/F-INFENG/TR 166, Cambridge University Engineering Department (1994)
13. Ruppin, E.: Evolutionary Autonomous Agents: A Neuroscience Perspective. Nature Reviews Neuroscience 3, 132–141 (2002)
14. Paenke, I., Sendhoff, B., Kawecki, T.: Influence of Plasticity and Learning on Evolution under Directional Selection. American Naturalist 170(2), 1–12 (2007)
15. Paenke, I., Kawecki, T., Sendhoff, B.: The Influence of Learning on Evolution – A Mathematical Framework. Artificial Life (2008)
16. Singh, S.P., Sutton, R.S.: Reinforcement Learning with Replacing Eligibility Traces. Machine Learning 22(1–3), 123–158 (1996)
17. Sutton, R.S.: Generalization in Reinforcement Learning: Successful Examples Using Sparse Coarse Coding. In: Advances in Neural Information Processing Systems 8, pp. 1038–1044 (1996)
18. Sutton, R., Barto, A.: Reinforcement Learning: An Introduction. MIT Press, Cambridge (1998)
19. Turney, P., Whitley, D., Anderson, R.: Introduction to the Special Issue: Evolution, Learning, and Instinct: 100 Years of the Baldwin Effect. Evolutionary Computation 4(3), iv–viii (1996)
20. Urzelai, J., Floreano, D.: Evolutionary Robotics: Coping with Environmental Change. In: Genetic and Evolutionary Computation Conference, pp. 941–948 (2000)
21. Watson, R., Ficici, S., Pollack, J.: Embodied Evolution: Distributing an Evolutionary Algorithm in a Population of Robots. Robotics and Autonomous Systems 39(1), 1–18 (2002)
Active Vision for Goal-Oriented Humanoid Robot Walking

Mototaka Suzuki¹, Tommaso Gritti², and Dario Floreano³

¹ Mahoney-Keck Center for Brain & Behavior Research, Columbia University Medical Center, New York 10032, USA. [email protected]
² Video Processing & Analysis, Philips Research, 5656 AB Eindhoven, The Netherlands. [email protected]
³ Laboratory of Intelligent Systems, Ecole Polytechnique Fédérale de Lausanne (EPFL), CH-1015 Lausanne, Switzerland. [email protected]

Abstract. Complex visual tasks may be tackled with remarkably simple neural architectures generated by a co-evolutionary process of active vision and feature selection. This hypothesis has recently been tested in several robotic applications such as shape discrimination, car driving, and indoor/outdoor navigation of a wheeled robot. Here we describe an experiment where this hypothesis is further examined in a goal-oriented humanoid bipedal walking task. A Hoap-2 humanoid robot equipped with a primitive vision system on its head is evolved while freely interacting with its environment. Unlike wheeled robots, bipedal walking robots are exposed to largely perturbed visual input caused by their own walking dynamics. We show that evolved robots are capable of coping with these dynamics and of accomplishing the task by means of active, efficient camera control.
1 Introduction
Machine vision today can hardly compete with biological vision despite the enormous progress of computing power. One of the most remarkable and often neglected differences between machine vision and biological vision is that computers are often asked to process an entire image in one shot and produce an immediate answer whereas animals take time to explore the image searching for features and dynamically integrating information over time. Active vision is the sequential and interactive process of selecting and analyzing parts of a visual scene [1,2,3]. Feature selection instead is the development of sensitivity to relevant features in the visual scene to which the system selectively responds [4]. Each of these processes has been separately investigated and used in machine vision. However, the combination of active vision and feature selection is still largely unexplored. An intriguing hypothesis is that
Fig. 1. The neural architecture of the active vision system is composed of A) a grid of visual neurons with non-overlapping receptive fields whose activation is given by B) the grey level of the corresponding pixels in the image; C) a set of proprioceptive neurons that provide information about the movement of the vision system; D) a set of output neurons that determine the behavior of the system (pattern recognition, car driving, robot navigation); E) a set of output neurons that determine the behavior of the vision system; F) a set of evolvable synaptic connections. The number of neurons in each sub-system can vary according to the experimental settings.
co-evolution of active vision and feature selection could greatly simplify the computational complexity of vision-based behavior by facilitating each other’s task. In recent work [5] we investigated the co-development of active vision and receptive fields within the same time scale using behavioral robotic systems equipped with a primitive retinal system and deliberately simple neural architectures (Fig. 1). The input layer of the neural network consisted of an array of visual neurons spanning a rudimentary rectangular retina. The output layer was divided into two parts, one controlling the movement of the retina on the visual field and the other the behavior of the robotic system. Therefore, the network had to select the features necessary to perform a given task and at the same time control the vision system. We deliberately did not provide any hidden units to the neural network in order to keep its computing and representational abilities as simple as possible. The synaptic strengths of the network were encoded in a binary string and evolved with a genetic algorithm while the robotic system was free to move in the environment. In a first set of experiments, we showed that sensitivity to very simple linear features is co-evolved with, and exploited by, active vision to perform size- and position-invariant shape discrimination. We also showed that such a discrimination problem is very difficult for an architecturally similar neural system without active behavior. In a second set of experiments, we applied the same co-evolutionary method and architecture to driving a simulated car over mountain roads and showed that active vision is exploited to locate and fixate
Fig. 2. Previous studies on co-evolution of active vision and feature selection applied to shape discrimination (top left), car driving (top right) and indoor navigation of a wheeled robot (bottom)
simple road features that are used to control the car steering and acceleration. In a third set of experiments, we applied once again the same co-evolutionary method and architecture to an autonomous robot equipped with a pan/tilt camera that was asked to navigate in an arena located in an office environment. Evolved neural controllers exploited active vision and simple features to direct their gaze at invariant features of the environment and perform collision-free navigation. The present chapter describes another set of experiments where co-evolution of active vision and feature selection is applied to a goal-oriented humanoid bipedal walking task. Despite the steady progress of humanoid robot vision over the last decade [6,7,8], active vision of bipedal walking robots is still largely unexplored. It becomes crucial when bipedal robots need, for example, to robustly detect an important visual target while their body is moving substantially. Active vision may allow robots to select and process only behaviorally relevant visual features despite such image perturbations. We show that evolved robots with active vision can cope with the walking dynamics and perform goal-oriented navigation while avoiding obstacles. In doing so, they actively move their camera horizontally such that they can detect potential obstacles while scanning the goal beacon with the retina.
2 Robotic Setup and Neural Architecture
A humanoid robot Hoap-2 (Fujitsu Automation Co., Ltd.) was precisely modeled in the physics-based simulator Webots™, a commercially available software package that models gravity, mass, friction, and collisions (Fig. 3, top). The neural network controlling the head camera movement and the walking behavior was developed using artificial evolution. The robot was asked to reach a goal location by detecting the beacon (white window) while avoiding obstacles and walls (see Fig. 3, bottom).
Fig. 3. A bipedal humanoid robot Hoap-2 (25 cm (W) × 16 cm (L) × 50 cm (H), Fujitsu Automation Co., Ltd.) with a camera in its head is modeled in simulation and asked to reach the goal (white square) in a 4.6 × 4.6 m² walled arena containing black, cylindrical obstacles. The location and orientation of the robot as well as the position of each obstacle are randomly initialized at the beginning of each trial. The environment and robot are simulated with the Webots™ simulator (http://www.cyberbotics.com).
The neural network is an extended version of the original architecture shown in Fig. 1. It has been incrementally developed based on our previous investigations [5,9,10,11]. It is characterized by a feedforward architecture with evolvable thresholds and discrete-time, fully recurrent connections at the output layer (Fig. 4). A set of visual neurons, arranged on a grid, with non-overlapping receptive fields receives information about the gray level of the corresponding pixels in the image provided by the camera on the robot. The size of the receptive field of each unit can be dynamically changed at every time step according to the
Fig. 4. The neural architecture which controls the humanoid robot in the goal-oriented walking task. This architecture is an extended version of the original architecture shown in Fig. 1. It is composed of: a) A grid of visual neurons with non-overlapping receptive fields whose activation is given by the grey level of the corresponding pixels in the image; b) A set of proprioceptive neurons that provide information about the movement of the head camera with respect to the upper torso of the robot; c) A set of output neurons that determine at each sensory motor cycle the filtering used by visual neurons, the zooming factor, the new pan and tilt speeds of the head camera, and the walking direction and speed of the robot; d) A set of memory units whose outgoing connection strengths are equivalent to recurrent connections among output units; and e) A bias neuron whose outgoing connection weights represent the thresholds of the output neurons.
output value of the “Zoom” neuron. We can think of the total area spanned by all receptive fields as the surface of an artificial retina whose size changes within the interval [60 × 60, 240 × 240] pixels. The activation of a visual neuron, scaled between 0 and 1, is given by the average gray level of all pixels spanned by its own receptive field (averaging filter) or by the gray level of a single pixel located at the center of the receptive field (sampling filter). The choice between these two activation methods, or filtering strategies, can be dynamically changed at each time step according to the output value of the “Filter” neuron. Two proprioceptive units provide input information about the measured horizontal (pan) and vertical (tilt) angles of the head camera. These values lie in the intervals [−60, 60] and [−25, 25] degrees for pan and tilt, respectively. Each value is scaled to the interval [−1, 1] so that activation 0 corresponds to 0 degrees (camera pointing forward parallel to the floor). A set of memory units stores the values of the output neurons at the previous sensory motor cycle and sends them back to the output units through a set of connections, which effectively act as recurrent connections among output units [12]. The bias unit has a constant value of −1 and its outgoing connections represent the adaptive thresholds of the output neurons [13].
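As an illustration of the visual input stage just described, the following sketch computes the visual-neuron activations for a given gaze position. The retina interval [60 × 60, 240 × 240] and the two filtering strategies follow the text, whereas the grid size, the image format, and the clipping at the image border are assumptions made for the example:

```python
import numpy as np

def retina_activations(image, gaze_x, gaze_y, zoom, sampling, grid=5):
    """Activations of a grid x grid array of non-overlapping receptive fields,
    scaled to [0, 1]. zoom in [0, 1] scales the retina side between 60 and
    240 pixels; image is a 2-D array of 8-bit gray values."""
    side = int(60 + zoom * (240 - 60))           # retina side length in pixels
    rf = side // grid                            # receptive-field side length
    x0 = int(np.clip(gaze_x - side // 2, 0, image.shape[1] - side))
    y0 = int(np.clip(gaze_y - side // 2, 0, image.shape[0] - side))
    acts = np.empty((grid, grid))
    for i in range(grid):
        for j in range(grid):
            patch = image[y0 + i*rf : y0 + (i+1)*rf, x0 + j*rf : x0 + (j+1)*rf]
            if sampling:                         # sampling filter: centre pixel
                acts[i, j] = patch[rf // 2, rf // 2] / 255.0
            else:                                # averaging filter: mean grey
                acts[i, j] = patch.mean() / 255.0
    return acts.ravel()
```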
Output neurons use the sigmoid activation function f(x) = 1/(1 + exp(−x)), where x is the weighted sum of all inputs. Output neurons encode the motor commands of the active vision system and of the robot for each sensory motor cycle. One neuron determines the filtering strategy used to set the activation values of the visual neurons for the next sensory motor cycle. Two neurons control the movement of the camera, encoded as horizontal and vertical speeds relative to the current position. The remaining two neurons encode the walking direction and speed of the robot. Activation values of the direction neuron above and below 0.5 stand for left and right turning, respectively. Activation values of the speed neuron above and below 0.5 stand for forward and backward walking speed, respectively. Our interest in this experiment did not reside in the algorithm of bipedal walking itself, but rather in the visuo-motor coordination in the face of the large movements of the body and head while walking. Therefore the neural architecture was designed to control the macroscopic behavior of the robot, such as walking straight and turning left/right, as well as the pan/tilt camera movement. Stable walking was instead realized by loading and following joint trajectories which had been pre-calculated by the Zero Moment Point walking algorithm [14]. Evolution of stable walking has been studied in [15] with a significantly simplified model of a bipedal robot.
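One sensory-motor cycle of the controller can then be sketched as follows. For brevity the network, including any hidden layer, is collapsed into a single weight matrix W, and the mapping of the pan/tilt outputs to signed camera speeds is an assumption; only the 0.5 thresholds for direction and speed are taken from the text:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def controller_step(W, visual, pan_deg, tilt_deg, prev_out):
    """One sensory-motor cycle. Inputs: visual neurons, two proprioceptive
    units (pan/tilt angles scaled to [-1, 1]), memory units holding the
    previous outputs, and a bias unit fixed at -1."""
    proprio = np.array([pan_deg / 60.0, tilt_deg / 25.0])
    x = np.concatenate([visual, proprio, prev_out, [-1.0]])
    out = sigmoid(W @ x)      # six outputs: filter, zoom, pan, tilt, dir, speed
    filt, zoom, pan, tilt, direction, speed = out
    command = {
        'sampling_filter': filt > 0.5,           # filtering strategy
        'zoom': zoom,                            # retina size factor in [0, 1]
        'pan_speed': 2.0 * pan - 1.0,            # signed camera speeds (assumed)
        'tilt_speed': 2.0 * tilt - 1.0,
        'turn': 'left' if direction > 0.5 else 'right',
        'walk': 'forward' if speed > 0.5 else 'backward',
    }
    return out, command       # out is stored in the memory units for next cycle
```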
3 Evolution of Neural Controllers of Hoap-2 Humanoid Robot
The neural network has 147 evolvable connections that are individually encoded on five bits in the genetic string (total length = 735 bits). A population of 50 genomes is randomly initialized by the computer. Each individual genome is then decoded into the connection weights of the neural network and tested on the robot while its fitness is computed. The best 20% of the population (those with the highest fitness values) are reproduced, while the remaining 80% are discarded. An equal number of copies of each selected individual is made to create a new population of the same size. The new genomes are randomly paired, crossed over with probability 0.1 per pair, and mutated with probability 0.01 per bit. Crossover consists of swapping genetic material between two strings around a randomly chosen point. Mutation consists of toggling the value of a bit. Finally, two copies of the best genome of the previous generation are inserted into the new population at the places of randomly chosen genomes (elitism) in order to improve the stability of the evolutionary process. The fitness function was designed to select robots for their ability to arrive at the goal at the end of each trial. Each individual is tested for four trials, each trial lasting for 200 sensory motor cycles. A trial can be truncated earlier if the operating system detects an imminent collision with the obstacles and walls. For more details of the fitness function, see the appendix. At the beginning of each trial the position and orientation of the robot as well as the position of each obstacle are randomized. We performed five evolutionary runs, each starting
Fig. 5. Evolution of the performance of neural controllers for the Hoap-2 humanoid robot in the goal-oriented navigation task. The thick line shows the fitness values of the best individuals, while the thin line shows the average fitness value of the population. Each data point is the average of five evolutionary runs with different random initializations of the population. Vertical bars show standard error.
Fig. 6. Behavior of the best evolved robot. Top: The dotted line shows a trajectory of the best evolved robot in the arena. Black disks represent obstacles. Bottom: Active camera movement of the robot during navigation.
Fig. 7. Evolution without active vision. The robot was asked to perform the same task without camera movement. The thick line shows the fitness values of the best individuals, while the thin line shows the average fitness value of the population. Each data point is the average of five evolutionary runs with different random initializations of the population. Vertical bars show standard error.
with a randomly initialized population. In all cases the fitness reached stable values in less than 120 generations (Fig. 5), which corresponded to collision-free trajectories toward the goal. Notice that the fitness can never be one because the robot is stopped as soon as it touches the goal beacon on the wall. The behavioral strategy observed in the evolved robots differed across the evolutionary runs. However, there was a common characteristic in those strategies. Evolved robots accomplished the task by means of active horizontal camera movements in order to detect not only the goal beacon but also potential obstacles during the navigation (Fig. 6)¹. At the beginning of each trial the robot turned on the spot until it found the goal beacon. As soon as the robot found the goal, it started moving forward to the goal while scanning the beacon with the retina. Active horizontal camera movements allowed the robot to detect and avoid obstacles on the way to the goal. Evolved robots which were not allowed to move their gaze direction instead scored lower fitness values because they often collided with obstacles which were out of sight (Fig. 7).
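For concreteness, the evolutionary procedure described at the beginning of this section can be sketched as follows; the decoding range of the five-bit weights is an assumption, and the fitness evaluation (four trials per genome, Eq. 1 in the appendix) is left abstract:

```python
import random

N_WEIGHTS, BITS = 147, 5        # genome: 147 connections x 5 bits = 735 bits
POP, P_CROSS, P_MUT = 50, 0.1, 0.01

def decode(genome):
    """Map each 5-bit gene to a connection weight (the weight range is assumed)."""
    weights = []
    for i in range(N_WEIGHTS):
        v = int(''.join(map(str, genome[i*BITS:(i+1)*BITS])), 2)
        weights.append(v / (2**BITS - 1) * 8.0 - 4.0)    # e.g. [-4, 4]
    return weights

def next_generation(population, fitnesses):
    """Truncation selection (best 20%), one-point crossover per random pair,
    bit-flip mutation, and elitism with two copies of the previous best."""
    ranked = [g for _, g in sorted(zip(fitnesses, population),
                                   key=lambda t: t[0], reverse=True)]
    parents = ranked[:POP // 5]                          # best 20% survive
    new = [list(p) for p in parents for _ in range(POP // len(parents))]
    random.shuffle(new)                                  # random pairing
    for i in range(0, POP - 1, 2):
        if random.random() < P_CROSS:                    # one-point crossover
            cut = random.randrange(1, N_WEIGHTS * BITS)
            new[i][cut:], new[i+1][cut:] = new[i+1][cut:], new[i][cut:]
    for g in new:                                        # bit-flip mutation
        for j in range(len(g)):
            if random.random() < P_MUT:
                g[j] ^= 1
    for slot in random.sample(range(POP), 2):            # elitism
        new[slot] = list(ranked[0])
    return new
```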
4 Discussion
We have described an experiment where co-evolution of active vision and feature selection is applied to a goal-oriented humanoid bipedal walking task. The underlying hypothesis is that simple nervous systems may solve a variety of complex
¹ A video clip showing the goal-oriented behavior of the best evolved bipedal robot is available at: http://lis.epfl.ch/research/projects/ActiveVision/videos/Humanoid2.avi
visual tasks as long as they are generated by a co-evolutionary process of active vision and feature selection. We have shown that evolved humanoid robots successfully perform the goal-oriented navigation by means of active, horizontal camera movements that enable them to detect the goal beacon and potential obstacles. Initially we predicted that evolved robots might move their camera so as to compensate for the movement of the upper torso such that the visual input becomes stable during walking. Indeed, we observed such compensatory camera movements in early stages of the evolutionary runs, but they all disappeared in later generations, suggesting that compensatory camera movements were not necessary to solve the present navigation task. Instead the best evolved robots actively moved their gaze horizontally to detect potential obstacles while scanning the goal beacon. We plan to further explore the behavioral and environmental conditions under which compensatory eye movements provide a significant functional advantage. In the experiment presented here the artificial retina was fixed with respect to the head, at the center of the camera image. A series of experiments not reported in this chapter was conducted in order to assess the role of independent camera and retina movement. The results of these experiments suggest that retina movement does not provide a significant advantage for the solution of this task. This could be because the head camera movement was sufficient to solve the task. Adding one more degree of freedom for the retina might instead increase the dimensionality of the search space while providing little functional advantage. We plan to conduct further experiments to determine the limits of the complexity of the tasks that can be solved with camera movement alone, and to investigate whether the additional retina movement can be useful in such cases. A limitation of our approach comes from the fact that the neural architecture, or topology of the neural network, must be carefully designed for each task, although we deliberately kept the architecture as simple as possible in the present experiments. A promising research direction could therefore be to evolve the neural architecture together with the synaptic weights. This approach would be mandatory in applications where it is not known a priori what type of neural architecture could solve the assigned task. Several promising algorithms have already been proposed for the joint evolution of the neural architecture and of its synaptic weights [16,17]. In order to study how active vision copes with the walking dynamics, we realized the bipedal walking by loading and following joint trajectories which had been pre-calculated by the Zero Moment Point walking algorithm [14]. As a next step it would be interesting to coevolve active vision and bipedal walking within the same time scale. A possible approach is the combination of the kind of central pattern generators evolved by [15] with our active vision system.
5 Conclusion
In this chapter, coevolution of active vision and feature selection was studied in application to a goal-oriented humanoid bipedal walking task. Evolved humanoid robots successfully coped with the walking dynamics and accomplished the task by means of active head camera control.
We have focused only on goal-oriented navigation so far. However, humanoid robots can also be used to study more cognitive, complex tasks where, for instance, whole-body coordination is crucial. Future work may address the hand-eye coordination of humanoid robots, and the coevolution of active vision and bipedal walking within the same time scale.
Acknowledgments. The robotic simulation and data analysis described in this chapter were carried out while Mototaka Suzuki and Tommaso Gritti were at the Laboratory of Intelligent Systems headed by Dario Floreano, Ecole Polytechnique Fédérale de Lausanne (EPFL). Thanks to Claudio Mattiussi for enhancing the readability of this chapter. Thanks also go to two anonymous reviewers for their constructive comments on an earlier version of this chapter.
References

1. Aloimonos, J., Weiss, I., Bandopadhay, A.: Active vision. International Journal of Computer Vision 1(4), 333–356 (1987)
2. Bajcsy, R.: Active perception. Proceedings of the IEEE 76, 966–1005 (1988)
3. Ballard, D.H.: Animate vision. Artificial Intelligence 48(1), 57–86 (1991)
4. Hancock, P.J.B., Baddeley, R.J., Smith, L.S.: The principal components of natural images. Network 3, 61–70 (1992)
5. Floreano, D., Kato, T., Marocco, D., Sauser, E.: Coevolution of active vision and feature selection. Biological Cybernetics 90(3), 218–228 (2004)
6. Kagami, S., Nishiwaki, K., Kuffner, J., Kuniyoshi, Y., Inaba, M., Inoue, H.: Online 3D vision, motion planning and bipedal locomotion control. In: Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 2557–2562 (2002)
7. Seara, J.F., Schmidt, G.: Intelligent gaze control for vision-guided humanoid walking: methodological aspects. Robotics and Autonomous Systems 48(4), 231–248 (2004)
8. Sabe, K., Fukuchi, M., Gutmann, J.S., Ohashi, T., Kawamoto, K., Yoshigahara, T.: Obstacle avoidance and path planning for humanoid robots using stereo vision. In: Proceedings of the IEEE International Conference on Robotics and Automation, pp. 592–597 (2004)
9. Suzuki, M., Floreano, D., Di Paolo, E.A.: The contribution of active body movement to visual development in evolutionary robots. Neural Networks 18(5/6), 656–665 (2005)
10. Floreano, D., Suzuki, M., Mattiussi, C.: Active vision and receptive field development in evolutionary robots. Evolutionary Computation 13(4), 527–544 (2005)
11. Suzuki, M., Floreano, D.: Evolutionary active vision toward three dimensional landmark-navigation. In: Nolfi, S., Baldassarre, G., Calabretta, R., Hallam, J.C.T., Marocco, D., Meyer, J.-A., Miglino, O., Parisi, D. (eds.) SAB 2006. LNCS, vol. 4095, pp. 263–273. Springer, Heidelberg (2006)
12. Elman, J.: Finding structure in time. Cognitive Science 14, 179–211 (1990)
13. Hinton, G.E., Sejnowski, T.J.: Unsupervised Learning. The MIT Press, Cambridge (1999)
14. Takanishi, A., Takeya, T., Karaki, H., Kumeta, M., Kato, I.: A control method for dynamic walking under unknown external force. In: Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Tsuchiura, Japan, pp. 795–801 (1990)
15. Reil, T., Husbands, P.: Evolution of central pattern generators for bipedal walking in a real-time physics environment. IEEE Transactions on Evolutionary Computation 6(2), 159–168 (2002)
16. Stanley, K.O., Miikkulainen, R.: Evolving neural networks through augmenting topologies. Evolutionary Computation 10(2), 99–127 (2002)
17. Mattiussi, C., Floreano, D.: Analog genetic encoding for the evolution of circuits and networks. IEEE Transactions on Evolutionary Computation 11(5), 596–607 (2007)
Appendix: Fitness Function

The fitness criterion F is intended to select robots for their ability to arrive at the goal position at the end of each trial. More precisely, F is defined as follows:

F = \frac{1}{E} \sum_{i=1}^{E} f_i, \qquad f_i = \begin{cases} 1 - d/D_{\max} & \text{if } d < D_{\max} \\ 0 & \text{otherwise} \end{cases} \qquad (1)
where E is the number of trials (four in these experiments); d is the distance between the end position of the robot and the goal position; Dmax is the maximum value of d (four meters in these experiments).
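As a minimal sketch, Eq. (1) transcribes directly into code:

```python
def fitness(end_distances, d_max=4.0):
    """Eq. (1): mean over trials of f_i = 1 - d/Dmax if d < Dmax, else 0."""
    f = [1.0 - d / d_max if d < d_max else 0.0 for d in end_distances]
    return sum(f) / len(f)

# Example: end distances (in meters) over E = 4 trials
print(fitness([0.0, 1.0, 2.0, 5.0]))   # (1 + 0.75 + 0.5 + 0) / 4 = 0.5625
```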
Cognitive Adequacy in Brain-Like Intelligence

Christoph S. Herrmann¹ and Frank W. Ohl²

¹ Otto-von-Guericke-University Magdeburg, Institute for Psychology, Biological Psychology Lab, PO Box 4120, 39016 Magdeburg, Germany. [email protected]
² Leibniz Institute for Neurobiology, Brenneckestrasse 6, 39118 Magdeburg, Germany. [email protected]

Abstract. A variety of disciplines have dealt with the design of intelligent algorithms – among them Artificial Intelligence and Robotics. While some approaches were very successful and have yielded promising results, others have failed to do so, which was — at least partly — due to inadequate architectures and algorithms that were not suited to mimic the behavior of biological intelligence. Therefore, in recent years, a quest for ”brain-like” intelligence has arisen. Soft- and hardware are supposed to behave like biological brains — ideally like the human brain. This raises the questions of what exactly defines the attribute ”brain-like”, how the attribute can be implemented, and how it can be tested. This chapter suggests the concept of cognitive adequacy in order to get a rough estimate of how ”brain-like” an algorithm behaves.
1 Artificially Intelligent Systems
The field of Artificial Intelligence (AI) was born at a conference at Dartmouth College in 1956. The participants of that conference later became the leaders of the newborn field – among them names like John McCarthy, Marvin Minsky, Allen Newell, and Herbert Simon. Numerous AI laboratories were founded and AI research projects funded. The early AI researchers were extremely optimistic about the future of AI and formulated challenging expectations: In 1965, Herbert Simon wrote: ”machines will be capable, within twenty years, of doing any work a man can do” [1]. The following decades revealed that these predictions were too optimistic. AI did not achieve its ambitious goals and the first AI winter started [2]. The philosopher Hubert Dreyfus argued that the approach of AI was misguided and that, due to specific differences between computers and human beings, the ambitious goals would never be achieved by computers [3]. One of the problems for AI at
that time was the hardware on which it was supposed to be implemented. The von Neumann architecture of computers, which we still have in our desktop PCs today, works strictly serially, and one main processor is responsible for all aspects of computation. This is completely different in biological brains. Many specialized brain areas process different aspects of information simultaneously, which seems to be one of their advantages over computers. In each of these brain areas, a large number of neurons work as simple processing units. Later approaches within AI have tried to take this discrepancy into account and have adapted certain aspects of biological information processing. Of these approaches, especially artificial neural networks (ANNs) have proven very efficient [4]. The idea of ANNs is to simulate the behavior of a single neuron, which is thought to be rather simple as compared to a microprocessor, and to gain processing power by combining many of these simple processing elements in a parallel fashion. The success of ANNs resulted in the notion that the architecture of AI — be it soft- or hardware — needs to be more biologically plausible in order to yield intelligent behavior. This and other approaches have led to the desire to develop ’brain-like’ intelligence in computers.
2 What Is Brain-Like Intelligence?
When trying to achieve this goal, one is confronted with the question: ‘What does the attribute ’brain-like’ stand for?’ Certainly, one could consider something ’brain-like’ if its outward appearance resembled that of a biological brain. However, this is not what we desire for an artificially intelligent system. Instead, we want our intelligent system to reveal information processing features of biological brains – ideally of the human brain. The following is an incomplete list of desired features:

2.1 Perception and Action
Brains have evolved over millions of years during an interaction with their environment. Random mutations occurred in the genes which code for the proteins from which neural tissue is built, resulting in different degrees of being able to cope with the challenges of life. Those mutations which resulted in processes that were better adapted to the environment survived. During evolution, even simple single-cell living creatures (e.g. ciliates) have developed mechanisms for perceiving certain aspects of their environment and for acting within it. Ciliates are able to detect chemical transients and to move their hairs (cilia). Human and animal brains are extremely effective in perceiving sensory input from multiple modalities, i.e. visual, auditory, somatosensory, olfactory, and gustatory. At the same time they have developed effector systems to act upon their environment, i.e. the motor system in general as well as very specialized and highly automatic motor functions of locomotion and speech. At first glance, one might think that perception and action are low-level functions of biological brains that do not relate to intelligence. The opposite may,
however, be true: Perception and action have clearly adapted to the environment in which an animal lives. It has repeatedly been argued that perception and action cannot be regarded separately from each other [5]. The term action-perception cycle has been coined to denote this fact [6]. Perception and action have each evolved as a function of the other and must be considered together when either understanding or reproducing them. The intelligent information processing that we are interested in is integrated into this action-perception cycle and thus strongly influenced by it. If we were to have completely different senses and effectors, we would probably also have evolved different intellectual abilities. This notion of a strong interaction between perception, action, and intelligence has led to the development of embodied artificial intelligence [7].

2.2 Learning and Memory
One of the key features of animals and humans is their ability to learn. In fact, brains learn during their whole lifetime, mostly without much effort. However, human memory is quite different from typical computer memory. A hard disk or RAM chip can be empty, and both have a limited storage capacity. In the human brain, knowledge representation and learning are intimately interwoven. The distributed representation together with a strong interconnection of its elements (neurons) allows flexible learning. We can learn new knowledge items within seconds, as in the case of remembering a phone number from reading it in the phone book until dialing it. However, such contents of short term memory are also quickly forgotten if they are not transferred into long term memory by a process called consolidation. Memory is stored in the synaptic connections of neurons, and the interaction of several brain systems mediates consolidation [8]. Even early stages of sensory processing are susceptible to learning. The so-called edge detectors in mammalian visual cortex, for example, develop during a critical period in infancy only if edges are actually perceived in the visual environment. This demonstrates how closely processing and storage of information are linked in the neural representation. Even more cognitive processes, like concept formation and category learning, are based on neural mechanisms that coexist with those of elementary stimulus processing in primary sensory areas [9].

2.3 Focussing
The amount of information which enters a biological brain at any given point in time exceeds the capacity of information which can be processed or stored. Thus, mechanisms are required to focus on those subsets of the data which are relevant. Focussing can happen at early stages of input processing via attentional mechanisms that can operate automatically or can be driven in a voluntary fashion [10]. However, the above-mentioned consolidation of memory can also be seen as focussing on relevant information. Only data which has been attributed high emotional salience or behavioral relevance, or which has been repeated many times, will be consolidated. Also for the process of planning an action, similar mechanisms are needed to focus on one out of many possible alternative actions at
any given time. If artificially intelligent systems process large amounts of data, similar mechanisms of focussing are required.

2.4 Motivation
Biological brains do not perceive and act in order to perceive or act, but in order to survive and to reproduce. They have motivational systems with drives that are genetically determined and controlled by the current behavioral context. Two essential motivations are to avoid pain and to increase pleasure. Drives resulting from these motivations are, for example, hunger, thirst, and sexual drive. Consequently, motivation has a profound influence on perception and action. If, for example, a strategy does not lead to the desired goal, e.g. reducing hunger, a biological brain will eventually switch the strategy. Thus, subgoals can adaptively be changed depending on whether a drive is reduced or not. This is important also for technical systems, in order not to run into dead ends by sticking to one strategy [11].
3 Neurobionics
One approach to achieving ’brain-like’ behavior of technical systems is to more or less copy the functional anatomy or physiology of the biological brain where they are known. This has led to the field of neurobionics, which takes biological examples in order to build systems that come up with the same functionality [12]. One example are artificial neural networks, especially biologically plausible ones. As mentioned above, biological neural networks offer powerful learning and memory capacity. Using artificial neurons as building blocks for technical systems has resulted in astounding machines that can recognize patterns in a fraction of a second or can move robot arms with the precision of a human. Another example for the advantages of imitating the biological model is the field of neuroprosthetics [13]. For example, cochlea implants resemble the way in which the human cochlea transforms auditory input into nerve impulses. In patients with damaged cochleas, artificial nerve impulses are generated from the auditory signal of a microphone, which are then used to stimulate the acoustic nerve. While this might be regarded as peripheral rather than intelligent processing, newer stimulation techniques try to stimulate the cortex directly [14]. This, of course, requires an understanding not only of peripheral processing and coding in the cochlea but also in cortex, which seems to be fundamentally different. If a technical system is derived from a biological model like the brain, it will most probably also behave very much like a biological brain. Thus, one approach to implementing ’brain-like’ intelligence is to copy the biological model. However, for many aspects of intelligent behavior or information processing we do not yet know which anatomical or physiological mechanisms are necessary. In such cases, it may help to observe the behavior of the technical system and to compare it to its biological model. If a technical system behaves in a way very similar to the biological brain, it is plausible to assume that it might also work based upon the same mechanisms. This leads to the concept of cognitive adequacy.
4 Cognitive Adequacy
Here, we want to introduce a new idea for the achievement of ’brain-like’ intelligence based on cognitive adequacy, which compares the behavior of the intelligent computer system to that of the biological system. The concept of cognitive adequacy in AI came up after Shastri and Ajjanagadde formulated the paradox of AI, describing the discrepancy in performance of AI systems, which do not require more processing time to process hard problems as compared to easy ones [15]. Consider the task of determining whether or not a picture is symmetric with respect to the vertical meridian. For a human observer, it will take longer to detect the asymmetry in a picture which is almost symmetric except for a few details as compared to a completely asymmetric picture. A simple computer algorithm which counts pixels to the left and to the right of the meridian would always yield the result after the same processing time and would thus be considered inadequate. The field of psychophysics has gathered a wealth of data on human reaction times from various cognitive tasks. Some authors have used such reaction times in order to draw conclusions about the way in which knowledge is represented in the human brain. For example, psychological experiments revealed that logical theorems can be clustered into different classes of difficulty according to the human performance in solving them [16]. Based on these and other results, adequacy has been defined in AI for deduction methods, stating that, roughly speaking, a method performs adequately if, for any given knowledge base, the method solves simpler problems faster than more difficult ones [17]. Thereupon, various AI researchers have developed adequate AI approaches [18] and stated that adequacy implies massive parallelism [19]. This has led to the idea of introducing cognitive adequacy also into the field of ANNs [20].
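The symmetry example can be made concrete. A self-terminating scan of mirror-pixel pairs, in contrast to a fixed-cost pixel-counting algorithm, yields a comparison count that grows with the degree of symmetry of the input, a crude but cognitively more adequate proxy for processing time. The binary image format and the scan order are illustrative assumptions:

```python
import numpy as np

def symmetry_decision(image):
    """Self-terminating check of mirror symmetry about the vertical meridian.
    Returns (is_symmetric, n_comparisons); the comparison count serves as a
    crude proxy for processing time."""
    h, w = image.shape
    comparisons = 0
    for y in range(h):
        for x in range(w // 2):
            comparisons += 1
            if image[y, x] != image[y, w - 1 - x]:
                return False, comparisons      # first mismatch ends the search
    return True, comparisons

# An almost symmetric image is rejected late (slow 'no'), a grossly
# asymmetric one early (fast 'no'); a naive pixel-counting check would
# always cost h * (w // 2) comparisons, independent of the input.
```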
5 Cognitive Adequacy and ’Brain-Like’ Intelligence
Here, we want to suggest using the concept of cognitive adequacy in order to determine whether an algorithm displays ’brain-like’ intelligence. Following the above concepts of cognitive adequacy, we would require the following: An artificially intelligent system or algorithm performs cognitively adequately if it reveals the same relative performance measures as a biological system solving the same task. Since performance is hard to measure on an absolute scale, relative measures are applied in cognitive science, i.e. at least two measures are recorded and compared. For example, one could record the reaction time which is needed by a human observer in order to correctly classify two letters as same or different, either based on physical identity (A = A, but A ≠ a) or phonetic identity (A = A and A = a). The two measures would then be the two reaction times for the physical identity task and the phonetic identity task. It has been demonstrated that it takes longer to determine phonetic identity than physical identity [21].
Psychologists use such findings to infer possible knowledge representations for letters in the human brain. For the above example it has been concluded that, at first, a visual representation is built up and only subsequently a more abstract phonetic representation is established. Thus, physical identity can already be determined from the visual representation which is built up first — leading to shorter reaction times.

5.1 Measures of Cognitive Adequacy
Since we want to determine cognitive adequacy behaviorally rather than physiologically, we suggest using behavioral measures. Among these, the most commonly used are reaction times and error rates, which can be found in the cognitive science literature for a wide variety of tasks, such that the computer scientist or engineer involved in creating ’brain-like’ intelligence is not required to perform behavioral experiments. Good starting points for the search of psychophysical data are a chapter on ’The methods of cognitive neuroscience’ by [22] or one of two books [23] and [21].

Reaction Times. As introduced above, reaction times are commonly recorded for at least two cognitive experimental conditions [24]. Since there is substantial variability in reaction times, they are usually averaged across multiple trials in each individual. The experiment is carried out in many individuals such that a mean reaction time as well as a standard deviation can be computed. Of course, two averaged reaction times will hardly ever be absolutely identical. However, slight differences which go in one or the other direction depending on the individual are not indicative of different degrees of processing. In order to say that the reaction times of two experimental conditions are significantly different, statistical measures are applied. For cognitively adequate algorithms we would require that they take more time to process those conditions for which humans also need more processing time. Importantly, only the difference or ratio but not the absolute value is important. An example of visual target detection can illustrate this point. In a psychophysical experiment, human subjects were asked to detect a target image out of four different images which were composed of two features [25]. One feature was the number of inducing elements (three versus four, cf. Fig. 1) while the other feature was the orientation of the corners of the inducing elements (inside versus outside).¹ In task 1, the Kanizsa square served as target and required a response with the right hand. The other three images were considered non-targets and required responses with the left hand. Of these non-target images, two shared one feature with the target and one shared no feature.
¹ Note that inside orientation leads to the perception of illusory figures — so-called Kanizsa figures. However, even though we used Kanizsa figures, we will not consider the processing of such illusory figures but whether or not the figures served as targets in the paradigm.
Fig. 1. Reaction times in response to four different images. Left: In task 1, the Kanizsa square required the longest reaction times, since it was the target of the task. Responses to the non-Kanizsa triangle were fastest, since it is dissimilar to the target in both stimulus features. Right: In task 2, the non-Kanizsa square required the longest reaction time, since this image was now the target.
The reaction times reveal that subjects took longer to identify the target than any other image. The image which shares no features with the target is processed fastest, while the two images that share one feature require intermediate reaction times. This indicates a certain mechanism of target detection in human visual cortex. The images are not compared to a memorized template of the target as a whole; rather, two comparison processes seem to be operating separately for the two features. As soon as one of the two comparison processes detects a difference between the presented image and the template of the target, a non-target response can be given. However, only if both comparisons yield positive results can the target be considered detected — which takes the longest processing time. For a technical system that is to perform such a target detection task with ’brain-like’ intelligence, we would require it to show the same pattern of processing times, which depends upon the task more than on the stimuli, hoping to gain the flexibility of the system to adapt to the behavioral requirements. In a second task with the same stimuli [26], subjects viewed the same images but were required to identify the non-Kanizsa square as the target (cf. Fig. 1, right). Now — despite identical stimulation — the pattern of reaction times changed with respect to the stimuli. However, it remained the same with respect to the behavioral relevance of the stimuli. Again, the target required the longest reaction time and the most dissimilar image was processed fastest. Thus, the pattern of reaction times must be attributed to mechanisms of high-level information processing, not to low-level stimulus processing. Psychologists speak of top-down processes when processing depends on internal mechanisms of information processing and of bottom-up processes when it depends on attributes of the stimuli.
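A minimal simulation of this account reproduces the observed ordering of reaction times: two noisy feature comparisons race in parallel, a non-target response is released by the first detected mismatch, and a target response must wait for both comparisons to finish. All latency parameters are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def detection_rt(stimulus, target, mu=90.0, sd=10.0):
    """Two independent feature comparisons with noisy finishing times (ms).
    A mismatch on either feature releases the non-target response; the
    target response requires both comparisons to complete (exhaustive)."""
    t = rng.normal(mu, sd, size=2)               # per-feature comparison times
    mismatch = [stimulus[i] != target[i] for i in (0, 1)]
    if any(mismatch):
        return min(t[i] for i in (0, 1) if mismatch[i])
    return float(t.max())                        # both features match: target

# Stimuli coded as (number of inducers, corner orientation)
target = (4, 'inside')                           # the Kanizsa square of task 1
for stim in [(4, 'inside'), (4, 'outside'), (3, 'inside'), (3, 'outside')]:
    rts = [detection_rt(stim, target) for _ in range(5000)]
    print(stim, round(float(np.mean(rts)), 1))
# The target is slowest, the image sharing no feature is fastest, and the
# images sharing one feature fall in between, as in Fig. 1 (task 1).
```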
The fact that human information processing shows such top-down processes demonstrates the flexibility of the human brain. Despite identical stimulation, the behavior is easily adapted to the modified task requirements between tasks 1 and 2. Neural correlates of such task-dependence are already traceable in primary sensory areas of living brains [27].

Error Rates. In a similar vein, error rates can be applied to find out interesting details about human information processing. In the so-called Stroop task [28], humans are asked to name the color in which a word is presented to them, irrespective of the word itself. This poses no problem as long as the word is not associated with a color. As soon as the word represents the name of a color, the task becomes challenging. If the word meaning and the color are congruent (the word ’red’ written in red), human subjects make as few errors as with non-color words. If, however, the word meaning and the color are incongruent (the word ’red’ written in blue), subjects start to erroneously say the word meaning instead of the color which the word is written in. For the example in brackets, they would say ’red’, which is the word meaning, instead of saying ’blue’, which is the color the word is written in. Subsequently, many will correct themselves: ’Oops, I mean blue’ after they become aware of their mistake. Psychologists have used such findings to infer processing mechanisms of the brain. For the above example, it is assumed that the meaning of words is processed in a highly automated fashion even if this is not required for the task at hand. In addition, for the described naming task the outcome of the task-irrelevant, automatic word processing seems to be available first. The task-relevant generation of the word for the color in which the presented word was written only becomes available later. For cognitively adequate algorithms we would require that they are more likely to produce an error in conditions in which humans are also more likely to perform erroneously. Again, only the relative but not the absolute values are important. As an example, let us consider another case of perception. In an experiment, one of three colored disks at a time was presented to human participants [29]. The disks were presented in red, light green, and dark green. In an easy task, subjects had to respond with their right hand when they perceived the red disk and with their left hand when they perceived either of the two green disks. In a harder task, they had to respond with their right hand when perceiving the light green disk and with their left hand in case of the other two disks. The error rates differed significantly between the two tasks — even though identical stimuli were used in both tasks. Thus, the differing error rates cannot result from physical stimulus differences but from differential processing in the two tasks. In the easy task, similar physical stimulus features were assigned to the same response hand (right: red, left: both types of green). In the hard task, two different hands were required for the two similar stimuli (right: light green, left: dark green). This implies that a task-irrelevant feature was automatically processed — even though it resulted in suboptimal performance. This feature
Fig. 2. Error rates in response to three different images. In the easy task, similar colors required responses with the same hand. In the hard task, similar colors required responses with different hands. Despite identical stimuli, the behavioral requirements of the tasks can influence the pattern of error rates.
was the similarity of the colors. For technical systems supposed to discriminate colors with ’brain-like’ intelligence, cognitive adequacy would thus require that similarity be automatically processed — even if it leads to processing errors. However, the processing of similarity, which can lead to errors, represents an important feature of human color processing. If a color is perceived which is similar to a color X, the system is more likely to produce the response associated with X, which in real life avoids errors due to changing illumination [30].

Perception Measures. Reaction times and error rates are not the only measures that may be taken into account. The outcome of a processing step may serve as a measure as well. For example, in the case of a visual perception task, a human observer may perceive a visual illusion or ambiguity [31]. For cognitively adequate algorithms, we would require that the results of their perceptual processing can show illusory or ambiguous percepts similar to those in human perception. Why would an engineer who builds a ’brain-like’ intelligent system want to come up with an ambiguous percept or an illusion rather than one definite percept out of multiple possibilities? Let us again consider a visual example. Fig. 3 (left) shows the so-called Necker cube. When we perceive such ambiguous stimuli, our perception alternates between multiple percepts. In the case of the Necker cube, one can see either interpretation 1 (Fig. 3, center) or interpretation 2 (Fig. 3, right), and
Fig. 3. The ambiguous Necker cube and two alternative perceptions of it
Fig. 4. Dynamical-systems interpretation of perception. The state space is defined by two arbitrary state variables. Left: multistable perception of ambiguous figures is represented by a chaotic attractor. Right: when an unambiguous visual scene is perceived, the state trajectory approaches a limit cycle. Since the trajectory does not converge to a fixed point, it can be attracted by another limit cycle if the system is slightly perturbed.
our perception will alternate every few seconds while we watch the ambiguous stimulus. In an electrophysiological experiment [32], we were able to demonstrate that the percept of one of the two alternatives of an ambiguous pattern destabilizes over time, leading to the switch of perception. The alternative percept then destabilizes over time in turn, until perception switches back to the first percept. Typical pattern recognition algorithms do not show this dynamical behavior; rather, they converge more and more towards one final percept of a visual scene. However, the brain must be viewed as a dynamical system [33]. This brings along ambiguous percepts, but it also offers the possibility of finding alternative perceptions when the visual scene actually is ambiguous, which in real life is often the case [34].
Consider the trajectory of two arbitrary variables of visual perception in Fig. 4 (left), which represents a dynamical-systems view of ambiguous perception switching between two alternatives [35]. The trajectory approaches one of two quasi-periodic orbits of a chaotic attractor. After some time, the trajectory is spontaneously attracted by the other orbit. The case of a less ambiguous visual scene is illustrated in Fig. 4 (right). Here, the trajectory approaches a limit cycle and remains in its orbit. This makes it easier for perception to change to a different percept, represented by a different attractor, than if the trajectory had run into a fixed point where it would then remain. (The simulations were computed in MATLAB™ with the software that accompanies a book on neural modelling [36].) This important feature of visual perception can be inferred from ambiguous patterns and should be implemented in 'brain-like' systems.
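To make the dynamical-systems picture concrete, the following minimal sketch simulates perceptual alternation with two mutually inhibiting firing-rate units subject to slow adaptation, in the spirit of the models in [36]. All parameter values and function names are illustrative assumptions rather than the original simulation code; the alternation period depends mainly on the adaptation time constant.

```python
import numpy as np

def f(x, theta=0.2, k=0.1):
    """Sigmoidal firing-rate function (illustrative choice)."""
    return 1.0 / (1.0 + np.exp(-(x - theta) / k))

def simulate_rivalry(T=4000.0, dt=0.5, I=1.2, w_inh=2.0, g_adapt=3.0,
                     tau=10.0, tau_a=300.0):
    """Two percepts r1, r2 compete via mutual inhibition; slow adaptation
    (a1, a2) destabilizes the dominant percept, producing spontaneous
    alternation as described in the text."""
    n = int(T / dt)
    r1, r2, a1, a2 = 0.6, 0.0, 0.0, 0.0   # percept 1 starts dominant
    trace = np.zeros((n, 2))
    for i in range(n):
        dr1 = (-r1 + f(I - w_inh * r2 - g_adapt * a1)) / tau
        dr2 = (-r2 + f(I - w_inh * r1 - g_adapt * a2)) / tau
        a1 += dt * (r1 - a1) / tau_a
        a2 += dt * (r2 - a2) / tau_a
        r1 += dt * dr1
        r2 += dt * dr2
        trace[i] = (r1, r2)
    return trace

trace = simulate_rivalry()
dominant = (trace[:, 1] > trace[:, 0]).astype(int)
print("perceptual switches:", int(np.abs(np.diff(dominant)).sum()))
```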
6 Discussion
As we laid out above, we believe that cognitive adequacy is one strategy that could potentially lead to the creation of 'brain-like' intelligence when the relation between structure and function is not yet known. It is based on an assumption that has already been formulated in AI research: "If the program's input/output and timing behaviors match corresponding human behaviors, that is evidence that some of the program's mechanisms could also be operating in humans." [1]
6.1 Does Cognitive Adequacy Guarantee 'Brain-Like' Intelligence?
In this chapter, we have suggested using the concept of cognitive adequacy to test whether an artificially intelligent system behaves like a biologically intelligent system. In cognitive science and neuroscience this is a common test of whether a candidate mechanism might be the one underlying an observed behavior. Thus, it seems plausible to assume that a technical system that behaves in a cognitively adequate way exhibits 'brain-like' intelligence, in the sense that the underlying mechanism is probably similar to the biological one.
6.2 Criticism of Behavioral Tests
Of course, we can imagine that some would argue that the behavior of a system does not tell us anything about its internal processes. A similar argument was brought up against the Turing test as a means of determining the intelligence of an artificial or living system by observing only its behavior. Searle used his Chinese Room argument to suggest that a mere input/output analysis of a system is not adequate to determine the system's internal functional principles [37]. In the Chinese Room, a person who does not understand Chinese is handed Chinese characters and has to hand out answers written in Chinese characters. He
does so by following syntactic rules, without understanding the semantics of the questions and answers. Searle argues that although the input/output behavior of the Chinese Room is indistinguishable from that of a Chinese speaker, it cannot be concluded that the person in the room understands Chinese. Our approach of using cognitive adequacy to determine whether a system shows 'brain-like' intelligence may be compared to the Turing test. While we agree with Searle that the person in the Chinese Room does not understand the questions, we would nevertheless argue that the intelligence required to answer the questions is implicitly present, since the syntactic rules that guide the behavior of the person inside the room must have been formulated by someone who does understand Chinese. Thus, we maintain that it is plausible to use a behavior-based approach such as cognitive adequacy to test for 'brain-like' intelligence.
6.3 Levels of Analysis
One can analyze a system at various levels. In the Chinese Room scenario, behavior is analyzed at a very high cognitive level. Our approach of using cognitive adequacy can be applied at multiple levels of analysis: the suggested examples of reaction times and error rates already sit at a lower level than Searle's language level. It is plausible that the usefulness of our approach can be increased by adding intermediate levels of analysis. This, however, requires an initial understanding of the mechanism that is to be modeled in a 'brain-like' fashion.
Acknowledgements. We would like to acknowledge the support from the Bernstein group for computational neuroscience and the Center for Behavioral and Brain Sciences (CBBS), Magdeburg. Also, we thank Stefanie Thärig for designing the figures and Ingo Fründ for fruitful discussions.
References
1. Russell, S., Norvig, P.: Artificial Intelligence: A Modern Approach. Prentice-Hall, Englewood Cliffs (1995)
2. Crevier, D.: AI: The Tumultuous Search for Artificial Intelligence. Basic Books (1993)
3. Dreyfus, H.: What Computers Can't Do: The Limits of Artificial Intelligence. Harpercollins (1978)
4. Rumelhart, D.E., McClelland, J. (eds.): Parallel Distributed Processing: Explorations in the Microstructure of Cognition. MIT Press, Cambridge (1987)
5. Fuster, J.M.: Prefrontal cortex and the bridging of temporal gaps in the perception-action cycle. Ann. N. Y. Acad. Sci. 608, 318-329 (1990)
6. Merleau-Ponty, M.: The Structure of Behavior. Beacon (1942)
7. Iida, F., Pfeifer, R., Steels, L., Kuniyoshi, Y. (eds.): Embodied Artificial Intelligence. LNCS, vol. 3139. Springer, Heidelberg (2004)
8. Squire, L.R., Kandel, E.R.: Memory: From Mind to Molecules. Scientific American Library (1999)
9. Ohl, F.W., Scheich, H., Freeman, W.J.: Change in pattern of ongoing cortical activity with auditory category learning. Nature 412(6848), 733-736 (2001)
10. Parasuraman, R.: The Attentive Brain. MIT Press, Cambridge (1998)
11. Minsky, M.L.: The Emotion Machine: Commonsense Thinking, Artificial Intelligence, and the Future of the Human Mind. Simon and Schuster (2006)
12. Bothe, H.W., Samii, M., Eckmiller, R. (eds.): Neurobionics: An Interdisciplinary Approach to Substitute Impaired Functions of the Human Nervous System. Elsevier, Amsterdam (1993)
13. Ohl, F.W., Scheich, H.: Chips in your head. Scientific American Mind, 64-69 (April/May 2007)
14. Abbott, A.: Neuroprosthetics: in search of the sixth sense. Nature 442(7099), 125-127 (2006)
15. Shastri, L., Ajjanagadde, V.: From associations to systematic reasoning: A connectionist representation of rules, variables, and dynamic bindings using temporal synchrony. Behavioral and Brain Sciences 16, 417-494 (1993)
16. Johnson-Laird, P.N., Byrne, R.M.J.: Deduction. Lawrence Erlbaum Associates, Mahwah (1991)
17. Bibel, W.: Perspectives on automated deduction. In: Automated Reasoning: Essays in the Honor of Woody Bledsoe, pp. 77-104. Kluwer Academic, Dordrecht (1991)
18. Hölldobler, S., Thielscher, M.: On the adequateness of AI-systems. In: Plander, I. (ed.) International Conference on AIICSR (1994)
19. Beringer, A., Hölldobler, S.: On the adequateness of the connection method. In: AAAI National Conference on Artificial Intelligence (1993)
20. Herrmann, C.S., Reine, F.: Considering adequacy in neural network learning. In: Proceedings of ICNN 1996 (IEEE International Conference on Neural Networks) (1996)
21. Posner, M.I.: Chronometric Explorations of Mind. Oxford University Press, Oxford (1986)
22. Gazzaniga, M.S.: Cognitive Neuroscience: The Biology of the Mind. Norton (2002)
23. Anderson, J.R.: Rules of the Mind. Lawrence Erlbaum Associates (1993)
24. Luce, R.D.: Response Times. Oxford Science Publications (1986)
25. Herrmann, C.S., Mecklinger, A.: Magnetoencephalographic responses to illusory figures: early evoked gamma is affected by processing of stimulus features. Int. J. Psychophysiol. 38(3), 265-281 (2000)
26. Herrmann, C.S., Mecklinger, A.: Gamma activity in human EEG is related to high-speed memory comparisons during object selective attention. Visual Cognition 8(3/4/5), 593-608 (2001)
27. Ohl, F.W., Scheich, H.: Learning-induced plasticity in the auditory cortex. Current Opinion in Neurobiology 15, 470-477 (2005)
28. Stroop, J.R.: Studies of interference in serial verbal reactions. Journal of Experimental Psychology 18, 643-662 (1935)
29. Senkowski, D., Herrmann, C.S.: Effects of task difficulty on evoked gamma activity and ERPs in a visual discrimination task. Clin. Neurophysiol. 113(11), 1742-1753 (2002)
30. Kraft, J.M., Brainard, D.H.: Mechanisms of color constancy under nearly natural viewing. Proc. Natl. Acad. Sci. USA 96(1), 307-312 (1999)
31. Block, J.R., Yuker, H.E.: Can You Believe Your Eyes? Over 250 Illusions and Other Visual Oddities. Taylor and Francis (1989)
32. Strüber, D., Herrmann, C.S.: MEG alpha activity decrease reflects destabilization of multistable percepts. Brain Res. Cogn. Brain Res. 14(3), 370-382 (2002)
33. Freeman, W.J.: Neurodynamics. Springer, Heidelberg (2000)
34. Haken, H.: Synergetic Computers and Cognition: A Top-Down Approach to Neural Nets. Springer, Heidelberg (2004)
35. Fürstenau, N.: A chaotic attractor model of cognitive multistability. In: IEEE International Conference on Systems, Man, and Cybernetics, vol. 1, pp. 853-859 (2004)
36. Wilson, H.R.: Spikes, Decisions, and Actions. Oxford University Press, Oxford (1999)
37. Searle, J.R.: Minds, Brains and Science. Harvard University Press (1984)
Basal Ganglia Models for Autonomous Behavior Learning

Hiroshi Tsujino, Johane Takeuchi, and Osamu Shouno

Honda Research Institute Japan Co., Ltd.
8-1 Honcho, Wako-shi, Saitama 351-0188, Japan
{tsujino,johane.takeuchi,shouno}@jp.honda-ri.com
Abstract. We propose two basal ganglia (BG) models for autonomous behavior learning: the BG system model and the BG spiking neural network model. These models were developed on the basis of reinforcement learning (RL) theories and neuroscience principles of behavioral learning. The BG system model focuses on the problems of RL input selection and reward setting. This model assumes that parallel BG modules receive a variety of inputs. We also propose an automatic method of setting the internal reward for this model. The BG spiking neural network model focuses on the problems of biological neural network architecture, ambiguous inputs and the mechanism of timing. This model accounts for the neurophysiological characteristics of neurons and for the differential functions of the direct and indirect pathways. We demonstrate that the BG system model achieves goals in fewer trials by learning the internal state representation, whereas the BG spiking neural network model has the capacity for probabilistic selection of actions. Our results suggest that these two models are a step toward developing an autonomous learning system.

Keywords: System architecture, basal ganglia, reinforcement learning, modular learning system, reward, spiking neuron, input space selection, execution timing.
1 Introduction

The objective of our research is to develop an autonomous behavior learning system for machines. Flexible operation of the output of a system is essential for future intelligent machines that work and interact with people in real-world situations. The most advanced flexible operation in animal behavior is the autonomous selection of behaviors in a dynamically changing environment. In the future, people will demand that machines select the best option by assessing the situation. However, the current level of machine intelligence lags far behind that of animals. A prominent feature of animal intelligence is the ability to learn to select a relevant behavior autonomously, whereas current machine learning systems typically require supervisory support from a human in designing the system, selecting the training data, choosing the algorithm, setting the parameters and determining the goal of learning. To explore the secrets of animal learning systems, we have investigated the learning mechanism underlying behavior selection in the animal brain, in the basal ganglia (BG) in particular. We have focused on the BG because of their functional and evolutionary importance.
The BG function in the learning and selection of behaviors. The association of cognitive and affective components can be observed in primate BG disorders, including Parkinson's disease, Huntington's chorea, schizophrenia, attention deficit disorder, Tourette's syndrome and various addictive behaviors. Although other brain regions also perform learning and selection of behaviors, the BG are phylogenetically old. The basic neural circuit architecture of the BG appears phylogenetically at the level of fish and persists up to the level of humans [1]. This not only proves their evolutionary importance but also provides us with a new methodology for developing a computational model: investigation of the circuits at several evolutionary levels in animals. The primate brain is a very complex system in which many parts of the brain interact with one another. However, it is based on the brain system of reptiles and rodents. Thus, we can extract the essence of the animal learning system by selecting and combining evidence from animal brains ranging from reptile to human. For this reason, we think the BG provide stronger support for the exploration of animal learning systems than other parts of the brain.

Animal learning can be seen as the process through which animals become able to use past and current events/states to predict future states. In classical Pavlovian conditioning, animals learn to predict which outcomes are contingent on which events/states. In instrumental conditioning, animals learn to predict the consequences of their actions and use this prediction to maximize the likelihood of rewards, minimize the occurrence of punishment, or achieve goals. Barto's [2] reinforcement learning (RL) research led to the proposal of a temporal difference (TD) learning model to explain both Pavlovian and instrumental conditioning. The more recently identified phasic activity of dopamine neurons in the BG [3][4][5][6][7][8] has strengthened the link between RL and the BG, contributed to the evolution of RL models, and opened new questions [9][10]. In this chapter, we introduce our BG models of an autonomous behavior learning system. We briefly review related studies on RL and BG models in Section 2, describe our models in Section 3, and finally present a discussion of the open questions in Section 4.
2 Related Work

Recent active interaction between machine learning (reinforcement learning, RL) research and neuroscience (basal ganglia, BG) research has clarified both the consistencies and the differences between them. In this section, we first review RL research, then highlight the findings in neuroscience that relate to RL, and finally list the open problems in developing an autonomous behavior learning system, referring to both lines of research.

2.1 Reinforcement Learning Research

Although RL was originally born out of mathematical psychology as simple trial-and-error reward learning, current RL research is actively progressing with evidence from the neuroscience field. Current RL models can be divided into two broad classes, model-based and model-free [11].
Model-based RL uses experience to construct an internal model of state transitions by exploring alternatives, and it achieves outcomes in the environment using this internal model. Model-based RL supports goal-oriented behaviors. This class of algorithms includes Dyna [12] and real-time dynamic programming [13]. The advantage of model-based RL is that it can solve complex structured tasks. Its major problems are how to acquire and use a hierarchically structured model to solve the task and how to extend the system to deal with a variety of tasks or learn a new situation.

Model-free RL uses experience to directly learn one or two simpler quantities (state/action values or policies), which can then achieve the optimal behavior without learning a world model. This class of algorithms includes TD learning [14] and Q-learning [15]. The advantage of model-free RL over model-based RL is that it can be used to solve unknown tasks. However, it is a less efficient learning system because information from the environment is treated too simply for relevant information to be selected. Hence, it cannot adapt with the appropriate speed to changes in contingencies and outcome utilities.

2.2 Neuroscience Findings

Behavioral neuroscience of goal-oriented actions suggests that the dorsomedial striatum, the prelimbic prefrontal cortex, the orbitofrontal cortex, the medial prefrontal cortex, and parts of the amygdala play a key role in model-based RL [16][17][18][19]. Neuromodulatory systems in the dorsolateral striatum and the amygdala are thought to be involved in model-free RL [18]. Anatomical findings suggest that the BG are connected to functionally disparate regions of the neocortex. An important component of the general architecture of this system is parallel, segregated, closed-loop projections [20][21][22]. Characteristics of TD prediction are seen in the phasic activity of dopaminergic neurons [5][6][7][8][23][24], and the lateral habenula suppresses the activity of dopaminergic neurons [25]. Dopaminergic neurons in the substantia nigra receive inputs from the striatum, as well as from other parts of the brain, such as the subthalamic nucleus, the pedunculopontine tegmental nucleus and the superior colliculus [26]. Interactions between dopamine and acetylcholine regulate striatal dopamine release [27][28], and serotonin release from the dorsal raphe nucleus is thought to control the time scale of reward prediction in the striatum [29].

2.3 Open Questions

The Relationship between Model-based RL and Model-free RL. As described above, neuroscience suggests that model-based RL and model-free RL involve different parts of the brain. However, these brain areas are interconnected by parallel loops. A dual-controller reinforcement learning model with both model-based RL and model-free RL has been proposed [11]. However, the two controllers in this model, the prefrontal cortex and the dorsolateral striatum, are rather independent. Daw suggests that competition between the model-based controller and the model-free controller might best be viewed as competition between the dorsomedial and dorsolateral corticostriatal loops, rather than between the cortex and the striatum. Graybiel [30] reviewed the mechanism of habits and suggested that many motor or cognitive
repetitive behaviors (e.g. habits, fixed action patterns, Tourette's syndrome and obsessive-compulsive disorder) are built up in part through the action of BG-based neural circuits, which can iteratively evaluate contexts, select actions and then form chunked representations of action sequences that can influence both cortical and subcortical brain structures. Recently, it has been found that the BG identify new "state-reward associations" at the earliest stage of learning and train the frontal cortex during later stages of learning [31].

Computation on the Parallel Loop. Several models of parallel processing have been proposed. Redgrave et al. [32] proposed that the BG arbitrate between distributed, parallel-processing functional systems within the brain. On the basis of this theory, Gurney and others [33][34][35][36] have produced computational models in which the BG govern action selection rather than learning. Motivated by the RL idea of decomposing a complex task into multiple domains in space and time, Doya et al. [37] have proposed a multiple modular architecture. In their model, selection is performed by a simple softmax function known as the Bayes-optimal selection method. Computation on this parallel loop requires a process control that modifies the competitiveness of the parallel modules or enhances the propensity for exploration depending on difficulty, immediacy, achievement needs, satiety, etc. Parallel loop computation provides an advantage when it is combined with a hierarchical architecture. One can hypothesize that autonomous learning of hierarchical behavior spans from primitive motor movement to high-level planning. However, the working input space representation for the interface between different hierarchical levels would occupy a very large spatio-temporal dimension if the task to be solved is not specified by hierarchical levels. A self-organizing method with a representation that can adaptively select the required input space is not yet available.

The Role of Neuromodulators. Elucidating the role of neuromodulators is a longer-range problem due to its complexity. Dopamine neurons signal the TD prediction, providing strong neuroscience evidence for RL. However, dopamine acts differently depending on the target; even if dopamine neurons express the TD prediction, it may not be used for TD learning. The reward in RL is given in an environment that can be modified. However, we usually cannot modify the environment to achieve targets when solving real-world problems. What the external reward cannot cover should be executed by an internal reward or by controlling the level of the reward. In primates, the internal reward is given by a process that combines intrinsic drives and strategy, which is provided by the neocortex. How dopamine, acetylcholine and serotonin interact is not well understood.

Neural Mechanisms of Timing. There is a dearth of data regarding timing. Although the timing of action execution is a most important factor, the neural mechanisms underlying this process are unclear. The cerebellum is critical for a wide variety of timing tasks. Some of the cardinal symptoms of cerebellar dysfunction (i.e. dysmetria and intention tremor) have been attributed to a loss of coordination of the temporal pattern between antagonist muscles [38][39]. During a time perception task, the cerebellum is activated and regional blood flow increases in the BG [40]. Furthermore, Parkinson's disease patients show greater
variability in their inter-tap intervals than controls [41]. This behavioral phenomenon is alleviated by L-dopa, and Parkinson's disease patients with asymmetric symptoms have more variable intervals with the limb that is most affected. The BG model proposed by Lo et al. can modulate execution time [42]. However, the selection and timing mechanisms in this type of model are basically dependent on differences in the statistical properties of the inputs to each channel; thus, this model does not provide a mechanism for the autonomous acquisition of timing. The firing rate in the lateral intraparietal area (LIP) reflects the timing of proactive arm movements [43]. However, how these different brain areas interact to govern timing is unclear.
3 Basal Ganglia Models

We have used system- and neuron-level modeling to address the open questions described above. For a large-scale system, it is important to view and analyze the system both top-down and bottom-up. The system-level model focuses on the relationship between model-based RL and model-free RL. To combine these two types of RL, we have proposed a parallel modular neural network model that assumes that parallel BG modules receive a variety of inputs from other parts of the brain. We introduced an intrinsic reward to analyze its contribution to learning efficiency. The neuron-level model focuses on problems of the biological neural network architecture, ambiguous input streams and the mechanism of timing. We propose a biological neural circuit model that takes the neurophysiological characteristics of BG neurons into account to demonstrate the neural mechanism of timing with ambiguous inputs. These are exploratory research models that can be used to address our open questions. Our final goal is to create an autonomous behavior learning system. The proposed models share the modular parallel loop concept; however, we have not yet integrated the two models into a single system. We will develop these models further toward integration by sharing an architectural concept. In the following sections, we first describe our system-level model and then the neuron-level model.

3.1 Basal Ganglia System Model

As described above, model-based RL can solve complex structured tasks and efficiently learn goal-directed behavior; however, acquiring the assumed internal model is difficult with this approach. On the other hand, model-free RL can solve unknown tasks, but it is less efficient at learning and is weak when it comes to changes in problem domains. A system theory for combining these two types of RL is needed to advance a system RL model for learning/using goal-directed behavior, exploring/using the sensor-action relationship, and immediately acquiring/using the sensor-action relationship. As we reviewed above, neuroscience studies suggest that model-based RL involves the dorsomedial striatum, several prefrontal areas, and parts of the amygdala, whereas model-free RL involves the dorsolateral striatum and the amygdala. The neural substrates of these two systems are interconnected by parallel loops. This schematic
mechanism relates strongly to a system model that combines model-based RL and model-free RL, in which the two types of RL complement each other to yield autonomous learning, which cannot be achieved by model-based RL alone. Using a combination of model-based RL and model-free RL based on functional activation in the cortico-BG circuit [30], a parallel-loop organization that may contribute to the learning of a hierarchically structured model can be constructed. However, only a little modeling of the combination and interaction of the two types of RL has been carried out [11], and only a few theoretical system-level hypotheses have been generated [44]. We begin our system model with the following neuroscience data: (1) Anatomical evidence suggests that there is a micromodular neural network structure in the BG that runs in parallel [45] and that there are parallel loops between the neocortex and the BG [20][46]. (2) Behavioral and physiological evidence suggests that the activity of dopamine neurons matches the TD prediction [3][4][5][6][7][8]. Doya et al. [37] proposed a multiple model-based reinforcement learning (MMRL) architecture, a modular RL architecture for control tasks that reflects the two points listed above. MMRL can learn to decompose a non-linear and/or non-stationary task through competition and cooperation of multiple learning modules that perform model-based prediction and RL. We proposed the Self-organizing Modular Reinforcement Learning model [47] as a model performing model-free RL that also reflects the two points listed above. Our model is based on the hypothesis that the BG run model-free RL on their own, and that they also run model-based RL by receiving modeled, or structured, input from the cortex through parallel loops. We proposed the use of neural networks for our model based on the hypothesis that the BG select among various inputs and subsequently act and gain knowledge. In other words, the BG not only select parts of the input but also refer to the past history and internal state when the input lacks sufficient information. However, when such a variety of information is being processed, standard tabular representations in RL are inadequate for approximating value functions without predefining the information as states; neural networks solve this problem. The second point we proposed in [47] is a mechanism of reward setting. Reward is an excellent motivator for learning new things and is introduced in RL to obtain an unsupervised learning characteristic. However, in RL, the design of the reward setting often amounts to a supervised design of the steps toward the goal of learning. Rewards can be classified as either external or internal. In conventional RL, the reward is external; it is provided by an environment that a designer can manage. We usually cannot modify the environment to achieve targets in real-world problems; thus, the external reward should be simple, and what the external reward cannot cover must be executed by an internal reward. In primates, the neocortex underlies a process that combines intrinsic drives and strategy to generate the internal reward. The mechanisms controlling internal reward in the brain are complex and involve several neurotransmitters, including dopamine, serotonin, and noradrenalin, which are mostly regulated by brainstem nuclei.
Fig. 1. A schematic diagram of the BG system model. The predictor network in each module k predicts the current state ŝ(t) from the previous observed state s(t-1) and the previous action a(t-1). Each predictor network contains one hidden layer (six nodes). The outputs of the controller layer in each module, Q_k, are Q-function vector values for primitive actions. The inputs of each controller layer are the observed state s(t), the previous action a(t-1), and the history information h(t-1). The history information is represented with an echo state network (ESN).
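As a rough sketch of the history representation mentioned in the caption, the snippet below updates an echo state reservoir and applies a softmax action selector to a Q-vector. The function names and all constants are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
N_RES, N_IN = 50, 15                    # reservoir size; observation dimension

# Fixed random reservoir, rescaled so its spectral radius is below 1
# (the usual echo-state stability condition).
W_res = rng.normal(size=(N_RES, N_RES))
W_res *= 0.9 / max(abs(np.linalg.eigvals(W_res)))
W_in = rng.normal(size=(N_RES, N_IN))

def update_history(h, s):
    """One echo-state update; the reservoir state serves as h(t)."""
    return np.tanh(W_res @ h + W_in @ s)

def softmax(q, temperature=1.0):
    """Softmax action selector over a module's Q-vector."""
    z = np.exp((q - q.max()) / temperature)
    return z / z.sum()

# Example: advance the history with a random observation, then sample
# an action from a placeholder Q-vector of ten primitive actions.
h = update_history(np.zeros(N_RES), rng.random(N_IN))
q = rng.normal(size=10)
action = rng.choice(len(q), p=softmax(q))
```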
In the following subsections, we first describe our system model, then present our simulation results, and finally discuss the system model.

Description of the BG System Model. Figure 1 shows a schematic diagram of the constructed model system, which we call the "BG system model". It is composed of n modules, each of which consists of a state predictor layer and an RL controller layer, similar to MMRL [37]. The observed state s(t), the previous action a(t-1), and the recurrent layer are the input sources of the controller layer in each module. The history information h(t-1) is represented by means of an echo state reservoir, a randomly connected recurrent neural network [48]. We used the softmax function for the action selector. For the RL controller in each module, we adopted the gradient-descent SARSA algorithm [49]. The combination of off-policy learning and function approximation
may potentially cause divergence [49]. On the other hand, on-policy learning with a linear approximation, such as gradient-descent SARSA, rarely meets this difficulty. We utilized a non-linear approximation with two-layered neural networks for the RL controller. Each output layer node of the neural network implements a non-linear function of its input, $g(w \cdot x + c)$, where $w$ is the weight vector, $x$ is the input vector to the network composed of $s$, $a$, and $h$, and $c$ is a constant. The connection weights are updated for the winner module $m$ as follows:

$$\Delta w_m = \alpha \, \delta(t) \, e(t) \tag{1}$$

where

$$\delta(t) = r(t) + \hat{r}(t) + \gamma \, Q_m(t) - Q_m(t-1), \tag{2}$$

$$e(t) = \gamma \lambda \, e(t-1) + \nabla_{w_m} Q_m(t-1). \tag{3}$$

Here $r(t)$ is the external reward at time $t$, $\hat{r}(t)$ is the internal reward, $\gamma$ is the discount factor ($\gamma = 0.1$), $\alpha$ is an update-rate constant and $\lambda$ is an eligibility trace constant. $Q_m(t)$ is the $Q$-function value for the selected action; it is the corresponding component of the $Q$-function vector of the winner module $m$. The eligibility trace (Eq. 3) is updated only for the winner module $m$, which is selected by mnSOM [50] using the result of the predictors. The predictor in each module predicts the current state. We adopted a conventional backpropagation algorithm; the update rule for the predictor weights $u$ is

$$\Delta u = -\eta \, \frac{\partial}{\partial u} \, \frac{1}{2} \big\| s(t) - \hat{s}(t) \big\|^2, \tag{4}$$

where $\eta$ is a learning-rate constant. The module with the minimum prediction error is selected as the winner module by the mnSOM. We introduced internal rewards as well as external rewards for the reinforcement signal (Eq. 2). The modular system, which decomposes the observed state space in a self-organizing manner, stabilizes the internal rewards calculated from the prediction errors. The internal reward is calculated as follows:

$$\hat{r}(t) = \begin{cases} \Delta \bar{E}(t) + b & \text{if } \Delta \bar{E}(t) > 0 \\ b & \text{otherwise,} \end{cases} \tag{5}$$

$$\Delta \bar{E}(t) = \bar{E}(t-1) - \bar{E}(t), \tag{6}$$

$$\bar{E}(t) = \left\langle \big\| s(t) - \hat{s}(t) \big\| \right\rangle_n, \tag{7}$$

where $\langle \cdot \rangle_n$ denotes the moving average over the last $n$ time steps, and $b$ is a negative constant.
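A minimal sketch of how the controller update (Eqs. 1-3) and the internal reward (Eqs. 5-7) fit together is given below. For brevity it uses a linear Q-approximation in place of the two-layered network, and the function names and constants are illustrative assumptions.

```python
import numpy as np

GAMMA, ALPHA, LAMBDA, B = 0.1, 0.05, 0.9, -0.01  # gamma from the text; rest assumed

def sarsa_update(w, e, x_prev, a_prev, x, a, r_ext, r_int):
    """Gradient-descent SARSA with eligibility traces (Eqs. 1-3) for the
    winner module, with linear Q(x, a) = w[a] . x."""
    delta = r_ext + r_int + GAMMA * (w[a] @ x) - (w[a_prev] @ x_prev)  # Eq. (2)
    e *= GAMMA * LAMBDA                                                # Eq. (3)
    e[a_prev] += x_prev                        # gradient of the linear Q
    w += ALPHA * delta * e                                             # Eq. (1)
    return w, e

def internal_reward(err_history, n=10):
    """Eqs. (5)-(7): reward a decrease of the moving-average prediction
    error, plus the small negative constant b for every action."""
    if len(err_history) < 2 * n:
        return B
    errs = np.asarray(err_history, dtype=float)
    diff = errs[-2 * n:-n].mean() - errs[-n:].mean()   # E(t-1) - E(t)
    return (diff if diff > 0 else 0.0) + B
```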
Fig. 2. Test environments. a) The test continuing MDP environment. An external reward r = 1 is earned after a sequence of events. Dashed lines indicate probabilistic transitions. b) The test continuing partially observable environment. The optimum action selection when the agent starts from state s0 is one five-action sequence (A), whereby the agent receives r, followed by a second five-action sequence (B), whereby the agent receives another r.
MDP (Markov Decision Process). Figure 2a shows the MDP environment. The number of primitive actions was ten, i.e., a0, a1, ..., a9, and the number of states was also ten. Starting at the initial state s0, if the agent selected actions sequentially in the order of their subscripts, external rewards were earned; otherwise, the agent received no positive external reward. We imposed continuing tasks in which the transitions repeated from the initial state even after an external reward was earned. We tested the constructed model using probabilistic transitions. The transition probability from s0 to s1 with action a0 was 0.3; the other transitions of the correct sequence for earning external rewards were set to 0.9. In addition, the observed states were probabilistically changed: in each state, the agent observed one of two possible state signals, each with a probability of 0.5, so the number of observed states was twenty in this environment. All state signals were fifteen-dimensional vectors that were randomly generated; therefore, the two signals representing the same state were not correlated except coincidentally. This test transition structure is a simple model of realistic situations in which different observed states indicate the same underlying state. Figure 3a shows learning curves for the average performance of 1000 trial sets (each set consisting of 1000 trials), including the results of a plain tabular SARSA for comparison. Each trial set was simulated with different state vector values. Each trial started with the last state of the adjacent trial, and stopped either when the agent was rewarded or when time ran out.
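For concreteness, a minimal sketch of such a test environment follows. The class name and the reward bookkeeping are illustrative assumptions consistent with the description above: ten states, ten actions, a 0.3-probability first transition and 0.9 elsewhere, and two random fifteen-dimensional observation vectors per state.

```python
import numpy as np

class ContinuingMDP:
    """Continuing 10-state/10-action chain: action a_i taken in state s_i
    advances to s_{i+1} (mod 10) with probability p_i; an external reward
    of 1 is earned when the full correct sequence returns the agent to s_0."""
    def __init__(self, n=10, dim=15, seed=0):
        self.rng = np.random.default_rng(seed)
        self.n = n
        self.signals = self.rng.random((n, 2, dim))  # two signals per state
        self.p_success = np.full(n, 0.9)
        self.p_success[0] = 0.3                      # probabilistic first step
        self.state = 0

    def observe(self):
        """Return one of the two signals of the current state (p = 0.5 each)."""
        return self.signals[self.state, self.rng.integers(2)]

    def step(self, action):
        reward = 0.0
        if action == self.state and self.rng.random() < self.p_success[self.state]:
            self.state = (self.state + 1) % self.n
            if self.state == 0:                      # sequence completed
                reward = 1.0
        return self.observe(), reward
```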
Fig. 3. Learning curves for the test environment. The learning curves of our BG model are compared to three tabular SARSA methods, in which each SARSA used different input combinations. The s(t), s(t-1), a(t-1), and h(t-1) denote the current sensory signal, the previous sensory signal, the previous action, and the previous history, respectively. a) The test continuing MDP environment. b) The test continuing HOMDP environment.
For the plain SARSA, we required that the table directly represent all observed states. In contrast, our BG system model developed its representations from indirect observation of the vector signals, in a self-organizing manner and from a tabula rasa. The learning curves illustrate that our model achieved good performance without appropriate adjustments of the input. The plain tabular SARSA achieved good performance when the current state input was used. However, performance was worse when it received additional input, the previous action or the previous state.

HOMDP (High-Order MDP). We also investigated differences in performance using the partially observable environment (Fig. 2b). In this case, if the agent succeeded in earning an external reward, the unobservable state of the environment changed to another state. Thus, because of the unobservable state, the state transitions were changed and the agent had to select another sequence of actions to earn another external reward. The number of true states was six and the number of observed states was twelve. Although the number of states was smaller than in the MDP case, the same observed state could demand different action selections in this case. Each trial ended when the agent received two external rewards, so the shortest number of transitions was ten, the same as in the MDP case. The agent had to pass two probabilistic alternating transitions of probability 0.3 to earn the two external rewards. If the agent failed at such a transition, the state transited back, as shown in transition A in Fig. 2b (similarly for B); thus, before attempting the transition again, the agent was required to output an arbitrary action to return to the preceding state. The memory information needed to solve the partial observability was the previous
actions taken at an earlier transition. When the agent was in the ambiguous state, it could not distinguish the hidden states from the immediately preceding action, because the preceding actions were the same in both hidden states. History information was therefore indispensable for distinguishing the situations. We refer to this environment as HOMDP. Figure 3b shows the learning curves of our BG system model and the plain SARSA. The fastest results were obtained with our BG model. The learning speed of the plain SARSAs was slower than that of the BG model, but the plain SARSA converged to shorter steps. These plain-SARSA results follow as a corollary: the table representation in this case was large (12 states × 6 actions × 12 previous states), so learning required a longer time, but the representations of previous states were more direct in comparison with the representation of the recurrent neural network. Hence, learning was more stable than with our BG model. The learning curves of the plain SARSAs with different input sources reflected the characteristics of this test HOMDP environment.

Discussion of the BG System Model. These results demonstrated that our model can solve both the MDP and the HOMDP without adjusting the architectures and parameters for each transition structure. The neural networks in our model worked for both environments. They are also, by nature, insensitive to input dimensions, at least to some extent. By decomposing a learning target with the module structure, the neural networks are efficient even with excess inputs. Traditional model-free RL, such as a tabular SARSA, uses experience to learn the state/action values rather than the states themselves. The simulation results reconfirmed that performance depends on the design of the input states. Our model provides an autonomous state design in a self-organizing way; thus, as a natural consequence, it performed stably across different problem environments. Autonomous state learning is a key to a flexible interface between model-based RL and model-free RL when the system is based on the hypothesis that the BG run model-free RL independently, as well as model-based RL by receiving modeled, or structured, input from the cortex. Using autonomous state learning, model-free RL can efficiently organize the optimal input state for each new problem that requires a well-defined internal model. Our model uses an internal reward defined by the prediction error and a small negative reward (Eq. 5). This simple internal reward generation method uses the error values of the predictions of the observed states. This type of internal reward helps to detect novel transitions for the learning system. However, prediction-error-based internal rewards generally encumber learning in probabilistic transitions: the state predictor converges to the average values of each probabilistic transition, so the prediction error never becomes zero, which disturbs learning. One solution for overcoming the restrictions inherent to error-based internal rewards is to introduce another predictor that predicts the errors of the state predictor [51][52]. This method drives learning toward the improvement of predictions, thereby avoiding the increase of prediction errors. Hence, the direction of learning differs from task achievement. We instead decomposed the predictor to overcome these problems, similar to MMRL, which decomposed the modules on the basis of RL motivation.
Whereas MMRL decomposed a complex task into multiple domains in space and time, our system extends the use of decomposition to internal reward generation.
Fig. 4. The effect of the error-based positive internal reward
We found that it was more efficient to provide the learning system with small negative rewards, b, for executing each action in the tested environment. The positive internal rewards (Eq. 6) were favorable for learning as 'appropriate noise': the average learning speed with a positive internal reward based on the state prediction error was slightly better than without it (Fig. 4). Generally, providing appropriate random noise is necessary for such adjustment. Because of the modular decomposition of the predictors, it was much easier to adjust the internal rewards than random noise to obtain appropriate results. However, this result depends on the test environments that we prepared, because the decision processes in these tests require little information about future states. Thus, further RL research and BG neuroscience research on autonomous reward setting methods is required.

3.2 Basal Ganglia Spiking Neural Network Model

RL models, which were developed in the machine learning field, have begun to account for the functions of the BG. These models suggest that neural activity in the striatum and the dopaminergic system represents parameters described by RL models. However, there is a discrepancy in how input space is handled by RL models for machine learning and by the BG of animals living in the real world. In RL models for machine learning, the basic strategy is to discretize an input space into states and learn the expected values of all states. The number of dimensions of the input space is kept low to avoid a major difficulty known as the curse of dimensionality. In contrast, animals usually receive multi-dimensional data streams from their environment through multimodal sensors. Furthermore, cognitive processing in the cortex can provide information about multiple aspects of a single perceived object. Richness and redundancy of the input space may be critical for the adaptive ability of animals. The BG receive high-dimensional data streams from a wide range of the cortex. In most cases, the input space dimensions required for solving a task are not too large.
Fig. 5. A schematic diagram of the spiking neuron model of the basal ganglia. Circles represent the populations of spiking neurons (red, excitatory; blue, inhibitory; green, dopaminergic). The number of neurons in each population is indicated in the circle. Each type of neuron shows characteristic membrane dynamics. External inputs to the model are homogeneous or inhomogeneous Poisson spike trains. STF: short-term facilitation. all-to-all: each neuron in the source population projects to all neurons in the target. 1-to-1: each neuron in the target population receives a projection from a selected neuron in the source. 1-to-5, or 1-to-10: each source neuron projects to a distinct group of 5 or 10 neurons in the target. r5-to-1: each target neuron receives projections from 5 randomly selected neurons in the source. 1-to-r10: each source neuron projects to 10 randomly selected neurons in the target.
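The connection shorthand used in the caption (all-to-all, 1-to-1, 1-to-k, r5-to-1, 1-to-r10) can be read as boolean connectivity matrices; the sketch below is one illustrative reading of that notation, not code from the model itself.

```python
import numpy as np

rng = np.random.default_rng(0)

def all_to_all(n_src, n_tgt):
    """Every source neuron projects to every target neuron."""
    return np.ones((n_src, n_tgt), dtype=bool)

def one_to_one(n):
    """Each target neuron receives from exactly one matched source."""
    return np.eye(n, dtype=bool)

def one_to_k(n_src, k):
    """Each source projects to its own distinct group of k targets (1-to-5, 1-to-10)."""
    m = np.zeros((n_src, n_src * k), dtype=bool)
    for i in range(n_src):
        m[i, i * k:(i + 1) * k] = True
    return m

def r_k_to_one(n_src, n_tgt, k):
    """Each target receives from k randomly selected sources (r5-to-1)."""
    m = np.zeros((n_src, n_tgt), dtype=bool)
    for j in range(n_tgt):
        m[rng.choice(n_src, size=k, replace=False), j] = True
    return m

def one_to_rk(n_src, n_tgt, k):
    """Each source projects to k randomly selected targets (1-to-r10)."""
    return r_k_to_one(n_tgt, n_src, k).T
```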
Therefore, input streams to the BG contain many dimensions that are irrelevant to the task at hand, meaning that the input streams are noisy and ambiguous. However, it is unclear how RL models should handle such high-dimensional input streams. Here, we propose a spiking neural network model of the BG that can select and initiate an action for trial and error in the presence of noisy, ambiguous input streams, and can then adaptively tune the selection probability and timing of actions.

Description of the BG Spiking Neural Network Model. Our model consists of two channels (Fig. 5). Each channel regulates an action downstream of it. There are two pathways for each channel, the direct pathway and the
indirect pathway. The direct pathway is composed of sequentially arranged groups of spiking neurons, namely the striatum and the substantia nigra pars reticulata (SNr). In the indirect pathway, two groups of neurons, the external globus pallidus (GPe) and the subthalamic nucleus (STN), are inserted between the striatum and the SNr. In the striatum, there are two types of neurons, medium spiny (MS) neurons, which are projection neurons, and fast-spiking (FS) inhibitory interneurons. Each neuron model reproduces the characteristic membrane dynamics of the original neuron. The computational hypothesis of BG function is that the indirect pathway selects an action and the direct pathway initiates the selected action. This model features two characteristics of the BG: (1) Self-sustained, oscillatory burst activity that spontaneously emerges in the closed circuit of the GPe and STN [53]. This membrane property permits a randomly connected GPe-STN network to generate self-sustained, random bursting activity patterns. (2) A projection from a portion of GPe neurons to striatal interneurons, including FS interneurons. Through this projection, activity in the GPe can have an impact on neuronal activity in the striatum. We assumed that there are two external input pathways: the cortico-striatal pathway for sensory input and the cortico-subthalamic pathway (hyperdirect pathway) for motivational input. Motivational input can be intrinsically generated based on drives, such as hunger or thirst, or extrinsically induced by a learned stimulus. Motivational inputs facilitate BG function, but they are nonselective; they do not specify the selection or initiation of an action.

Simulation Results. We used homogeneous Poisson spike trains with the same constant mean firing rate and the same strength to model noisy, ambiguous, high-dimensional cortico-striatal sensory inputs, which are spatially and temporally nonselective (Fig. 6A). Motivational input to STN neurons consisted of inhomogeneous Poisson spike trains whose rate parameters (Fig. 6A) and synaptic strength were common among STN neurons. We implemented and validated our spiking neuron model of the BG in computer simulations using NEST [54]. A typical simulation result is shown in Figures 6B and 6C. Before the onset of the motivational inputs, the spontaneous firing of GPe and STN neurons is relatively low. MS neurons receive tonic, nonselective cortico-striatal excitatory inputs that usually exceed the firing threshold of MS neurons; tonic inhibitory input from FS neurons, however, silences the MS neurons. SNr neurons fire spontaneously at a high rate, preventing activation of superior colliculus (SC) neurons, which would induce an action. After the onset of motivational inputs, oscillating burst activity begins to emerge in the GPe and STN. Increased GPe activity reduces the activity of FS neurons in the indirect pathway (FSi), allowing cortical drive to activate MS neurons in the indirect pathway (MSi). At this point, sensory input can activate MSi neurons, thus reducing GPe neuronal activity via inhibitory projections from MSi neurons to the GPe. Mutual inhibitory connections between the GPe of each channel lead to the selection of a single channel based on GPe activity. The activity of GPe and STN neurons in the non-dominant channel is reduced, whereas the channel that 'wins' fires oscillating bursts. Consequently, SNr neurons
Fig. 6. Performance of the full network model with nonselective sensory input and without any abrupt changes. (A) Excitatory input to the network. Each channel receives sensory input with common statistical properties. Except for the hyperdirect input, excitatory input does not show any abrupt changes. (B) Population firing rates for MSd, FSd, MSi, FSi, GPe, STN, SNr and SC. (C) Corresponding raster plots for the population firing rates shown in B. The vertical axes correspond to the neuron index. Each panel exhibits neuronal activity for all of the neurons in the nucleus indicated. The lower and upper halves of each panel correspond to neurons from channel 1 and channel 2, respectively. (D) Histograms of the SC response time for each channel. (E) Record of individual choices of trials.
receive differential modulation from the indirect pathway: SNr neuronal activity in the winning channel is suppressed to a level where the initiation of its downstream action is facilitated, but not yet induced. In contrast, SNr activity in the losing channel is enhanced, depressing initiation of its downstream action. Once the GPe neurons that project to FS neurons in the direct pathway (FSd) enter a bursting period, FSd activity is suppressed. As a consequence, MS neurons in the direct pathway (MSd) are released from FSd suppression and become capable of responding to cortical inputs. Synergy between incoming cortico-striatal inputs and random GPe activity induces random, episodic discharges of MSd neurons. Activated MSd neurons are able to further reduce SNr activity in the same channel, resulting in increased SC neuron activity; in turn, the action downstream of the channel is initiated.
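As a caricature of this selection mechanism, the sketch below lets two channels with mutually inhibiting 'GPe-like' activity compete under statistically identical noisy drive; the channel whose activity first crosses a threshold initiates its action. This is a rate-based abstraction written for this text, not the authors' NEST implementation, and every name and constant in it is an assumption.

```python
import numpy as np

def select_channel(t_max=2000, dt=1.0, tau=20.0, w_inh=1.5,
                   threshold=0.8, seed=None):
    """Two leaky rate units receive identical noisy drive and inhibit each
    other; small fluctuations are amplified into a winner whose action is
    initiated once its activity crosses the threshold.
    Returns (winning channel, initiation time)."""
    rng = np.random.default_rng(seed)
    g = np.zeros(2)
    for step in range(int(t_max / dt)):
        drive = rng.poisson(5, size=2) * 0.2     # same input statistics per channel
        inhibition = w_inh * g[::-1]             # mutual inhibition between channels
        g = np.clip(g + dt / tau * (-g + drive - inhibition), 0.0, None)
        if g.max() > threshold:
            return int(np.argmax(g)), step * dt
    return -1, t_max                             # no action initiated

# With identical inputs, both the winner and the initiation time vary
# from run to run, mimicking probabilistic selection and variable timing.
wins = [select_channel(seed=s)[0] for s in range(100)]
print("channel 0 chosen:", wins.count(0), "out of", len(wins))
```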
Fig. 7. Selection probability and action timing. (A) Selection probability depends on differences in the strength of excitatory synaptic inputs to MSi neurons between channels 1 and 2. Other conditions are the same as in Figure 6. (B) An abrupt increase in the synaptic strength of sensory input to the MSd reduces variation of the SC response time. Upper panels: time course of mean conductance of excitatory synaptic input to MSd neurons for each channel. Note the abrupt change in mean conductance of excitatory synaptic input to MSd1. Other conditions are the same as in Figure 6. Bottom panel: histograms of the SC response initiation times. Note the sharp distribution of the initiation time for channel 1, coinciding with the temporal increase in synaptic strength. In contrast, the distribution for channel 2 was broad and lacked temporal modulation.
Episodic discharges of MS neurons that precede the selection or initiation of an action are critical for these functions and reflect the information carried by a portion of the incoming cortico-striatal inputs during the discharge. In other words, sets of information are clipped from continuous noisy input streams in an action-relevant manner and are then ready to be learned through dopamine-dependent synaptic plasticity. The distribution of initiation times of SC responses shows that the initiation timing of actions in this model varies even under temporally constant cortico-striatal input (Fig. 6D). The history of individual choices over trials shows random action selection under this input condition (Fig. 6E). These output properties of the model enable different actions to be tested at different times without salient inputs selective for an action or for a specific time. Modulating the synaptic strength of cortico-striatal inputs to MSi neurons, which raises the episodic discharge firing rate of MSi neurons before symmetry breaking between the GPe activities of the different channels, affects the selection probability of an action (Fig. 7A). A phasic increase in the synaptic strength of cortico-striatal inputs to MSd neurons reduces the variation of the initiation time (Fig. 7B). Dopamine-dependent synaptic plasticity of cortico-striatal synapses may thus adaptively tune selection probabilities and action timing.

Discussion of the BG Spiking Neural Network Model. We emphasize three characteristics of our BG spiking neural network model. The first property is that, before learning, the operation of the model depends principally on intrinsic activity rather than on external drives. This ensures autonomous operation that is not dependent on the statistical properties of the input information. The
second property is that the intrinsic activity is random; thus, the model is capable of probabilistic action selection and variable execution times. The third property is that the segmentation of the input space is mechanistically linked to the executed action, supporting action-relevant segmentation of the state space. We stress that the selection and initiation of an action depend on synergy between intrinsic random activity and external inputs. Even when sensory inputs are ambiguous and there are no salient inputs selective for an action or a timing, the model can select and initiate an action randomly, mainly owing to its intrinsic random activity. If there are salient, selective inputs, the model can exploit these inputs as well. Dopamine-dependent plasticity may modulate the balance between intrinsic random activity and external inputs and thus shift behavior from exploration to exploitation.
4 Discussion

Here, we have described BG models that work toward an autonomous behavior learning system. To develop such a system, we had to overcome several open problems in RL systems. The BG system model focuses on the problems of relating model-based RL and model-free RL in a single system. The multiple modular system architecture and the input representation by a self-organizing map were designed using ideas from the parallel loop architecture between the BG and the cortex. The internal reward was introduced to achieve goals in an uncertain environment. The BG spiking neural network model focused on the problems of input space representation in the parallel loop and the neural mechanism of timing. Because we have little evidence about these phenomena, we proposed hypotheses to solve these problems by simulating the biological neural network of the BG. In this section, we discuss our models with reference to the open questions described in Section 2 and attempt to put our models into perspective.

Architecture. We think that architecture is one key to developing an autonomous learning system that can be used in real applications. Current machine learning systems typically assume supervisory support from a human in designing the system, selecting the training data, choosing the algorithm, and setting the parameters and the goal of learning. An escape from this heavy dependence on human supervision is required. We think that understanding the mechanisms that relate model-based RL and model-free RL will increase our understanding of the computational architecture of an autonomous learning system. We have used these core ideas by introducing cortico-BG parallel loops into the architecture of our models. The BG system model demonstrates that autonomous selection of the input space can be achieved using neural networks within our architecture. An important advance over previous work is that we combine the network learning rule with multiple modular RL. The system works as an autonomous controller of parallel learning modules and can learn to select the relevant input space even when it faces different types of problems, such as the MDP and the HOMDP. The effectiveness of multiple modules running in parallel has been demonstrated by previous studies of BG models [33][34][35][36][37]. Our model demonstrates that it is possible to autonomously select input space that is
The BG spiking neural network model proposes a novel idea for data selection from high-dimensional input. The selection learning is performed in an output-driven manner [55]; that is, sensory-action relationship learning in this model is initiated by the action. This differs from attentional learning, in which sensory learning occurs in response to a specific feature, such as a cue, and the cue is related to the relevant action only later. The advantage of this selection method is that it can find a potential sensory cue that temporally correlates with the successful action. This mechanism should produce the timely execution of an action even when the system has difficulty finding a salient cue for initiating the action. This method may also explain perceptual learning without perception [56], because the system learns the sensory input whenever an action succeeds, regardless of the input's relevance to the task.

The Role of Neuromodulators

Modulatory influences in the nervous system play an important role in learning. Our BG system model also demonstrates the effectiveness of an internal reward for enhancing learning speed, through an automatic internal reward generation method that uses the error of predictions for observed states (sketched below). Intrinsically motivated reinforcement learning (IMRL) [57][58] is also effective. However, the events that generate intrinsic rewards in those studies are hand-designed, and the amount of the intrinsic reward is proportional to the error in the prediction of the salient event, according to the learned option model for that event. The definition of intrinsic reward is still unclear. The term intrinsic reward has been used in psychology to describe a reward internal to a person, such as satisfaction or a sense of accomplishment. The psychological understanding is that intrinsic rewards contribute to exploration, play, and other curiosity-driven behaviors in the absence of an explicit reward. While these behaviors have generally been examined in humans, internally generated rewards are also effective in other animals, including dogs, cats, and rats. The internally generated rewards in these animals rely more on 'drives'. An infant playing with a toy car requires intrinsic rewards generated by prediction error, whereas a dog playing with a moving ball requires an internal reward for tracking a moving object. The latter may be accomplished by brainstem-level processing through the superior colliculus-substantia nigra pathway, in which a salient visual stimulus detected by the superior colliculus activates dopamine neurons in the substantia nigra [26]. The internal reward we propose in this chapter falls somewhere in between these two examples: it works on a predictive process beyond the brainstem, with the prediction performed in the BG after some experience. More sophisticated predictions are accomplished in the cortex and might connect to the autonomous setting of a target in the prefrontal cortex as a virtual prediction.
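A minimal sketch of the internal reward idea follows, under the simplest reading of the text: the reward used for learning is the external reward plus a bonus proportional to the error of the prediction of the observed state. The tabular predictor, the function name, and the bonus weight beta are illustrative assumptions, not the published method.

```python
import numpy as np

def internal_reward_step(pred, s, s_next, r_ext, beta=0.5, lr=0.1):
    """Return the reward used for learning (external reward plus a bonus
    proportional to the state-prediction error) and update the predictor,
    so the bonus fades as the environment becomes familiar.

    pred: row-stochastic matrix with pred[s, s'] ~ p(s'|s)."""
    error = 1.0 - pred[s, s_next]                    # surprise at observed state
    pred[s] += lr * (np.eye(pred.shape[0])[s_next] - pred[s])
    return r_ext + beta * error

# An unfamiliar transition earns a large bonus; after repeated experience the
# bonus decays, shifting the agent from exploration toward the external reward.
pred = np.full((4, 4), 0.25)
print(internal_reward_step(pred, 0, 1, r_ext=0.0))   # 0.375: novel transition
for _ in range(50):
    internal_reward_step(pred, 0, 1, r_ext=0.0)
print(internal_reward_step(pred, 0, 1, r_ext=0.0))   # ~0.002: now familiar
```

The decay of the bonus with experience is what makes such a prediction-error reward useful for an uncertain environment: it rewards visiting poorly predicted states early on and vanishes once the predictions are accurate.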
Neural Mechanisms of Timing

The BG spiking neural network model proposes a mechanism for deciding execution timing. However, the timescale of execution in the model is limited (Fig. 6D), whereas the timescales on which an animal performs complex cognitive tasks in a changing environment differ widely. The cortico-BG loop is involved in reward prediction at different timescales. What detailed neural network operates on these multiple timescales? This important question requires further study to generate new hypotheses in both experimental and computational neuroscience.

Perspective: Associative Interacting Intelligence

We need an autonomous intelligent system that grows up by learning "sensor-behavior linkages" through active interaction with a real-world environment. Such intelligence could be achieved by the nonlinear interaction of embodied elements in a heterogeneous system architecture. We call this intelligence "associative interacting intelligence" because it is based on closely connected interactions within a system. By examining the process of brain evolution, we can hypothesize that the brain has a development strategy that repeats a two-stage evolutionary process. The first stage is an expansion of scale. The brain begins developing from the brainstem. The brainstem has integrative functions, including cardiovascular control, respiratory control, control of pain sensitivity, alertness, and consciousness, and it is thought to be a central contributor to embodiment. An animal can survive if it has a complete brainstem. To survive in a wider range of environments, the evolutionary strategy first expands the scale of the brainstem to process more information. The second stage is the creation of a new brain structure. Homogeneous expansion has its limits, so evolution invents the cerebellum, whose neural structure differs from that of the brainstem. It brings a new quality to the brain: more sophisticated and accurate real-time movements. The evolutionary strategy then again applies the first stage, expansion, to the cerebellum. While the scale of the cerebellum expands, it interacts with other parts of the brain and develops new functions through experience. By following this strategy, the primate brain has come to contain a variety of different neural structures, including the brainstem, cerebellum, thalamus, hypothalamus, basal ganglia, amygdala, hippocampus, and neocortex. The evolutionarily newer structures, such as the hippocampus and neocortex, have strong memory abilities and take charge of higher-level intelligence. Evolutionarily older parts of the brain take charge of more embodied intelligence because they interact more closely with the body. A living entity is not given prior specifications for controlling its body; thus, it must adaptively determine the specifications that are relevant in its living environment. This process of determining specifications is the process of embodiment and is the strategy living entities use to survive in a variety of environments. A necessary aspect of this process is the performance of actions in the environment. The living entity actively acquires embodied behaviors through trial and error, relying on its innate tendency to invoke behaviors spontaneously.
We think that the older parts of the brain require an open-architecture system to support this emergent output while interacting with the environment [55][59].
Problems can be solved if the underlying basic knowledge is well understood. Acquisition of the embodied behaviors described above can supply the "sensor-behavior linkages" that serve as basic knowledge for living. We think that the function of solving applied problems is performed by the evolutionarily new parts of the brain. These new parts determine the optimal behavior for a problem by combining pieces of basic knowledge. Because the number of such combinations explodes, trial and error is inefficient here, unlike spontaneous behavior selection in the old parts of the brain. Therefore, the new parts of the brain create several hypotheses, perform internal simulations to evaluate these hypotheses, and design an optimal behavioral strategy [60]. This behavioral strategy is not a specific behavior plan but rather a set of combinations of assumed situations and the behaviors that could be taken in each situation. We think this strategy is sent from the new parts to the old parts of the brain. The evolutionarily old parts can then adaptively select behaviors on the basis of the strategy, which may include longer-term planning. The models proposed in this chapter focus on small sets of problems associated with autonomous behavior learning; they are steps toward associative interacting intelligence. Although the scale of the models is small, they have the required close interactions at the neuron and module levels. These models have helped to clarify some of the open questions in our field.
5 Conclusion

We have proposed BG models for autonomous behavior learning. This work is still at an early stage of development and has great potential. The BG system model illustrates the effectiveness of internal state representation and internal reward for achieving a goal in fewer trials. The BG spiking neural circuit model has the capacity for probabilistic action selection and also shows that selection probability and execution timing can be modulated. As discussed in the previous section, there are still many issues to be addressed and resolved. For our models to be fully tested, they must be mounted on a body so that autonomous behavior learning can be demonstrated. Future success will depend on multi-disciplinary collaborations and advances in each of the related research areas.
Acknowledgement

We thank Professors Kenji Doya, Tomoki Fukai, and Edgar Koerner for valuable discussions and comments.
References

1. Reiner, A., Medina, L., Veenman, C.L.: Structural and functional evolution of the basal ganglia in vertebrates. Brain Res. Brain Res. Rev. 28(3), 235–285 (1998)
2. Barto, A.G., Sutton, R.S., Anderson, C.W.: Neuron-like adaptive elements that can solve difficult learning control problems. IEEE Trans. on Systems, Man, and Cybernetics SMC-13, 834–846 (1983)
3. Montague, P.R., Dayan, P., Sejnowski, T.J.: A framework for mesencephalic dopamine systems based on predictive Hebbian learning. J. Neurosci. 16, 1936–1947 (1996)
4. Schultz, W., Dayan, P., Montague, P.R.: A neural substrate of prediction and reward. Science 275(5306), 1593–1599 (1997)
5. Berns, G.S., McClure, S.M., Pagnoni, G., Montague, P.R.: Predictability modulates human brain response to reward. J. Neurosci. 21, 2793–2798 (2001)
6. Haruno, M., Kuroda, T., Doya, K., Toyama, K., Kimura, M., Samejima, K., Imamizu, H., Kawato, M.: A neural correlate of reward-based behavioral learning in caudate nucleus: a functional magnetic resonance imaging study of a stochastic decision task. J. Neurosci. 24, 1660–1665 (2004)
7. McHaffie, J.G., Jiang, H., May, P.J., Coizet, V., Overton, P.G., Stein, B.E., Redgrave, P.: A direct projection from superior colliculus to substantia nigra pars compacta in the cat. Neurosci. 138, 221–234 (2006)
8. Balleine, B.W., Delgado, M.R., Hikosaka, O.: The role of the dorsal striatum in reward and decision-making. J. Neurosci. 27, 8161–8165 (2007)
9. Niv, Y., Schoenbaum, G.: Dialogues on prediction errors. Trends Cogn. Sci. 12(7), 265–272 (2008)
10. Dayan, P., Niv, Y.: Reinforcement learning: the good, the bad and the ugly. Curr. Opin. Neurobiol. 18(2), 185–196 (2008)
11. Daw, N.D., Niv, Y., Dayan, P.: Uncertainty-based competition between prefrontal and dorsolateral striatal systems for behavioral control. Nat. Neurosci. 8, 1704–1711 (2005)
12. Sutton, R.S.: Integrated architectures for learning, planning, and reacting based on approximating dynamic programming. In: Proc. of the Seventh International Conference on Machine Learning, Austin, TX (1990)
13. Barto, A.G., Bradtke, S.J., Singh, S.P.: Learning to act using real-time dynamic programming. Artif. Intell. 72(1), 81–138 (1995)
14. Sutton, R.S.: Learning to predict by the method of temporal differences. Machine Learning 3(1), 9–44 (1988)
15. Watkins, C.J.C.H., Dayan, P.: Q-learning. Machine Learning 8(3), 279–292 (1992)
16. Coutureau, E., Killcross, S.: Inactivation of the infralimbic prefrontal cortex reinstates goal-directed responding in overtrained rats. Behav. Brain Res. 146, 167–174 (2003)
17. Balleine, B.W., Killcross, A.S., Dickinson, A.: The effect of lesions of the basolateral amygdala on instrumental conditioning. J. Neurosci. 23, 666–675 (2003)
18. Balleine, B.W.: Neural bases of food-seeking: affect, arousal and reward in corticostriatolimbic circuits. Physiol. Behav. 86, 717–730 (2005)
19. Valentin, V.V., Dickinson, A., O'Doherty, J.P.: Determining the neural substrates of goal-directed learning in the human brain. J. Neurosci. 27, 4019–4026 (2007)
20. Alexander, G.E., et al.: Parallel organization of functionally segregated circuits linking basal ganglia and cortex. Annu. Rev. Neurosci. 9, 357–381 (1986)
21. Parent, A., Hazrati, L.N.: Functional anatomy of the basal ganglia. I. The cortico-basal ganglia-thalamo-cortical loop. Brain Res. Rev. 20, 91–127 (1995)
22. Middleton, F.A., Strick, P.L.: Basal ganglia and cerebellar loops: motor and cognitive circuits. Brain Res. Rev. 31, 236–250 (2000)
23. Montague, P.R., Dayan, P., Sejnowski, T.J.: A framework for mesencephalic dopamine systems based on predictive Hebbian learning. J. Neurosci. 16, 1936–1947 (1996)
24. Schultz, W., Dayan, P., Montague, P.R.: A neural substrate of prediction and reward. Science 275(5306), 1593–1599 (1997)
25. Matsumoto, M., Hikosaka, O.: Lateral habenula as a source of negative reward signals in dopamine neurons. Nature 447, 1111–1115 (2007)
26. Comoli, E., Coizet, V., Boyes, J., Bolam, J.P., Canteras, N.S., Quirk, R.H., Overton, P.G., Redgrave, P.: A direct projection from superior colliculus to substantia nigra for detecting salient visual events. Nat. Neurosci. 6(9), 974–980 (2003)
27. Zhou, F.M., Liang, Y., Dani, J.A.: Endogenous nicotinic cholinergic activity regulates dopamine release in the striatum. Nat. Neurosci. 4(12), 1224–1229 (2001)
28. Partridge, J.G., Apparsundaram, S., Gerhardt, G.A., Ronesi, J., Lovinger, D.M.: Nicotinic acetylcholine receptors interact with dopamine in induction of striatal long-term depression. J. Neurosci. 22(7), 2541–2549 (2002)
29. Tanaka, S.C., Doya, K., Okada, G., Ueda, K., Okamoto, Y., Yamawaki, S.: Prediction of immediate and future rewards differentially recruits cortico-basal ganglia loops. Nat. Neurosci. 7(8), 887–893 (2004)
30. Graybiel, A.M.: Habits, rituals, and the evaluative brain. Annu. Rev. Neurosci. 31, 359–387 (2008)
31. Pasupathy, A., Miller, E.K.: Different time courses of learning-related activity in the prefrontal cortex and striatum. Nature 433, 873–876 (2005)
32. Redgrave, P., Prescott, T.J., Gurney, K.: The basal ganglia: a vertebrate solution to the selection problem? Neurosci. 89, 1009–1023 (1999)
33. Gurney, K., Prescott, T.J., Redgrave, P.: A computational model of action selection in the basal ganglia. II. Analysis and simulation of behaviour. Biol. Cybern. 84, 411–423 (2001)
34. Prescott, T.J., Gurney, K., Montes-Gonzalez, F., Humphries, M.D., Redgrave, P.: The robot basal ganglia: action selection by an embedded model of the basal ganglia. In: Nicholson, L., Faull, R. (eds.) Basal Ganglia VII, pp. 349–356. Plenum Press
35. Humphries, M.D., Stewart, R.D., Gurney, K.N.: A physiologically plausible model of action selection and oscillatory activity in the basal ganglia. J. Neurosci. 26(50), 12921–12942 (2006)
36. Bogacz, R., Gurney, K.: The basal ganglia and cortex implement optimal decision making between alternative actions. Neural Comput. 19, 442–477 (2007)
37. Doya, K., Samejima, K., Katagiri, K., Kawato, M.: Multiple model-based reinforcement learning. Neural Comput. 14(6), 1347–1369 (2002)
38. Hallett, M., Shahani, B., Young, R.: EMG analysis of patients with cerebellar lesions. Journal of Neurology, Neurosurgery, and Psychiatry 38, 1163–1169 (1975)
39. Hore, J., Wild, B., Diener, H.C.: Cerebellar dysmetria at the elbow, wrist, and fingers. J. Neurophysiol. 65, 563–571 (1991)
40. Jueptner, M., Rijntjes, M., Weiller, C., Faiss, J.H., Timmann, D., Mueller, S., Diener, H.C.: Localization of cerebellar timing processes using PET. Neurology 45, 1540–1545 (1995)
41. O'Boyle, D.J., Freeman, J.S., Cody, F.W.J.: The accuracy and precision of timing of self-paced, repetitive movements in subjects with Parkinson's disease. Brain 119, 51–70 (1996)
42. Lo, C.-C., Wang, X.-J.: Cortico-basal ganglia circuit mechanism for a decision threshold in reaction time tasks. Nat. Neurosci. 9, 956–963 (2006)
43. Maimon, G., Assad, J.: A cognitive signal for the proactive timing of action in macaque LIP. Nat. Neurosci. 9(7), 948–955 (2006)
44. Doya, K.: What are the computations of the cerebellum, the basal ganglia, and the cerebral cortex? Neural Netw. 12, 961–974 (1999)
45. Romanelli, P., Esposito, V., Schaal, D.W., Heit, G.: Somatotopy in the basal ganglia: experimental and clinical evidence for segregated sensorimotor channels. Brain Res. Brain Res. Rev. 48, 112–128 (2005)
46. Middleton, F.A., Strick, P.L.: Basal ganglia and cerebellar loops: motor and cognitive circuits. Brain Res. Brain Res. Rev. 31, 236–250 (2000)
47. Takeuchi, J., Shouno, O., Tsujino, H.: Modular neural networks for reinforcement learning with temporal intrinsic rewards. In: Proc. of 2007 International Joint Conference on Neural Networks (IJCNN) (2007)
48. Jaeger, H.: The 'echo state' approach to analysing and training recurrent neural networks. GMD Report 148, German National Research Center for Information Technology (2001)
49. Sutton, R.S., Barto, A.G.: Reinforcement Learning: An Introduction. MIT Press, Cambridge (1998)
50. Nishida, S., Ishii, K., Furukawa, T.: An online adaptation control system using mnSOM. In: King, I., Wang, J., Chan, L.-W., Wang, D. (eds.) ICONIP 2006. LNCS, vol. 4232, pp. 935–942. Springer, Heidelberg (2006)
51. Schmidhuber, J.: Curious model-building control systems. In: Proc. International Joint Conference on Neural Networks (IJCNN 1991), pp. 1458–1463 (1991)
52. Oudeyer, P.Y., Kaplan, F., Hafner, V.V.: Intrinsic motivation systems for autonomous mental development. IEEE Trans. Evol. Comput. 11(1), 265–286 (2007)
53. Plenz, D., Kitai, S.T.: A basal ganglia pacemaker formed by the subthalamic nucleus and external globus pallidus. Nature 400, 677–682 (1999)
54. Diesmann, M., Gewaltig, M.-O.: NEST: an environment for neural systems simulations. In: Forschung und wissenschaftliches Rechnen, Beiträge zum Heinz-Billing-Preis 2001, pp. 43–70. Ges. für Wiss. Datenverarbeitung (2002)
55. Matsumoto, G., Tsujino, H.: Design of a brain computer using the novel principles of output-driven operation and memory-based architecture. In: Ono, T., Matsumoto, G., Llinas, R., Berthoz, A., Norgen, R., Nishijo, H., Tamura, R. (eds.) Cognition and Emotion in the Brain, pp. 529–546. Elsevier Science B.V., Amsterdam (2003)
56. Watanabe, T., Nanez, J.E., Sasaki, Y.: Perceptual learning without perception. Nature 413, 844–848 (2001)
57. Barto, A.G., Singh, S., Chentanez, N.: Intrinsically motivated learning of hierarchical collections of skills. In: Proc. of the 3rd International Conference on Developmental Learning (ICDL) (2004)
58. Singh, S., Barto, A.G., Chentanez, N.: Intrinsically motivated reinforcement learning. In: Advances in Neural Information Processing Systems, vol. 17, pp. 1281–1288. MIT Press, Cambridge (2005)
59. Tsujino, H.: Output-driven operation and memory-based architecture principles embedded in a real-world device. J. Integr. Neurosci. 3(2), 133–142 (2004)
60. Koerner, E., Tsujino, H., Masutani, T.: A cortical-type modular neural network for hypothetical reasoning. Neural Netw. 10, 791–814 (1997)
Author Index
Deco, Gustavo 31
Doya, Kenji 278
Eggert, Julian 215
Elfwing, Stefan 278
Floreano, Dario 303
Goerick, Christian 192
Gómez, Gabriel 66
Grimes, David B. 103
Gritti, Tommaso 303
Hanheide, Marc 139
Haschke, Robert 84
Herrmann, Christoph S. 314
Howard, Matthew 151
Jost, Jürgen 51
Körner, Edgar 1
Ohl, Frank W. 314
Petkos, Giorgios 151
Pfeifer, Rolf 66
Rao, Rajesh P.N. 103
Ritter, Helge 84
Rohlfing, Katharina J. 139
Rolls, Edmund T. 31
Sagerer, Gerhard 139
Sendhoff, Bernhard 1
Shouno, Osamu 328
Sloman, Aaron 248
Sporns, Olaf 1, 15
Steil, Jochen J. 84
Suzuki, Mototaka 303
Takeuchi, Johane 328
Toussaint, Marc 151
Tsujino, Hiroshi 328
Uchibe, Eiji 278
Vijayakumar, Sethu 151
Wersing, Heiko 215
Wrede, Britta 139