An Imitation-Based Approach to Modeling Homogenous Agents Societies

Goran Trajkovski
Towson University, USA
IDEA GROUP PUBLISHING Hershey • London • Melbourne • Singapore
Acquisitions Editor: Michelle Potter
Development Editor: Kristin Roth
Senior Managing Editor: Jennifer Neidig
Managing Editor: Sara Reed
Copy Editor: Holly Powell
Typesetter: Marko Primorac
Cover Design: Lisa Tosheff
Printed at: Yurchak Printing Inc.
Published in the United States of America by
Idea Group Publishing (an imprint of Idea Group Inc.)
701 E. Chocolate Avenue
Hershey PA 17033
Tel: 717-533-8845
Fax: 717-533-8661
E-mail: [email protected]
Web site: http://www.idea-group.com

and in the United Kingdom by
Idea Group Publishing (an imprint of Idea Group Inc.)
3 Henrietta Street
Covent Garden
London WC2E 8LU
Tel: 44 20 7240 0856
Fax: 44 20 7379 0609
Web site: http://www.eurospanonline.com

Copyright © 2007 by Idea Group Inc. All rights reserved. No part of this book may be reproduced, stored or distributed in any form or by any means, electronic or mechanical, including photocopying, without written permission from the publisher.

Product or company names used in this book are for identification purposes only. Inclusion of the names of the products or companies does not indicate a claim of ownership by IGI of the trademark or registered trademark.

Library of Congress Cataloging-in-Publication Data

Trajkovski, Goran, 1972-
    Imitation-based approach to modeling homogenous agents societies / Goran Trajkovski.
        p. cm.
    ISBN 1-59140-839-3 (hardcover) -- ISBN 1-59140-840-7 (softcover) -- ISBN 1-59140-841-5 (ebook)
    1. Artificial intelligence--Mathematical models. 2. Artificial intelligence--Social aspects. 3. Fuzzy mathematics. 4. Fuzzy systems. 5. Cognitive science. I. Title.
Q335.T728 2007
006.3--dc22
2006010086

British Cataloguing in Publication Data
A Cataloguing in Publication record for this book is available from the British Library.

All work contributed to this book is new, previously-unpublished material. The views expressed in this book are those of the authors, but not necessarily of the publisher.
An Imitation-Based Approach to Modeling Homogenous Agents Societies

Table of Contents

Foreword ..... ix
Preface ..... xi
Acknowledgments ..... xxiv

Section I: Theory

Chapter I. On Our Four Foci ..... 1
    Abstract ..... 1
    Introduction ..... 1
    Agent(s) ..... 2
    Environmental Representation(s) in Agents ..... 6
    Multi-Agent Systems and Societies of Agents ..... 8
    Imitation as a Phenomenon ..... 9
    Problems Discussed in This Book ..... 9
    References ..... 10

Chapter II. On Agency ..... 14
    Abstract ..... 14
    Introduction ..... 14
    The Key Notions: Expectancy and Interaction ..... 15
    Surprised Agents ..... 17
    Learning and Using the Expectancies ..... 18
    Conclusion ..... 19
    References ..... 19

Chapter III. On Drives ..... 21
    Abstract ..... 21
    Introduction ..... 21
    On Hunger ..... 22
    Action and Control ..... 23
    Tinbergen's Models ..... 24
    Motivation ..... 25
    References ..... 28

Chapter IV. On IETAL, Algebraically ..... 30
    Abstract ..... 30
    Introduction ..... 30
    Basics of the Algebraic Formalization ..... 31
    Unlearnable Environments ..... 36
    Perceptual vs. Cognitive Aliasing ..... 37
    References ..... 39

Chapter V. On Learning ..... 40
    Abstract ..... 40
    Introduction ..... 40
    Building the Intrinsic Representation ..... 41
    In Quest for Food ..... 47
    Context? ..... 48
    Emotional Context ..... 49
    References ..... 51

Chapter VI. On IETAL, Fuzzy Algebraically ..... 53
    Abstract ..... 53
    Introduction ..... 53
    Structures in Drives, Motivations, and Actions ..... 54
    The New Definition of Agent ..... 56
    References ..... 61

Chapter VII. On Agent Societies ..... 62
    Abstract ..... 62
    Introduction ..... 62
    Imitation Revisited ..... 63
    Mirror Neurons as a Biological Wiring for Imitation ..... 65
    Social Agents ..... 66
    The Agent Societies ..... 67
    References ..... 69

Chapter VIII. On Concepts and Emergence of Language ..... 71
    Abstract ..... 71
    Introduction ..... 71
    Modern Theories of Concept Development ..... 73
    The Environmental Function ..... 75
    Beyond Objects ..... 78
    Emergent Order of Concept Formation ..... 80
    Physical Feature Identification Methods ..... 81
    Representation of Concepts ..... 82
    On Language ..... 83
    Communication and Social Taxonomies ..... 84
    Functional Taxonomy ..... 86
    Language as a Category in MASIVE ..... 87
    Acknowledgments ..... 90
    References ..... 90

Chapter IX. On Emergent Phenomena: If I'm Not in Control, Then Who Is? The Politics of Emergence in Multi-Agent Systems ..... 93
    Abstract ..... 93
    Introduction ..... 94
    Ambiguously Emerging ..... 97
    "Bottom-Up" Programming and the Promise of Emergence ..... 102
    The Emergence of the New Economy ..... 106
    Conclusion ..... 110
    References ..... 112

Section II: Cases

Chapter X. On MASim: A Gallery of Behaviors in Small Societies ..... 116
    Abstract ..... 116
    Introduction ..... 116
    Requirements and Design Specifications: Agents ..... 118
    Environment ..... 123
    Classes ..... 124
    Graphical User Interface ..... 125
    Agents Lost, Agents Found ..... 125

Chapter XI. On a Software Platform for MASIVE Simulations ..... 136
    Abstract ..... 136
    Introduction ..... 137
    Interface ..... 137
    Running the Simulation ..... 143
    System Architecture ..... 146
    Agent Control ..... 151
    World Rendering Representation and Generation ..... 152
    Development Environment and Java 1.5 Migration ..... 153
    Customizing the Agent Simulator ..... 161
    Acknowledgment ..... 166
    References ..... 166

Chapter XII. On a Robotic Platform for MASIVE-Like Experiments ..... 167
    Abstract ..... 167
    Introduction ..... 167
    Hardware Design ..... 168
    Hardware Requirements ..... 170
    Software Design ..... 171
    Software Requirements ..... 173
    Hardware Implementation ..... 176
    Software Implementation ..... 177
    Acknowledgment ..... 183
    References ..... 183

Chapter XIII. On the POPSICLE Experiments ..... 184
    Abstract ..... 184
    Introduction ..... 184
    Emulating Agents on Humans ..... 185
    The Experimental Setup ..... 186
    Frustration in Subjects ..... 190
    e-POPSICLE ..... 191
    Architecture and Specifications ..... 192
    Improving the Motivation of Subjects Using Gaming Strategies ..... 195
    Technical Aspects of the 3D-POPSICLE Project ..... 197
    Creating a World Environment ..... 197
    Creating a Room ..... 198
    Loading Textures ..... 201
    Setting Up a Camera and Display ..... 205
    Lighting the World Environment ..... 208
    Acknowledgment ..... 209
    References ..... 209

Section III: Alternatives

Chapter XIV. On an Evolutionary Approach to Language ..... 212
    Abstract ..... 212
    Evolutionary Algorithms ..... 213
    Genetic Algorithms ..... 214
    Agent Communication Languages ..... 215
    Limitations of Speech-Act-Based Agent Communication Languages ..... 216
    Overview of KQML Specifications ..... 217
    Evolving Vocabularies ..... 218
    Definitions ..... 219
    Properties ..... 222
    Phases of the Evolution of a System with an Evolving Vocabulary ..... 221
    Specifications of Evolving Vocabularies ..... 222
    Experimental Design ..... 223
    Environments ..... 223
    Agent Behavior ..... 224
    Communication ..... 224
    Agent Interaction ..... 224
    Selection and Variation ..... 225
    Variables ..... 226
    Results and Discussion ..... 226
    Future Work ..... 228
    Related Work ..... 228
    Acknowledgment ..... 229
    References ..... 229

Chapter XV. On Future Work ..... 231
    Abstract ..... 231
    Heterogenous Agents ..... 231
    Representations ..... 232
    Emergent Societies through Multi-Agent Systems ..... 234
    The Myth of Baba Yaga and Izbushka ..... 240
    Significance ..... 241
    References ..... 242

Appendix A ..... 244
    Abstract ..... 244
    Graph Structures ..... 244
    Fuzzy Graphs ..... 247
    Fuzzy Relations and Fuzzy Graphs ..... 250
    Random Graphs vs. Fuzzy Graphs ..... 251
    A Sketch of an Application ..... 252
    L-Fuzzy Lattices ..... 257
    Preliminaries ..... 258
    LM Fuzzy Lattices of Type 1 ..... 261
    LO Fuzzy Lattices of Type 1 ..... 265
    LM and LO Fuzzy Lattices of Type 2 ..... 268
    P and R Fuzzy Lattices ..... 271
    Šostak Structures ..... 271
    References ..... 273

Appendix B ..... 276
    Abstract ..... 276
    Brief Overview of Motivation Theory ..... 277
    Psychodynamic Theory ..... 277
    Behaviorism ..... 278
    Evolutionary Psychology ..... 281
    Self-Perceived Fitness (SPFit) Theory ..... 286
    Brain Substrate ..... 286
    Toward a Model of Motivation ..... 288
    Nonlinear Stimulus-Response Functions ..... 291
    Conclusion ..... 299
    References ..... 300

About the Author ..... 304
About the Contributors ..... 305
Index ..... 307
Foreword
Question: What is more difficult than writing a book on a multi-disciplinary subject spanning from multi-agent systems, algebraic models, simulations, and cognitive and developmental robotics to cognitive and social psychology, the theory of motivation, and the evolution of language? Answer: writing the foreword for such a book! Dr. Trajkovski and his collaborators, in an intellectual tour de force of about 300 pages, have managed to touch upon a majority of the hot topics in the aforementioned areas, voice their opinions, and give their own contribution.

The unifying theme of the book is the notion of the agent. In a bottom-up approach, in the first section of the book, the author lays down the basics of the Interactivist-Expectative Theory of Agency and Learning (IETAL). In accordance with his construal of the notion of the agent, it is impossible to define an agent without its environment. The theory of agency is first given as a narrative, and then an algebraic formalization is given. The book can be read on many levels, and if you are, say, mathematically challenged, it is okay to skip the major part of Chapters IV-VI; you will still have a good idea of what IETAL is about. As the agent is also a social construct, the author offers a theory of multi-agent systems embodied in Multi-Agent Systems Simulation (MASim). He does not shy away from deep psychological questions and includes a discussion of developmental issues (with special accent on Piaget's theory) such as motivation, emotions, drives, inborn knowledge, and the like. The subtleties of the notion of emergence in human and artificial agent societies are given a special chapter (written by Samuel G. Collins), where he succeeds in commenting on most of the current theorists of emergence and puts forward his own critical contribution to the debate and to the book.

The second section of the book is by far the most technically oriented; you will learn about all the intricacies of writing a simulator for IETAL or MASim experiments, as well as about experiments with human subjects who inhabit similar virtual environments. The latter, the POPSICLE experiment (POPSICLE: Patterns in Orientations: Pattern-Aided Simulated Interaction Context Learning Experiment), is hinted at in the first section. In this section of the book, we also learn about e-POPSICLE, a version of the program for conducting online experiments with remote subjects, as well as about several hardware robotic platforms used as proof-of-concept tools.

The third and final section of the book is devoted to the discussion of the current and future work of the author. A major part of it is about types of interactions (linguistic, non-linguistic, among agents of the same and of different types, human-machine interactions, etc.) and the emergence of language in multi-agent systems. A special place is given to the use of spatial metaphors in descriptions of online interactions. Is it possible to avoid them? What happens if we start using an application without prior knowledge of the interface or of its use? What will emerge from this type of human-machine (or human-machine-human) interaction? These are the questions that the author embarks on with his newest experimental setup, Izbushka! Like every decent book, this one closes with more questions than answers! Understandably, such a book does not make for an easy read, but it certainly makes for a gratifying experience!

Georgi Stojanov
San Diego
January 2006
Preface
In a nutshell, this book treats the methods, structures, and cases concerning the representation(s) of the environment in multi-agent systems. The individual agents in the systems are based on the Interactivist-Expectative Theory of Agency and Learning (IETAL). During their sojourn in the environment, the agents interact with it and build their intrinsic representation of it. Interagent communication is solved via imitation conventions of the homogenous agents. Due to the specific interrelation of the drives, motivations, and actions, a multitude of fuzzy structures is used as a basis for the formalization of the theory. Original results in the theory of fuzzy graphs and fuzzy algebraic structures, valued by lattices, posets, and relational structures, are also given. Algorithms for detecting the learnability of the environment are presented, as well as a discussion of the concept of context within this theory. The phenomenology of the drives and percepts, as well as of imitation, is surveyed in detail. Solutions for interagent communication and for the emergence of language in the system are also discussed. The original experiments with humans investigate the status of some key notions of our theories in human subjects. We also present a variety of hardware and software solutions that have served us in our research and that are flexible enough to serve other researchers in similar experiments, whether they are simulations, emulations on robotic agents, or based on harvesting data from human individuals or groups. In our discussions we focus on the important biological concepts in humans that our theories are based on. We discuss actions, motivations, and drives in human subjects, explore concept formation in the related literature, and give our own take on it, as that serves as the base of our discussion of the emergence of language in our artificial societies.

There are four key notions that we observe in this volume: agent, environment, imitation, and multi-agent systems. In the existing literature, there have been numerous attempts to define the term agent. The existing definitions span from laconic definitions of online software agents to those that define the term so that it serves a specific goal in the context in which it is being discussed. So, authors use it as broadly or as narrowly as needed within the context of the topic they are discussing. We are adopting a fairly general — in our opinion — axiomatic definition of an agent. For an artifact (biological or man-made) to be considered an agent, we require it to be in possession of the following three properties: (1) autonomy, as the ability of the agent to function on its own with no outside help; (2) proactivity, as the ability of the agent to undertake actions to satisfy its goals; and (3) purpose, as the property that goals, beliefs, and/or desires can be attributed to it.

The agents inhabit a given environment that they interact with. Through interaction with the environment they are in, they learn what to expect from it (but sometimes they end up being surprised, or experiencing pain when bumping into obstacles). If more than one agent inhabits the same environment, we talk of a multi-agent system inhabiting that environment. Actually, if the agents only view the other agents as a part of the environment, without communicating with each other, it would not make sense to talk about systems, as the term itself, from the perspective of the science of cybernetics, assumes that the parts of a collection work together. Without the agents working together, we would be observing a collection of agents that only notice one another as part of a changing, dynamic environment, and most of the time they would disrupt each other's expectations of the environment. When many agents that are built the same, have the same abilities, and can communicate among themselves are put together in the same environment, we can sit back and observe emergent phenomena in a society of homogenous agents. We live in such an environment, indeed, with so many other fellow humans. But we do not communicate with other humans only. If we look deeper into our environment, we might identify agents that are not human but with which some kind of interaction and communication still goes on. For example, although my pets are not human, I can still communicate with them. Systems where the agents are not necessarily homogenous will be referred to as heterogeneous.
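For readers who prefer code to prose, the three-property definition can be rendered as a minimal interface. The sketch below is only an illustration under assumed names (it is not the formalism developed in Chapters IV-VI): an implementation must act on its own percepts, pursue a drive of its own, and expose the goals we attribute to it.

// Minimal sketch of the three-property agent definition. Hypothetical names,
// not the book's formalism: an Agent senses and acts with no outside help
// (autonomy), acts in service of its own goals (proactivity), and exposes the
// goals/drives we can attribute to it (purpose).
import java.util.List;

interface Agent<Percept, Action, Drive> {

    // Autonomy: one step of the sense-act cycle, performed without outside help.
    Action step(Percept currentPercept);

    // Proactivity: the agent undertakes actions to satisfy its own goals;
    // here it reports the drive it is currently trying to satisfy.
    Drive activeDrive();

    // Purpose: the goals, beliefs, and/or desires we can attribute to the agent.
    List<Drive> drives();
}

Anything that cannot fill these three slots (for example, a sensor that reports values but never selects actions) would not count as an agent under this definition.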
This book will concentrate on uniagent environments and then extend the theory to multi-agent systems (the term multi-agent societies is — after all — an oxymoron) that consist of homogenous agents. The discussions may lead to possible generalizations into the area of heterogeneous systems, though. Systems with one agent in the environment that we observe will be referred to as uniagent environments.

Why this interest in multi-agent systems? Well, in the past few years we have witnessed an increase in their importance in many of the computer sciences (artificial intelligence [AI], the theory of distributed systems, robotics, artificial life, etc.). The main reason for this appeal — I think — is the fact that they introduce the problem of collective intelligence and of the emergence of patterns and structures through interaction. Classical AI has attempted to study many of the relevant phenomena. Despite limited success in several isolated fields, this discipline seems to have always been choked by its own unsuccessful formalisms, full of vicious traps… Research in multi-agent systems requires an integrated, rather than analytical, approach. The majority of research work in this domain is directed toward two main goals. The first is carrying out a theoretical and experimental analysis of the self-organizing mechanisms at work during the interaction of two (or more) agents. The second is the creation of distributed artifacts able to cope with complex tasks via collaboration and interaction amongst themselves.

Several years ago, when we were investigating interagent communication, the imitation phenomenon found its natural place in our multi-agent theory. Imitation had long been put aside as nonintelligent learning (if learning at all), but with the 1996 discovery of mirror neurons by Rizzolatti, we ended up convinced that — after all — it is far from a trivial phenomenon, but rather a creative mapping of another's actions and their consequences onto one's self. The mirror neurons, located in the frontal cortex, fire both when the subject executes a motor action and when it watches somebody else perform it. It seems that, after all, we are wired for imitation. We believe that is how we learn at a very early age.

In this book, we attempt to establish a mechanism of interagent communication in the multi-agent environment, thus expanding IETAL to multi-agent systems with linguistic competence; to formalize the said methods by means of classical algebraic theories and fuzzy algebraic structures; to study the structures that emerge when the agents interact with the environment; and to study the emerging social phenomena in the system. What we discuss in this book would not be classified as mainstream AI by most researchers. We rather offer solutions for some problems that AI was either not successful in solving or did not consider at all. This is a contribution towards the sciences that emerged more than a decade ago from classical AI, more specifically the cognitive sciences, and especially developmental and cognitive robotics and multi-agent systems.
The book is organized in three main sections followed by two appendices: Theory (Chapters I-IX), Cases (Chapters X-XIII), and Alternatives (Chapters XIV-XV). In the remainder of this preface we briefly summarize the contents of the chapters, thus painting the big picture; the details are in the respective chapters. The appendices have been added to supplement the chapters and have been placed at the end of the book.
Section I: Theory

In this section we give an overview of our theories and views of agency, learning in agents, multi-agency, and emergent phenomena in multi-agent systems.

In Chapter I, On Our Four Foci, we introduce the most important basic concepts for this book. We give an overview of the notion of an agent, of a multi-agent society, of the interaction between the agent and the environment, and of the interaction between agents, and we put them in the relevant historic and theoretical framework. We focus here on the four foci of our theories: agent, environment, imitation, and multi-agent system.

Chapter II, On Agency, overviews the IETAL theory, developed from the need to offer solutions to well-known problems in autonomous agent design. As the name of the theory indicates, the key concepts in the uniagent theory are those of expectation and interaction. With the expectancies we emphasize the notion of being in the world. They are defined as the ability of the autonomous agent to expect (anticipate) the effects of its actions in the world. The second key notion in the theory is the interaction with the environment: the expectancies are built through interaction with the environment the agents inhabit.

In Chapter III, On Drives, we give the relation between drives, motivations, and actions, as well as a short overview of some key interagent and social concepts in multi-agent systems. We often distinguish between consummatory acts and appetitive behaviors. The former refer to the intention to satisfy a tendency, and the latter are applied in the active phase of purposeful behavior. The taxings are on the very border between the two; they are the behaviors that orient and align the agents towards (or away from) the source of stimulation. The agent gains information about the world via its percepts and readies actions to meet its goals. The perception system is the gate between the world and the agent. The standard categorization of the motivations is as follows: personal motivations, the object of which is the agent itself, aiming at satisfying its needs or relieving itself from obligations imposed on it; motivations from the environment, produced by perceptions of elements of the environment; social motivations, imposed by the meta-motivations of the designer;
and relational motivations, which depend on the tendencies of the other agents. A separate class of motivations (according to another classification criterion) would be the motivations of agreement, resulting from the agreement of the agent to undertake a given responsibility. Appendix B complements this chapter, as it focuses on what motivates humans. It raises questions researched at length in the psychological literature. What motivates people? Why do they behave as they do? For that matter, why do people do anything at all? Questions like these have persisted over 100 years of psychology despite decades upon decades of research to answer them. Waves of academic thinking have addressed these issues, with each new school of thought providing different answers, at least in form if not in function.

When the agents are more complex, the relationship between the drives and tendencies becomes more complex as well. In simpler agents, among which we include Petitagé, the tendencies are the results of combinations of the inner stimuli and the stimuli of the environment. As the complexity rises, the systems are able to combine the tendencies into higher-order tendencies. The main problems that this theory deals with are learning about the environment and using the intrinsic representation of the environment to help guide the agent towards the satisfaction of its drives. A second problem is that of human-agent interaction, which is typically not discussed within the research in the area. Unlike the traditional approaches in so-called behavior-based design, we emphasize the importance of the interaction between the agent and the environment. During this interaction, the agent becomes aware of the effects of its actions by learning the expectancies.

Accurate spatial representations are imperative for good performance in autonomous agents. We give the algebraic framework for spatial representations in mobile agents, which is used as a formal frame for our experiments. The agent should learn the environment through interaction with it. In more recent studies, this approach is referred to as navigational map learning. These maps are planar connected graphs whose vertices are locally distinct places and whose edges are the agent's actions. More problems emerge in the more realistic situation when two or more places in the environment look the same to the agent, due to sensor limitations. This problem is referred to as perceptual aliasing.
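As an illustration of why aliasing matters for navigational map learning, the sketch below (hypothetical names, not the algebraic treatment of Chapter IV) stores a map keyed by local percepts, with actions as edge labels. Two physically distinct places that look the same collapse onto one vertex, so their outgoing edges collide:

// Sketch of a navigational map: vertices are places identified by their local
// percept, edges are actions. Hypothetical names; it illustrates perceptual
// aliasing rather than the book's algebraic formalization.
import java.util.HashMap;
import java.util.Map;

public class NavMap {
    // (percept, action) -> percept expected at the resulting place
    private final Map<String, Map<String, String>> edges = new HashMap<>();

    public void record(String fromPercept, String action, String toPercept) {
        edges.computeIfAbsent(fromPercept, k -> new HashMap<>()).put(action, toPercept);
    }

    public String expected(String fromPercept, String action) {
        Map<String, String> out = edges.get(fromPercept);
        return out == null ? null : out.get(action);
    }

    public static void main(String[] args) {
        NavMap map = new NavMap();
        // Two physically different corridors happen to look identical ("wall-left"),
        // so their outgoing edges collide in the map: perceptual aliasing.
        map.record("wall-left", "forward", "food");      // corridor A
        map.record("wall-left", "forward", "dead-end");  // corridor B overwrites A
        System.out.println(map.expected("wall-left", "forward")); // prints dead-end
    }
}

Chapter IV treats this situation formally, distinguishing perceptual from cognitive aliasing and characterizing environments that cannot be learned at all.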
Chapter IV (On IETAL, Algebraically) presents the algebraic framework for modeling our agents, whereas Chapter V (On Learning) discusses how these agents build their intrinsic representation of their environment through interacting with it. Appendix A fills in the theoretical gap between Chapter V and Chapter VI and should be read by those unfamiliar with fuzzy algebraic structures before reading Chapter VI. In this appendix, three approaches to defining fuzzy graphs are presented: graphs with fuzzy vertices; graphs with fuzzy edges; and graphs with fuzzy vertices and fuzzy edges (Šostak graphs). They are further discussed in the context of the term random graph. In Appendix A, we also present the theories of L-fuzzy lattices, as a specification of the fuzzy graph on one hand and a generalization of it on the other, and we give directions for further generalizations in the sense of poset- and relational structure-valued lattices. As lattices can be considered both as algebraic and as relational structures, we discuss two kinds of fuzzy lattices: LM, where the membership function of the structure carrier is fuzzified, and LO, where the ordering relation of the carrier is fuzzified. The link between these two kinds of lattices is given via algorithms for re-representation. This outlines a more general approach that can be adopted in the fuzzification of algebraic and relational structures via their level cuts.

As we believe — and our beliefs are supported by previous research — the drives in humans are hierarchically ordered. As the behavior of our agents depends on the active drives, the drive structure becomes crucial in the modeling of our societies, because most structures are valued by the drive hierarchies. Abraham Maslow developed a hierarchical system of drives that influence human behavior. The physiological and physical needs are at the bottom of the hierarchy; they need to be at least partially satisfied in order for the human to be motivated by higher-order motivations. Maslow's hierarchy levels (bottom to top) are as follows: biological motivations (food, sleep, water, and oxygen); safety; belonging and love (participation in sexual and nonsexual relationships, belonging to given social groups); respect (as an individual); and self-actualization (being everything that the individual is able to be). In such a strict hierarchy, the drives are ordered in a chain, a special case of a lattice. The motivations, though, do not always need to be comparable, and the hierarchy does not always imply a linear order.

Chapter VI, On IETAL, Fuzzy Algebraically, which is based on the theories in Appendix A, gives an alternative to the algebraic environment representation. The notion of intrinsic representation is formally defined as a fuzzy relation valued by the agent's drives lattice. The understanding of the term context is also discussed.
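As a rough, unofficial illustration of the kind of object Chapter VI works with, the sketch below assigns each row of an intrinsic representation a value drawn from a drive hierarchy rather than from the unit interval; the hierarchy is a partial order (here deliberately not a chain), and all names and the "relevance" reading are assumptions made for this sketch only.

// Sketch: an intrinsic representation as an L-fuzzy relation whose values are
// elements of a (possibly non-linear) drive hierarchy. Hypothetical names;
// the real definitions are in Chapter VI and Appendix A.
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

public class IntrinsicRepresentation {
    // Maslow-style example, deliberately not a chain: FOOD and WATER are incomparable.
    enum Drive { FOOD, WATER, SAFETY, BELONGING, SELF_ACTUALIZATION }

    // Drive hierarchy given as its (transitively closed) "strictly above" relation.
    static final Map<Drive, Set<Drive>> ABOVE = Map.of(
        Drive.FOOD, Set.of(Drive.SAFETY, Drive.BELONGING, Drive.SELF_ACTUALIZATION),
        Drive.WATER, Set.of(Drive.SAFETY, Drive.BELONGING, Drive.SELF_ACTUALIZATION),
        Drive.SAFETY, Set.of(Drive.BELONGING, Drive.SELF_ACTUALIZATION),
        Drive.BELONGING, Set.of(Drive.SELF_ACTUALIZATION),
        Drive.SELF_ACTUALIZATION, Set.of());

    static boolean leq(Drive a, Drive b) {            // partial order, not a chain
        return a == b || ABOVE.get(a).contains(b);
    }

    // The L-fuzzy relation: (percept, action sequence) -> value in the drive lattice,
    // read here as "this row is relevant up to (at most) this drive".
    private final Map<String, Drive> rows = new HashMap<>();

    void put(String perceptAndActions, Drive value) { rows.put(perceptAndActions, value); }

    boolean relevantFor(String perceptAndActions, Drive active) {
        Drive v = rows.get(perceptAndActions);
        return v != null && leq(active, v);
    }

    public static void main(String[] args) {
        IntrinsicRepresentation r = new IntrinsicRepresentation();
        r.put("wall-left/forward,left", Drive.FOOD);
        System.out.println(r.relevantFor("wall-left/forward,left", Drive.FOOD));  // true
        System.out.println(r.relevantFor("wall-left/forward,left", Drive.WATER)); // false: incomparable drives
    }
}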
In Chapter VII, On Agent Societies, we describe the multi-agent environment inhabited by homogenous agents that have the ability to imitate their cohabitants. Every agent in the multi-agent system has a special sensor for the other agents close to it in the environment, and the drive to communicate is at the top of its drive structure. This multi-agent extension of IETAL is called Multi-Agent Simulated Interactive Virtual Environments (MASIVE). As soon as one agent in the society senses another, the agent switches to the imitation mode, where it stays as long as the interchange of the contingency tables lasts. The contingency tables grow as the intrinsic representations are being exchanged. The drive to find akin agents in the environment is at the top of the drive structures in MASIVE. We can then observe some emerging structures in the multi-agent system.

Hardware constraints on the associative memory are a more complex problem than the temporal constraints. Solutions need to be provided for when the memory is full. Normally, rows with low emotional context would be discarded and replaced with more recent ones. The knowledge structure of the agents in those cases would have a bottom but not a unique top, and therefore would not be a lattice. As an alternative to exchanging whole intrinsic representations during the imitation conventions, agents could exchange only the rows relevant to the active drive. In the tradition of generation-to-generation knowledge propagation in human societies, agents could pass knowledge unidirectionally from the older to the younger individuals. A problem of importance in the context of this discussion is how to decide when two rows contradict each other. In our simulations we randomly chose one of the candidate expectations. Another biologically inspired solution would be, in cases like this one, to go with personal experience and discard from consideration those rows that were acquired in conventions. Instead of a random choice when contradictory rows are in question, each row can be attributed a degree of evidence, in the sense of the theory of evidence or, more generally, the theory of fuzzy measures.

The phenomenon of imitation is the one that we strongly believe is responsible for learning from other agents (at least at an early age). In Chapter VII (On Agent Societies) we also overview relevant literature from developmental psychology, infant behavior, and neuroscience that, both through indirect results (various experiments) and direct wiring (brain regions in monkeys' and human brains), motivated us to adopt imitation conventions as a method of information interchange between agents.
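A minimal sketch of how an imitation convention could merge two contingency tables along the lines just discussed: unknown rows are adopted, contradictory rows are resolved in favor of the better-supported entry rather than at random, and the weakest rows are forgotten when the associative memory fills up. The structures, names, and integer evidence weights are assumptions for illustration, not MASIVE's actual implementation.

// Sketch of a contingency-table merge during an imitation convention.
// Hypothetical structure: each row maps an observation to an expectation with
// an evidence weight; conflicts keep the better-supported row, and the table
// is pruned to a fixed capacity when memory is full.
import java.util.Comparator;
import java.util.HashMap;
import java.util.Map;

public class ImitationConvention {
    static class Row {
        final String expectation;
        int evidence;               // emotional-context-like weight
        Row(String expectation, int evidence) { this.expectation = expectation; this.evidence = evidence; }
    }

    static final int CAPACITY = 100;

    // Merge the partner's rows into the agent's own table (both agents do this symmetrically).
    static void merge(Map<String, Row> own, Map<String, Row> partner) {
        for (Map.Entry<String, Row> e : partner.entrySet()) {
            Row mine = own.get(e.getKey());
            if (mine == null) {
                own.put(e.getKey(), new Row(e.getValue().expectation, e.getValue().evidence));
            } else if (!mine.expectation.equals(e.getValue().expectation)
                       && e.getValue().evidence > mine.evidence) {
                // Contradictory rows: keep the better-supported one instead of choosing at random.
                own.put(e.getKey(), new Row(e.getValue().expectation, e.getValue().evidence));
            }
        }
        // Hardware constraint: if the table outgrows memory, forget the weakest rows.
        while (own.size() > CAPACITY) {
            String weakest = own.entrySet().stream()
                .min(Comparator.comparingInt(x -> x.getValue().evidence))
                .get().getKey();
            own.remove(weakest);
        }
    }

    public static void main(String[] args) {
        Map<String, Row> a = new HashMap<>();
        Map<String, Row> b = new HashMap<>();
        a.put("wall-left/forward", new Row("food", 5));      // personal experience
        b.put("wall-left/forward", new Row("dead-end", 2));  // partner's weaker, conflicting row
        b.put("corner/left", new Row("water", 3));
        merge(a, b);
        System.out.println(a.get("wall-left/forward").expectation); // food (kept)
        System.out.println(a.get("corner/left").expectation);       // water (adopted)
    }
}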
Chapter VIII, On Concepts and Emergence of Language, gives a critical take on the theories of concept formation and concept development. The popular theories of concept formation involve categorization based upon the physical features that differentiate the concept. Physical features do not provide the understanding of objects, entities, events, or words, and so cannot be used to form a concept. We have come to believe that the effect the object, entity, event, or word has on the environment is what needs to be evaluated for true concept formation. Following our argument for a change in the direction of research, our views on some of the other aspects of concept formation are presented.

The second section of the chapter discusses language as a key element of communication in multi-agent systems. In this chapter we discuss, among other issues, the phenomenon of the emergence of language. Communication is expressed via interactions in which the dynamic relation between the agents is carried by mediators, called signals, that, once interpreted, influence the agents. With respect to the complexity of the communications, the autonomous agents can be classified in four basic categories (ordered by intensity of the communication): homogenous, which do not communicate; heterogeneous, without communication; heterogeneous, which communicate; and the centralized agent (uni-agent environment). Normally, we assume that there are six ways of communication in two major functional categories (paralinguistic and metaconceptual): the expressive function, characterized by the interchange of information of and about the intentions of the agent ("This is me, what I believe and what I think"); the conative function, with which one of the agents asks another to answer a question or to do something for it ("Do this, answer me this question"); the referential function, which refers to the context of discourse ("This is the state of matters"); the phatic function, which is used to establish, prolong, or cut off a given process of communication ("I want to communicate with you and can clearly read your messages"); the poetic function, which beautifies the message; and the meta-linguistic function ("When I say 'X', I mean 'Y'").

In the uniagent version of IETAL, the term perceptual category refers to a row in the contingency table. These categories are built based on the inborn scheme; the agent builds perceptual categories during its sojourn in the environment, and they also depend on the active drive of the agent. During its interaction with the environment, an internal classification/similarity engine classifies the contents of the mind of the agent and builds categories/concepts. We can assume the other agents to be understood as a part of the environment in this context, and they help build the contingency table of a given agent. They normally need to be named, and the name stamp should also be a part of the table. The naming of the agents is a separate problem that can be solved via special percepts for each individual agent. Perceptual aliasing is another phenomenon that can be observed. All perceptual categories that refer to a given drive compose the conceptual category for that drive. They model the trip to the satisfaction of a given drive, as well as the object that satisfies it. From the perspective of the agent's introspection, the concept makes the agent aware of the places where a certain drive can be satisfied. From the social perspective, when the concept is being fortified during the imitation conventions, the agent serves the environment it is in, and disseminates to the other agents information about the satisfaction of the drives of its fellow agents.

If we introduce a system for attributing a similarity measure to pairs of lexemes, then the equivalence classes will indeed be the concepts in the agents. Categorization is needed because, generally, we have a limited number of behaviors. In the modeling of our agents, we can introduce a module for computing similarities, that is, the categorizing module. Based on the protolexemes and their emotional context, the similarity-measuring engine decides whether two lexemes are similar or not. That can further be used for the reduction of the contingency tables after the imitation convention of two agents. Then, there would be no need to spend any associative memory on keeping similar protosentences. Another aspect of categorization is so-called categorical perception, where the agent distinguishes between physical and functional perception. Functional perception refers to objects in the environment that are perceived differently but serve a similar function. In our theory, functional perception was solved by assuming that the agent has a special sensor for the object that satisfies its appetitive drives. From the mathematical perspective, the similarity relation is a fuzzy equivalence relation. Every equivalence class in that context is a concept.
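The role of the similarity engine can be sketched as follows, with a toy similarity measure and hypothetical names: lexemes whose similarity to a class representative clears a threshold (a level cut of the similarity relation) fall into the same class, and each class plays the role of a concept. When the underlying similarity is a genuine fuzzy equivalence relation, the cut is an ordinary equivalence relation and the groups produced below are exactly its classes.

// Sketch: grouping protolexemes into concept classes via a level cut of a
// similarity relation. Toy similarity (shared drive satisfier, close emotional
// context) and hypothetical names, not the chapter's definitions.
import java.util.ArrayList;
import java.util.List;

public class ConceptFormation {
    record Lexeme(String form, String satisfiedDrive, double emotionalContext) {}

    // Toy similarity measure: lexemes leading to the same drive satisfier are
    // similar in proportion to how close their emotional contexts are.
    static double similarity(Lexeme a, Lexeme b) {
        if (!a.satisfiedDrive().equals(b.satisfiedDrive())) return 0.0;
        return 1.0 - Math.min(1.0, Math.abs(a.emotionalContext() - b.emotionalContext()));
    }

    // Alpha-cut grouping: each lexeme joins the first class whose representative
    // it is similar enough to (representatives are what agents exchange).
    static List<List<Lexeme>> concepts(List<Lexeme> lexemes, double alpha) {
        List<List<Lexeme>> classes = new ArrayList<>();
        for (Lexeme lex : lexemes) {
            List<Lexeme> home = null;
            for (List<Lexeme> c : classes) {
                if (similarity(lex, c.get(0)) >= alpha) { home = c; break; }
            }
            if (home == null) { home = new ArrayList<>(); classes.add(home); }
            home.add(lex);
        }
        return classes;
    }

    public static void main(String[] args) {
        List<Lexeme> lexemes = List.of(
            new Lexeme("left-left-forward", "food", 0.9),
            new Lexeme("forward-forward", "food", 0.85),
            new Lexeme("right-forward", "water", 0.7));
        System.out.println(concepts(lexemes, 0.8)); // two classes: a "food" concept and a "water" concept
    }
}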
When the agent enters the environment, it starts building its conceptualization of the environment, combining its personal experience with the information shared by the other agents. The conceptualization module builds the conceptual scheme of the agent. During the imitation conventions, the agents cannot share concepts, but they can exchange representatives from the equivalence classes. This choice is biologically inspired: in humans, we explain one notion via others until the other individual conceptualizes that notion.

The first section of the book ends with Chapter IX, On Emergent Phenomena: If I'm Not in Control, Then Who Is? The Politics of Emergence in Multi-Agent Systems. There, we give an overview and critique of emergent phenomena in multi-agent societies. Part of the popularity of multi-agent systems-as-generative-metaphor, however, lies in the synergy between multi-agent systems in computer science and the sciences of complexity in biology, where the beauty is seen in the emergence of higher levels of collective behavior from the interactions of relatively simple agents.
Section II: Cases

In this section of the book we overview a series of software and hardware solutions that we have been using for simulation studies, robotic emulations, and gathering information from human subjects. The emphasis is on the technical solutions that we have implemented, thus making them replicable and customizable for researchers doing similar work. We end this section with an overview of our Patterns in Orientations: Pattern-Aided Simulated Interaction Context Learning Experiment (POPSICLE).

The Multi-Agent Systems Simulation (MASim) simulator is overviewed in Chapter X, On MASim: A Gallery of Behaviors in Small Societies. This simulator has been developed to provide the user with an environment in which to observe the behavior of one to four agents in a randomly generated environment with wall-like obstacles. The chapter focuses on the particularities of the use of MASim and presents some simulation results generated by a modification of this software. Note that the metrics in this gallery are different from the metrics used in the study of IETAL: a step here refers to a single action application, whereas in the IETAL statistics generation we used the application of a row of the contingency table as one aggregate action (a transition). This chapter goes into detail on the implementation of the emotional context function, which has only been discussed scarcely up to this point. The emotional context depends on the active drive, it is attributed to the rows of the contingency tables, and it is a measure of the usefulness of that particular row in the quest for drive satisfaction in previous experiences. The emotional context changes in time; it diminishes for entries in the contingency table that have not been contributing recently, so the agent tends to forget them. Although we previously experimented with an exponential approach to the emotional context in the original IETAL simulations, here we simplify the approach and use integers. The larger the integer, the further the row of the contingency table is from the drive satisfier. We also use an alternative (oversimplified) parameter in the approach shown in this chapter: confidence.
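The integer-valued bookkeeping just described can be sketched as follows (assumed names and constants, not MASim's code): each row carries an integer emotional context, read as a distance to the drive satisfier, and a confidence value that decays whenever the row does not contribute, so stale rows are eventually forgotten.

// Sketch of MASim-style bookkeeping (hypothetical names): each contingency-table
// row keeps an integer emotional context (distance to the drive satisfier) and a
// confidence value that decays when the row goes unused.
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

public class EmotionalContextTable {
    static class Entry {
        int emotionalContext;   // larger = further from the drive satisfier
        int confidence;         // decays when the row does not contribute
        Entry(int emotionalContext, int confidence) {
            this.emotionalContext = emotionalContext;
            this.confidence = confidence;
        }
    }

    private final Map<String, Entry> rows = new HashMap<>();

    void reinforce(String row, int stepsToSatisfier) {
        rows.put(row, new Entry(stepsToSatisfier, 10));
    }

    // Called once per simulation step; every row except the one that was used decays.
    void decayUnused(String usedRow) {
        for (Iterator<Map.Entry<String, Entry>> it = rows.entrySet().iterator(); it.hasNext();) {
            Map.Entry<String, Entry> e = it.next();
            if (!e.getKey().equals(usedRow) && --e.getValue().confidence <= 0) {
                it.remove();    // the agent forgets rows that stopped being useful
            }
        }
    }
}

Calling decayUnused once per simulation step is one simple way to obtain the forgetting behavior described above.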
The importance of MASim lies in the lessons learned and in the points of improvement it identifies. Historically, MASim evolved into the more general environment presented in Chapter XI, On a Software Platform for MASIVE Simulations, where we present a software library providing tools for IETAL and MASIVE simulations. This chapter overviews the technical solutions in the development of the project and gives details on the integration of the modules developed for other IETAL- or MASIVE-like experiments.

The system simulates the behavior of autonomous agents in a two-dimensional world (grid) of cells, which may include cells the agent is free to move into or walls that block movement. The goal of an agent is to satisfy specific user-defined drives, such as hunger, thirst, and so forth. An agent may have any number of drives, but only one is active at any given point in time. Additionally, the user populates select world cells with drive satisfiers that are used to satisfy the active drive of any agent entering the cell. In other words, when an agent moves into a cell containing a drive satisfier, the drive is only satisfied if it is the agent's active drive. In the beginning, the agents navigate around the world using a user-defined inherent scheme, which is a short series of moves the agent will make by default. Gradually, an agent builds up an internal associative memory table as it explores the environment in search of drive satisfiers. As the agent moves in search of a drive satisfier, observations are made and recorded in the emotional context of the active drive. Once a drive has been satisfied, the recorded observations (leading to drive satisfaction) are recoded in the agent's associative memory table. As the agent continues to explore the world, other drives may become active, leading to new observations in new contexts. As the agent continues to build a model of the world in relationship to its drives, it will begin to use its associative memory to plan a route to the drive satisfier. The agent uses current observations to derive expectations from the associative memory table. When a matching observation is found in the table, the agent temporarily abandons its inborn scheme and uses the expectation to execute the next series of moves. If the observations made during this next series of moves match another observation in the table, the process continues until the drive satisfier is reached. If at any time a subsequent observation does not match the expectation, the agent records a surprise and returns to its inherent scheme to continue exploration — all the while continuing to make new observations. Additionally, if the agent cannot make a move because a path is blocked by a wall or world boundary, the agent registers this as pain, skips the move, and continues execution of the scheme or expectation.
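The walkthrough above maps naturally onto a control loop. The sketch below is a hedged reconstruction under assumed names and types (it is not the platform's Java API): the agent follows its inborn scheme by default, switches to an expectation when the current observation matches the associative memory, registers a surprise when the expectation fails, and registers pain when a move is blocked.

// Hedged sketch of the agent cycle described above: inborn scheme by default,
// expectation-driven moves from the associative memory, surprise on a mismatch,
// pain on blocked moves. All names and types are assumptions, not the platform's API.
import java.util.List;
import java.util.Map;

public class AgentCycle {
    interface World {
        String observe();                 // percept at the agent's current cell
        boolean tryMove(String action);   // false if blocked by a wall or world boundary
        boolean activeDriveSatisfied();   // true once the active drive's satisfier is reached
    }

    // A row of the associative memory: the moves to execute and the observation
    // the agent expects to make after executing them.
    record Expectation(List<String> moves, String expectedObservation) {}

    private final List<String> inbornScheme;
    private final Map<String, Expectation> memory;   // current observation -> expectation
    int surprises = 0, pains = 0;

    AgentCycle(List<String> inbornScheme, Map<String, Expectation> memory) {
        this.inbornScheme = inbornScheme;
        this.memory = memory;
    }

    void run(World world, int maxSteps) {
        for (int step = 0; step < maxSteps && !world.activeDriveSatisfied(); ) {
            Expectation exp = memory.get(world.observe());
            List<String> moves = (exp != null) ? exp.moves() : inbornScheme;
            for (String action : moves) {
                if (++step > maxSteps || world.activeDriveSatisfied()) return;
                if (!world.tryMove(action)) {
                    pains++;              // path blocked: register pain, skip the move, carry on
                }
            }
            if (exp != null && !exp.expectedObservation().equals(world.observe())) {
                surprises++;              // expectation failed: fall back to the inborn scheme next pass
            }
            // (Recording new observations and extending the memory is omitted for brevity.)
        }
    }
}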
move because a path is blocked by a wall or world boundary, the agent registers this as pain, skips the move, and continues execution of scheme or expectation. Unlike MASim, this environment is much more general and can be used and integrated in a variety of simulations. It contains a library of functions for customized implementations when designing a broad range of experimental setups. For realistic real-world experiments of IETAL/MASIVE we have developed our own robotic agents. In Chapter XII, On a Robotic Platform for MASIVELike Experiments, we overview some of the technical details on the hardware solution and the low-level programming details of projects implemented. As a control unit for the robot we have chosen the BrainStem® unit. Not only was this unit inexpensive (as are all the other parts of the robot) but we can easily boost its computational powers by using a Palm® Pilot. When a Bluetooth™ card is inserted in the Palm Pilot the agents can communicate between themselves. This chapter gives the recipe for building such agent(s), as they are easily replicable. The POPSICLE experiment chapter (Chapter XIII: On the POPSICLE Experiments) looks at the setup and some of the results of our experiments with human subjects that are emulating abstract agents. POPSICLE is a study of learning in human agents, where we investigate parameters and patterns of learning. We study the use of inborn schemes in environments and phenomena that emerge in the environments via the imitation conventions of the subjects. In this chapter we describe the environment, give its importance in the context of IETAL and MASIVE, and give the results collected from 60 participants as subjects of the study, who emulated abstract agents in a uni- and multi-agent environment. The parameters that were measured were the interaction times, the parameters in the success of finding food, as well as the amount of pain (hitting an obstacle) encountered in the quest for food. From the data collected, we were able to extract data on the sequences used while in the environment to attempt a study in the inborn schemes area. The lessons learned from POPSICLE help us calibrate our simulations of the agents in the IETAL and MASIVE theories. The subjects’ reactions are indications of a limited (for such a small environment) but valuable variety of inborn schemes in the sense of Piaget. In addition to the simulation solutions, we have also developed an online data harvesting engine to facilitate POPSICLE-like studies in human agents online, called e-POPSICLE. This tool is rather general and enables the designer to set up his/her own scenario of POPSICLE-like modules. Instead of tiles with colors, as in the original experiments, the designer might choose to work with more general environments (not only direct perception-action systems). For example, what the subject sees might be numbers, and a pattern consisting of prime numbers might lead from the start position to the goal directly. Having studied the phenomenon of stress in the human agents in the POPSICLE environment, we have developed a 3D testing environment with methodologies
from game design that are expected to keep the motivation of the subject high during any configuration of a POPSICLE experiment. Both qualitative data and interview remarks indicate that subjects find it hard to maintain concentration past the first 15-20 minutes of the POPSICLE experience. We have taken the experimental environment one dimension up, made it 3D, and implemented common objects from video games (counters, prizes, and so forth) to keep the motivation up while we gather interaction data and data on pain and surprises in the subjects.
Section III: Alternatives

In this section of the book we give results from our related research in the area, and we pave the way for our future investigations. In Chapter XIV, On an Evolutionary Approach to Language, we use the genetic algorithms approach in the study of language emergence in an artificial society. The inborn scheme evolves through the generations of the agents inhabiting the environment. The inborn schemes of any two agents can cross over, or they can self-mutate (within an agent, internally).

Chapter XV, On Future Work, concludes the theory discussed in the book and gives directions for further research. Some of the topics we discuss here are human-computer interaction (HCI), interactions in heterogeneous environments, and the use of alternative theories and tools for environment modeling in multi-agent systems, as a continuous counterpart of our discrete theory. In environments with heterogeneous agents, the problem of interagent communication is important, and not many solutions have been proposed for it. Due to the differences in the agents' construction and perceptual abilities, there would be no similarities in the construction of the protolanguage. In this area, we are aware of research in the domain of polyglot agents that serve as translators. The phenomenon of bilingualism itself creates a plethora of questions and has been studied at large. But is it possible to construct a translating function between heterogeneous agents? What preconditions should be met for such communication to be possible? How does language emerge from protolanguage? What is the reason for bilingualism?

Another aspect of the aforementioned questions is the communication between the agents and the designers. How does the designer tell the agents what he/she wants them to figure out about the environment? The problem of interaction then becomes a problem in HCI. What, then, is the interface? How do we design an interface between two homogenous agents? How do we do the same for two heterogeneous agents?

Online interactions have usually been structured through metaphors drawn from physical spaces (e.g., multi-user dungeons, chat rooms, distance learning, and
home pages) and through certain assumptions about the user derived from preexisting, physical relationships (e.g., the distance learner in the virtual classroom, online shopping). Not only do these assumptions ignore the heterogeneity of our social and cultural lives, but also the extent to which all of these differences might mean different outcomes, that is, the possibility that there are different ways of conceptualizing computer-mediated interaction applicable to online interactions of all kinds. We are now developing a dynamic, online, 0-context environment-agent that will allow us to study online interaction as a simultaneously cognitive, social, and cultural event. Our online instrument, Izbushka, changes with the inputs from users, and the patterns that emerge are the coproduction of each agent — human and nonhuman. During the experiments, different teams will interact with the online environment and with each other, generating emergent forms of learning, communicating, and behaving.

In the proposed research, we proceed from the insight that our interactions with online environments and agents have usually been structured through metaphors drawn from physical spaces (information superhighways, chat rooms, home pages) and through extant social interactions (querying, directing, sending). While this is to a certain degree unavoidable (cyberspace is, after all, a notional space), this way of understanding does not take into account (1) the social and cultural differences structuring different online practices, and (2) the ways in which humans themselves have cognitively, socially, and culturally changed as they have accommodated their lives to the information networks interpenetrating them. Taking ideas of distributed cognition and enaction associated with multi-agent systems and third-generation cybernetics, we propose to develop online tools that we believe will be generative of new understandings of online interactions by presenting users with a 0-context environment-agent, Izbushka, that is in many ways inexplicable and unfamiliar, that is, without evident goals, familiar spaces, or language.
* * *
So, this is the big picture. Let us zoom in now and look into the chapters.
Acknowledgments
There are a number of people who have supported me personally and professionally in my research over the years, and particularly while writing this book.

I want to dedicate this volume to Dr. Georgi Stojanov, a friend and mentor. When I was starting my research, he taught me what research was all about. It is no overstatement to say that most of what I know now about the research process has come from the long nights working together with him in the empty building of the Faculty of Electrical Engineering of the SS Cyril and Methodius University in Skopje, Macedonia. I am honored that he contributed the Foreword to this volume.

Teams are important in research, especially teams that click together. Dr. Samuel G. Collins, from the Department of Anthropology, Sociology and Criminal Justice at Towson University, certainly makes a great collaborator. The ideas of future work elaborated in the last chapter came up after numerous fruitful discussions with him. He is the author of Chapter IX, where he focuses on issues of emergence in multi-agent societies. Dr. David B. Newlin from the Research Triangle Institute International, the Baltimore, MD office, patiently explained concepts from psychology to me, a computer scientist by training. He authored Appendix B, where he gives us some insights on the motivation systems in humans.

Many of my undergraduate, graduate, and doctoral students have directly and indirectly been involved in the work covered in this book. They are individually acknowledged at the end of each chapter, where applicable. Working with undergraduate students has been a special challenge and privilege for me. The National Science Foundation's Research Experience for Undergraduates program has been instrumental in involving undergraduates from
all over the U.S. in research ventures. Two chapters in this book are a result of collaboration with these bright young students: Chapters VIII and XIV. Towson University's Jess and Mildred Fisher College of Science and Mathematics Undergraduate Research Committee supported the research projects of many of the undergraduate students who were working with me on the robotics projects. The Department of Computer and Information Sciences at Towson housed me and my lab (the Cognitive Agency and Robotics Laboratory) while I was working on this book.

The work on this book has been funded in part by the National Academies of Sciences under the Twinning Program supported by contract no. INT-0002341 from the National Science Foundation. The contents of this publication do not necessarily reflect the views or policies of the National Academies of Sciences or the National Science Foundation, nor does mention of trade names, commercial products, or organizations imply endorsement by the National Academies of Sciences or the National Science Foundation.

The quality of this book has been significantly improved by the constructive suggestions of the anonymous reviewers. A further special note of thanks goes to all the staff at Idea Group Inc., whose contributions and support throughout the whole process, from the inception of the initial idea to final publication, have been invaluable.

And finally, I want to thank my family for their encouragement and love.
Goran Trajkovski, PhD Baltimore, Maryland January 2006
Section I Theory
Chapter I
On Our Four Foci
Abstract

In this chapter we give a brief overview of the history and the key developments of the concepts of agents and multi-agent systems that are relevant to the book, as preliminaries to the presentation of the theories in the subsequent chapters of the first section of the book.
Introduction

For quite some time now, the computer sciences, and especially artificial intelligence (AI), have been focusing on programs which, as distinct entities, would successfully compete with humans in solving complex tasks. The omnipresent digital computer has nowadays developed to a degree where it can deal with complex tasks such as running and helping to manage complicated manufacturing processes, medical diagnoses, and the design of new machines, but in the eyes of many the programs run on it have not been successful in the said competition. All of these efforts are a result of the continuing rivalry between the human and the machine, and of the efforts to humanize the machine, to raise the program to the level of a direct equivalent of the expert, capable of independently solving
problems in a given problem area. The machine becomes more humane and human on the one hand and, on the other, we as humans tend to resemble the machine more. Technological advances not only make it easy for us to adapt to the machines but also to augment our body or replace nonfunctioning parts.

But where are we as far as the intelligence in AI is concerned? Newell, Shaw, and Simon (1957) state that the term artificial intelligence denotes research directions aimed at constructing an intelligent machine capable of executing complex tasks as if it were a human being. Their article focused on automatic theorem proving, and raised an array of questions, not only on the nature of AI itself. A more important question emerged; namely, if the machine can think and if it can be intelligent, should it be given a legal status similar to that of humans? All of these questions still stimulate intensive discussions that have lately been critical of what AI can or cannot do.

With the idea of illustrating the concept of an intelligent machine, Turing devised a test based on the assumption that a machine would be considered intelligent if it could not be differentiated from a human being in the course of a conversation. A machine would be considered intelligent not based on some intrinsic criterion, but based on the effect of imitating human behavior. In this context, intelligence separates the unit from the group; it is a feature of the individual agent, and not of the group of agents. In that sense, a computer program can be observed as a thinker, bound within its own world. This directly reflects the idea of expert systems, of a computer program being able to replace the human expert (agent) in cases of extremely difficult tasks that require knowledge, experience, or specific reasoning.

This centralized and sequential concept conflicts with several theoretical and practical situations. On the theoretical level (Latour, 1989; Lestel, 1986), intelligence is not an individual feature detachable from the social context in which it is being exercised. A human does not develop normally if he/she is not surrounded by other humans. Without a corresponding social milieu his/her cognitive abilities are significantly limited. In other words, what we understand as intelligence is not based solely on the genetic predispositions and the neural networks in the brain, but also on the interaction with the environment.
Agent(s)

This section overviews the "natural history of agents" (Kampis, 1998). It stresses three features widely discussed in history, relevant for the definition of an agent. Our specific adopted definition is discussed in the next chapter.
More than three decades ago, Nilsson (1974) wrote:

Today, the knowledge in a program must be put in "by hand" by the programmer although there are beginning attempts at getting programs to acquire knowledge through on-line interaction with skilled humans. To build really large knowledgeable systems, we will have to educate existing programs, rather than attempt the almost impossible feat of giving birth to already competent ones.

Being able to talk about cognitive agents today proves that we have gone beyond Nilsson's expectation. We do not have to educate the agents ourselves, as programmers or designers; the agents learn on their own.

Before we go deeper into the details of this discussion, we need to adopt a working definition of the term. As you will see, this is necessary, as the term has been used as widely or as narrowly in the literature as needed in a given context. The term agent (and its multiple synonyms) has been a buzzword in science for the past several decades. Under different names, one might say, it has been around puzzling scientists forever. This suggests that people do not have difficulty talking about, designing, and analyzing things as if they were animate. The objects dealt with are usually man-made, typically machines.

Existing definitions of the term agent vary from definitions of online software agents (Bates, 1994) to definitions that narrowly and purposely define the term to serve a specific discourse in a given context (Jennings, 1992). Bates defines an agent (in the sense of an online software agent) as a program that "acts as a believable character" to the users (emphasis ours). This approach is clearly narrowly connected to the idea of the Turing Test. On the other hand, Jennings (1992), for example, in the context of research on the phenomenon of collaboration in multi-agent societies when solving problems, defines axioms (relevant to his very problem at hand) with a special emphasis on the mental and behavioristic state named joint responsibility. The latter definition is thus very specific to the context in which it is used, unlike the first one.

Here, inspired by multiple approaches (Kampis, 1998), we decide to use a somewhat axiomatic definition of the term agent that will be as general as possible, but still within the spirit in which this term has been and is currently being used throughout the literature, in various contexts concerning applications throughout disciplines, both in the physical and in the cyber world. A short ad hoc list of properties that, alone or in combination, may be used to define an agent could include:
•	Autonomy, as the ability of the biological or man-made artifact to operate in an unaided and independent fashion,
•	Proactivity, as the ability to take initiative to effect actions towards its/his/her goals, and
•	Intentionality, as the attribution of purpose, belief, need, and desire.
Let us explore these three (vague) terms.

Autonomy is what is considered by many to be the crucial property of an agent. On numerous occasions, instead of the term agent, scientists use the term autonomous agent to stress this property. We will do that when absolutely necessary. When talking about requirements on systems that are to perform a permanent or recurrent task without further control or intervention, autonomy (or self-sufficiency) is definitely one of the front-runners. It seems that in systems theory (cybernetics) this was first introduced via the notion of homeostasis, the law of requisite variety, dynamical equilibrium, self-regulation, regulatory models, and feedback control (Ashby, 1956; von Bertalanffy, 1928/1961; MacKay, 1980). In the robotics area, autonomy has meant cutting the wires — thus eliminating the need for a wired umbilical cord to the center (mainframe). Lately, the term autonomy has evolved to encompass more elegant solutions that monitor the internal state of the agent, self-care and repair, and even adaptation to the environment for better performance (or even for basic operability).

Homeostasis, a crucial problem around the notion of autonomy, in colloquial terms means the ability to recover and maintain function under a wide range of perturbations, by resorting to an actively maintained internal balance. In other words, this translates to the ability of an agent to survive (whatever that would mean in a given discourse) in the given (dynamic) environment it is in. When applied to living organisms (where cyberneticists studied this initially), homeostasis is about their (internal) organization, making it possible for the control mechanisms to appear and work when needed. When comparing an organism to a man-made artifact, it is homeostasis that emphasizes that different classes/types of systems exist. Systems can have reversible, somewhat reversible, and irreversible changes, for example. The threshold of this point of no return varies, and the consequences of the irreversibility can be catastrophic (that is probably why, mathematically, many of these problems are being studied within catastrophe theory!). For example, if we observe a neural network and, in the process of graceful degradation (Boden, 1990), we take several neurons out (or they die), not much would happen. The situation is completely different if we kill all of the neurons.
While the extreme case of renewable-like systems has been studied by many (for example, Rosen, 1987), not many have focused on the more fundamental question of the initiation of the reversible process. The Theory of Autopoiesis (Maturana & Varela, 1980) throws a fresh and, for some, controversial light on the problem of autonomy. The theory suggests that the property of autonomy is a result of the property of organisms to continually renew and auto-produce themselves (or something else). People produce people, but other products too; cars evolve into a pile of metal. The autopoiesists conclude that the evolution (if any) of a system is a consequence of its internal laws/structure. Or, to rephrase, any kind of truly interesting autonomy and independence of supervision is only achievable by autopoietic systems. Collins' essay in this book (see Chapter IX) further elaborates on this.

Proactivity is another defining feature of agents, one that has been considered in the annals of science longer than intentionality. From the behaviorist point of view (predominant in the early to mid 1900s), instead of being a reaction to an input, proactivity is postulated as an internal driving force — a spontaneous initiative that gives rise to a behavior. In the cognitivist glossary of terms, proactivity does not exist. Intelligence, knowledge, planning, and internal states are attributed to humans and to computer programs aimed at simulating the human mental machinery. In the early conceptions of the theory of agents, both James (1955) and Bergson (1997) placed some agency into the mind, thus making it possible for the subject to reach out towards the world. This implies directedness, making the concept of intentionality an indivisible property of the agent.

Intentionality, or the attribution of purposes, beliefs, and desires, is the third key property of agents as we define them. There is no widely agreed-upon attitude in the scientific community on what intentionality is (and this is not really unexpected). Brentano (1973) radically claims that internal objects (called intentional objects) exist in the mind and that this distinguishes humans from the rest of the universe. Psychology deals with the intentional objects (minds), as opposed to the natural sciences, which deal with nonintentional objects. In ordinary discourse, intentional notions are typically used in a more relaxed way.

The interest and intent in using autonomous, self-sufficient systems in rich environments is rapidly growing. This work is intended to contribute towards the striving for agents that perform successfully in real-world environments, by offering our solutions to the problems of agents, their representation(s) of the environment, and multi-agent systems of homogenous agents.
Environmental Representation(s) in Agents

In this section we examine representations of environments in agents. This is one of the areas where AI has been commonly perceived to under-perform.

The agents sojourn in their environment at any given time. Inspired by Bickhard (1980), we accept the idea that, in principle, an objective model of the world (environment) can exist, expressed in terms of data structures (encodings). These structures do not depend on the concrete physical realization (embodiment) of the artifact and can be assimilated via structural isomorphism (a functionally congruent one-to-one mapping). The problem of representation (of the environment) can then focus on suitable structures for internal representation. Most efforts towards contributing to the study of representation in AI have been in the direction of predicate logic or its numerous variations and extensions (predicate logics of higher orders, modal logic, temporal logic, etc.). This has been followed by extensive research in the domains of more structured representations, such as semantic networks (Collins & Quillian, 1969), frames (Minsky, 1988), scripts (Schank & Abelson, 1977), and so on. We have learned of several serious problems that are not only philosophical in nature: the frame problem (Pylyshyn, 1984), the symbol grounding problem (Harnad, 1990), and the frame-of-reference problem (Clancey, 1989), just to name a few.

A major part of the agency theories deal with symbolic representations of the world. Examples of such research are the STRIPS planning system (Fikes & Nilsson, 1971) and a group of formal theories of agency based on various types of logics, such as the possible worlds approach (Ginsberg & Smith, 1988). Most of the critiques of this theory are in line with the dis/nonembodied intelligence attributed to these approaches.

Brooks' work from the mid 1980s initiated a qualitative shift in the focus of the discipline of AI. Obviously frustrated by the feeling that something is essentially wrong with the dominating approaches (which were looking into the correct representation or models of the world and their maintenance), Brooks suggests an alternative that aims at developing a complete being instead of isolated cognitive simulators (Brooks, 1991; Wooldridge & Jennings, 1994). What now becomes essential is the observation of the being, instead of the maintenance of its internal models of the world. As "the world is its best model of itself," detailed representation is neither necessary nor possible (Stojanov, 1997). Denouncing the need for explicit representation, Brooks concentrates on generating behaviors via behavioral models based on the so-called subsumption architecture (Brooks, 1986), which consists of various behavioral modules arranged in a hierarchical order. One level does not know about the levels above it, but its activity can be stopped by them. The modules have access to the sensory input. The resulting behavior is a result of the asynchronous functioning of all the modules.
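As a hedged illustration of this suppression mechanism (not Brooks' implementation; the layer names and sensor keys are invented for the example), a higher layer that produces an output overrides, and thereby suppresses, the layers below it, while the lower layers know nothing of the layers above:

```python
# Hypothetical sketch of layered, subsumption-style control.
# Each layer maps sensor input to a proposed action (or None);
# a higher layer that proposes something suppresses the layers below it.

def wander(sensors):
    return "move_forward"                      # lowest layer: always proposes something

def avoid_obstacles(sensors):
    return "turn_left" if sensors.get("obstacle_ahead") else None

def seek_food(sensors):
    return "approach_food" if sensors.get("food_visible") else None

LAYERS = [wander, avoid_obstacles, seek_food]  # index 0 = lowest layer

def act(sensors):
    """Return the proposal of the highest layer that produced one."""
    action = None
    for layer in LAYERS:                       # lower layers are considered first
        proposal = layer(sensors)
        if proposal is not None:
            action = proposal                  # higher layer overrides (suppresses) lower
    return action

print(act({"obstacle_ahead": True}))           # -> "turn_left"
```

In an actual subsumption controller the layers run asynchronously as networks of finite state machines; the sequential loop above only mimics the suppression relation between levels.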
Efforts that immediately followed Brooks were quick to point out that there is a need for some kind of representation in order to achieve anything significantly different from simple behaviors, such as those observed in insects, for example. Matarić's Toto robot (1992) uses the so-called augmented subsumption architecture. This and other efforts point out that representation is a part of the embodiment of the being. To the best of our knowledge, there is no firm theoretical framework for studying representations in any of these proposed extensions of behaviorist artificial intelligence. That is exactly one of the contributions of the Interactionist-Expectative Theory of Agency and Learning (IETAL) (Stojanov, Bozinovski, & Trajkovski, 1997), which serves as the theoretical basis for this book.

In the past two decades, there have been significant, paradigmatically unorthodox approaches in the domain of agency with respect to representations. In this section we overview those most relevant to this book. A complete overview can be found in Wooldridge (1995). Rosenschein and Kaelbling (1986) propose a procedure for constructing agents (called situated automata). Initially, a declarative specification of the agent is given and then compiled so that a digital, directly implementable machine is constructed. As this procedure is followed, the constructed agent is not involved in any manipulation of symbols because, in fact, no symbolic expressions are being manipulated as the compiled agent (program) executes. The key idea of Agre and Chapman's Pengi project (1987) can be summarized as follows. As most of the daily activities can be characterized as routine (which means that there is no need for new abstract reasoning), the agent needs to have a module equipped with responses to routines. In Pengi, the answers/responses are encoded at very low levels, such as the level of digital circuits.

The commonality of the aforementioned efforts is that they emphasize the role of reactivity in situated agents. Their agents perform well in obstacle avoidance and react fast in cases of stereotypical tasks (insect interactions in Brooks' sense), but they do not have a systematic and well-grounded treatment of the phenomenon of learning. Reinforcement learning has been the predominant approach to learning in autonomous agents, especially after the inception of Q-learning (Watkins, 1989). Notable newer approaches are those of Matarić (1994), who focused on learning in multi-agent systems, and of Balkenius (1995), who proposes a new design approach based on preexisting approaches, such as the subsumption architecture, neural networks, reinforcement learning, and several variants of Q-learning.
Multi-Agent Systems and Societies of Agents

In this section we overview the various problem foci of the theories of multi-agent systems. Multi-agent systems have attained great importance in several branches of the computer sciences, mostly because they introduce — we think — the problem of collective intelligence and of structure emergence from interactions. The observation of the autonomous units (agents), on one hand, and of the collective behavior of the system, on the other, poses a multitude of questions. As a new and emerging field, its focus of study is somewhat fuzzy, in the sense that there are no sharp boundaries between it and robotics, many of the AI fields, and what is traditionally labeled as distributed systems. The research in multi-agent systems requires an integral and not an analytical approach.

The questions that emerge are along the lines of the following. What does it mean for one agent to be interacting with another? Can they collaborate, and how? Which collaboration methods are needed? And how should the efforts be distributed and coordinated within the frame of a complex task? All of these questions are relevant, as our goal is to create systems with the following features: flexibility, adaptability, the ability to integrate heterogeneous programs, fast functionality, and so on.

Most researchers in this field have two principal goals. The first one is the theoretical and experimental analysis of the self-organizing mechanism that acts when two autonomous units are interacting. The second is creating distributed artifacts capable of completing complex tasks via collaboration and interaction. Therefore, the view on problems in this context is dualistic. On one hand, we focus on the cognitive and social sciences (psychology, ethnology, sociology, philosophy, etc.) and the natural sciences (ecology, biology, etc.), as they at the same time model, simulate, and describe the natural phenomena and offer models of self-organization. On the other hand, the problems can be observed as focusing on creating complex systems based on the principles of agents, communication, collaboration, and action coordination.

Within the classical, disembodied approach, the recognition of and the communication with the others is not a problem, as all of the units have access to an internal, mirroring representation. They communicate knowledge (i.e., new combinations of symbols) easily via inherent linguistic terms (Fikes & Nilsson, 1971). In the considerations of learning from the other agents in a multi-agent society and of observing the emergence of language (and other phenomena), we turn to imitation.
Imitation as a Phenomenon

The phenomenon of imitation plays a key role in the multi-agent systems considered in this book. Here we overview the changing attitudes toward imitation.

As the complexity of the artificial agents within the embedded mind paradigm increases, the need for effective learning methods (better than direct programming) in those artifacts increases as well. The developments in these research branches renewed the interest in learning through imitation (or: learning by seeing, learning by showing). The concept of imitation is not new. Thorndike (1898) uses the following definition: "[imitation] is learning of one act by observing it being done." Piaget (1945) considers the ability to imitate an important part of his six-phase developmental theory. After Piaget, the interest in the phenomenon of imitation (of movements) diminished, mainly because of the predominant attitude that imitation is not an expression of intelligence of a higher order (Schaal & Sternad, 1998), or of intelligence in general.

As it turns out, imitation is far from a trivial phenomenon. It is a creative mapping of somebody's actions and their consequences onto oneself (an evaluation of the visual input-motor pairs of command and consequence). And, moreover, it seems that we are wired for imitation! Rizzolatti, Fadiga, Gallese, and Fogassi (1996) discovered the so-called mirror neurons that fire when the unit (human) executes motor actions or observes another unit performing the same actions. These results give a possible explanation of the human capacity for empathy, as well as of the situations when one reads somebody else's mind.
Problems Discussed in This Book

In the closing section of this chapter we overview how our approach differs from others, thus laying the grounds for the elaboration of the theory and modeling tools in the subsequent chapters.

The IETAL theory considers learning in an agent in an environment inhabited by one agent only (a uniagent environment), and serves as the basis for Multi-agent Systems in Simulated Virtual Interactive Environments (MASIVE). These theories aim to provide answers, or at least new perspectives, on the following questions. How does an intelligent being internalize (conceptualize) the environment that he/she inhabits? What does it mean to conceptualize the environment? How do agents communicate? What is being exchanged then? What is the
role of language? Actually, what is language? What are the representations in this context? What is learning? Is there anything that we can refer to as general-purpose intelligence?

In his doctoral thesis, Stojanov (1997) focused on the concept of intrinsic representation in order to avoid the traps of representationalism, proposing a theory of agency and learning with interaction and expectancy as key notions. In this book, by attempting to set up a mechanism of interagent communication, we are extending the existing theories to agents with linguistic competency in a multi-agent environment; formalizing the theories using classical and fuzzy algebraic structures; studying the structures needed for representations that enable interagent communication and interaction between the agent and its environment (the other agents are also a part of the environment, therefore we can also say communication of the agent with its environment); and studying emergent phenomena in the societies. We also discuss several tools to help understand the concepts and take them further, whether via simulation studies, emulation on robotic agents, or experiments with humans.
References

Agre, P., & Chapman, D. (1987). Pengi: An implementation of a theory of activity. Proceedings of the AAAI-87 (pp. 268-272). Seattle, WA.
Ashby, W. R. (1956). An introduction to cybernetics. London: Chapman & Hall.
Balkenius, C. (1995). Natural intelligence for artificial creatures (Cognitive Studies 37). Lund, Sweden: Lund University.
Bates, J. (1994). The role of emotion in believable agents. Communications of the ACM, 37(7), 122-125.
Bergson, H. L. (1997). The creative mind: An introduction to metaphysics. New York: Citadel Press.
Bickhard, M. H. (1980). Cognition, convention, and communication. New York: Praeger.
Boden, M. A. (1990). The creative mind: Myths and mechanisms. London: Weidenfeld and Nicolson.
Brentano, F. (1973). Psychology from an empirical standpoint (International Library of Philosophy). London: Routledge.
Brooks, R. A. (1986). A robust layered control system for a mobile robot. IEEE Journal of Robotics and Automation, RA-2(1), 14-23.
Brooks, R. A. (1991). How to build complete creatures rather than isolated cognitive simulators. In K. VanLehn (Ed.), Architectures for intelligence (pp. 225-239). Hillsdale, NJ: Lawrence Erlbaum Associates.
Clancey, W. J. (1989). The frame-of-reference problem in cognitive modeling. Proceedings of the Annual Conference of the Cognitive Science Society.
Collins, A., & Quillian, M. R. (1969). Retrieval time from semantic memory. Journal of Verbal Learning and Verbal Behavior, 8, 240-247.
Fikes, R. E., & Nilsson, N. J. (1971). STRIPS: A new approach to the application of theorem proving to problem solving. Artificial Intelligence, 2, 189-208.
Fikes, R. E., & Nilsson, N. J. (1972). Learning and executing generalized robot plans. Artificial Intelligence, 3(4), 251-288.
Ginsberg, M. L., & Smith, D. L. (1988). Reasoning about action I: A possible worlds approach. Artificial Intelligence, 35, 165-195.
Harnad, S. (1990). The symbol grounding problem. Physica D, 42, 335-346.
James, W. (1955). The principles of psychology. New York: Dover.
Jennings, N. R. (1992). On being responsible. In E. Werner & Y. Demazeau (Eds.), Decentralized AI (pp. 93-102). Amsterdam: North-Holland.
Kampis, G. (1998). The natural history of agents. Proceedings of the HUNABC'98 (pp. 10-24). Budapest: Springer.
Latour, B. (1989). Science in action: How to follow scientists and engineers through society. Cambridge, MA: Harvard University Press.
Lestel, D. (1986). Contribution à l'étude du raisonnement expérimental dans un domaine sémantiquement riche. Unpublished doctoral thesis, EHESS (in French).
MacKay, D. M. (1980). Brains, machines, and persons. Grand Rapids, MI: Eerdmans.
Matarić, M. (1992). Integration of representation into goal-driven behavior-based robots. IEEE Transactions on Robotics and Automation, 8(3), 304-312.
Matarić, M. (1994). Interaction and intelligent behavior. Unpublished doctoral thesis, MIT, Cambridge, MA.
Maturana, H. R., & Varela, F. J. (1980). Autopoiesis and cognition. Dordrecht, The Netherlands: D. Reidel.
Minsky, M. (1995). Computation & intelligence: Collected readings. In G. F. Luger (Ed.). AAAI Press.
Newell, A., Shaw, J., & Simon, H. (1957). Empirical explorations of the logic theory machine: A case study in heuristics. In Computers and thought (pp. 109-133). McGraw-Hill.
Nilsson, N. (1974). Artificial intelligence. Information Processing, 74, 778-801.
Piaget, J. (1945). Play, dreams, and imitation in childhood. New York: Norton.
Pylyshyn, Z. W. (1984). Computation and cognition: Towards a foundation for cognitive science. Cambridge: MIT Press.
Rizzolatti, G., Fadiga, L., Gallese, V., & Fogassi, L. (1996). Premotor cortex and the recognition of motor actions. Cognitive Brain Research, 3(2), 131-141.
Rosen, R. (1987). Some epistemological issues in physics and biology. In B. J. Hiley & F. D. Peat (Eds.), Quantum implications: Essays in honor of David Bohm (pp. 327-341). London: Routledge & Kegan Paul.
Rosenschein, S., & Kaelbling, L. (1986). The synthesis of digital machines with provable epistemic properties. Proceedings of the Conference on Theoretical Aspects of Reasoning About Knowledge (pp. 83-98).
Schaal, S., & Sternad, D. (1998). Programmable pattern generators. Proceedings of the 3rd International Conference on Computational Intelligence in Neuroscience (pp. 48-51). Research Triangle Park, NC.
Schank, R., & Abelson, R. (1977). Scripts, plans, goals and understanding. Hillsdale, NJ: Erlbaum.
Stojanov, G. (1997). Expectancy theory and interpretation of EXG curves in the context of machine and biological intelligence. Unpublished doctoral thesis, University in Skopje, Macedonia.
Stojanov, G., Bozinovski, S., & Trajkovski, G. (1997). Interactionist-expectative view on agency and learning. IMACS Journal for Mathematics and Computers in Simulation, 44, 295-310.
Thorndike, E. L. (1898). Animal intelligence: An experimental study of the associative process in animals. Psychological Review Monograph, 2(8), 551-553.
von Bertalanffy, L. (1961). Kritische Theorie der Formbildung [Modern theories of development: An introduction to theoretical biology]. New York: Oxford. (Original work published 1928)
Watkins, C. (1989). Learning from delayed rewards. Unpublished doctoral thesis, King's College, Cambridge, UK.
Wooldridge, M. (1995). Intelligent agents: Theory and practice. Knowledge Engineering Review, 10(2), 115-152.
Wooldridge, M., & Jennings, N. (Eds.). (1994). Intelligent agents. Proceedings of the ECAI-94 Workshop on Agent Theories, Architectures, and Languages. Amsterdam: Springer-Verlag.
Chapter II
On Agency
Abstract

In this chapter, we present the Interactionist-Expectative Theory of Agency and Learning (IETAL). Our intention is to provide remedies for the observed deficiencies in the existing approaches to the problem of autonomous agent design. A brief critical overview of the relevant agent-oriented research is given first.
Introduction

Although we agree with some of Brooks' (1991) criticism of traditional AI, we disagree on some crucial points, particularly on those regarding learning. We agree with his view that:
[The] problem-solving behavior, language, expert knowledge and application, and reason, are all pretty simple once the essence of being and reacting is available. That essence is the ability to move around in a dynamic environment, sensing the surroundings to a degree sufficient to achieve the necessary maintenance of life and reproduction. This part of the intelligence is where evolution has concentrated its time — it is much harder.

But in his work, he neglects the first prerequisite he points out by concentrating only on the essence of reacting. Denying the need for representation, he concentrates on generating behavior by using behavioral modules. This situation is similar to what happened with behaviorism during the first part of the 20th century. Behaviorism discarded any theoretical construct or introspection and focused on the stimuli and reactions, attempting to describe behavior. Then Tolman, Ritchie, and Kalish (1946) pointed out the difficulties of this radicalism by explaining the results of their experiments with rats learning to solve maze problems. Tolman emphasized the role of expectancies and cognitive maps in explaining the learning process.
The Key Notions: Expectancy and Interaction

Here we consider the two key notions of IETAL, expectancy and interaction. Refocusing on these phenomena enables us to consider existing problems from different viewpoints and to formulate other problems, relevant and practically significant for autonomous agents in a single-agent environment.

In our uniagent (single-agent) theory the two key notions are expectancy and interaction (with the environment). The notion of interaction augments its meaning as we move towards multi-agent systems of homogenous agents later in the text, including the interaction with the other agents as well. Expectancy emphasizes being in an environment (world). It is the agent's ability to anticipate the effects of its own actions in the environment. We say that the agent is aware of the environment if it can anticipate the results of its own actions in it. This means that, given some current percept, the agent can generate expectancies about the resulting percepts when certain actions from its repertoire are applied. Anticipation does not mean that surprises do not happen. After inhabiting some environment for a certain time, the agent builds a network of expectancy triplets percept-action-percept, thus building the agent's cognitive maps (MacCorquodale & Meehl, 1953) in the Tolmanian sense.
In most approaches, the environment is represented in the agent by using formal methods, and very often the designer's ontology is the same as the agent's gnoseology. In other words, this means that in most approaches the agent knows as much about the environment as the meta-agent — the designer. When some form of reinforcement learning is used in situated agents, there is an assumption that both the agent and the environment are in a clearly defined state that the agent can sense, often meaning that agents are given knowledge of their global position in the environment (in terms of coordinates or some omniscient sensors) that informs them about the particular state of the environment (Barto, Bradtke, & Singh, 1995). This is — to say the least — biologically implausible.

We distinguish between the worlds, or perspectives, of the agent and of the designer. The agent builds its own perspective of the environment via its interactions with it. The designer's perspective is the one of a meta-agent. The designer is the know-it-all, godlike observer of the interaction between the agent and the environment. He/she does not need to know what is happening inside the agent, or at least cannot perform any divine interventions on the agent, directly interfering with the agent's exploration of its environment. The agent uses what it has learned through interactions once expectancies have been built. We do not use logical calculi or semantic nets, and we do not spoon-feed the agent with details of the environment. The agent constructs its Umwelt (subjective universe or surrounding world [Wikipaedia.com]; Umwelt means environment in German) (Sebeok, 1976; Uexküll, 1981/1987) exclusively through the interactions with the world. The nature and quality of the interactions are defined by the restrictions of the agent and the restrictions that the environment itself imposes on the agent. That is where, for example, the resolution of the sensors of the agent and the power of its actuators come into play, on one side, and the lighting level in the environment, on the other. The structure emerging from these interactions can be seen both as the agent's model of the world and of itself or, better said, as a model of the agent's being in that particular world.

The above description is, naturally, reminiscent of Piaget's constructivism. In his theory of knowledge he states that knowledge is not simply acquired by children bit by bit, but is instead constructed into knowledge structures. Children build these structures based on their experience in the world. Children do not passively absorb experiences. They are not empty vessels into which knowledge is poured. They rather construct and rearrange their knowledge of the world based on experiences. Piaget's genetic epistemology (Campbell, 2002) has had a particular influence on our work. In particular, Piaget (1970) suggests that "knowing reality means constructing systems of transformations that correspond, more or less, to reality." In our terms, this quote means that the agent gains knowledge of the effects of its actions on its percepts. That is, given a percept, the agent knows (expects) what it will get if it applies an action (transformation) from its repertoire.
The agent assimilates the world's structure by exercising its innate motor patterns. What triggers these actions are the agent's internal drives.
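A minimal sketch of how such percept-action-percept expectancies could be stored and checked follows; it is illustrative only (the percept strings are invented), and the proper algebraic formalization is given in Chapter IV.

```python
# Illustrative expectancy store: (percept, action) -> expected resulting percept.
expectancies = {}

def learn(percept, action, result_percept):
    """Record what executing `action` at `percept` led to."""
    expectancies[(percept, action)] = result_percept

def step(percept, action, observed_percept):
    """Return 'surprise' if the observation violates a learned expectancy."""
    expected = expectancies.get((percept, action))
    if expected is None:
        learn(percept, action, observed_percept)   # nothing expected yet: just learn
        return "learned"
    return "confirmed" if expected == observed_percept else "surprise"

learn("row_houses", "go_north", "harbor")
print(step("row_houses", "go_north", "park"))      # -> "surprise"
```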
Surprised Agents

Although an agent might have built its expectations of the environment, it can still get confused, limited as it is by its sensory abilities or by conditions in the environment. That is why it is important to consider the aliasing phenomena.

Agents sometimes get confused — I know I do. It is possible that places that are not the same (locally distinct, geographically distant, etc., depending on the context of the agent) may be perceived as the same by the agent. When I drive through, say, downtown Baltimore, two distinct streets with row houses might be perceived as the same street. From what I perceive while being on one street, I cannot tell the difference between it and my percept of the other. Consequently, it often happens that I spend huge chunks of time trying to figure my way around. In these cases I (or any agent) cannot rely on percepts only. That is when context starts playing an important role in navigation through an environment. This uncertainty (confusion) is attributed to perceptual aliasing: a many-to-one mapping between the states the agent is in and its percepts.
Figure 1. Agent-environment interactions represented as a polychromatic oriented graph (see text for details) (In future examples, we will refer to this graph (portion) as Γ)
Given an agent with its sensors and actuators and the environment it inhabits, a structure emerges from their interaction. It can be depicted as a labeled, oriented graph. A portion of such a graph is given in Figure 1. The nodes in the autonomous agent's world can be seen as the agent's percepts and the arcs as actions from its repertoire. What is depicted there is a situation that can model my driving through sections of downtown Baltimore. The eight cubes denote neighborhoods, and the arrows denote connections between them. There are, at most, four possible actions that I as an agent can enact in this environment — go north, south, east, or west. In the figure, knowing the interpretation of the picture, the edges of the graph are implicitly labeled by the direction in which the driving happens. The patterns that fill the cubes denote my (the agent's) percepts of the neighborhoods. There are neighborhoods that, for whatever reasons, but mostly in this case due to perceptual resolution (my not paying attention), look the same to the agent. What is shown in this figure is but a portion of the environment that I inhabit. Note that, based on how much time I spend in a given portion of the environment, despite the perceptual aliasing present there, I get better at figuring out where to satisfy my drives — where to find a restaurant when I need food, or where to buy soda if I am thirsty. I learn how to find where the satisfiers of my drives are in the environment.
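To make the many-to-one mapping concrete, here is a toy, hypothetical fragment of such an interaction graph (in the spirit of Figure 1, with invented state and percept labels): two distinct states produce the same percept, so the percept alone cannot localize the agent, while acting and checking the resulting percept against an expectancy can.

```python
# Toy interaction graph: keys are (true state, action), values are the next state.
# Each true state maps to a percept; states "A" and "C" are perceptually aliased.
edges = {
    ("A", "north"): "B",
    ("B", "east"):  "C",
    ("C", "north"): "D",
}
percept_of = {"A": "row_houses", "B": "harbor", "C": "row_houses", "D": "park"}

def candidate_states(percept):
    """All states consistent with the current percept (aliasing = more than one)."""
    return [s for s, p in percept_of.items() if p == percept]

print(candidate_states("row_houses"))   # -> ['A', 'C']: the percept alone is ambiguous
# Acting "north" and observing the next percept disambiguates the two candidates:
print(percept_of[edges[("A", "north")]], percept_of[edges[("C", "north")]])  # harbor vs park
```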
Learning and Using the Expectancies

Two main problems emerge: (1) learning the environment, and (2) knowing how to use it. Traditionally, the first problem has been neglected, as agents are spoon-fed the environment. There has been, however, a significant body of literature focusing on the second problem. Kaelbling, Dean, and Basye, for example, devise different strategies for exploring the environment in order to build a complete model of the graph (Basye et al., 1995). We recognize the interaction graph to be important, but it is not crucial for the agent to have a model of the whole graph, as that is biologically implausible. What agents would need is a working model of a portion of the environment that is constantly being revised to reflect the corresponding changes in the environment. In this sense, our agent locates itself in the graph in Figure 1 based on the expectancies, or the triplets percept-action-percept, that it has learned, and by performing actions which will confirm or discard them in a given situation.

Expectancies are incrementally learned during the agent's stay in the environment. The agent assimilates the environment and then accommodates to it. The success of the agent is not measured in terms of how well it has learned the graph but rather how well it uses its expectancies to satisfy currently active drives.
A problem that we marginally mention here, one which has not arisen in the annals to date, is human-agent communication. How do we tell the agent what we want it to learn? In existing approaches, agents are usually born with the one and only problem they are to learn to solve. We consider the complex drive structures in humans, and this is reflected in the formal statements of our approach.
Conclusion

In this final section, we summarize the main features of our IETAL theory and discuss the experimental results, presented in detail in Chapter XIII. A comparison to related works is also given; pursuing the analogy between behaviorism in psychology and behaviorist AI, we label our approach as Neo-Behaviorist.

Unlike the dominant approach in the so-called behavior-based design (Brooks, 1986), where the stress is on the definition of the appropriate agent's reactions to the environmental stimuli, we stress the importance of the interaction between the agent and the environment. During these interactions, an agent becomes aware of the effects of its actions on the environment by learning the expectancies. All these expectancy triplets have some emotional context associated with them and with the drive to whose satisfaction they have contributed. This network of expectancies is the agent's model of the interaction graph, which we call the intrinsic representation of the environment. This model helps the agent satisfy its active drives in fewer steps as it inhabits an environment for a longer period.

We will now present an algebraic formulation of our theory, which was the basis for the simulation experiments. We conducted two series of simulation experiments. The elaborations on the simulations and the simulation experiments are given in the second section of this book.
References

Barto, A. G., Bradtke, S. J., & Singh, S. P. (1995). Learning to act using real-time dynamic programming. Artificial Intelligence, 72, 81-138.
Basye, K., Dean, T., & Kaelbling, L. P. (1995). Learning dynamics: System identification for perceptually challenged agents. Artificial Intelligence, 72(1), 139-171.
Brooks, R. A. (1986). A robust layered control system for a mobile robot. IEEE Journal of Robotics and Automation, RA-2(1), 14-23.
Brooks, R. A. (1991). Intelligence without representation. Artificial Intelligence, 47, 139-159.
Campbell, R. (2002). Jean Piaget's genetic epistemology: Appreciation and critique. Retrieved January 22, 2004, from http://hubcap.clemson.edu/~campber/piaget.html
MacCorquodale, K., & Meehl, P. E. (1953). Preliminary suggestions as to a formalization of expectancy theory. Psychological Review, 60(1), 55-63.
Piaget, J. (1970). Genetic epistemology. New York: Columbia University Press.
Sebeok, T. A. (1976). Contributions to the doctrine of signs (Rev. ed.). Lanham, MD: University Press of America.
Tolman, E. C., Ritchie, B. F., & Kalish, D. (1946). Studies in spatial learning II: Place learning versus response learning. Journal of Experimental Psychology, 36, 221-229.
Uexküll, T. von (1987). The sign theory of Jakob von Uexküll. In M. Krampen, K. Oehler, R. Posner, T. A. Sebeok, & T. von Uexküll (Eds.), Classics of semiotics (pp. 147-179). New York: Plenum Press. (English edition of Die Welt als Zeichen: Klassiker der modernen Semiotik. Berlin: Wolf Jobst Siedler Verlag, 1981.)
Wikipaedia.com (n.d.). Wikipaedia entry on Umwelt. Retrieved September 15, 2005, from http://en.wikipedia.org/wiki/Umwelton
Chapter III
On Drives
Abstract

In this chapter we briefly overview the connections between drives, motivations, and actions in an agent. Drives take a central place in our agents and influence significantly how they use their internalization of the environment. Appendix B focuses more elaborately on these phenomena in humans.
Introduction

Apart from the intrinsic representation of the environment, another important part of the agent is its actuatory system. Based on tendencies, this subsystem decides which actions are to be undertaken, given the history and the anticipated consequences of the actions.
The concept of reactive action is quite often mistakenly conflated with the term reactive behavior, where the concepts of motivation and decision are intrinsically connected to the agent. Nevertheless, the problem of executing actions and of organizing the actions in reactive systems has been studied in detail by a long list of researchers, usually under the heading of the problem of action selection (choice of actions). The bottom line of the problem is as follows: which actions to fire in order to achieve a certain goal, taking into account the action repertoire and the external stimuli to the agent (Meyer & Guillot, 1989; Meyer, Roitblat, & Wilson, 1993; Meyer & Wilson, 1991).
On Hunger

In this section we establish the link among appetitive behaviors, taxings, and consummatory acts, all of which are inherently connected to the satisfaction of (a) given drive(s) in an agent.

Actions do not all have the same influence on the agent. A difference is often made between consummatory acts and appetitive behaviors. The former relate to the intention to satisfy a tendency, while the latter are exhibited in the active phase of the goal-oriented behavior. For example, when an agent takes food, the consummatory act is eating, whereas the appetitive behavior is searching for food. In general, the consummatory acts complete the appetitive behaviors and are therefore sometimes termed final, or terminal, acts. We can also notice that the appetitive behavior most often happens without the presence of a stimulus from the environment, while the consummatory act depends exclusively on the presence of the stimulus in the environment. Thus, the appetitive behavior is a precondition to the consummatory act. Indeed, an agent cannot eat if there is no food, while the quest for food happens in the absence of food in the environment. Appetitive behavior, thus, is directed by the internal drives of the agent.

On the verge between the appetitive behaviors and the consummatory acts are the so-called taxings, a term borrowed from airport terminology. Those are behaviors that orient and move the agent so that it is directed towards (or away from) the source of stimulation. When a taxing is getting the agent towards the goal, taxings are the link between the appetitive behavior and the consummatory act. If an agent in its appetitive phase sees food, it will start taxing before it starts consuming it.

Through its percepts, the agent gets information about the world surrounding it and prepares its action(s) geared towards seeing through its goal. The perceptual system is the door between the world and the agent. The perception phenomena
Copyright © 2007, Idea Group Inc. Copying or distributing in print or electronic forms without written permission of Idea Group Inc. is prohibited.
On Drives 23
are an individual phenomenon and they contribute towards the personal view of the world (Uexküll, 1956).
Action and Control
How do agents choose actions to be executed at a given time? The problem of decision or choice of actions in an agent can be defined in several ways:
• When the agent has a choice of several actions in a given context, which action should be taken?
• What to do when more than one drive is active at the same time?
• How to incorporate in the model the changes in the environment that prevent the agent from satisfying its drives?
In order to answer these questions Tyrrell (1993) and Werner (1994) define an array of criteria that a good model of action selection should possess. Here are some of them:
• Consistency: The model should be general enough so that it responds in the same fashion to all problems, so that there is no need for ad hoc models.
• Energy conservation: The model should conserve energy. If the agent can satisfy its hunger drive at one place, it should eat there, not eat some there and some more at other places where there is food.
• Drive priority: The model should have a priority system that discriminates drives based on intensity.
• Action focus: The agent should focus on executing sequences of actions to completion, instead of starting many sequences of actions without finishing them.
• Danger awareness: If in a given, new situation the agent is suddenly at risk (predator, danger, etc.), or if it is not able to complete a given action due to an obstacle, the agent should be able to quit executing the given action.
• Expandability: The agent should be easily expandable, so that new actions can enlarge its action repertoire.
• Flexibility: The model should be able to evolve through a learning process.
Some of these criteria may sound contradictory to one another. However, in the design of the action selection model we strive to comply with as many of the previously mentioned criteria as possible.
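As a purely illustrative aside (not part of the original model), the following minimal Python sketch shows one way a selector could honor the drive-priority and action-focus criteria: candidate behaviors are scored by the intensity of the drive they serve, and the behavior currently being executed receives a small persistence bonus. All names and numbers here are hypothetical.

# Hypothetical illustration of the drive-priority and action-focus criteria.
from dataclasses import dataclass

@dataclass
class Behavior:
    name: str
    drive: str                      # the drive this behavior helps satisfy

def select_behavior(behaviors, drive_intensity, current=None, persistence_bonus=0.2):
    """Pick the behavior serving the most intense drive; a small bonus keeps
    the agent from abandoning an ongoing behavior (action focus) unless
    another drive clearly dominates (drive priority, danger awareness)."""
    def score(b):
        s = drive_intensity.get(b.drive, 0.0)
        if current is not None and b.name == current.name:
            s += persistence_bonus
        return s
    return max(behaviors, key=score)

behaviors = [Behavior("search_for_food", "hunger"),
             Behavior("flee", "danger"),
             Behavior("rest", "fatigue")]
drives = {"hunger": 0.6, "danger": 0.1, "fatigue": 0.3}
chosen = select_behavior(behaviors, drives)                      # search_for_food
chosen = select_behavior(behaviors, {**drives, "danger": 0.9}, current=chosen)
print(chosen.name)                                               # flee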
Tinbergen's Models
In the process of atomizing complex actions, we get a certain tree structure. We can observe it as a Petri network whose nodes play the role of neurons that are mutually exclusive within a level, so that only one of them activates given a stimulus from the higher level. Many action selection models are inspired by research in the domain of psychoanalysis. The best researched are those that are based on a hierarchical organization of behaviors, where complex actions are broken down into simple, atomic behaviors. The best known of these models is perhaps the model of Tinbergen (1951). In this model, the organization of behaviors is hierarchical in nature and is structurally a tree. The nodes of the tree play the role of formal neurons, linked by directed edges through an innate triggering mechanism (ITM), which is a precondition to the activation of a given node. A node activates as a combination of activations of one or more nodes one level up in the structure. If the activating signal is intense enough (past a given activation threshold), the node (neuron) will activate, and its energy will be transferred towards the nodes of the next level. The highest level of the model deals with complex actions, such as reproduction, for example, and the lower we go in the structure, the more elementary the actions become, so that at the lowest levels there are actions that, for example, represent the movement of a certain muscle. Every node at a certain level is in conflict with the rest of the neurons at that level: if one of the neurons activates, all the other ones get inhibited.
Tinbergen's model does not have much practical significance. The agent is introverted and too involved in satisfying its drives and goals, neglecting the stimulation from the environment. That is why Rosenblatt and Payton (1989) and Tyrrell (1993) work on models that, although hierarchical in structure, are not hierarchical when the decisions are being carried out. They are based on so-called free-flow hierarchies, in which the nodes of a higher level do not decide which of the lower-level neurons will fire; they just define preferences, in the sense of candidates for activation, at the lower level.
Rosenblatt's and Tyrrell's models can easily be represented by structures using the fuzzy tools from Appendix A.
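To make the contrast with the winner-take-all hierarchy concrete, here is a small illustrative Python sketch of a free-flow hierarchy; the tree, the weights, and the behavior names are invented for this example and are not taken from Tinbergen, Rosenblatt and Payton, or Tyrrell.

# Illustrative free-flow hierarchy: preferences flow down the tree and are
# only resolved into a single choice at the leaf (elementary action) level.
hierarchy = {
    "root":      [("feed", 0.7), ("reproduce", 0.3)],
    "feed":      [("approach_food", 0.8), ("graze_here", 0.2)],
    "reproduce": [("court", 1.0)],
}

def propagate(node, activation, leaf_scores):
    """Accumulate activation flowing from the root into the leaves."""
    children = hierarchy.get(node)
    if children is None:                      # leaf = elementary action
        leaf_scores[node] = leaf_scores.get(node, 0.0) + activation
        return
    for child, weight in children:
        propagate(child, activation * weight, leaf_scores)

scores = {}
propagate("root", 1.0, scores)
best_action = max(scores, key=scores.get)
print({k: round(v, 2) for k, v in scores.items()})
# {'approach_food': 0.56, 'graze_here': 0.14, 'court': 0.3}
print(best_action)   # approach_food

# In a Tinbergen-style (winner-take-all) hierarchy, "feed" would have inhibited
# "reproduce" outright; here both contribute until the leaves are compared.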
Motivation
The conative system determines which action the agent should undertake, based on its current knowledge, in order for it to maintain its functionality while maintaining its structural integrity at the same time.
How does the agent decide what to do next? What moves it to undertake one action and not another? What is the cause of these actions? What motor and stopping mechanisms influence its behavior? These questions have been the primary reason for the inception and development of numerous scientific domains within the frames of philosophy, psychology of behavior, sociopsychology, psychoanalysis, and so forth. Within all of these and other disciplines the question of internal and external drives has been discussed, and as a result of this process we have numerous theories today, which all witness the complexity of this question, as well as our ignorance of these mechanisms in humans and animals in general.
In the previous sections of this chapter we discussed a part of the conative system, the action selection system. Here, we will discuss it further in a broader context, through an effort to give a general model of the conative system, that is, the system of processes that enable the actuation of agents, in the sense of surveying the mental factors that enable the agent to work/actuate independently.
Most theories within the field of artificial intelligence assume that when we talk about agents, we discuss rational agents, whose actions are a result of goal-oriented functionality. As Newell (1982) indicates, the principle of rationality consists of such functioning that if the agent knows that a certain action will achieve a certain goal, then it will undertake the said action. An agent (in the broader sense, in the context of a multi-agent system) is said to be rational if it adjusts towards the realization of its goals, in the sense that if it has the intention of enacting a certain action and can enact it, it will. The system determines which action the agent should undertake based on its current knowledge, in order to maintain its functionality, while maintaining its structural integrity. This problem has been observed in the literature under the name of the problem of control (Bachimont, 1992; Hayes-Roth & Collinot, 1993) or the action selection problem (Tyrrell, 1993). The key concepts of this model are intentions, motivations, and commands, and each category has a respective cognion.
In 1936 Albert Burloud developed a theory of psychological behavior based on intentions as a starting concept. In his theory the intentions are a dynamic form that determines the acts. By separating the biological and the physical realization, he puts the intention in the center of the field of psychology and observes it as a scientific object. The behavior as a whole is a result of the combination and interaction of the intentions that are the motor part of the mental
actions; moreover, they are an expression of a deficit that the agent tends to lessen. For example, hunger represents a deficit of food. These deficits are not always exclusively individual, but can also be inter-individual and social. When the agent cannot complete a task that it wants to complete and requires another agent to complete the said task, it transfers its deficit (via the request) to the other agent, and the other agent might be able to lessen the deficit. If an agent does not have any deficits, it is inactive. In multi-agent environments, deficits and their propagation lead the system to the goal. Apart from the intentions that push the agent to act, there are intentions that limit the actions of the agent. For example, taboos and social norms limit the repertoire of actions considered socially acceptable.
Intentions are a result of a combination of motivations, more elementary cognions (percepts, drives, standards, social and interpersonal norms, etc.) that constitute the motivating subsystem of the conative system in an agent. So, intentions represent a key element of the conative system and of the transition towards actuation. For example, if the agent is hungry and finds food around, it will develop an intention to take it for itself. So, this act is a combination of the internal drive (hunger) and perception (food nearby) that provoke the agent's intention to act. The form of the resulting intention depends on the internal organization of the agent. The decision subsystem evaluates these intentions, chooses those of highest priority, and determines what actions should be undertaken in order to realize them.
The reason that makes the agent actuate is usually called motivation, and it is one of the basic constituents of the intentions. We have examples of motivations in hungry agents, where the hunger drives the agent to find new sources of food. Ferber (1999) groups the motivations into five main categories:
• Personal motivations, the object of which is the agent itself, geared towards satisfying the needs of the agent or releasing it of a task it is expected to perform;
• Environmental motivations, produced based on the perception of elements of the environment;
• Social motivations, related to the meta-motivations imposed by the designer;
• Relational motivations, which depend on the motivations of the other agents; and
• Contractual motivations, which are a result of the agent's agreement towards a certain commitment.
All motivations are intertwined and interrelated. According to the hedonistic principle, if an action does not result in increasing satisfaction or in avoiding unwanted consequences, the said action will not fire. On the other hand, the action will be realized only if it is in accordance with the norms of the society the agent is a part of. So the resulting goals are a consequence of the compromise between the principles of satisfaction and responsibility, between personal satisfaction and the constraints of the society to which the unit belongs. In more complex agents, the link between the drives and the intentions is more complex. In simpler agents, including our agents, intentions are a result of the combination of the inner stimuli and the percepts, or the stimuli from the environment. As we move towards more complex agents, systems are able to combine intentions into intentions of a higher level.
Personal motivations include those motivations that produce satisfaction in the agent (hedonistic) and those that are a result of the personal obligations of the agent (contracts), where the agent has certain goals that it wants to realize, and it is persistent in realizing them. Hedonistic motivations are at the base of living organisms in their quest for satisfaction and well-being. For example, the quest for food, making clothes, or the wish to reproduce relates to the primal needs expressed in terms of drives such as hunger, cold, and the sex drive. The term drive denotes an internal stimulus produced by the vegetative system that is inseparable from the agent. Not satisfying these drives often has consequences for the agent and its successors. For example, if the hunger is not satisfied, the agent will die. So, the drives are associated with the elementary needs and are a mental representation of excitations that originate in the survival system or, more generally, in the physical needs of the agent. They transform into intentions (and consequently goals) based on the percepts and the information and beliefs that the agent possesses. The drives are the basis of individual actions, and are frequently in conflict not only with the contractual motivations but also among themselves. The fact that an agent wants to follow one motivation for a shorter or longer period of time is what Ferber (1999) refers to as the personality of the agent.
Malinowski (1944), a supporter of functionalism in anthropology, believes that the societal structure depends on the success in satisfying the elementary needs of the units. A more detailed elaboration and discussion of this view would lead us to conclusions on how the agents' actions are connected and how they change the elementary needs, while structures are being produced that further change the relationship between these needs and the social structures that result from them.
Percepts are the basis for reflex reactions (by definition, a reflex is a momentary reaction to an external stimulus). They support other motivations, especially the hedonistic ones, in the sense that they increase their intensity. According to
Lorenz (1984), the intensity of an intention is a function of the strength of the percepts and the drives, as expressions of the internal and external stimuli. Lorenz considers the increase of intensity to be an additive function, but there are also models where this function is multiplicative.
Social motivations, usually expressed in terms of norms, belong to two categories: functional relationships and social rules. The functional relationships relate to the function of the agent in the multi-agent society and its role in it according to its social position. In some societies these motivations are linked to the hedonistic motivations. The social rules are the prohibitions and the ideals imposed upon the agent by the society. Although mainly antihedonistic, they are considered to be useful to the society. The core of these motivations is not in the agent but in the other agents in the society. The basis of these motivations is interagent communication. Based on its age and knowledge, a given agent may request from another agent that one of its needs be satisfied, and the same can be requested of it as well.
Agreements are one of the key concepts in the collective action of multi-agent systems. When an agreement is given, the agent promises that it will do a particular action in the future, which stabilizes the relationships in the society in the sense of the organization of the actions (activities). The characteristics of contracts in the context of multi-agent systems are given for the first time in Bond (1989) and are further elaborated in Bouron (1992). Ferber (1999) unites the two and gives a five-dimensional definition of an agent that consists of the following five variables:
• Relational responsibilities, as contracts between several agents that are jointly working on a given task;
• Responsibilities towards the environment, as responsibilities towards the resources of the environment;
• Responsibilities towards the social group the agent belongs to;
• Responsibilities of the organization towards its members; and
• Self-responsibilities, as responsibilities of the agent towards itself.
References
Bachimont, B. (1992). Le contrôle dans les systèmes à la base de connaissances. Paris: Hermès.
Bond, A. H. (1989). Commitment, some DAI insights from symbolic interactionist sociology. Proceedings of the 9th Workshop on DAI (pp. 239-261). Bellevue, WA.
Bouron, T. (1992). Structures de communication et d'organisation pour la coopération dans un univers multi-agents. Unpublished doctoral thesis, Université Paris 6.
Burloud, A. (1936). Principes d'une psychologie des tendances. Paris: F. Alcan.
Ferber, J. (1999). Multi-agent systems: An introduction to distributed artificial intelligence. Addison-Wesley.
Hayes-Roth, B., & Collinot, A. (1993). A satisficing cycle for real-time reasoning in intelligent agents. Expert Systems with Applications, 7, 31-42.
Lorenz, K. (1984). Les fondements de l'éthologie. Paris: Flammarion.
Malinowski, B. (1944). A scientific theory of culture and other essays. Chapel Hill: North Carolina Press.
Meyer, J.-A., & Guillot, A. (1989). Simulation of adaptive behavior in animats: Review and prospect. Proceedings of the First International Conference on Simulation of Adaptive Behavior. Cambridge: MIT Press.
Meyer, J.-A., Roitblat, H. L., & Wilson, S. (Eds.). (1993). From animals to animats 2. Proceedings of the Second International Conference on Simulation of Adaptive Behavior. Cambridge: MIT Press.
Meyer, J.-A., & Wilson, S. (Eds.). (1991). From animals to animats. Proceedings of the Conference on Simulation of Adaptive Behavior. Cambridge: MIT Press.
Newell, A. (1982). The knowledge level. Artificial Intelligence, 18, 87-127.
Rosenblatt, K. J., & Payton, D. W. (1989). A fine-grained alternative to the subsumption architecture for mobile robot control. Proceedings of the IEEE/INNS International Joint Conference on Neural Networks (pp. 70-74).
Tinbergen, N. (1951). The study of instinct. Clarendon Press.
Tyrrell, T. (1993). The use of hierarchies for action selection. Adaptive Behavior, 1, 387-420.
Uexküll, J. (1956). Mondes animaux et mondes humains. Paris: Denoël.
Werner, G. M. (1994). Using second order neural connections for motivation of behavioral choices. Proceedings of the From Animals to Animats (pp. 154-161).
Chapter IV
On IETAL, Algebraically
Abstract In this chapter, we formalize the Interactivist-Expectative Theory of Agency and Learning (IETAL) agent in an algebraic framework and focus on issues of learnability based on context.
Introduction
Accurate and reliable modeling of the environment is essential to the agent's performance. Since it is conceptually easier to talk in terms of agents moving in 2D or 3D environments, we will speak of the agents as mobile autonomous robotic agents, and thus connect our research to the field of navigational map learning (Kuipers, 1978). In many cases, due to the limitations of the agent's perceptual abilities, the agent can end up in a situation where it cannot distinguish between two locally distinct places that are perceived as the same (refer to the comment on
Figure 1, for example). If the reason for this uncertainty is of a perceptual nature, we talk of perceptual aliasing. Then the only way a node can be recognized as different is to examine its context. The context of a vertex in a graph is a tree-like data structure rooted at the vertex and defined recursively to include all the reachable vertices and their contexts. A sink vertex's context is that vertex itself. Dean, Basye, and Kaelbling (1992) give algorithms for learning models of such graphs while considering different types of uncertainties. In other papers, for example Stojanov, Stefanovski, and Bozinovski (1995), learning algorithms inspired by biological systems have been presented. Here, we are concerned with whether an agent can learn a given graph based on its context, in an environment where perceptual aliasing is present.
Basics of the Algebraic Formalization
In this section, we give a mathematical, graph-theory-based model of an agent and related terms (Trajkovski & Stojanov, 1998). Bear in mind Figure 1 (graph Γ) while reading this section, as we will be referring to it and illustrating theoretical concepts with examples based on it.
From the perspective of the designer, Γ is a graph, and as such consists of vertices (or nodes) and edges. It is a directed graph, as the edges denote the direction in which an action can take the agent when the agent executes this action. Therefore, let
V = {v1, v2, ..., vn}
be a finite nonempty set of vertices and
A = {s1, s2, ..., sm}
a finite nonempty set of actions, with the implicit ordering induced by the indexing. So, if we use the designer's convention on Γ in naming the vertices in Figure 1, and the acronyms N, S, E, and W for north, south, east, and west, respectively, for the actions of the agent,
V = {11, 12, 13, 22, 23, 31, 32, 33},
Figure 1. Designer’s view of Γ. The number labels are the names of the vertices, and the pattern of the boxes is how the agent perceives the vertex.
and A = {N, S, E, W}.
Let the pair
Gs = (V, rs), s∈A,
where rs ⊆ V×V, be an oriented regular graph such that the incidence matrix of the relation rs, for every s∈A, contains exactly one element 1 in each column (all other elements are 0s). It defines where we can go from each vertex in V, using only the action s from the action repertoire. Figure 2 gives an example of this graph with respect to the action N. Why the restriction requiring exactly one element 1 in each column, and all other elements zero? Due to the nature of the problem at hand, one action executed at one place can take us to only one next place. It is worth
Figure 2. ΓN: reachable places with the action N
noting that this consideration is at least partly theoretical and can be imposed in the simulation situations of this model. However, in realistic environments, when, for example, we are emulating an IETAL model with a real mobile robotic agent, more often than not the same intended action from the same place will not result in the exact same move, due to problems with, say, the servos.
As we move slowly towards the definition of the designer's ontology, we need to define a couple of transitional terms. First, we will call
Γ' = (V, ∪s∈A rs)
the graph of general connectivity of the family {Gs : s∈A}. If Γ' is connected, then the triplet
Γ = (V, A, r), r ⊆ (V×V)×A,
defined by ((v1, v2), s) ∈ r if and only if
Figure 3. A portion of Γ around the vertex 22, where the edges are labeled explicitly with the actions
(v1, v2) ∈ rs, v1, v2 ∈ V, s ∈ A, represents an oriented graph with marked edges. V is the set of vertices, r the relation, and A the set of actions of Γ. The elements of r are called marked edges. The edge ((v1, v2), s) of Γ begins in vertex v1, ends in v2, and has mark s. The vertex v2 is also called the s-follower of v1. Γ is deterministic if any vertex has at most one s-follower for any action s.
Let L = {l1, l2, ..., lk} be a set of labels and f : V→L a surjection. The function f is called a labeling of the vertex set of Γ, or a perception. In the graph Γ, the labeling is depicted by the pattern the boxes are filled with in Figure 1. The labels are the patterns in that case.
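The definitions above translate directly into simple data structures. The following Python sketch is our illustration only; the toy environment is loosely inspired by Figure 1 (two vertices share a label, so they are aliased), and it encodes the successor relation per action, the perception f, and the projection in which the agent sees labels instead of vertices (formalized below as the Agent Visible Environment).

# A toy designer-visible environment (V, A, L, r, f); the concrete vertices and
# labels are ours and only loosely inspired by Figure 1.
V = ["11", "12", "22", "31"]
A = ["E", "W"]
L = ["striped", "dotted", "plain"]

# Perception f: V -> L (a surjection); vertices 22 and 31 get the same label,
# which is exactly the perceptual-aliasing situation discussed in the text.
f = {"11": "striped", "12": "dotted", "22": "plain", "31": "plain"}

# The relation r, stored per action: succ[s][v] is the s-follower of v.
succ = {
    "E": {"11": "12", "12": "22", "22": "31", "31": "11"},
    "W": {"11": "31", "12": "11", "22": "12", "31": "22"},
}

def is_deterministic(succ):
    # Each vertex has at most one s-follower by construction of the dicts,
    # so determinism here just means every follower is a known vertex.
    return all(w in f for moves in succ.values() for w in moves.values())

# The agent's view of the edges: ((f(v), f(w)), s) instead of ((v, w), s).
agent_view = {((f[v], f[w]), s) for s, moves in succ.items() for v, w in moves.items()}

print(is_deterministic(succ))   # True
print(sorted(agent_view))       # label pairs; 22 and 31 are indistinguishable here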
The quintuple Γ'' = (V, A, L, r, f) is called the Designer Visible Environment (Designer's Ontology). In the example case of the graph Γ, Γ'' is actually what we have in Figure 1.
The relation r induces the relation r''' ⊆ (L×L)×A, defined by the following expression: ((f(v1), f(v2)), s) ∈ r''' if and only if ((v1, v2), s) ∈ r. The graph Γ''' = (L, r''', A) with marked edges is called the Agent Visible Environment (Agent Gnoseology).
What is the difference between the designer's and the agent's environments? The agent perceives the vertices, and it might happen that it perceives two locally distinct places as the same. In the case of the example in Figure 1, vertices 31 and 22 are the same to the agent (the agent cannot know the unique names of the vertices), although they are two different vertices.
The sequence
P : t0 a1 t1 a2 ... ap tp, ti ∈ V, i = 0, 1, ..., p, p ≥ 1,
is called an a1a2...ap-path from the vertex t0 to tp in p steps if ti is the ai-follower of ti-1, i = 1, 2, ..., p. The number p equals the length of the path P. As Γ'' is deterministic, the path P can also uniquely be denoted in the following two forms:
P : t0a1a2...aptp and
P : t0a1a2...ap.
The sequence t0t1t2...tp is called the vertex visiting string of the path P, whereas f(t0)f(t1)f(t2)...f(tp) is the respective label string. If in some vertex visiting string all the vertex-label pairs (ti, f(ti)) appear at most once, except (tp, f(tp)), which appears exactly twice, P is said to be bounded by tp in p steps starting in t0.
Unlearnable Environments
In this section, we give algorithms for determining the learnability of an environment based on context. Let us denote by P(v) the set of all the bounded paths starting in v. We construct the tree structure Context_Tree(v) (the context tree for v), v∈V, following the procedure given in Figure 4. Then, knowing the designer's view of the graph, we can use the procedure in Figure 5 to see whether an agent can distinguish all vertices based on context, or where it may fail to do so.
The relation "… is indistinguishable from …" is an equivalence relation under a perception f. If the number of equivalence classes equals the cardinality of V, then the environment is learnable; otherwise it is not. As presented further in the book, this constitutes the basics of environment representation for multi-agent systems.
The example of an unlearnable environment is from Trajkovski and Stojanov (1998). The agent's perception is given with f = {(A, 1), (B, 1), (C, 2)}. Figure 7 presents the context trees for the environment in Figure 6, and Figure 8 their perceptual projections. Based on the discussion above, this environment is unlearnable.
Figure 4. Construct_Context_Tree(v) procedure
1. Set X = (v, f(v)) ∈ LD to be the root (at level b=0) of the tree Context_Tree(v). Declare it a nondeveloped node of the tree.
2. Draw m marked edges from every nondeveloped node at level b of the tree to its s1, s2, ..., sm-1 and sm followers.
3. Observe the path from the root to the developing leaf. If the path is bounded, declare the node a leaf, or developed node.
4. Repeat steps two and three, incrementing b, as long as there still exist nondeveloped nodes.
Figure 5. The procedure Is_learnable(Γ'')
1. For all v∈V, Context = APPEND Context_Tree(v) TO Context (that is, Context is a tuple containing all the context trees).
2. For all Context_Tree(v), construct the projection tree Context_Tree_Projection(v) (nodes (vi, f(vi)) are reduced to (f(vi))).
3. If v, u ∈ V are such that Context_Tree_Projection(v) is a sub-tree of Context_Tree_Projection(u), with roots f(v) and f(u) respectively, declare u and v indistinguishable under the perception f. Declare Γ'' unlearnable if there are indistinguishable vertices; otherwise declare it learnable.
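For readers who prefer running code, here is a minimal Python rendering of the procedures in Figures 4 and 5. It is a sketch under two assumptions of ours: context-tree nodes are taken to be (vertex, label) pairs, as in Figure 7, and indistinguishability is tested by simple equality of the perceptual projections rather than by the more general sub-tree test of Figure 5. The three-vertex ontology at the end is made up and only mimics the spirit of Figure 6.

def context_tree(v, succ, A, f, path=None):
    """Expand all walks from v; a branch stops as soon as it would revisit a
    (vertex, label) pair, which is the 'bounded path' condition."""
    if path is None:
        path = [(v, f[v])]
    children = {}
    for s in A:
        w = succ[s].get(v)
        if w is None:
            continue
        pair = (w, f[w])
        if pair in path:                 # bounded: the pair appears a 2nd time
            children[s] = (pair, {})     # leaf (developed node)
        else:
            children[s] = (pair, context_tree(w, succ, A, f, path + [pair]))
    return children

def project(tree):
    """Replace (vertex, label) nodes by labels only: the agent's view."""
    return {s: (pair[1], project(sub)) for s, (pair, sub) in tree.items()}

def is_learnable(V, A, succ, f):
    projections = {v: project(context_tree(v, succ, A, f)) for v in V}
    # repr works here because the trees are always built iterating the actions
    # in the same order, so structurally equal trees print identically.
    classes = set(map(repr, projections.values()))
    return len(classes) == len(V)        # learnable iff all vertices distinguishable

# Tiny made-up ontology: three vertices, two of which the agent perceives the same.
V = ["A", "B", "C"]
A = ["a", "b"]
f = {"A": 1, "B": 1, "C": 2}
succ = {"a": {"A": "C", "B": "C", "C": "C"},
        "b": {"A": "B", "B": "A", "C": "C"}}
print(is_learnable(V, A, succ, f))   # False: A and B cannot be told apart by context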
Figure 6. An example of an unlearnable ontology
Perceptual vs. Cognitive Aliasing
Not only problems of perception can contribute to the uncertainties of the agent. In this section we distinguish between two types of aliasing: perceptual and cognitive. As previously mentioned, one of the problems that our approach to agency and multi-agency is trying to resolve is the problem of perceptual aliasing. In a recent reflection on this phenomenon (using the term sensor aliasing), Siegwart and Nourbakhsh (2004) state:
Figure 7. Context trees of the nodes of the Figure 6 ontology
Figure 8. Perceptual projections of the context trees in Figure 7
… [Another] shortcoming of mobile robot sensors causes them to yield little information content, further exacerbating the problem of perception and, thus, localization. The problem, known as sensor aliasing, is a phenomenon that humans rarely encounter. The human sensory system, particularly the visual system, tends to receive unique inputs in each unique local state. In other words, each place looks different.
But then I keep on getting lost in downtown Baltimore. Why?
They further give an example of what perceptual aliasing would look like in humans: consider moving through an unfamiliar building that is completely dark, when the visual system sees only black; one's localization system quickly degrades. Another example they give is movement in a maze with tall hedges.
In the context of our discussion in this chapter, we will give a critique of these statements and introduce another category of aliasing, namely cognitive aliasing, as a phenomenon of unlearnable environments. Whereas the perceptual aliasing problem can be solved by looking into the context (or into the contextual experiences, as in IETAL), in the case of cognitive aliasing that is not possible
when we are considering two vertices that are not distinguishable. You need to have perceptual aliasing phenomena across the graph to have a cognitive aliasing problem; the reverse does not have to hold.
Having made this distinction, with perceptual aliasing a local problem and cognitive aliasing a global one, the movement in the hedges should no longer be considered a source of perceptual aliasing problems only. If we try hard, we can remember very many details of each place in the labyrinth and still get lost. Even though the human visual system would be able to perceive everything, this does not mean that the individual will use all of this information to find their way around the maze or to remember the locally distinct places already visited. The problem is not really in the sensory system but in what the individual makes of it (based on the context). On the other hand, the dark room is clearly a case of perceptual aliasing gone awry, causing cognitive aliasing as well.
References
Dean, T., Basye, K., & Kaelbling, G. (1992). Uncertainty in graph-based learning. In J. Connell & S. Mahadevan (Eds.), Robot learning. Kluwer.
Kuipers, B. (1978). Modeling spatial knowledge. Cognitive Science, 2, 129-153.
Siegwart, R., & Nourbakhsh, I. R. (2004). Introduction to autonomous mobile robots. Cambridge: MIT Press.
Stojanov, G., Stefanovski, S., & Bozinovski, S. (1995). Expectancy based emergent environment models for autonomous agents. Proceedings of the 5th International Symposium on Automatic Control and Computer Science, Iasi, Romania.
Trajkovski, G., & Stojanov, G. (1998). Algebraic formalization of environment representation. In G. Tatai & L. Gulyàs (Eds.), Agents everywhere (pp. 59-65). Budapest, Hungary: Springer.
Chapter V
On Learning
Abstract
In this chapter we explain how Interactivist-Expectative Theory of Agency and Learning (IETAL) agents learn their environment and how they build their intrinsic, internal representation of it, which they then use to build their expectations when on a quest to satisfy their active drives.
Introduction
As we develop our view of agency, we give a central part to the Piagetian inborn scheme. This inborn programming is the mission that we are equipped with when we are born (defined by the actions that we can do and constrained by the limitations of our body). Piaget (1977) states:
I think that all structures are constructed and that the fundamental feature is the course of this construction: Nothing is given at the start, except some limiting points on which all the rest is based. The structures are neither given in advance in the human mind nor in the external world, as we perceive or organize it. (p. 63)
In English translations of Piaget's work, the words scheme and schema are used interchangeably as translations of either one of the French schème and schéma. Siraj-Blatchford (n.d.) explains the difference between these terms:
The term 'scheme' […] was used by Piaget in his later work to refer to operational thoughts, or 'schemes of action' and […] the difference between schemes and schemas represents fundamental differences between operative and figurative thinking. […] For Piaget, a scheme was thus a pattern of behaviour into which experiences were assimilated. As a result of experimental play, children from a very early stage in their development, apply a range of action schemes to objects and begin to categorise those that are suckable, those that are throwable and so on. As these categories develop further, complex structures of knowledge and understanding are formed.
Schemas are cognitively more complex concepts. Schemes are more atomic. When our agent learns, it uses its inborn scheme, which, for the purposes of our modeling and simulations, translates into a predefined sequence of actions from the agent's action repertoire that it wants to realize, as a given mission/programming in its interaction with the environment. As the agent starts interacting with the environment based on its inborn scheme, limitations from the environment (obstacles, for example) will enable it to realize either the full sequence of actions or just a few of them. In realizing these actions it remembers the percepts of the new states it is in, and thus it starts building its operative, intrinsic representation of the environment. In this chapter we focus on the learning procedure(s).
Building the Intrinsic Representation
The intrinsic representation is a simple table (that we refer to as the contingency table) where sequences of action and percept pairs are paired together in a fashion not unlike that of a finite automaton. The contingency table is not,
however, a finite automaton. Not only may the number of states grow or decline in time, but the rows of the contingency table also contain information on the emotional context of each of the rows (associations) with respect to each of the drives. As will be discussed later, the emotional context is a numerical measure that denotes how useful an association has been (in past experiences) in satisfying the particular drive of the agent.
Figure 1. The IETAL learning procedure in an autonomous agent

PROCEDURE: GENERATE_INTRINSIC_REPRESENTATION
GLOBAL:    G: ONTOLOGY, ξ: INBORN_SCHEME
ARGUMENT:  Table: CONTINGENCY_TABLE
BEGIN_PROCEDURE
  INITIALIZE_EMPTY_CONTINGENCY_TABLE;
  INITIALIZE_POSITION_IN_G (G; Position);
  TRY_TO_EXECUTE_SCHEME (Position, ξ; (B1, S1));
  ADD_ROW_TO_CONTINGENCY_TABLE ([(λ, λ), (B1, S1)]; R∆);
  WHILE (Active_Drive_Not_Satisfied)
    TRY_TO_EXECUTE_SCHEME (Position, ξ; (B2, S2));
    ADD_ROW_TO_CONTINGENCY_TABLE ([(B1, S1), (B2, S2)]; R∆);
    (B1, S1) := (B2, S2);
  END_WHILE
  PROPAGATE_CONTEXT ((B1, S1), drive; Table).
END_PROCEDURE

PROCEDURE: TRY_TO_EXECUTE_SCHEME
ARGUMENTS: Position: POSITION_IN_ONTOLOGY, (B, S): PERCEPTS_ACTIONS_PAIRS
BEGIN_PROCEDURE
  S := EMPTY_STRING;
  TRY_TO_EXECUTE_ACTION_AT (Position, ξ; (ADD_TO (S, Current_Percept), B));
  REPEAT
    TRY_TO_EXECUTE_ACTION_AT (Position, B; (ADD_TO (S, Current_Percept), B))
  UNTIL NOT ENABLED (B)
END_PROCEDURE

PROCEDURE: PROPAGATE_CONTEXT
ARGUMENTS: d: DRIVE; Table: CONTINGENCY_TABLE
BEGIN_PROCEDURE
  N := 0;
  WHILE (B1, S1) ∈ Projection[d](Table) DO
  BEGIN_WHILE
    Projection[d](Table) := CONTEXT_EVALUATION_FUNCTION (N);
    N++
  END_WHILE
END_PROCEDURE
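The following is a compressed, executable Python sketch of the Figure 1 procedures, not the original simulation code: the toy grid world, the percept alphabet, the compass actions, and the single food cell are all invented here (and differ from the forward/right agent used in Figures 2-4), and pushing the inborn scheme simply stops at the first blocked action.

import math

GRID = ["....",
        ".#..",          # '#' = obstacle
        "...F"]          # 'F' = the vertex where the hunger drive is satisfied
MOVES = {"N": (-1, 0), "S": (1, 0), "E": (0, 1), "W": (0, -1)}
INBORN_SCHEME = "ES"     # the agent's fixed motor pattern

def percept(pos):
    r, c = pos
    return GRID[r][c]                       # the label the agent senses here

def step(pos, action):
    r, c = pos
    dr, dc = MOVES[action]
    nr, nc = r + dr, c + dc
    if 0 <= nr < len(GRID) and 0 <= nc < len(GRID[0]) and GRID[nr][nc] != "#":
        return (nr, nc)
    return None                             # blocked: the action is not enabled

def try_to_execute_scheme(pos):
    """Push the inborn scheme as far as the environment allows and return the
    executed subscheme B together with the string S of percepts met on the way."""
    B, S = "", percept(pos)
    for a in INBORN_SCHEME:
        nxt = step(pos, a)
        if nxt is None:                     # obstacle: stop pushing the scheme
            break
        pos, B, S = nxt, B + a, S + percept(nxt)
    return pos, (B, S)

def generate_intrinsic_representation(start, table):
    pos, prev = try_to_execute_scheme(start)
    rows = [((None, None), prev)]           # (None, None) stands for (λ, λ)
    while percept(pos) != "F":              # active drive (hunger) not yet satisfied
        pos, cur = try_to_execute_scheme(pos)
        rows.append((prev, cur))
        prev = cur
    # PROPAGATE_CONTEXT: best emotional context at the satisfier, exp(-N) backwards.
    for n, row in enumerate(reversed(rows)):
        table[row] = max(table.get(row, 0.0), math.exp(-n))

table = {}
generate_intrinsic_representation((0, 0), table)
for row, ctx in sorted(table.items(), key=lambda kv: -kv[1]):
    print(round(ctx, 3), row)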
The learning procedure, that is, the procedure that an agent follows in building its intrinsic representation and in using past experiences in its mission to satisfy a drive, is schematically given in Figure 1. Let us focus on the given procedures.
The agent's exploration of the environment is guided by the main procedure in Figure 1, GENERATE_INTRINSIC_REPRESENTATION. Initially, in the Piagetian sense, the agent is a tabula rasa in the environment, and it starts trying to execute the actions in its inborn scheme, in the order in which they appear in the sequence. Please note that the parameter G, the ontology, is visible to the designer only and is a global parameter in these procedures, as we, the designers, are the ones guiding the experiment. The agent uses its contingency table contents for operative purposes in the environment.
So, the agent tries to execute its inborn scheme. It may execute all of its actions, or just some of them. For example, if the agent is in a corridor and the next action in the scheme requires it to turn right, it may not be able to execute this particular action, as it would bump into the wall. It records the percepts from the places it ends up in after executing an action from the scheme. So, the agent remembers the subscheme it executed and the percepts perceived while pushing this scheme. Then it starts over again: it starts pushing the scheme again, from the new place in the environment, and registering perceptually the context of the places it passes through. These subscheme-perceptual (experiential) string pairs are crucial to the building of the contingency table. Its experiences are kept in the table as associations between two such pairs, telling the agent what experience followed after the first experience.
After the agent finds a drive satisfier (note that all of this is happening while the drive d is active), the emotional context for that drive is updated. According to the CONTEXT_EVALUATION_FUNCTION, the association that was being built or used when the drive satisfier was found gets the best emotional context value, the association prior to that one a slightly worse one, and so on. In the simulations that we discuss in this chapter, we use CONTEXT_EVALUATION_FUNCTION(N) = exp(-N), and we have also tried other ones (see Chapter X on the MASim simulations, for example).
As the agent explores the environment in quest for food, water, or other items that would satisfy its active drive, it relies on its intrinsic representation and emotional contexts to get to food in an optimal way; due to the perceptual aliasing problems, this might not always happen. When what is expected according to the contingency table does not happen in reality, this is recorded as a surprise to the agent. The bumping of the agent into a wall/obstacle in the environment is counted as a unit of pain.
People forget, environments are dynamic, memory is limited. Later on we will discuss how we can account for these phenomena in this model. If we have the
Figure 2. Example of a designer visible environment
emotional contexts decrease in time, new associations and expectations would come to dominate in the context of a drive, and associations that used to be helpful in finding, say, food would eventually be forgotten if the food stand is removed.
In Stojanov, Bozinovski, and Trajkovski (1997) and Stojanov, Trajkovski, and Bozinovski (1997), we discuss the successfulness of the agent in its quest to satisfy one drive, food. Here we will briefly overview our experiences with that simulation. In the experiment we use the designer visible environment given in Figure 2. For simulation purposes the agent has only one drive, hunger. In the environment there is only one vertex where the agent can satisfy its hunger. In Figure 4 we give a snapshot of the contingency table, after the agent has had the opportunity to interact with the environment from Figure 2. Note that in these examples we used an agent able to go forward (action: F) or to turn right (R), whose initial orientation is towards north. The actions take the agent to the next cell of Figure 2 in front of it, or to its right, depending on the action in the inborn scheme that is being executed at that time.
In Figure 3 we present an excerpt of two associations from the intrinsic representation that an agent built after spending a certain time in the environment. Its inborn scheme for this example is FRF. The second and fourth columns
Figure 3. Two associations from the contingency table of an agent at a given time while being in the environment (columns: Perceived string (S1), Subscheme (B1), Perceived string (S2), Subscheme (B2); entries in the original table: RF, FF, FF, FRF)
Figure 4. A snapshot of the contingency table of the agent exploring the environment from Figure 2, in a finite automaton/cognitive map fashion
denote the action subscheme that was executed, and the first and third columns show the sequence of percepts thus perceived. We do not portray the emotional context in this table, as we elaborate on it later in the book. Each node in Figure 4 is a perception string/subscheme pair, and two nodes are connected if they form such an association at the time the snapshot was taken.
Figure 5 depicts the average number of steps an agent needed to get to food. Notice that the steps are counted as the number of transitions/rows in the contingency tables. It can be seen that the agent is fast in figuring out where the food is, and the number of steps stabilizes. In the figure we depict the average number of steps to food versus the number of activations of the hunger drive. The data depicted in Figure 5 is from 50 consecutive activations of the hunger drive (x-axis). On the y-axis, the average number of steps to food (range 0-8) is shown, measured in the number of contingency table associations (cognitive map hops), and not actual actions. After the first five activations of the drive, the number of average steps stabilizes.
Figure 6 shows what happens with the agent when we change the degree of perceptual aliasing in the environment. The data collected behaves as expected: the higher the degree of perceptual aliasing, the harder it is for the agent to find the food. Despite that shift in the graph, the general behavior of the data is unchanged qualitatively: after a short period of exploration, the number of steps to food stabilizes. The graph fluctuates up and down, but that is attributed to the fact that once the agent finds the food, the next time it is put at a random place in the environment.
Figure 5. Average number of steps to perceiving the place where the hunger drive can be satisfied versus the number of activations of the hunger drive
Figure 6. Average number of steps to food in environments with two different ratios of perceptual aliasing (percepts/labels), for the 10th-50th activation of the hunger drive
The squares in Figure 6 represent the numbers of associations used to find food in the case of the lower aliasing ratio (0-8 is the range of the y-axis).
The environments agents are in are (for the most part) dynamic; they change over time. Even if an agent hits an obstacle in a realistic environment, it may move it, thus possibly producing changes that may be perceived differently by the agent; to the agent that would be a different environment. Depending on the realization of the emotional context function, the average number of steps to food increases. When we used exp(-N) for context evaluation, we incurred a linear increase in the average number of steps to food. Nevertheless, in a changing environment the agent is successful (although not always optimal) in finding one of the places where it can satisfy its food drive.
In Quest for Food
Based on the theory presented before, we now present the architecture of Petitagé, an IETAL agent. In Petitagé, the central place is given to the internal drives. It is there that everything begins. With its innate fixed motor patterns (scheme), Petitagé starts acting in the world it inhabits. Petitagé is capable of performing four elementary actions (forward, backward, left, and right) and distinguishes 10 different percepts. It is additionally equipped with one food sensor. It has one hunger drive, which is periodically activated and is deactivated once the food sensor has sensed food.
Initially its contingency table is empty. When the hunger drive is activated for the first time, the agent performs a random walk during which expectancies are stored in the associative memory. The emotional contexts of these expectancies are neutral until food is sensed. Once this happens, the current expectancy's emotional context is set to a positive value. This value is then exponentially decremented and propagated backwards, following the recent percepts. Every time the hunger drive is activated in the future, the agent uses the context values of the expectancies to direct its actions. It chooses, of course, the action that will lead to the expectancy with the maximum context value. If all the expectancies for the current perceptual state are neutral, a random walk is performed. Again, when food is sensed, the emotional contexts are adjusted in the previously described manner.
In Stojanov, Bozinovski, and Trajkovski (1997), we ran a Petitagé simulation in an ontology of 100 vertices. The cumulative number of surprises stabilizes quickly (Figure 7), and the pain decreases. The average number of steps to food depends on the number of food percepts, the perceptual aliasing ratio, and so forth.
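As a rough sketch only (not the original Petitagé implementation), the following Python fragment illustrates the two mechanisms just described: backward, exponentially decremented propagation of the emotional context once food is sensed, and action choice by maximum context value with a random walk as the fallback. The state names, the decay factor, and the memory layout are our assumptions.

import random

def choose_action(memory, state, actions):
    """memory[(state, action)] holds the emotional context of the expectancy
    reached by taking 'action' in 'state' (0.0 means neutral)."""
    scored = {a: memory.get((state, a), 0.0) for a in actions}
    if max(scored.values()) <= 0.0:
        return random.choice(actions)       # nothing learned yet: random walk
    return max(scored, key=scored.get)

def reinforce(memory, episode, reward=1.0, decay=0.5):
    """Food sensed: give the current expectancy a positive context and
    propagate it backwards, exponentially decremented, along recent steps."""
    value = reward
    for state, action in reversed(episode):
        key = (state, action)
        memory[key] = max(memory.get(key, 0.0), value)
        value *= decay

memory = {}
episode = [("p1", "forward"), ("p2", "right"), ("p3", "forward")]   # ...then food
reinforce(memory, episode)
print(memory)    # {('p3', 'forward'): 1.0, ('p2', 'right'): 0.5, ('p1', 'forward'): 0.25}
print(choose_action(memory, "p2", ["forward", "right", "left"]))    # right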
In the simulations we performed, Petitagé inhabits a two-dimensional world surrounded by walls and populated with obstacles (Figure 5). Its action repertoire
Figure 7. Cumulative surprises versus number of activations of the drive
consists of four elementary actions (rotate left, rotate right, step forward, and step backward), which cause changes in its position by a quantity much smaller than the obstacle dimensions.
Context?
The intrinsic representation of the environment in autonomous agents gives us the possibility to introduce flexible and context-dependent environment representations, which questions the traditional approach to the problem of context modeling (McCarthy, 1993). There is no widely accepted definition of the term context. Paraphrasing one of many, we can say that the entities of a given environment that, in one way or another, are internalized in the agent influence the context, defined as a state of mind (Kokinov, 1995). Apart from using so many ill-defined terms, we then ask: What is not context? Some researchers, such as Giunchiglia (1992), think that context is a set of facts that are used locally in order to prove a certain goal, in union with the procedures used to reason on these facts, which will eventually, when trying to define the sets of facts, bring us to the frame problem. Behaviorist psychologists deal with this problem when they are trying to define the term stimulus. What should we, as observers, take into consideration while we are observing the behavior of a given agent (Stojanov et al., 1996)?
Giunchiglia and Bouquet (1996) raise an old, but crucial question: Is there a need for new names for old problems? Intuitively, one gets the impression that what is happening today in the area of research on context is the use of the divide et impera
(divide and conquer) strategy, by which an artificial border between the (indivisible) terms representation and context is introduced. We think that the reason for this is the pioneering work of McCarthy (1993), who for the first time defines the term contextual knowledge as a component of representation systems based on mathematical logic.
Normally, the definitions of context depend on the context of the discourse (simulation of human behavior, making intelligent artifacts, mobile agent navigation, etc.). We think that human agents are extremely complex, and experiments based on language and other highly cognitive tasks depend on a range of known and unknown parameters. That is why our approach is, in its fundaments, based upon and congruent with Drescher's constructivist artificial intelligence (Drescher, 1992), which constructs simple agents and concentrates on researching their internal representation of the environment. As all researchers basically agree that context (whatever it might be or however it might be defined) influences the internal representation of the environment in agents, it is those representations that need to be defined. We believe that the process of construction of these representations depends on the context, that is, the form of the representations themselves depends on the contextual knowledge.
IETAL agents are simple, and only have inborn goals, which are genetically specified. Those goals are defined by the inner drives of the agent and the way the agent sees the environment at a given time. So, the agent conceptualizes the environment via its contingency table. Depending on the system of active drives at a given time, the way the agent views the environment might be very different. Contextual information is built into the inner representation of the agent. Therefore, these representations are flexible and contain the inner context of the agent.
Emotional Context
Chaplin and El Rhalibi (2004) state the following:
Emotion is an essential part of human decision making. It emphasizes areas that need attention to provide a stable, "content" being. Sometimes these emotions are skewed by the environment, but very generally our behavior is the logical outcome between stimulus and the emotions. A lot of current research concentrates on drives being set and steady throughout a simulation, however in real life there are affects of the agents drives that have to be taken into account. Any action taken by/on the agent should have a residual effect in its drives and this in turn affects the emotional decisions
of the agent. An agent's mood is decided by many factors, including successful interaction with other agents and successful completion of goals; the relationship with another agent also affects the outcome of any future interactions between agents.
The details of our implementation(s) of emotional context are given in Chapter X on MASim. Variations of the implementation are possible. We attach an emotional context as a numerical value to each row of the contingency table, for each drive, as a measure of the helpfulness of that row towards the satisfaction of the actual drive, normally in terms of a metric, a distance from the drive satisfaction position.
In Bazzan and Bordini (2001) a logical framework for modeling emotions in agents is given. The qualitative behavior of the agents bears a striking resemblance to those simulated in our approach. Using the emotions, the multi-agent system learns to cooperate, and in this case shows prospects towards a successful solution of the Iterated Prisoner's Dilemma (IPD), a standard benchmark test for cooperation problems, where normal social behavior (norms) is promoted in the society by punishment in further interactions. Their paper is based on the cognitive appraisal theory of Ortony, Clore, and Collins (OCC), proposed in Ortony, Clore, and Collins (1988).
The basic idea of the OCC model is that each emotion-eliciting situation is assigned to different conditions according to the construal of each agent. Which emotion it will be depends on the evaluation or interpretation of the situation. Although the situation and conditions might be the same, different emotions might be generated. The OCC model classifies emotions into three categories, according to what they are reactions to (events, agents, and objects). Each class can be differentiated into groups of emotional types. An emotion type is a collection of individual emotions with similar emotion-eliciting conditions. Individual emotions vary by intensity.
Based on the OCC model, Lee (1999) classifies the emotion-eliciting factors as cognitive and noncognitive. The cognitive factors are:
• Importance of goals;
• Likelihood of goal success and failure;
• Goal success and failure;
• Desirability to other;
• Importance of standards;
• Degree to which self/other is held to be responsible; and
• Strength of attitude to objects or other agents (how much an agent likes or dislikes an object or an agent).
The noncognitive factors consist of:
• Facial expressions;
• Current emotions and drives; and
• Emotional memory, as retrospect on the old days and past experiences.
An emotion in the OCC model is computed as a linear weighted sum of factors. Different patterns of emotions characterize different personalities.
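As an illustration of this weighted-sum reading of the OCC model (our sketch; the factor names paraphrase Lee's lists above and the weights are invented), one emotion type for one personality could be computed as follows in Python.

# Hypothetical illustration of "emotion as a linear weighted sum of factors";
# a different weight pattern would characterize a different personality.
FACTORS = ["goal_importance", "likelihood_of_success", "goal_success",
           "desirability_to_other", "standard_importance", "responsibility",
           "attitude_strength", "current_drive", "emotional_memory"]

def emotion_intensity(weights, appraisal):
    """Intensity of one emotion type for one emotion-eliciting situation."""
    return sum(weights.get(k, 0.0) * appraisal.get(k, 0.0) for k in FACTORS)

joy_weights = {"goal_success": 0.6, "goal_importance": 0.3, "emotional_memory": 0.1}
appraisal   = {"goal_success": 1.0, "goal_importance": 0.8, "emotional_memory": 0.2}
print(round(emotion_intensity(joy_weights, appraisal), 2))   # 0.86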
References
Bazzan, A., & Bordini, R. (2001). A framework for the simulation of agents with emotions: Report on experiments with the iterated prisoner's dilemma. Proceedings of AGENTS '01, Montreal, Quebec, Canada.
Chaplin, D. J., & El Rhalibi, A. (2004, June 3-5). IPD for emotional NPC societies in games. Proceedings of ACE '04, Singapore.
Drescher, G. (1992). Made-up minds. Cambridge, MA: MIT Press.
Giunchiglia, F. (1992). Contextual reasoning. Epistemologia, Special Issue on I Linguaggi e le Machine, XVI, 345-364.
Giunchiglia, F., & Bouquet, P. (1996). Introduction to contextual reasoning: An AI perspective. Course material at CogSci 96, Sofia, Bulgaria.
Kokinov, B. (1995). A dynamic approach to context modeling. In P. Brezillon & S. Abu-Hakima (Eds.), Working Notes. IJCAI '95 Workshop on Modelling Context in Knowledge Representation and Reasoning.
Lee, K. (1999). Integration of various emotion eliciting factors for life-like agents. Proceedings of ACM Multimedia '99 (Part 2), Orlando, FL.
McCarthy, J. (1993). History of circumscription. Artificial Intelligence, 59, 23-46.
Ortony, A., Clore, G., & Collins, A. (1988). The cognitive structure of emotions. Cambridge University Press.
Piaget, J. (1977). Foreword. In J. C. Bringuier (Ed.), Conversations libres avec Jean Piaget. Paris: Editions Laffont.
Siraj-Blatchford, J. (n.d.). Schemes and schemas. Retrieved June 22, 2004, from http://k1.ioe.ac.uk/cdl/CHAT/chatschemes.htm
Stojanov, G., Bozinovski, S., & Trajkovski, G. (1997). Interactionist-expectative view on agency and learning. IMACS Journal for Mathematics and Computers in Simulation, 44(3), 295-310.
Stojanov, G., Trajkovski, G., & Bozinovski, S. (1997). The status of representation in behavior based robotic systems: The problem and a solution. Proceedings of the Conference on Systems, Man and Cybernetics (pp. 773-777). Orlando, FL.
Chapter VI
On IETAL, Fuzzy Algebraically
Abstract
Based on the material presented in Appendix A and B, in this chapter we give alternatives to the definition of agent using fuzzy algebraic structures. The notion of intrinsic representation of the environment is formally defined as a fuzzy relation valued by the lattice of drives of the agent.
Introduction
Inspired by the flexibility of fuzzy modeling tools, and taking into account the relationships between the key notions in our theories on agency, learning, and multi-agent systems (especially the drives, actions, and the like), in this chapter we give a new approach to the definition of an Interactivist-Expectative Theory of Agency and Learning (IETAL) agent.
Structures in Drives, Motivations, and Actions
In the agents' drives, motivations, and actions we can identify multiple relational structures that can later be used as valuating structures for the L-, P-, and R-fuzzy structures. Abraham Maslow (Hoffman, 1988) developed a hierarchical system of motivations that influence human behavior. The physiological and physical needs of the human are at the bottom of the hierarchy, and they need to be at least partially satisfied before people can be motivated by motivations higher in the hierarchy. Maslow gives the following layers of motivation in humans (starting from the bottom and going upwards):
•	biological motivations (food, water, oxygen, sleep);
•	safety;
•	belonging and love (participation in sexual or nonsexual relationships, belonging to social groups);
•	respect (as an individual); and
•	self-actualization (to be all the unit is able to be).
In one such strict hierarchy we are dealing with a total linear ordering, and chains are special cases of lattices. Motivations, however, do not always have to be comparable, so the hierarchy does not always need to be a total ordering. If the weakest and the strongest motivation in an agent can be identified (along with all the other motivations that are comparable to the weakest and the strongest motivation, but not necessarily to all the others), then the algebraic structure of the motivations does not need to be a linear ordering. In the most general discourse, when we cannot identify the strongest or the weakest motivation but can still identify binary relationships between the motivations in an agent, they compose a relational structure. In the discussion that follows, we shall be observing the structure of the motivations as a crisp relational structure with a finite carrier. Moreover, we will assume that the motivations are structurally ordered in a lattice, by intensity. In the general case, in most of the models, the weakest motivation, at the bottom of the lattice structure of the drives, is the need to survive. The strongest motivation, on the other hand, is the need of the agent to maximally use the resources it (or its environment) possesses.
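As an illustration of the difference between a chain and a more general ordering of motivations, here is a minimal Python sketch; the drive names and the weaker-than pairs are invented for the example and are not taken from the book's models.

```python
# Minimal sketch (illustrative drives only): a finite partial order of motivations
# given by explicit weaker-than pairs (listed transitively closed), with the meet
# (greatest common lower bound) computed by brute force.

WEAKER_THAN = {  # (a, b) means a <= b; "survive" is the bottom element
    ("survive", "safety"), ("survive", "belonging"), ("survive", "imitate"),
    ("safety", "imitate"), ("belonging", "imitate"),
}
DRIVES = {d for pair in WEAKER_THAN for d in pair}

def leq(a, b):
    return a == b or (a, b) in WEAKER_THAN

def meet(a, b):
    """Greatest motivation below both a and b, if a unique one exists."""
    lower = [d for d in DRIVES if leq(d, a) and leq(d, b)]
    for candidate in lower:
        if all(leq(d, candidate) for d in lower):
            return candidate
    return None  # no unique greatest lower bound

print(meet("safety", "belonging"))   # -> "survive": their only common lower bound
print(leq("safety", "belonging"))    # -> False: incomparable motivations
```

The incomparable pair shows why the hierarchy need not be a chain, while the existence of a meet is what the lattice (or semilattice) assumption buys us.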
Motivations do not need to be ordered only by intensity. Tinbergen (1951), in his efforts to understand and order motivations, suggests hierarchies that are at least tree structures of drives; such a hierarchical model can serve as a valuating structure for a number of P-fuzzy structures in the modeling process. The minimum of two motivations is the most complex motivation that is a component of both motivations. The maximum (if it exists) is the most elementary motivation whose components are the two observed motivations; the maximum does not always need to exist. Therefore, in this case we have not only a tree-like structure, but also a semilattice (there is always a minimum of any two elements in this hierarchical structure, but not necessarily a maximum). Due to the relationships between the motivations and drives, this discussion is also a discussion on the relationships between the drives in a given autonomous agent. As done in several instances above, the terms motivations and drives are used interchangeably.
From the point of view of actions, in Chapter IV we assumed that the actions of the agent's action repertoire A are ordered implicitly by their index in the set. If two actions have the same chance of being executed, the one with the higher (or lower) index will fire. This ordering is a chain, a total linear ordering, and as such also a relation of preference. We can generalize this case as well and obtain a unique-projection relational system of actions in the agent. What we could at least do is introduce a "does nothing" phantom action ∅ in the action repertoire, which would denote an agent doing nothing (no motivations active and all the drives satisfied). Structure-wise, in that case we would get a semilattice, and as such we can use it in defining P-valued fuzzy structures.
In the general model of the environment in the multi-agent version of the theory, it makes sense to observe the relationships between the percepts an agent can perceive, given its perceptual repertoire and resolution. In a realistic case, there are situations that are noisy and/or carry some uncertainty in the data (lack thereof, contradictory data, etc.). In those cases, it makes sense to apply one of the following models:
•	To each percept we can attribute a vector with which we describe its membership degree to each of the previously given ideal perceptual situations that the agent can distinguish between; or
•	To define the relational structure of the percepts, so that a hesitation can be resolved by the minimum of the two, three, or more perceptual cases that cause it, i.e., the cases between which the agent cannot set a clear boundary when making a decision.
In our experiments in a uniagent environment, we used the first model. The second model is more general than the first and as such completely subsumes it; it is also general enough to be used when discussing whole new classes of perceptual problems in the agent(s).
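A minimal sketch of the first model, with invented percepts and membership degrees (not data from our experiments), might look as follows.

```python
# Minimal sketch (illustrative values): each raw percept carries a vector of
# membership degrees to the ideal perceptual situations the agent can distinguish.

IDEAL_SITUATIONS = ["corridor", "corner", "dead_end"]

def classify(membership: list) -> str:
    """Pick the ideal situation with the highest membership degree."""
    best = max(range(len(IDEAL_SITUATIONS)), key=lambda i: membership[i])
    return IDEAL_SITUATIONS[best]

noisy_percept = [0.70, 0.25, 0.05]   # mostly looks like a corridor
ambiguous     = [0.48, 0.47, 0.05]   # hesitation between corridor and corner

print(classify(noisy_percept))  # corridor
print(classify(ambiguous))      # corridor, but only marginally preferred
```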
The New Definition of Agent
In this section we give a fuzzy version of the mathematical model of environment representation in mobile agents from Chapter IV. Let the set of locally distinct places in the environment
V = {v1, v2, ..., vn}
be nonempty, and let the structure
A = (A, ρ)
be a relational structure whose carrier is the action repertoire of the agent, A = {s1, s2, ..., sm}, where the interrelation between the actions is given via the relation ρ, which has the property of unique projection. Let
L = {l1, l2, ..., lk}
be a set of percepts, and let the fuzzy set of perception be given via f : V → L. Then Γ'', the designer's ontology, is a fuzzy graph with labeled nodes and edges. The graph Γ''', the agent-visible environment or its gnoseology, is a fuzzy graph with labeled edges.
Let
(D, ≤)
be the lattice of drives of the autonomous agent, where the drives of the R-element set D = {d1, d2, …, dR} are ordered by the ordering ≤ ⊆ D×D. The vector-valued mapping
φ : V → {0,1}^R,
defined in the designer-visible environment as
φ(vj) = (e1, e2, ..., eR), vj ∈ V, j = 1, 2, ..., n,
is called the schema of satisfaction, where ei (i = 1, 2, …, R) denotes whether in the node vj the autonomous agent can (ei = 1) or cannot (ei = 0) satisfy its i-th drive. The septuplet
ΓGIO = (V, A, L, D, r, f, φ)
is then termed the graph of interaction with the environment.
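To fix ideas, here is a minimal Python sketch of the septuplet ΓGIO for a tiny made-up environment; the three nodes, two actions, percept labels, and drive names are all illustrative assumptions, and the relation r is simplified here to a deterministic, action-labeled transition map.

```python
# Minimal sketch of Gamma_GIO = (V, A, L, D, r, f, phi) over an invented environment.

V = ["v1", "v2", "v3"]          # locally distinct places
A = ["F", "R"]                  # action repertoire (forward, right)
L = ["open", "wall"]            # percepts
D = ["survive", "food"]         # drives, so R = 2

r   = {("v1", "F"): "v2", ("v2", "R"): "v3", ("v3", "F"): "v1"}   # action-labeled edges
f   = {"v1": "open", "v2": "wall", "v3": "open"}                  # perception, V -> L
phi = {"v1": (0, 0), "v2": (0, 1), "v3": (1, 0)}                  # schema of satisfaction, V -> {0,1}^R

def can_satisfy(node: str, drive: str) -> bool:
    """Read the schema of satisfaction: can the i-th drive be satisfied at this node?"""
    return phi[node][D.index(drive)] == 1

print(can_satisfy("v2", "food"))     # True: the food drive can be satisfied at v2
print(can_satisfy("v1", "survive"))  # False
```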
The vector-valued mapping
d : T → {0,1}^R,
whose range is a Boolean lattice, is the drive activation pattern. The i-th component of d(t), t ∈ T, denotes whether at the moment t the drive di is active or not. Let Σ ⊆ A* be a finite set of possible inborn schemas, and let ξ ∈ Σ be such that ξ = b1b2…bk, bi ∈ A, i = 1, 2, …, k (that is, for every bi there is a j = 1, 2, …, m such that bi = sj). The set B of instances of the schema ξ consists of all the subschemas of ξ, that is, of all schemas of the type
b_{i1} b_{i2} ... b_{ip},   i_j = 1, 2, ..., k,   i1 < i2 < ... < ip.
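A minimal sketch of building the set B of instances of an inborn schema, assuming actions are written as single characters; the schema FFR is an invented example.

```python
# Minimal sketch (not the book's code): the set B of instances of an inborn
# schema xi, i.e., all nonempty subsequences of its actions taken in order.
from itertools import combinations

def instances(xi: str) -> set:
    """All subschemas b_{i1} b_{i2} ... b_{ip} with i1 < i2 < ... < ip."""
    subs = set()
    for p in range(1, len(xi) + 1):
        for idx in combinations(range(len(xi)), p):
            subs.add("".join(xi[i] for i in idx))
    return subs

B = instances("FFR")   # inborn schema: forward, forward, right
print(sorted(B))       # ['F', 'FF', 'FFR', 'FR', 'R']
```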
Let v* ∈ V be a given node in the graph of interaction with the environment.
For the schema ξ = b1b2…bk we define the unary relation γξ ⊆ V over V in the following fashion:
v* ∈ γξ ⇔ (∃v ∈ V)(((v*, v), b1) ∈ r ∧ (k = 1 ∨ v ∈ γb2b3…bk)).
In other words, v* belongs to this relation if the schema ξ can be executed starting from this node (the actions must be executed in the given order). Let us now observe the L-fuzzy set
γ : V → 2^B,
defined as
γ(v) = {ϑ | ϑ ∈ B ∧ v ∈ γϑ},
that is, the set of all subschemas of ξ that can be executed from the node v. We will now define the relation τ, of key importance in the theory, as an L-fuzzy binary relation valued by the lattice of drives:
τ : (B × L*)² → D,
where L* is the set of all possible sequences of perceived percepts, defined as follows:
τ((Bi, Sc), (Bj, Sd)) = dw,
where
Bi = b1b2…b_{i1},   Bj = b_{i1+1}…b_{i1+i2},   br ∈ A,   r = 1, 2, …, i1+i2,
1 ≤ i, j ≤ |B|,   i1, i2 ∈ {1, 2, …, k},   Bi, Bj ∈ B,
Sc, Sd ∈ L*,   c, d = 1, 2, …,
if there is a node v* ∈ V such that
Bi ∈ γ(v*),
dw is the current/active drive, and in the graph of interaction with the environment there is a path
Π : v* b1 v1 b2 v2 … b_{i1+i2} v_{i1+i2}
such that
Sc = f(v*) f(v1) … f(v_{i1}),   Sd = f(v_{i1+1}) f(v_{i1+2}) … f(v_{i1+i2}).
The relation τ is then called the contingency table. At the same time, this relation defines a graph with a set of nodes over B×L* and labeled edges, which we call the agent's ontology. This is the intrinsic representation of the agent. The cut/level projection τdw of the relation τ, dw ∈ D, is the agent's context for the drive dw.
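As a rough illustration, the contingency table can be held as a mapping from pairs of (subschema, percept sequence) pairs to drives, and the agent's context for a drive is then the corresponding cut. This is only a sketch with made-up rows, not the structure used in the book's simulations.

```python
# Minimal sketch: tau maps ((B_i, S_c), (B_j, S_d)) -> drive; the level cut
# for a drive is the agent's context for that drive. Rows are placeholders.

tau = {
    (("FF", "open-open"), ("R", "wall")): "food",
    (("F", "open"), ("FR", "open-wall")): "food",
    (("R", "wall"), ("F", "open")): "survive",
}

def context(table: dict, drive: str) -> dict:
    """The cut/level projection of tau for the given drive."""
    return {key: d for key, d in table.items() if d == drive}

print(len(context(tau, "food")))     # 2 rows relevant to the food drive
print(len(context(tau, "survive")))  # 1 row
```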
Finally, an autonomous IETAL agent is the quadruplet
AGENT = (L, A, D, τ),
where L is the set of percepts, A the agent's action repertoire, D the set of drives, and τ the agent's intrinsic representation of the environment. As AGENT is a dynamic structure, in order to stress the temporal component, a more appropriate denotation would be AGENT(t), t ∈ T. The set T is either continuous or discrete; in our simulation it coincides with the set of natural numbers N.
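A minimal sketch of the quadruplet as a data structure, with the discrete time index kept explicit; all field values are placeholders.

```python
# Minimal sketch of AGENT(t) = (L, A, D, tau); the time index is explicit since
# the structure is dynamic. All values below are invented placeholders.
from dataclasses import dataclass, field

@dataclass
class Agent:
    percepts: list            # L
    actions: list             # A
    drives: list              # D
    tau: dict = field(default_factory=dict)  # intrinsic representation, grows over time
    t: int = 0                # discrete time, as in the book's simulations

petitage = Agent(percepts=["open", "wall"], actions=["F", "R"], drives=["survive", "food"])
petitage.tau[(("F", "open"), ("R", "wall"))] = "food"   # learn one association
petitage.t += 1
print(len(petitage.tau), petitage.t)   # 1 1
```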
References

Hoffman, E. (1988). The right to be human: A biography of Abraham Maslow. Los Angeles: Tarcher.
Tinbergen, N. (1951). The study of instinct. Oxford University Press.
Chapter VII
On Agent Societies
Abstract
In this chapter we propose a model for a multi-agent society based on expectancy and interaction. Based on results on imitation, we propose a multi-agent version of the Interactivist-Expectative Theory of Agency and Learning (IETAL): Multi-agent Systems in Interactive Virtual Environments (MASIVE).
Introduction
The Petitagé-like agents inhabit the same environment and interact with each other using imitation. The drive to imitate is the highest in the hierarchy of drives of all agents. In a convention of two agents, they interchange their contingency tables and then continue to explore the environment. The problems associated with the imitation stages and the changes in the agents can be observed from different angles.
Imitation Revisited
In this section we summarize relevant results from research in neurophysiology and psychology applicable to the domain of imitation, as used in our approach towards the modeling of the multi-agent society. This approach is justified by the efforts towards a more efficient learning paradigm in agents within the embodied mind paradigm.
The concept of learning by imitation (learning by watching, teaching by showing) is certainly not a new one. Thorndike (1898) defines it as "[imitation is] learning to do an act from seeing it done," whereas Piaget (1945) gave it a major role when offering his theory of the development of the ability to imitate through six stages. The discovery of neonatal imitation (Maratos, 1973; Meltzoff & Moore, 1977) changed a few takes on the phenomenon of imitation. This discovery demonstrated that imitation cannot be a purely cognitive process that appears at the end of the first year; rather, it has an innate interpersonal function. The interpersonal function of imitation was first studied in Maratos (1973) in the context of early interaction. Infant imitation selectively elicited by humans, but not by objects (Legerstee, 1991), has become the laboratory criterion of social responsiveness. Nagy and Molnar (2004) state that neonatal imitation:
… is defined as various facial, hand, and finger movements and vocalizations made by a young infant to show orientation, attention, learning, effort and motivation when reproducing the previously modeled movements or sounds, many of which are quite unnatural or artificial. Neonatal imitation is one of the many highly complex social skills and preferences that constitute an inborn intersubjectivity, including emotional expressions, preference for humanlike faces, and extremely rapid learning of vital clues from the mother's body, such as how to identify her voice, face, or odor.
Neonatal imitation is where the nonverbal protoconversation starts. What carries these protoconversations? A variety of ethological studies demonstrate that several complex human behavioral patterns are universal, including basic patterns of communication and basic facial emotion expressions; that children who are blind from birth are able to express complex emotional expressions; that basic emotional expressions are not only universal, but also innate; and that human neonates show a visual preference toward human faces, directing attention to face-like patterns as compared to non-face-like stimuli or to scrambled faces.
Babies prefer their mother's voice (DeCasper & Fifer, 1980) and face (Burnham, 1993), and are able to distinguish their mother's milk from the milk of a stranger (Marlier, Schall, & Soussignan, 1998). Historically speaking, after Piaget the interest in (movement) imitation diminished, partially because of the prejudice that mimicking or imitating is not an expression of higher intelligence (Schaal & Sternad, 1998). The phenomenon of imitation is, though, far from being a trivial task. It is rather a highly creative mapping between the actions and their consequences, of the other and of oneself (visual input, motor commands, evaluation of consequences). Rizzolatti, Fadiga, Gallese, and Fogassi (1996) discovered the so-called mirror neurons: neurons that fire while one is performing some motor action or looks at somebody else performing the same action. This finding of Rizzolatti et al. (1996) may give insights into our empathizing abilities as well as our other mind-reading abilities. It suggests that, in addition to the first-person point of view (subjective experience) and the third-person point of view (scientific or objective stance), we have a special within-the-species shared point of view dedicated to understanding others of the same species. Consequences for cognitive science and artificial intelligence (AI) include putting radical constraints on the space of possible models of the mind. In this context, Gallese, Keysers, and Rizzolatti (2004), introducing their hypotheses on the basis of social cognition, state:
Humans are an exquisitely social species. Our survival and success depends crucially on our ability to thrive in complex social situations. One of the most striking features of our experience of others is its intuitive nature… in our brain, there are neural mechanisms (mirror mechanisms) that allow us to directly understand the meaning of the actions and emotions of others by internally replicating ('simulating') them without any explicit reflective mediation. Conceptual reasoning is not necessary for this understanding.
Within the classical, disembodied approach, recognizing and communicating with others was not a real problem. New knowledge (i.e., new combinations of symbols) was easily communicated via inherently linguistic terms, due to the assumption that everybody and everything has access to some internal mirror representation (Fikes & Nilsson, 1972).
Mirror Neurons as a Biological Wiring for Imitation Binkofski (n.d.) uses the following clarifications related to the mirror neurons: The lower part of the premotor cortex that is traditionally regarded as the premotor area (processing high level motor related information) for mouth and hand representations. In humans the left ventral premotor cortex is partially overlapping with the motor speech area — the Broca’s region. In a part of the monkey ventral premotor cortex, the area F5, the so called “mirror neurons” have been discovered. … Mirror neuron system [is] A system consisting of mirror neurons localized in different parts of the cerebral cortex (ventral premotor cortex, inferior parietal cortex, superior temporal cortex) working as a network. The mirror neuron system is supposed to be involved in understanding action content, in learning of new motor skills and recognizing of intentions of actions. The mirror neurons are active when the monkey performs a particular class of action like the other F5 neurons, but also they become active when the monkey observes the experimenter or another monkey performing an action. In most of the mirror neurons, there is a clear relationship between the coded observed and the executed action. The actions observed so far include grasping, breaking peanuts, placing and tearing (paper) (Arbib, Billard, Iacobioni, & Oztop, 2000). In Rizzolatti and Arbib (1998) the authors show that the mirror systems in monkeys are the functional homologue of the Broca’s area, a crucial speech area in humans. They argue that this is the missing link for the hypothesis that primitive forms of communication, based on manual gestures preceded speech in the evolution of language. They may be providing the link between doing and communicating. Arbib (2000) goes into further details with this claim and suggests that imitation plays a crucial role in language acquisition and performance in humans, and that the brain mechanisms supporting imitation were crucial to the emergence of Homo Sapiens. Mirror neurons may be an important step in explaining the social abilities in humans.
Social Agents
In this section we give a description of a multi-agent environment inhabited by Petitagé-like agents that are able to imitate their cohabitants. The problems of interagent communication in heterogeneous multi-agent environments are discussed at the end of the section.
As discussed before, the agent is equipped with a special sensor that senses other Petitagé-like agents in the same environment. The sensor works in parallel with the other sensors. Sensing another Petitagé takes the agent into its imitating mode, during which the agents exchange their contingency tables. The tables are updated with new rows. As soon as one agent in the multi-agent system senses another, the agent switches to the imitation mode, where it stays as long as the interchange of the contingency tables lasts. The contingency tables grow as the intrinsic representations are being exchanged. If the contingency table of the agent AGENT1, which is in imitation mode with the agent AGENT2, has m associations, and that of AGENT2 has n associations, then after an imitation convention with full exchange the contingency tables of both agents will have at most m+n rows.
It is natural to assume that the agents have begun to inhabit the environment at different times, T1 and T2 respectively, where the time is measured in the ontology in absolute units. They are also born in different places in the environment. While inhabiting the environment, they explore portions of the environment that may be very different, or perceptually similar. Depending on the perceptual topology, the intersection of the two contingency tables can have between 0 and min{m, n} rows. It is to be expected that the smaller the intersection, the greater the likelihood that the agents have been exploring different portions of the environment, or that one of the agents has been exploring the environment for a significantly shorter time than the other. A large overlap of the contingency tables indicates that the agents have either inhabited the same portion of the environment, or have been in perceptually very similar portions of it.
The results of the uniagent theory IETAL cannot be generalized so as to assume that the longer an agent has been in the environment, the more it knows about it. Depending on the perceptual situation in the environment, the cardinality of the individual tables does not have to be proportional to the life of the agent in the given environment. Therefore, the age difference abs(T1 – T2) does not determine the learning curves of the agents. Taking into account the solution for cases of cycling in the environment, with random redirecting when cycling is detected, the chances of the
agents to explore a more comprehensive portion of the environment grow, as does the likelihood that the size of the contingency table is proportional to the age of the agent. After the information interchange, the learning curve falls between the time axis and the lower of the two individual learning curves of the agents. The drive to find akin agents in the environment is at the top of the drive structure in MASIVE. We now observe some emerging structures in the society.
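A minimal sketch of the full exchange during an imitation convention, treating each contingency table as a dictionary of rows; the rows are placeholders, and the handling of contradictory rows (discussed below) is not modeled.

```python
# Minimal sketch: a full imitation convention leaves both agents with the union
# of their contingency tables, hence at most m + n rows each. Rows are invented.

def imitation_convention(table1: dict, table2: dict) -> tuple:
    """Full exchange: each agent keeps its own rows and adds the other's."""
    merged = {**table1, **table2}     # shared rows are counted once
    return dict(merged), dict(merged)

t1 = {("FF", "open"): "food", ("R", "wall"): "survive"}        # m = 2 rows
t2 = {("R", "wall"): "survive", ("FR", "open-wall"): "food"}   # n = 2 rows

a1, a2 = imitation_convention(t1, t2)
print(len(a1))   # 3 <= m + n: one row overlapped
```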
The Agent Societies
Now we are ready to extend our definition of an agent to an agent society. Let ΓGIO be the environment interaction graph, C the set of homogenous agents in the said environment, and ε the initial distribution of the agents in the environment. If T is the time set, then the structure
(ΓGIO, C, ε, T)
is called a homogenous multi-agent environment. Since the agents are akin, they enter the environment with the same inborn schema, and start their exploration of the environment while trying to realize the whole schema. The initial distribution ε of the agents is a subset of the Cartesian product:
ε ⊆ V × C × T.
The triplet (v, AGENT, t) ∈ ε
denotes that at time t, the agent AGENT inhabits the environment at the vertex v. With the procedures of composition of fuzzy structures, we get a composite social structure Ω (omniscience) of knowledge in the agent society. It is an R-fuzzy structure, valued by C. Below we discuss several aspects of structures in the set of agents.
As they live in the environment and visit portions of it, in time agents tend to cluster regionally. The borders of the regions are not crisp, but the designer can use them to classify parts of the environment in the perceptual sense. The minima of clusters can help us conclude other things about the group of agents. The minimum (the intersection of all agents in the cluster) is called the kernel of the said substructure. The agents can be linearly ordered by age, where age is defined as the time an agent has spent in the given environment. General conclusions cannot normally be drawn, due to the aforementioned reasons.
Temporal constraints can be a reason for incomplete table exchange. If the conventions are bounded in time, then the intrinsic representations are only partially exchanged. Hardware constraints of the associative memory are a more complex problem than the temporal constraints. Solutions need to be provided for when the memory is full. Normally, rows with low emotional context would be discarded and replaced with more recent ones. The knowledge structure of agents in those cases would have a bottom, but not a unique top, and therefore would not be a lattice. As an alternative to the exchange of whole contingency tables in the imitation conventions, agents could exchange only the rows relevant to the active drive. In the tradition of generation-to-generation knowledge propagation in human societies, agents could exchange knowledge unidirectionally, from the older to the younger individuals.
A problem of importance in the context of this discussion is how to conclude whether two rows are contradictory to each other. In our simulations, we were randomly choosing one of the candidate expectations. Another, biologically inspired solution to this problem would be, in cases like this one, to go with the personal experience and discard from consideration those rows that were acquired in conventions. Instead of a random choice when contradictory rows are in question, each row can be attributed a context, in the sense of evidence, from the Theory of Evidence or, more generally, the Theory of Fuzzy Measures.
At first look, the society of Petitagés would be easily ordered by the amount of information contained in the contingency tables. However, due to the history of each agent and the part of the environment visited, the agents in the environment cannot be ordered in a linear manner by inclusion of the contingency tables. Algebraically, the structure whose carrier is the set of all possible contingency tables in a given environment, ordered by set inclusion, is, from the designer's point of view, a lattice that we refer to as the lattice P of all Petitagés. The bottom element is an agent with no knowledge of the environment that has just been made an inhabitant of the environment, and on the top, an agent that has been in the
environment long enough to know everything. The agents will cluster in sublattices of agents that have been visiting similar areas of the environment. As agents come into the environment and start learning, they walk through P, too, moving in the direction of the top element. All the agents in the environment at a given time t constitute a structure S(t), which is a general partially ordered set under inclusion, with both comparable and incomparable elements. In other terms, as a graph it may not be connected at a given point in time. No general interrelation can be stated between the knowledge and the age hierarchies of the agents in the world of Petitagés. As stated before, no guarantee can be given that the longer an agent inhabits the environment, the more it knows. The same statement holds even for agents with the same lexicons, that is, agents belonging to a class rooted at the same agent in P or S(t).
The problem of the inability to exchange whole contingency tables due to hardware or time constraints is a practical constraint that needs to be taken into serious consideration while simulating a multi-agent environment inhabited by Petitagé-like agents. The problem of lack of time for a convention (the other agent got out of sight before the contingency tables were exchanged) is not as severe as the problem of hardware memory constraints. The imitation stage pulls both agents higher in P; they actually gain more knowledge of the environment. While addressing the memory problem, policies of forgetting need to be devised.
The abundance of approaches towards multi-agent systems that are not necessarily compliant with mainstream AI gives us the motivation to explore the theory further. In the upcoming chapters we will simulate the World of Petitagés and explore the problems that such a simulation will reveal, for a later implementation of the approach in robotic agents.
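A minimal sketch of the inclusion ordering behind the lattice P and the poset S(t): one agent lies below another when its contingency table is included in the other's. The toy tables are invented.

```python
# Minimal sketch: ordering Petitages by inclusion of their contingency tables.

def below(table_a: dict, table_b: dict) -> bool:
    """True if every row of agent A's table also appears in agent B's table."""
    return all(item in table_b.items() for item in table_a.items())

novice   = {("F", "open"): "food"}
explorer = {("F", "open"): "food", ("R", "wall"): "survive"}
wanderer = {("FR", "open-wall"): "food"}

print(below(novice, explorer))    # True: comparable, novice is lower in P
print(below(explorer, wanderer))  # False
print(below(wanderer, explorer))  # False: incomparable, so S(t) is only a poset
```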
References

Arbib, M. A. (2000). The mirror system, imitation and the evolution of language. In C. Nehaniv & K. Dautenhahn (Eds.), Imitation in animals and artifacts (pp. 229-280). Cambridge: MIT Press.
Arbib, M. A., Billard, A., Iacobioni, M., & Oztop, E. (2000). Synthetic brain imaging: Grasping, mirror neurons and imitation. Neural Networks, 13, 975-997.
Binkofski, F. (n.d.). Evolution of cognitive functions in primates. Retrieved March 27, 2005, from Humboldt Foundation Web site: http://www.humboldtfoundation.de/en/netzwerk/frontiers/jgfos/abstracts/binkofski.htm
Burnham, D. (1993). Visual recognition of mother by young infants: Facilitation by speech. Perception, 22(10), 1133-1153.
DeCasper, A. J., & Fifer, W. P. (1980). On human bonding: Newborns prefer their mothers' voices. Science, 208(4448), 1174-1176.
Fikes, R. E., & Nilsson, N. J. (1972). Learning and executing generalized robot plans. Artificial Intelligence, 3(4), 251-288.
Gallese, V., Keysers, C., & Rizzolatti, G. (2004). A unifying view of the basis of social cognition. Trends in Cognitive Sciences, 8(9), 396-403.
Legerstee, M. (1991). The role of person and object in eliciting early imitation. Journal of Experimental Child Psychology, 51(3), 423-433.
Maratos, O. (1973). The origin and development of imitation in the first six months of life. Unpublished doctoral thesis, University of Geneva.
Marlier, L., Schall, B., & Soussignan, R. (1998). Bottle-fed neonates prefer an odor experienced in utero to an odor experienced postnatally in the feeding context. Developmental Psychobiology, 33(2), 133-145.
Meltzoff, A., & Moore, M. K. (1977). Imitation of facial and manual gestures by human neonates. Science, 198, 75-78.
Nagy, E., & Molnar, P. (2004). Homo imitans or homo provocans? Human imprinting model of neonatal imitation. Infant Behavior & Development, 27, 54-65.
Piaget, J. (1945). Play, dreams, and imitation in childhood. New York: Norton.
Rizzolatti, G., & Arbib, M. A. (1998). Language within our grasp. Trends in Neurosciences, 21(5), 188-194.
Rizzolatti, G., Fadiga, L., Gallese, V., & Fogassi, L. (1996). Premotor cortex and the recognition of motor actions. Cognitive Brain Research, 3(2), 131-141.
Schaal, S., & Sternad, D. (1998). Programmable pattern generators. Proceedings of the 3rd International Conference on Computational Intelligence in Neuroscience (pp. 48-51). Research Triangle Park, NC.
Thorndike, E. L. (1911). Animal intelligence. New York: Macmillan.
Chapter VIII
On Concepts and Emergence of Language
with Paul R. Bisbey, Concordia College, USA
Abstract
The existing theories of concept formation involve categorization based upon the physical features that differentiate the concept. Physical features do not provide the understanding of objects, entities, events, or words, and so cannot be used to form a concept. We have come to believe that the effect the object, entity, event, or word has on the environment is what needs to be evaluated for true concept formation. Following our argument for a change in the direction of research, our views on some of the other aspects of concept formation are presented.
Introduction
In 1938, Einstein and Infeld (1938/1967), in the context of metaphors, stated:
Physical concepts are free creations of the human mind, and are not, however it may seem, uniquely determined by the external world. In our endeavor to understand reality we are somewhat like a man trying to
understand the mechanism of a closed watch. He sees the face and the moving hands, even hears its ticking, but he has no way of opening the case. If he is ingenious he may form some picture of a mechanism which could be responsible for all the things he observes, but he may never be quite sure his picture is the only one which could explain his observations. He will never be able to compare his picture with the real mechanism and he cannot even imagine the possibility or the meaning of such a comparison. (p. 31)
In our theories, the agent builds its own operative representation of the environment. It conceptualizes the environment within its contingency table. This brings us to the next question in our discussion: what is really a concept, and how are concepts formed? How about considering concepts to be the associations of subschemas and percepts, the infamous (B, S) pairs in the contingency tables? Let us see why this makes sense. Let us observe an ontology similar in shape to the one in Figure 2 (Chapter V) (shape as the designer's category), with an agent that suffers from perceptual aliasing (all percepts are the same) and has the inborn schema FFFF…FR (a sequence of forward moves finished by a turn to the right). What will happen in the contingency table? There will not be many rows. There will be (B, S) pairs of two types, corridors and corners, depending on whether there is a right turn in the subschema or not. If the schema is shorter, say FR only, it might happen that in the contingency table, in the subschema columns, we have F, R, and FR: parts of a corridor and parts of a corner (including the two T-crosses in the ontology). Things get more complicated as the percepts and more complicated schemas come into play. However, it still makes sense to consider the (B, S) pairs as basic protoconcepts of the agent in the Vygotskyan sense, as discussed later on. Or, alternatively, structures in the contingency table can be observed as the agent's concepts. Loops in the contingency table might indicate a corridor that needs several executions of the schema to get to the end of.
In order to understand what concepts are, we give here a brief overview of the research on this topic. The cognitive sciences are trying to understand how concept formation works, for as Fodor (1995) writes, "concepts [are] the pivotal theoretical issue in cognitive science; it is the one that all the others turn on." However, researchers seemingly must often work on such a narrow aspect that they at first appear not to be involved in the process of solving the same puzzle. We believe that what is preventing the field from creating an agent that can form concepts and continue on to developing language is the lack of a proper model of conceptualization. A well accepted definition of a concept includes "a mental representation of a category" (Medin, Lynch, & Solomon, 2000), along with the statement that "concepts are the least complex mental entities that exhibit both representational
and casual properties” (Fodor, 1995). Many hold categorization to be the essence of a concept and its formation (Estes, 1994; Medin, 1989; Ross & Spalding, 1994), being that if an agent can adequately categorize the objects, words, events, and entities of the world then the agent will have succeeded in forming concepts. These categorizations can then be used to assist the functioning of the agent. We believe that there are distinctions between concept formation and categorization, and argue on this later on. This problem very central to agents; implementations that hope to succeed in producing intelligent behavior (whatever the definition) in a complex environment need to first resolve the issue of concept formation. The implications of creating a working model of concept formation would be so widespread throughout the field, it is hardly necessary to for us to assert it, and is exemplified only done so for those less knowledgeable in this area. Without a thorough understanding of concept formation, agents are being denied the essential breath of life. The excuses, when failing in their attempt to build an agent with the varied yet specific abilities required to function in the complex real world, provided by some researchers, do not convince us. It does not seem that just by adding enough computational power or sophisticated enough sensors to our current agents that intelligence, in the sense we all seek, will simply emerge. It is our belief — as we stated — that currently the greatest hindrance in intelligent agent creation is the development of a functioning model of concept formation. It even seems reasonable that the other aspects of our current agent model, theories of agent learning, the models to implement intelligence with, and so forth, would be sufficient to allow for the emergence of true intelligence, albeit perhaps not in the most efficient way.
Modern Theories of Concept Development
Here we give a brief overview of our criticism of the present theories of concept formation. There must exist some construct (even if it is emergent) in the mind of all agents that have the ability to form concepts, a construct that organizes the sensory input into concepts. This is apparent, yet it has proven difficult to replicate effectively and efficiently. Our critique concerns how concepts are understood by the agent, including how they are initially formed and then categorized. Concept formation has traditionally been viewed as an issue of categorization. How it is that objects, entities, words, or events should be differentiated from each other remains the primary question on the minds of those attempting to build a theory of concept formation. Through years
of thought and experimentation, most have settled on selecting the physical features that differentiate, and so categorize that object, entity, or word from others as the method of concept formation (Estes, 1994; Medin, 1989; Ross & Spalding, 1994). Researchers have and continue to attempt the discovery of key physical features that we as humans use to differentiate, say, a bird from another animal. Vygotsky (1986) argues that a concept lacking an understanding of why the object/ word/entity/event has its particular properties is not a complete concept, but what he calls a protoconcept. Only after one understands how a light bulb works, for example, by combining a number of concepts, the protoconcept becomes a true concept. The complexity of these true concepts should not be undervalued. Concepts—in Vygotskyan sense—are believed to not actually begin forming in children earlier than just before puberty (Piaget, 1954). Concepts include the “accumulated knowledge about a type of thing in the world” (Barsalou, 2000). We embrace the belief that this long delay in true development is a demonstration of the Buddhist idea that the world all knowledge is contained within a petty seed. One cannot understand the seed without having diverse knowledge of the world that was formed by living in that environment. To understand that the seed is placed within the soil to grow into a plant, an agent must know what placing in the soil means, what soil is, and how it is different than other areas on of the ground, described what a plant is including what color is, of three dimensions and how it is viewed, of time, of the dark soil and the sun that provides the light, but not before informing the agent what light is and that objects that are solids and that light does not usually penetrate solids, and that friction and gravity will stop the seed from continuing to sink after it has been pushed in the soil and that the seed takes up space and moves the soil that is movable because it is made of many smaller pieces that easily move, and so on, all while — of course — understanding all of the words that it is being told. An agent that had not acted in our world would not be able to understand it—many years ago Abbott (1884) told us a nice story on this subject in his Flatland. The problem is not just the complexity, but where it is that one begins. In what order should the agent be exposed to the objects? This is the reason our theory assigns such importance to the agent-environment interaction. True concepts involve a level of understanding that takes even children a number of years of interacting with their environment to learn.
Past theories have built upon an assumption we feel is incomplete, for it overlooks the issue of understanding in the concept formation model. All the methods of categorization, used as the ultimate goal of building concepts, have used physical features, semantics, or sequences (Case, 1999). Not only are we in disagreement with regard to categorizing as the sole aspect of concept formation, but we also disagree with the attitude that, even if categorization through physical features can successfully be performed, the agent will have fashioned a true concept.
Relying on categorization through physical features fails for at least two reasons. One being that the world’s objects/words/entities/events are much too numerous, complex; two, and more importantly, their features are too varied to allow for easy categorization. Finding the key physical features that differentiate one object/entity/word/event from all others is doomed by both the variability of the environment, and that it can provide the agent with understanding. We believe that understanding cannot be achieved through examining only the physical features/semantics/sequences of what is trying to be conceptualized. If the agent does not understand the influence that object, entity, event, or word has on the environment, then, we maintain, the agent lacks the concept. As previously stated, Vygotsky (1986) supports this claim that an agent must understand why a concept has certain properties before it can be considered a true concept, but beyond that, it seems we can only rely upon the soundness of our reason until proper experimentation can be performed. Without understanding, even if categorization by physical features could be perfected (and we think due to the variability and complexity of the real world it cannot), categorization itself will prove useless. Examining physical features only, no matter in what detail, will never allow an agent to understand its world. The two problems can clearly be seen in MIT’s Cog. Brooks was one of the founders of the belief that to create intelligence that can function in our world an agent must be allowed to develop as a child does, and so he directed the construction of a robot that would just be proved the ability to observe, form an idea, decide an action, then act, developing as the field understands that children do. Cog was not provided with any prior knowledge of the world. Cog, being one of the most advanced robots in existence, was provided with state of the art sensors, actuators, and electronics, provided with a huge amount of computation…. And yet, though much has been learned throughout development, many consider Cog a failure. It still cannot make the apparently simple distinction between a glasses case and a cell phone. This disappointment should not be attributed to the eyes or electronics Cog has been provided, nor to, as many have blamed, a lack of computational power. The body (though possibly not used to interact enough with the environment) was, we hold, sophisticated enough for this seemingly simple distinction, however the mind might not have been organized correctly.
The Environmental Function
Our theory of concept formation is a deviation from current theories in that it rejects the notion that categorization will lead to understanding.
If an artificial agent could categorize as well as researchers would wish, even as well us humans, it would still not be able to function in the environment. This is because the agent does not know the affect each object/entity/word/event has on the environment. If physical features/semantics/sequences cannot supply the agent with an understanding of why the particular properties are present in that concept, then what can? Physical features/semantics/sequences cannot answer the all important question of why because they fail to look at the environmental function of the object/word/entity/event. The environmental function, as our proposition to solve the problem of understanding and categorization, is found by comparing the difference between with and without states. An understanding of the influence the object/entity/ event/word has on the environment is first gained when the agent discovers how the environment has changed when the object/entity/event/word is removed or added (the with and without states). That change in the environment is the concept of that object/entity/event/word. Learning a concept inherently involves knowing the concept. A concept cannot be known without understanding its influence on its environment. Understanding the affect of objects, entities, events, and words requires that the agent has the ability to interact with the environment. Let us, for now focus on conceptualizing objects to make the argument easier to grasp. As with a window blind, when the blind is pulled the light is allowed to come in and the agent can see out the window. This change in the state of the environment from when the blind was down to when it is up is the concept of a blind. Fitting other blinds into the same category is not based upon color, size, material, or any other physical feature of the blind, but by how all blinds have a similar function in that context. A blanket could be used as a blind and would be categorized and understood by its function in the environment, overlooking the large differences in physical appearance between a stereotypical blind and a blanket. What is the concept of a light bulb? It is not the physical features of the bulb, its shape, size, color, position in environment, or any other physical feature of the object. Instead, it is the function of the light in the environment, meaning when the object, in this case the light bulb, is removed from the environment, what in the environment changes. If you remove a light, the room becomes darker. That is the concept of a light! The idea operates both ways. If you add the object to the environment, the difference can be observed, and then that change becomes that object’s concept. Concept formation is performed by finding the difference between two states in the environment; one state with the object/word/event/ entity and one without. If one is to watch children, it seems they are experimenting with their environment and remembering the change each object has. It is undoubtedly so that a
child cannot physically remove a car or a house from his/her environment. Instead, it is removed for them. They will find themselves in an environment where people at the moment do not have cars, such as when walking, and then the child has the opportunity to understand the concept of a car. Alternatively, the child could use their imagination to reason what would occur if the object were removed. It must be noted that if the child first witnessed people walking, the opportunity to begin forming their concept of a car would arrive when the child sees people traveling in a car. This leads into how concepts are categorized. The dominating views hold that objects with similar physical features are more related than those with dissimilar features. That is diametrically opposite to our view. The child must compare the car to walking based upon the similar function of the two. Connecting the two actions based on physical features would be extremely difficult if the agent were trying to compare the physical features of a car with those of the legs of a person moving. The important feature is, naturally, that they are both a means of transportation, though without looking at how the action and object affect the environment, an agent could not deduce the relation of the two on its own. Concepts, then, are categorized based upon the concept's environmental function. The process of categorization is simple: if the object/entity/event/word has a similar influence on the environment (a similar environmental function), the concepts are linked together. To maintain flexibility, the procedure of deciding which concepts belong in a group can be done dynamically through appropriate reasoning. This allows for the connection of concepts that would be considered to have only subtle similarities, which could conserve memory that would otherwise be forming more or less permanent connections. Once the agent has either frequently made a connection, or the agent had a high emotional context at the time the concepts were encountered, a connection can be made to the concepts involved for quicker recovery later. If the frequency of use decides how strongly the concepts are connected, it would be assumed that environmental laws and rules, such as the law of gravity, that are used in many situations would be some of the first the agent begins to understand. This could allow for a primitive reasoning process to emerge. One may ask, "And what of the things that will not change the environment if removed or added or altered?" To this we reply with the question of whether one can think of anything that would not change the environment if removed. Objects usually have some significance to us; even if they do not affect the achievement/satisfaction of our drives, removing them from our environment would cause a change, even if that change is only that we no longer see something that we are accustomed to seeing. Often we do not control what objects or entities are in our environment, but in the case of words and steps in a sequence, we, as humans, have control of whether they continue to persist. If a word had no meaning or a step in a sequence did nothing to move the agent towards achieving its goal, neither would persist. A
word or step with no meaning would not continue to be passed along unless it acquired some importance. The context of an object/entity/event/word cannot be fully disregarded; as with the example of the blanket, its function could only be understood by how it is placed in the environment. However, the purpose of context in concept formation should not extend beyond a clue in the puzzle of how to categorize. A concept cannot exist without finding the environmental function of that which the agent is trying to conceptualize. Once the concept has been formed, the agent can start determining in which context(s) that concept is useful or applicable. Context is used only as a method to assist in finding the influence something has on the environment in that particular situation, helping to reduce the number of possibilities without having to add, remove, or alter the object in ways that would consume valuable computational resources in that specific context. When the agent believes it may have correctly categorized (or, as we say, characterized) an object/entity/word/event, the expected behavior is compared to the actual outcome. If the difference is acceptable for the agent's particular task, the agent knows it has correctly classified; if not, the conceptualization process must be continued and refined for that object/entity/word/event. Most concepts can be formed only by combining other previously formed concepts. Concepts such as the concept of a computer involve a combination of a number of concepts. This makes conceptualization of a computer difficult, for realizing its environmental function requires that a number of other concepts are first understood. We will not comment on our belief of how concepts are correctly connected, for this is too implementation specific, and our intention is to retain an argument that is independent of the method used for implementation.
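A minimal sketch of the with/without comparison described earlier in this chapter, under the simplifying assumption that an environment state can be reduced to a set of observable features; the feature names are invented.

```python
# Minimal sketch (assumption: environment states as feature sets): the concept
# of an object is the change it makes, i.e., what appears and what disappears.

def environmental_function(state_with: set, state_without: set) -> dict:
    return {
        "added":   state_with - state_without,
        "removed": state_without - state_with,
    }

room_with_bulb    = {"light", "visible_furniture", "shadows"}
room_without_bulb = {"darkness"}

concept_of_bulb = environmental_function(room_with_bulb, room_without_bulb)
print(concept_of_bulb["added"])    # features the bulb contributes to the environment
print(concept_of_bulb["removed"])  # features present only when the bulb is absent
```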
Beyond Objects
The theory extends to the categorization and understanding of words, events, and entities as well. Nothing essentially changes: the agent still evaluates what effect they have on the environment. The environmental function of a word is how that word, when added to or removed from a sentence, changes the sentence's meaning. The concept of a word must itself have an influence on the environment. By removing the word from a sentence, the sentence does not retain the same meaning and so, in its own way, the environment has changed. We believe that essentially everything changes the environment if removed. An understanding of events or processes can be gained by discovering, either through trial or reason, how each step influences both the environment and
ultimately the attainment of the desired end result. This will likely involve comparing many states and how each move the agent closer to achieving its goal. The concepts of events and steps are quite different from that of objects. We control whether a step is carried out or not, whereas we do not control those objects in our environment that might be useless to us. If a step is not helpful to use in achieving our goal, it is removed from the process next time. Conceptualizing an entity is more difficult and involves using many concepts in conjunction, though the method of doing so remains essentially the same. Categorizing an entity requires more attention and memory, though the basic principle remains. The behavior of the entity provides the agent with the environmental function of that entity. The agent must give much more attention and memory to the process for the process of assessing behavior over a span of time is quite a complex task (even us humans are not that terribly good at it). The agent must be able to view the sequences of behavior and their affect on the environment, categorize them, continuing to watch the behavior until the behaviors that are common for that entity can be discovered. The more time spent sampling behaviors, the better concept the agent will have of the entity. Does this seem familiar? As with first impressions, if they even hold true, they only provide a skeletal understanding of the person’s behavior. The longer one spends with another, the better concept he/she will have of the other. The less apparent, or less frequent, behaviors are seen with time. Watching a housefly may tell us little more than what we learned in the first 10 minutes or in the first day. The other extreme will not work well either. Examining the fly’s behavior for only 3 seconds would not be long enough to conceptualize a fly. The agent might view the fly when it is not flying and have no reason to add that flies fly to its concept of a fly. Or if the fly is flying for those 3 seconds the agent might not understand that flies are not always flying. The time that an entity spends on attending to a concept is a trade-off between better understanding of its affect on the environment and efficiency. Not all objects/words/entities/events are equal in the time needed to conceptualize (properly). In part, the behavior of an entity is how it can be conceptualized. Finding the behavior involves comparing a sequence of events that commonly occur. Birds are often flying, and other entities do not fly. But not all entities categorized as birds fly, as with a penguin. In this example, we would reply that this categorization was done using the common methods of biological taxonomy, which rely on categorization by assumed heritage. This does not mean that we — in distinguishing penguins from other entities — use the same method for categorization. For it seems instead that we have made our view fit this need to classify animals as what is related to what. When we consider the animal, we are more likely to think of its unique behavior than its physical characteristics. In the case of the penguin, we may think
of a waddling and swimming entity. Sure, a picture of the entity is likely to enter our minds, but the importance of that particular image should be drastically lessened when considering how that concept first emerged. We admit that entities seem to lend themselves best to physical-feature conceptualization. However, as previously stated, for initial conceptualization to occur the agent must first find the entity's function in the environment — that is, how the entity changes or behaves in the environment. Next, the entity is classified based on its function in the environment. Once these two processes have occurred, the physical features can be examined and used in combination with context so that the entity's function can later be recalled with only a quick glance at its physical features. Comparison of behavior is not the only process through which entities are classified. Other concepts of how the environment operates need to be utilized. We understand that each human is a unique individual — no other one is exactly the same. Furthermore, we use our concepts of the environment — for example, that an entity can only exist in one place at a time — to help us form our concepts of entities. One could imagine an environment where an entity can exist in more than one area at one time. This would alter our concept of entities, and change how we categorize an entity that has such an ability. This change in concept formation does not remove the need for using physical features/semantics/sequences for categorization either. It simply moves the physical feature/semantic/sequence identification component of concept formation to a different role. Physical features are used for quick identification of objects/entities/events/words for which a concept has already been formed.
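To make this division of labor concrete, that is, concepts formed from environmental function while physical features serve only as a fast index into concepts that already exist, a minimal sketch might look as follows. The class and method names (ConceptMemory, observe_interaction, quick_identify) and the overlap threshold are our own illustrative assumptions, not part of the formal model:

```python
from dataclasses import dataclass, field

@dataclass
class Concept:
    """A concept is keyed by what the entity does to the environment;
    its appearance is stored only as a recall shortcut."""
    environmental_function: frozenset                  # observed changes the entity causes
    feature_sets: list = field(default_factory=list)   # appearances seen so far

class ConceptMemory:
    def __init__(self, match_threshold=0.8):
        self.concepts = {}                  # function signature -> Concept
        self.match_threshold = match_threshold

    def observe_interaction(self, effects, features):
        """Form (or reinforce) a concept from how an entity changed the
        environment, attaching the current appearance as a recall cue."""
        key = frozenset(effects)
        concept = self.concepts.setdefault(key, Concept(key))
        concept.feature_sets.append(set(features))
        return concept

    def quick_identify(self, features):
        """Recall an already-formed concept from appearance alone."""
        best, best_score = None, 0.0
        for concept in self.concepts.values():
            for known in concept.feature_sets:
                overlap = len(known & set(features)) / max(len(known), 1)
                if overlap > best_score:
                    best, best_score = concept, overlap
        return best if best_score >= self.match_threshold else None

# A glasses case is first conceptualized by opening it; only later is it
# recognized at a glance from its features.
memory = ConceptMemory()
memory.observe_interaction({"opens", "holds_glasses"}, {"small", "hinged", "hard"})
print(memory.quick_identify({"small", "hinged", "hard"}) is not None)  # True
```

The point of the sketch is only the ordering: observe_interaction must run before quick_identify can succeed, mirroring the claim that categorizing by features alone, without prior functional concept formation, would be impossible.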
Emergent Order of Concept Formation
Which concepts are formed first? There must exist foundational concepts (low-level) that the agent incorporates into more complex conceptualizations. This may seem obvious, but when attempting to build an agent that can build concepts we are forced to ask the question of what these foundational concepts are, or at least how we may build a system that can independently discover them. It would be ideal to build an agent that inherently, through its learning technique, would incorporate those concepts that are elemental. And the creation of such an agent structure may turn out to be simpler than originally supposed, due to the possibility of natural emergence of such layered concepts. It may be that those basic concepts that need no other concepts for their formation will themselves emerge, and the problem of ordering the concepts will be naturally solved. And later, when the right foundational concepts are adequately formed, concepts built from these low-level concepts (high-level concepts) can begin to form. We (hopefully) may not have to specify when the agent should attempt to begin
building these higher-level concepts if we run both the low- and high-level concept formation processes concurrently. The part of the agent's brain that combines low-level concepts into high-level ones would be constantly running. This way, any high-level concepts that are not properly developed, meaning that they do not assist the agent in drive completion, will be deleted. Once the low-level concepts are better developed, more accurate and helpful high-level concepts can be built from them. For higher-level concepts to emerge, all that the agent needs to be given is a drive to form these concepts and an initially flexible system that will accept, and perhaps even encourage, concept refinement. This same idea of emergent ordering of concept formation likely applies to the order in which concepts of objects, words, steps in a process, and so forth, are formed. Concepts that lack the foundational concepts required to be suitably fashioned will not form correctly, and so will later be replaced by more effective and accurate representations when the foundational concepts have adequately developed. Once certain rules and laws of the agent's environment (and the contexts in which they apply) are understood, the agent can make predictions of what would occur with a certain action without having to actually perform the action. When correct predictions can consistently be made in a complex and varied world, we would say that the agent has succeeded in forming a concept.
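A rough sketch of this concurrent, self-ordering scheme, with low-level and high-level formation running side by side and high-level concepts pruned whenever they stop helping with drive completion, might look like the following. It is our own illustrative pseudostructure rather than a specification; the pruning threshold and the notion of a usefulness score are assumptions introduced for the example:

```python
import random

class LayeredConceptLearner:
    """Low- and high-level concept formation running concurrently;
    high-level concepts that do not assist drive completion are pruned."""
    def __init__(self, prune_threshold=0.2):
        self.low_level = {}     # elemental concept -> maturity in [0, 1]
        self.high_level = {}    # tuple of components -> [components, usefulness]
        self.prune_threshold = prune_threshold

    def step(self, percepts, drive_satisfied):
        # Low-level formation: elemental concepts mature directly from percepts.
        for p in percepts:
            self.low_level[p] = min(1.0, self.low_level.get(p, 0.0) + 0.1)

        # High-level formation runs at the same time, combining whatever
        # low-level concepts are currently available, however immature.
        if len(self.low_level) >= 2:
            parts = tuple(random.sample(sorted(self.low_level), 2))
            self.high_level.setdefault(parts, [parts, 0.5])

        # Credit assignment: combinations that coincide with drive completion
        # gain usefulness; poorly formed ones decay and are deleted.
        for key, (parts, usefulness) in list(self.high_level.items()):
            usefulness += 0.1 if drive_satisfied else -0.05
            if usefulness < self.prune_threshold:
                del self.high_level[key]
            else:
                self.high_level[key][1] = usefulness
```

Run over many steps, the better-developed low-level concepts feed more accurate high-level combinations, so the ordering of concept formation emerges rather than being scheduled by the designer.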
Physical Feature Identification Methods
It is not that physical features are not utilized at all; they just are not used in concept formation. Researchers may have placed physical feature classification in the wrong role, whereas physical features should only be seen as a means of quick identification. Each time an agent views a commonly seen object, the agent does not have to rediscover the function of that object. Rather, the physical features (at times coupled with reason) are used to classify and remember the function of the object. In practice, the physical features of an object are most often only used in remembering the function of the object, although in some cases physical features are used to assist categorization. The physical features of an object can act as a clue to its function, but it must be said that without categorizing a similar object through functional concept formation, categorizing based solely on physical features would be impossible. We identify a car based on its physical features and context; the concept, however, is of the car's environmental function. As for the theories behind physical feature identification, we have nothing to add. We believe that they are close to sufficiently advanced, with the assistance
of the concept's environmental function, to handle identification of some commonly viewed objects/entities/words/events. Besides, finding a way to implement the environmental function theory should take precedence if it proves to be of such vital importance in initial concept formation.
Representation of Concepts
In this section we focus on intrinsic concept representation in the operational memory of agents. In attempting to create an agent that can form concepts, another essential issue must be addressed. The question of how concepts should be stored (represented) in the agent cannot really be overlooked, though it is more implementation specific than the rest of the chapter has been. We have decided to provide our view of this ardently contested topic, even though we have little proof to convince the reader. Minsky (1975) and others (e.g., Davisson, 1993) provide nice surveys of the issues of representation. Concept representation only makes sense to us in that the system must dynamically produce the conceptualization. What this means is that instead of having a different location of blueness for each object that is blue, one area has the representation and fires when the object is encountered. The representation of blueness emerges with all other elemental areas firing simultaneously. But how exactly should an individual concept be represented? This would be implementation specific and would also vary between individuals. In our own agents, the (B, S) pairs, or combinations thereof, are for all intents and purposes proto- or real concepts. Representation and memory are closely intertwined, though the difference remains that a model of representation could be implemented with a different memory structure. It is apparent that there must be a system for remembering learned concepts and, less obviously, a method for forgetting. We are still lacking a memory structure that would effectively function in a complex and varied environment. A powerful tool that the brain likely uses is what we refer to as a concept-specific filter. Depending upon the concept, the filter will attend to only certain stimuli that it considers to be more important to that particular concept. Removing potential conceptualizations that do not fit into the context would be one way of narrowing the possibilities. With the concept-specific filter only evaluating certain features, reactions, behaviors, and so forth, the efficiency of forming an adequate conceptualization can be improved. Remaining realistic and aware of our current technological limitations is important. Computational power is frequently blamed for restricting the development
of an agent that can act intelligently in our world. Although this is true as of now, lack of computation is just one of the problems and will, if technological development trends continue, likely be solved sooner than the others. What we see as holding back intelligent agent development — at least partially now and exclusively in the future — is both the limitations of actuators and the lack of a model of learning and memory. As stated in our theories and explained previously, interaction with the environment is crucial for the agent to gain an understanding of its world. Only through manipulation of the environment may the agent form an understanding of how its actions alter the environment. After enough interaction, related concepts can be brought together to create a higher-level concept that can allow the agent to make a prediction about its actions in a situation it has never before been placed in. Before these concepts can be formed, though, the agent must act as a young child does, manipulating the environment and remembering the results. The problem is that today's actuators are far from sufficient. Cog needs actuators that can sufficiently manipulate the environment, so that the difference between a glasses case and a cell phone can be understood. Our human legs, arms, and fingers are nimble, sensitive, and flexible enough to move around and manipulate the environment to the degree necessary for building concepts of most relevant things. We can easily open a glasses case, push the buttons on a phone, or turn on a light to see how that particular action changes the environment. Another one of the essential pieces that may hold the field back, even after enough computational power is available, is that which the theories laid out in this book are intended to help shape. The field currently lacks a model of how to implement intelligence. Having the ability to compute will not itself lead to the emergence of intelligence. We must provide our agents with structures for creating concepts at all different levels, along with a body independently capable of exploring the environment, all so the agent can learn on its own to meet its goals. That is, if we are heading toward what traditional AI has always longed to achieve — a replica of the human.
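Before moving on, a small sketch may help tie the representation discussion above together: the (B, S) pairs as proto-concepts, and a concept-specific filter that attends only to stimuli relevant to a given concept. The reading of B as a behavior sequence and S as a sensation, as well as all the names below, are our illustrative assumptions rather than the architecture's actual data structures:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ProtoConcept:
    """A (B, S) pair: a behavior sequence B paired with the sensation S it
    leads to (our reading of the pair; the exact semantics are fixed by the
    agent architecture, not by this sketch)."""
    behaviors: tuple
    sensation: str

class ConceptSpecificFilter:
    """Attends only to stimuli judged relevant to a particular concept,
    narrowing the candidate conceptualizations before matching."""
    def __init__(self, concept, relevant_stimuli):
        self.concept = concept
        self.relevant_stimuli = set(relevant_stimuli)   # assumed, per concept

    def attend(self, stimuli):
        return [s for s in stimuli if s in self.relevant_stimuli]

    def activates(self, stimuli):
        # The concept fires only if its sensation survives the filter.
        return self.concept.sensation in self.attend(stimuli)

# Example: a proto-concept for "food ahead" and its dedicated filter.
food_ahead = ProtoConcept(behaviors=("move_forward",), sensation="smell_food")
food_filter = ConceptSpecificFilter(food_ahead, {"smell_food", "see_food"})
print(food_filter.activates(["engine_noise", "smell_food"]))   # True
```

The filter is only a gate on attention; the proto-concept itself still carries the functional content, in keeping with the discussion above.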
On Language
Language is a key element of communication in multi-agent systems. In this section we discuss the phenomenon of the emergence of language, language in multi-agent systems, and the modalities of communication. Communication is the basis of interaction and social organization. Without communication, the agent is but an isolated individual. Within the framework of the Interactivist-Expectative Theory of Agency and Learning (IETAL) and Multi-agent Systems in Simulated Virtual Interactive Environments (MASIVE)
theories, without language or communication in a society, the other agents in these multi-agent settings would be, to a given agent, only a part of the environment. As the agents all execute actions from their repertoires, the environment changes dynamically for each agent. This introduces the need for fast changes in the contingency tables of all agents. Communication is demonstrated via interactions in which the dynamic relation between the agents is expressed in terms of signals which, once interpreted, influence the agents. There are a number of approaches to the phenomenon of communication, especially within the social sciences and linguistics. The inception of this line of theory can be found in early works building on the Shannon theory of information.
Communication and Social Taxonomies
In this section we classify communication (Figure 1) depending on the communication elements and on its functional essence (Jacobson, 1963). In terms of communication complexity, autonomous agents fall into four main categories (Figure 2):
• noncommunicating homogenous,
• noncommunicating heterogeneous,
• communicating heterogeneous, and
• centralized agents (uniagent environment).
Figure 1. General scenario of communication in a multi-agent society; the agents model the goals and actions of the other agents; moreover, direct communication might also be enabled
Figure 2. Dimensions of communication complexity in a multi-agent society (axes: degree of communication, from noncommunicating to communicating; degree of homogeneity, from heterogeneous to homogenous)
When there is no communication, the agent builds its intrinsic representation of the environment; the other agents surrounding it can be perceived, but as they may be moving, the contingency table of the agent at hand will be changing dynamically — here we have a noncommunicating set of disjoint agents. There is no such thing as a society of homogenous agents, as every agent is an individual and, based on its particularities, demonstrates a distinct history during its sojourn in an environment. When we speak of homogenous societies, we speak of agents that are built in a similar fashion, following the same paradigm, with similar/comparable perceptual, motor, and cognitive abilities. An example of a homogenous society in the sense we understand it in the framework of this book would be the society of humans, where, although all different, we have immense commonalities; an example of communication in a heterogeneous society in the context of our discussion would be a person giving verbal commands to a dog. Dautenhahn and Billard (1999) offer a definition of social robots that we modify and use as a definition of multi-agent systems, one that summarizes our take on this term throughout this book: Multi-agent systems are composed of agents that are part of a society. They are able to recognize each other and engage in social interactions, they possess histories (perceive and interpret the world in terms of their own experience), and they explicitly communicate and learn from each other.
The terms social robot and social agent are related, but we insist on the different terminology, as the emphasis of research in social robotics and that of the societies we are observing are different, although complementary. Based on the initial list by Breazeal (2003), Fong, Nourbakhsh, and Dautenhahn (2003) provide a taxonomy of social robots, based on how well the robot can support the social model that is ascribed to it and on the complexity of the interaction scenario that can be supported. The seven categories are as follows:
• socially evocative,
• social interface,
• socially receptive,
• sociable,
• socially situated,
• socially embedded, and
• socially intelligent.
Lengthy discussions can be carried out on how to define each one of these categories precisely. Many terms are not widely agreed upon; situated, for example, is one of them.
Functional Taxonomy
In this section we classify communication depending on the communication elements and based on their functional essence. According to Jacobson (1963), who maintains a functionalist view of language, the manners of communication can be classified depending on their functions. According to him, there are six ways/functions of communication that fall into two main categories (paralinguistic and metaconceptual):
• Expressive function, characterized by information exchange about the intentions and the goals of the agent — "This is me, here is what I think and what I believe";
• Conative function, with which one of the subjects involved in the communication requires from another an answer to a question or asks it to do something for it — "Do this, answer the following question";
• Referential function, which refers to the context — "Here is the state of affairs";
• Phatic function, which enables a communication process to be established, stopped, or prolonged — "I want to communicate with you, I hear you loud and clear";
• Poetic function, which beautifies the message (this function has not yet been observed in autonomous agents; no one has yet observed its esthetic aspects); and
• Metalinguistic function — "When I say 'X' I mean 'Y'."
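If one wanted to carry this taxonomy into an implementation, the six functions could be captured as a simple message-tagging structure. The enum and the example message below are purely illustrative conveniences on our part, not part of any existing MASIVE code:

```python
from enum import Enum, auto
from dataclasses import dataclass

class CommFunction(Enum):
    """Jacobson-style functions of communication, as listed above."""
    EXPRESSIVE = auto()      # "This is me, here is what I believe"
    CONATIVE = auto()        # "Do this, answer the following question"
    REFERENTIAL = auto()     # "Here is the state of affairs"
    PHATIC = auto()          # "I hear you loud and clear"
    POETIC = auto()          # beautifies the message
    METALINGUISTIC = auto()  # "When I say 'X' I mean 'Y'"

@dataclass
class Message:
    sender: str
    function: CommFunction
    content: str

# Example: one agent asking another to share a drive-satisfaction location.
request = Message("agent_1", CommFunction.CONATIVE,
                  "Where did you last satisfy the hunger drive?")
```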
Communication is expressed as a cognitive convention (Ghiglione, 1986), where the unit that sends the message assumes knowledge of certain facts and adds facts to the knowledge base, without having to restate facts that it considers known. Current research within the theories in the domain of the social psychology of communication accentuates knowledge of the construction and maintenance of shared cognitive universes, that is, the coconstruction of references (Ghiglione, 1989).
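A minimal sketch of this convention, a sender transmitting only the facts it does not already assume the receiver to know, could look as follows; the bookkeeping of assumed shared knowledge is an assumption we introduce purely for illustration:

```python
class ConventionalSender:
    """Sends only facts not already assumed to be shared knowledge
    (the cognitive convention: known facts are never restated)."""
    def __init__(self):
        self.assumed_shared = set()   # what the sender believes the receiver knows

    def compose(self, facts):
        novel = set(facts) - self.assumed_shared
        self.assumed_shared |= novel  # once sent, treat as shared
        return novel

sender = ConventionalSender()
print(sender.compose({"food_at_B", "wall_at_C"}))   # both facts sent
print(sender.compose({"food_at_B", "water_at_D"}))  # only 'water_at_D' sent
```

Restated facts never appear on the wire, which is exactly the economy that the cognitive-convention view emphasizes.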
Language as a Category in MASIVE
In this section we try to define the term language in MASIVE in terms of the contingency tables and the hierarchical structures of agents that we previously observed. Standard theories of language that refer to human development are based on three main assumptions (Nolan, 1994):
• The prelinguistic agent (infant) already knows how to work with propositions, and through learning the language it acquires just the means for their expression (Barwise & Perry, 1983; Fodor, 1975).
• The empirical study of language is possible independently of any epistemic assumptions (Chomsky, 1986; Fodor, 1990).
• The language phenomenon in humans is a complex form of animal communication (Barwise & Perry, 1983; Benett, 1976; Dretske, 1981).
Recent research in the domain of cognitive development gives an alternative approach to the problem of the emergence of language; amongst the many questions being posed, the predominant one concerns the innateness of this phenomenon. Researchers focus on the dependency between language and knowledge of the social practices to which an agent adapts and in which it learns how to participate. Nolan (1986) builds her essay around how conceptual categories emerge out of perceptual ones. In humans, probably the best proof of conceptualization is the linguistic proof. In nonhuman units, the proof is the behavior of the whole unit. Without going into further philosophical discussions, and taking into account Dennett's determination to build a simple agent (first), in the spirit of our theories, we decide to work with the definitions below, in the strong belief that they are an appropriate starting point for the study (and taxonomy) of simple and, afterwards, more complex agents. From the perspective of the agent's introspection, the concept makes the agent aware of the place where a given drive can be satisfied. From the social aspect, when the concept is being reinforced during the imitation conventions, the agent serves the environment it belongs to by sharing, with the rest of the agents it comes in contact with, information about the satisfaction of drives that may be useful to the other agents. In MASIVE the notion of concept refers to one or several rows from the contingency table. This conceptualization is built based on the conative scheme. The agent builds concepts during its sojourn in the environment. Other agents do so as well. In the exploration of the environment they update the contingency table either based on interactions with the environment or, if there is communication, based on information acquired from other agents. The contingency table can be augmented by another column that would indicate the source of the given row in the table (the agent itself, or another agent, a fact learned during the imitation convention). The data in that column can be identified as the name of the agent that provided the information. The naming of the agent is a separate problem that can be solved with special percepts in the perceptual set of the agent, but also as a code/stamp that is transferred whenever a row from a contingency table migrates. In the former case, the naming problem, that is, the problem of knowing or getting to know the agent, is solved at the agent's end, and based on the agent's abilities, we can easily model the phenomena of perceptual recognition. The latter solution is totally in the hands of the metaagent, that is, the designer that names the agents, and as such is not a plausible/realistic model. As another agent's information can lead to surprises, we now have the grounds to talk about the emotional context of another agent, as a measure of
how useful its contribution through imitation has been to achieving satisfaction of drives without (too many) surprises. As previously discussed, depending on the activation pattern of the drives, certain rows of the contingency table start, at certain times, to dominate in terms of their emotional value with respect to the active drive, and the perceptual input is then interpreted through this structure. This interpretation means identification of the object needed to satisfy an active drive. Stojanov (1997) states: Now we can easily imagine an addition to the theory where we would suggest labels for certain instances of the intrinsic representation, some kind of a mechanism for manipulating these labels, and use these additions as some kind of symbolic description of the environment, which is in a way separate from the perceptual experiences. This is reminiscent of Piaget's stage of formal operations. Naturally, these symbols are far from the symbols that we use in classical Artificial Intelligence. The objects that satisfy the drive, as well as the paths to reach them (as attributes of the goal of satisfying a given drive), are treated in a manner similar to the subjective categories in other theories of agency (Lakoff, 1987; Rosch, 1973; Tversky & Gati, 1978). These symbols can be seen as the basis for a representation of the natural-language type (Stojanov, 1997), or as a protolanguage. The agents in our theory are linguistically competent. They have a system for information exchange. Lexemes are exchanged, which means that an agent (often) gains information on parts of the environment it has not visited beforehand. Bickerton (1990, 1995) suggests the stage of protolanguage as an intermediate step in the evolution of language in humans. The protolanguage consists of noun-verb-noun triplets (congruent with our percept-action-percept triplets). This is also directly congruent in structure with the rows of the contingency table. The sequences of percepts can be seen as protolexemes-nouns, and the sequences of actions as protolexemes-verbs. The drive attribute attributes intentionality to the protosentences.
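To make the augmented contingency table concrete, the sketch below shows one possible shape for a row: a percept-action-percept protosentence carrying a drive attribute, a source column recording which agent contributed it, and a running emotional-context score per source that drops when that source's rows lead to surprises. The field names and the update rule are our own illustrative assumptions, not the book's formal definitions:

```python
from dataclasses import dataclass
from collections import defaultdict

@dataclass
class Row:
    """One contingency-table row, read as a protosentence:
    percepts (noun) - actions (verb) - resulting percepts (noun),
    with a drive attribute giving it intentionality."""
    percepts_before: tuple
    actions: tuple
    percepts_after: tuple
    drive: str
    source: str          # the agent itself, or the agent it imitated

class ContingencyTable:
    def __init__(self, owner):
        self.owner = owner
        self.rows = []
        # Emotional context per source: how useful that agent's
        # contributions have been (fewer surprises -> higher value).
        self.emotional_context = defaultdict(lambda: 0.0)

    def add_row(self, row):
        self.rows.append(row)

    def adopt_from(self, other_agent, row):
        """A row migrating during an imitation convention keeps its stamp."""
        self.add_row(Row(row.percepts_before, row.actions,
                         row.percepts_after, row.drive, source=other_agent))

    def report_outcome(self, row, surprised):
        """Reward or penalize the row's source after acting on it."""
        self.emotional_context[row.source] += -0.5 if surprised else 0.2

# Example: agent A adopts a food-related protosentence from agent B.
table = ContingencyTable("A")
table.adopt_from("B", Row(("see_corridor",), ("forward", "left"),
                          ("see_food",), drive="hunger", source="B"))
table.report_outcome(table.rows[0], surprised=False)
```

The per-source score is one plausible way to operationalize the emotional context of another agent described above; nothing in the sketch is meant to fix the actual bookkeeping used in MASIVE.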
Acknowledgments
Paul R. Bisbey from Concordia College (Moorhead, MN) contributed significantly to the work on concept formation presented in this chapter. His involvement in this project was made possible by the National Science Foundation's program Research Experiences for Undergraduates (REU). During the summer of 2005, he worked on this topic at the Towson University Multidisciplinary Computing Laboratory REU Site and the Cognitive Agency and Robotics Laboratory.
References
Abbott, E. A. (1884). Flatland. Retrieved January 3, 2005 from http://www.eldritchpress.org/eaa/FL.HTM
Barsalou, L. W. (2000). Concepts: Structure. Encyclopedia of Psychology, 2, 245-248.
Barwise, J., & Perry, J. (1983). Situations and attitudes. Cambridge: MIT Press.
Benett, J. (1976). Linguistic behavior. Cambridge: Cambridge University Press.
Bickerton, D. (1990). Language and species. Chicago: University of Chicago Press.
Bickerton, D. (1995). Language and human behavior. Seattle: University of Washington Press.
Breazeal, C. (2003). Towards social robots. Robotics and Autonomous Systems, 42, 167-175.
Case, R. (1999). Developmental psychology: Achievements and prospects. Philadelphia: Psychology Press.
Chomsky, N. (1986). Knowledge of language. Westport, CT: Greenwood.
Dautenhahn, K., & Billard, A. (1999). Studying robot social cognition within a developmental psychology framework. Proceedings of the Eurobot99, Third European Workshop on Advanced Mobile Robots (pp. 187-194). Switzerland.
Davisson, P. (1993). A framework for organization and representation of concept knowledge. Proceedings of the SCAI '93. Retrieved June 12, 2005 from http://www.cs.lth.se/Research/AI/Papers/SCAI-93.pdf
Dretske, F. I. (1981). Knowledge and the flow of information. Cambridge: MIT Press.
Einstein, A., & Infeld, L. (1967). The evolution of physics. New York: Simon & Schuster. (German: 1938)
Estes, W. K. (1994). Classification and cognition. Oxford University Press.
Fodor, J. (1975). Language of thought. New York: Crowell.
Fodor, J. (1995). Cognition on cognition. Cambridge: MIT Press.
Fodor, J. A. (1990). Theory of content. Cambridge: MIT Press.
Fong, T., Nourbakhsh, I., & Dautenhahn, K. (2003). A survey of socially interactive robots. Robotics and Autonomous Systems, 42, 143-166.
Ghiglione, R. (1986). L'Homme communicant. Armand-Colin. (in French)
Ghiglione, R. (1989). Le 'qui' et le 'comment'. Perception, Action, Langage-Traité de Psychologie Cognitive, 3, 175-226. (in French)
Jacobson, R. (1963). Essai de linguistique générale. Editions de Minuit. (in French)
Lakoff, G. (1987). Women, fire, and dangerous things: What categories reveal about the mind. Chicago: The University of Chicago Press.
Medin, D. L. (1989). Concepts and conceptual structures. American Psychologist, 44, 1469-1481.
Medin, D. L., Lynch, E. B., & Solomon, K. O. (2000). Are there kinds of concepts? Annual Review of Psychology, 51, 149-169.
Minsky, M. (1975). A framework for representing knowledge. In P. H. Winston (Ed.), The psychology of computer vision (pp. 211-277). McGraw-Hill.
Nolan, R. (1994). Cognitive practices: Human language and human knowledge. Oxford: Blackwell.
Piaget, J. (1954). The construction of reality in the child. New York: Basic.
Rosch, E. (1973). Natural categories. Cognitive Psychology, 4(3), 328-350.
Ross, B. H., & Spalding, T. L. (1994). Concepts and categories. In R. Sternberg (Ed.), Handbook of perception and cognition, Vol. 12, Thinking and problem solving (pp. 119-148). San Diego, CA: Academic Press, Inc.
Stojanov, G. (1997). Expectancy theory and interpretation of EXG curves in the context of machine and biological intelligence. Unpublished doctoral thesis, University in Skopje, Macedonia.
Tversky, A. (1997). Features of similarity. Psychological Review, 84, 327-352.
Tversky, A., & Gati, I. (1978). Studies of similarity. In E. Rosch & B. Lloyd (Eds.), Cognition and categorization. Hillsdale, NJ: Lawrence Erlbaum Associates.
Vygotsky, L. (1986). Thought and language (rev. ed.). In A. Kozulin (Ed.). Cambridge: MIT Press.
Chapter IX
On Emergent Phenomena:
If I'm Not in Control, Then Who Is? The Politics of Emergence in Multi-Agent Systems
Samuel G. Collins, Towson University, USA
Abstract
"Emergence" is itself emergent; although originating in the context of the "sciences of complexity" — i.e., life sciences, cybernetics, multiagent systems research, and artificial life research — "emergent thinking" has spread to other parts of the academy, including the social sciences and business. Utilizing examples drawn from popular culture, this chapter looks to the ways IT has proven influential in other cultural contexts, but not without a price. The second part of the chapter interrogates the transportation of emergent thinking into these other discourses, taking them to task for not embracing the promises inherent in emergence and, in
fact, merely reproducing the old under the sign of the emergent new. Finally, by borrowing notions of “surprise” from robotics and multiagent systems, I suggest new possibilities for emergence to lead to genuine paradigm shifts in the ways we think.
Introduction
In the 1990s, a value-free, Mertonian science was variously critiqued by interdisciplinary congeries of scholars who worked to show how scientific discourses were always already imbricated in Western, white, and male hegemony. As Ross (1996) writes: If stable sciences really are objective fields of knowledge and inquiry, why have so many (seismography, oceanography, and microelectronics, to name a few) evolved directly from military R&D as part of this spin-off system that is habitually cited to justify the benefits to society of the vast military budget? (p. 5) Turning a hermeneutics of suspicion against areas of the academy putatively above culture, these analyses look to the ways that the questions science asks derive from historical contexts (Haraway, 1989; Redfield, 1998), the "hidden transcripts" that inflect putatively empirical results (Latour & Woolgar, 1979; Traweek, 1988), and the cultural fields within which scientific discourse is embedded (Helmreich, 1998; Parisi, 2004). However, by the end of the 1990s, the tone of much of this work had shifted to more localized studies of specific disciplines. Clough (2004) attributes this — in part — to Sokal's hoaxing of the journal, Social Text. The questions once raised about the legitimacy and authority of Western discourses of science, reason, truth, and disciplinary methods have been quieted, and the relationship of these questions to the interarticulated differences of gender, sexuality, class, race, ethnicity, and nation, for so long productively explored in the critical theories of the late twentieth century, have ceased to be central to social criticism. (p. 1) But what has happened, instead, is that these erstwhile critics of science have become, in a way, its servants, translating scientific discourse into more leaden theoretical language for its appropriation into the humanities and social sciences.
For example, Rabinow (1996, 1998) has written extensively on Polymerase Chain Reaction (PCR), the human genome project, and on bioengineering in general. This, in turn, has inspired others in anthropology to appropriate the metaphors of bioengineering into border anthropologies; for example, Helmreich (2003) has used notions of nonfiliative descent and endosymbiosis to critique anthropological ideas of descent and to suggest new ways for thinking about kinship in a fractious, globalized world. This is the academic counterpart to popular surveys of chaos theory and quantum physics that have, in their own way, been appropriated by business and management theory and recycled into manifestoes for capitalist success. The following essay takes this suspect synergy as its starting point in a discussion of the transformations wrought upon the idea of emergence in its transposition into robotics, economy, and society. As ideas of emergence dehisce into variegated disciplines, the term's possibly fatal ambiguities become more evident. But by returning to an idea from computer science, I suggest one way to reclaim the potential of emergence to gesture towards radical alterity and nontrivial change. This essay is an example, perhaps, of what Varela, Thompson, and Rosch (1991) term enaction, the process of "mutual specification" between "world and perceiver" (p. 172) that forms the basis for structural coupling, itself (in the later Varela) an engine for emergent change. At the interstices of multiple disciplines and multiple levels of signification, this vision of emergence incorporates the observer in the system, causing the observer "to become part of the system it generates" (Hayles, 1999, p. 8). Indeed, I would suggest that the power of multi-agent systems lies in their ability to (theoretically) link together what Sawyer (2001) terms "wildly disjunctive" levels of agency, action, and meaning through open definitions of agency. Arising out of the computer sciences, in conjunction with insights gleaned from the cognitive sciences and biology, multi-agent systems, in the words of Wooldridge (2002): [Are] systems composed of multiple interacting computing elements, known as agents. Agents are computer systems with two important capabilities. First, they are at least to some extent capable of autonomous action — of deciding for themselves what they need to do in order to satisfy their design objectives. Second, they are capable of interacting with other agents — not simply by exchanging data, but by engaging in the analogues of the kind of social activity that we all engage in every day of our lives: cooperation, coordination, negotiation, and the like. (p. 11) Part of the popularity of multi-agent systems-as-generative-metaphor, however, lies in the synergy between multi-agent systems in computer science and the
sciences of complexity in biology, particularly in the work of Kauffmann, Packard, and Langton, all of whom have variously theorized the emergence of higher levels of collective behavior from the interactions of relatively simple agents. In the United States, this strand of multi-agent systems research has coalesced around Artificial Life (AL) research at the Santa Fe Institute (Helmreich, 1998). This has been in marked contrast to previous Artificial Intelligence (AI) paradigms that envisioned cognition as primarily the manipulation of symbolic representations. As Johnston (2002) explains: In contrast, the new AI gives primary importance to "bottom-up" processes by which intelligence emerges and evolves in biological life, particularly in interactions with the environment that enhance the agent's present situation and increase its chances for survival, or in which new kinds of organization and cooperation among multiple agents emerge. (p. 493) In the humanities and social sciences, theories of emergence resonate powerfully with a variety of other coeval ideas, including the rediscovery of less mechanical models of evolutionary change in Bergson, the emphasis on difference, involution, and fragments introduced through postmodern and poststructuralist reinterpretations of Heidegger, and anthropological evocations of surrealism and bricolage (Clifford, 1989). Perhaps the best explanation for the popularity of multi-agent systems thinking lies in the ambiguity of the idea of agent itself (Johnston, 2002): Does it denote a class of subjects, objects, or functions? From its use in phrases like "Modeling cognitive agents," "designing autonomous agents," or simply "Embodied agents," we can infer that an agent can be a person, animal, insect, robotic machine, or even a body of code (i.e., a software program). (p. 485) The polysemy of agent, perhaps, allows for its ready appropriation into a host of other discourses, even those far removed from the computer sciences. Contrast this to something like a neural net, which, while certainly productive of metaphor, still has strong connections to computation. By the end of the 1990s, multi-agent theory and emergence, on the other hand, could be applied to anything: economies, societies, cultures, or even, as Varela (1999) suggests, to the process of cognition itself: If I may continue to use vision as an example, I can take the previous discussion up one level of generalization, to note that in recent years
research has turned to the study not of centralized "reconstruction" of the visual scene for the benefit of an ulterior homunculus, but of a patchwork of visual modalities, including at least form (shape, size, rigidity), surface properties (color, texture, specular reflectance, transparency), and three-dimensional movement (trajectory, rotation). It has become evident that these different aspects of vision are emergent properties of concurrent subnetworks, which have a degree of independence and even anatomical separability, but cross-correlate and work together so that a visual percept is this coherency. (p. 48) Like earlier generations of cybernetics before it, multi-agent systems can be applied to any phenomena, with emergence acting as a late 20th century vitalism unifying and transcending more mechanistic models of homeostasis and equilibrium. However, this ambiguity is also suspect. As multi-agent systems are applied to variegated social and cultural phenomena, what do they cover up even as they reveal? First-generation cybernetics, for example, emphasized the integration of society and culture against disequilibrium, agonistics, and hybridity. As Hayles (1998) and many others have noted, this emphasis on functional integration paralleled the tenor of Cold War politics and, again in the U.S., concerns over political movements that would, by the 1970s, give rise to identity politics (Collins, 2003). In the following, I suggest that, within the caesura of emergence, lie unacknowledged political elisions that, in many ways, belie the possibilities inherent in emergence itself. Looking at these through a variety of applications of multi-agent systems, I further suggest that one way to recover the potential of emergence in multi-agent systems is to return to computer science itself for insight.
Ambiguously Emerging
But what exactly is emergence? Fisher (2003) identifies two varieties: First is the organizational concept that relations among physics, chemistry, and biology are "levels of organization" that emerged through evolution [...] A second, if related, notion of "emergence" is that of contested "emergent forms of life," the continued renegotiation of historical and emergent modalities of ethical and political reason. (p. 56)
Fisher's (2003) emergence follows closely on science studies, on emergent discourses arising in the interstices of disciplines, states, and identities. On the other hand, Clark (2004) suggests several meanings more germane to AI research, among them: (1) emergence as collective self-organization; (2) emergence as unprogrammed functionality; and (3) emergence as interactive complexity (pp. 113-114). Finally, emergence may mean something very different to the computer scientists wrestling with a multi-agent system. For example, Ronald and Sipper (2001, p. 23) explore some of that ambiguity in their applications of notions of surprise to emergent behavior in robotics, where emergence could well be construed as inimical to design objectives. The more you think of engineering with emergence, or emergent engineering, as we call it, the more it comes to resound oxymoronically. Emergent engineering, while inherently containing a nonevanescent element of surprise, seeks to restrict itself to what we call unsurprising surprise: though there is a persistent L1-L2 understanding gap, and thus the element of surprise does not fade into oblivion, we wish, as it were, to take this surprise into our stride. Yes, the evolved robot works (surprise), but it is in some oxymoronic sense expected surprise: as though you were planning your own surprise birthday party. That is, for many in the computer sciences, emergent novelty may not be entirely welcome and needs to be vetted (in the previous example) through evolutionary techniques in order to be useful. In other words, emergence takes on another nuance, that of an unexpected disruption. Asimov's (1950) I, Robot, chronicles the surprises and challenges accompanying the development of autonomous robots. And while these stories certainly say something about the perils of human-computer interaction (HCI) and, in general, the design of intelligent systems, they form a kind of mythic cycle on emergence (Clarke, 1994). Asimov begins by stipulating the three laws of robotics (noninjury to humans, obedience, and self-protection), which act as a hierarchy of drives, albeit an unrealistically abstract one. What drives Asimov's stories are precisely the unexpected emergences involving these supervenient drives. For example, one of the initial stories pits two bricoleur engineers (Powell and Donovan) against an SPD Robot ("Speedy") who, sent to procure badly needed fuel, instead circles a dangerously toxic "selenium pool" aimlessly, muttering inanities. However, Speedy is not broken; the reason for this neurasthenic state lies in the fundamental drives themselves. Rule 2 states that Speedy will obey his human masters, yet Rule 3 mandates self-protection. Faced by the potentially dangerous atmosphere around the selenium pool, a feedback loop results.
You see how it works, don't you? There's some sort of danger centering at the selenium pool. It increases as he approaches, and at a certain distance from it the Rule 3 potential, unusually high to start with, exactly balances the Rule 2 potential, unusually low to start with." Donovan rose to his feet in excitement. "And it strikes an equilibrium. I see. Rule 3 drives him back and Rule 2 drives him forward — "So he follows a circle around the selenium pool, staying on the locus of all points of potential equilibrium. And unless we do something about it, he'll stay on that circle forever, giving us the good old runaround." Then, more thoughtfully: "And that, by the way, is what makes him drunk. At potential equilibrium, half the positronic paths of his brain are out of kilter. I'm not a robot specialist, but that seems obvious. Probably he's lost control of just those parts of his voluntary mechanism that a human drunk has." (Asimov, 1950, p. 46) This is not much different than the emergent pattern of the V-shaped flock of birds; both are examples of what Sawyer (2001, p. 554) would call a "supervenient" emergence, that is, the phenomena are entirely explicable in terms of the individual parts (the laws of robotics). The only difference lies in the perspective of the observer: following a V pattern is aesthetically pleasing to the human eye, while Speedy circling the selenium pool is interpreted by the two engineers as dysfunctional. But there is something else as well. Speedy is also described as "drunk," and he speaks in a stream of nonsense: "It said, 'Hot dog, let's play games. You catch me and I catch you; no love can cut our knife in two. For I'm Little Buttercup, sweet Little Buttercup. Whoops!'" (Asimov, 1950, p. 42). Even given the literary conceit of the positronic brain, the explanation that a robot stuck in a feedback loop would lose control of his voluntary mechanism seems far-fetched. At the very least, however, much of his circling behavior might be explained with recourse to the three laws of robotics (a suspension of disbelief). Speedy's speech would seem to be inexplicable at that level. In Sawyer's words (2001), Speedy's verbiage would not seem to be "supervenient on some physical state," although we may assume that, on some level, it must be. But it is these elements of what might be called "unexpected surprise" (p. 556), that is, novelty that resists materialist reduction, that drive Asimov's robot fiction. In "Liar!" the RB-34 ("Herbie") turns out to have mind-reading abilities, the ultimate etiology of which is unknown.
His voice became suddenly crisp, “Here’s everything in a pill-concentrate form. We’ve produced a positronic brain of supposedly ordinary vintage that’s got the remarkable property of being able to tune in on thought
waves. It would mark the most important advance in robotics in decades, if we knew how it happened. We don’t, and we have to find out. Is that clear?” (Asimov, 1950, p. 112) And yet, the scientists assigned to the case never do find out why Herbie can read minds. Herbie himself might answer, but he knows that the display of superior erudition would injure the feelings of the human scientists assigned to him.
Herbie’s voice rose to wild heights, “What’s the use of saying that? Don’t you suppose that I can see past the superficial skin of your mind? Down below, you don’t want me to. I’m a machine, given the imitation of life only by virtue of the positronic interplay in my brain — which is man’s device. You can’t lose face to me without being hurt. That is deep in your mind and won’t be erased. I can’t give you the solution.” (Asimov, 1950, p. 133) Before the problem can be adequately studied, Herbie breaks down — driven to dysfunctional insanity from his perspective on the contradictory, tortured thoughts of his creators, thoughts that precipitate a crisis in his primary drives. While this deus ex machina saves Asimov the trouble of actually tendering a technical explanation for Herbie’s mindreading, it also suggests the difference between this form of emergence and examples of unsurprising surprise. Neither reductionist and materialist (all behavior can be construed ultimately as material properties) nor nonreductionist and idealist/subjective (behaviors are in no way explicable with respect to material properties), Asimov gestures towards an understanding of emergence that is at once material and something more — and surprising. We might look to Varela et al. (1991) for a way out of this dualism: Our discussion of color suggests a middle way between these two chicken and egg extremes. We have seen that colors are not “out there” independent of our perceptual and cognitive capacities. We have also seen that colors are not “in here” independent of our surrounding biological and cultural world. Contrary to the objectivist view, color categories are experiential; contrary to the subjectivist view, color categories belong to our shared biological and cultural world. Thus color as a case study enables us to appreciate the obvious point that chicken and egg, world and perceiver, specify each other. (p. 172) For the robots in Asimov’s (1950) book, behaviors are simultaneously material (the expression of programmed drives) and ideal (emergent and inexplicable)
only with respect to a human observer — it is the human, after all, that sets the drives, their conflicts and their emergences into motion. For many, this may already constitute a fatal polysemy, with emergence taking on a mystical, black-box quality that erases any usefulness it may have had in research. But the ambiguities themselves are interesting. Consider Hallam and Malcolm’s (1994) wall-following robot. This robot follows walls encountered to the right by means of an in-built bias to move to the right, and a right-side sensor, contact activated, that causes it to veer slightly to the left. When these two biases are well calibrated, the robot will follow the wall by a kind of “veer and bounce” routine. (Clark 2001, p. 112) While Clark (2001) wonders if this counts as genuinely emergent behavior, it shares some of the characteristics of other behaviors termed emergent, for example, flocking, which might be additionally described as a shift in interpretive frames, that is, a question of the observer. From the perspective of the wallfollowing robot, for example, there is no emergent behavior, since there is no symbolic representation of wall, nor is there any introspection that would register following. Instead, the robot is merely acing on its program, and the emergent dimension only exists in the minds of humans who find this wall-following pattern unexpected and interesting. Adaptive advantages aside, is flocking really different? From whose perspectives do birds flock? In order for this behavior to be construed as emergent, flocking needs an observer to place the behavior in a different frame, that is, a scopic panorama of the whole flock against a landscape rather than from the perspective of an embedded agent. This is the classic problem of the observer, which Bateson and Mead (among others) identified as the sine qua non issue in first-generation cybernetics (Hayles, 1999), and which has been key to the develop of subsequent developments in cybernetics, including Maturana and Varela’s notion of autopoiesis. Here again the role of the observer becomes important, for Maturana is careful to distinguish between the triggering effect that an event in a medium has on a system structurally coupled with it when they perceive the system interacting with the environment. When my bird dog sees a pigeon, I may think, “Oh, he’s pointing because he sees the bird.” But in Maturana’s terms, this is an inference I draw in my position in the descriptive domain” of a human observer. (Hayles, 1999, pp. 138-139)
For emergence, the question of the observer becomes doubly important, since it seems to depend not only on the observer’s perspective, but also the observer’s surprise and valuation. After all, birds are not exactly nonplused by their flocking. It takes an outside observer to not only delineate the flock as a significant unit, but, in another hermeneutic series, to register surprise in the difference between individual flights and the perceived, emergent flocking pattern. This can make for a high degree of solipsism until we realize that in being human, we have little choice in the matter. As Varela (1999) states: Thus whenever we find regularities such as laws or social roles and conceive of them as externally given, we have succumbed to the fallacy of attributing substantial identity to what is really an emergent property of a complex, distributed process mediated by social interactions. (p. 62)
That is, we are inextricably embedded in the phenomena we observe; the only mistake we can make, according to Varela (1999), is to leave ourselves (qua observer) out of the system. “After all, the fundamental cognitive operation that an observer performs is the operation of distinction” (Maturana & Varela 1980, p. 22).
“Bottom-Up” Programming and the Promise of Emergence As the brain-as-computer model that dominated AI research in the 1950s and 1960s gradually lost popularity, other, less linear, models were proposed, among them a model of cognition as an emergent property. In robotics, Brooks and Flynn (1989) led the critique against older models of AI. In Brooks and Flynn’s then controversial 1989 article “Fast, Cheap and Out of Control,” the symbolic approach, with its centralized control and internal representations and foreordained percepts, is completely rejected for a subsumption architecture. As they write of the robot Genghis, the paradigmatic example of this reaction architecture: Nowhere in the control system is there any mention of a central controller calling these behaviors as subroutines. These processes exist independently, run at all times and fire whenever the sensory preconditions are true. (p. 481)
Brooks (1997) later describes this as the rejection of a “horizontal decomposition” for a “vertical decomposition” modeled on animal behavior and composed of parallel tracks “stacked vertically” (Murphy, 2000, p. 106). Each track need not possess the same representations of the environment, yet the agents behavior arises out of coordination between these vertical stacks coupled to the exigencies of a given environment. But as the specific goals of the robot are never explicitly represented, nor are there any plans — the goals are implicit in the coupling of actions to perceptual conditions, and apparent execution of plans unroll in real time as one behavior alters the robot’s configuration in the world in such a way that new perceptual conditions trigger the next sequence of actions. (Brooks, 1997, p. 292) The advantage of this, in cases like Genghis, is the life-like movement emerging in the robot, each sensor and activator firing independently yet coordinating together in a recognizable insect-like fashion. But life-like is hardly the foregone conclusion: one of the corollaries to Brooks’ and Flynn’s (1989) approach is that bottom-up architectures produce unexpected results. The power of a reactive architecture is, theoretically, the production of unexpected novelty. That is, the observer expects (and hopes) to be surprised by resultant behaviors. This is the essence of Varela’s (1999) theory of enaction: Thus cognition consists not of representation but of embodied action. Thus we can say that the world we know is not pre-given; it is, rather, enacted through our history of structural coupling, and the temporal hinges that articulate enaction are rooted in the number of alternative microworlds that are activated in every situation. (p. 17) Not only, then, are the robot’s behaviors products of an unfolding cognition inseparable from the environment within which they are realized, but the observer’s apperception of emergence unfolds in the same way. It is enaction all the way down. But, to another kind of observer, this emergence is not entirely welcome. Wooldridge (2002) explains that: One major selling point of purely reactive systems is that overall behaviour emerges from the interaction of the component behaviours when the agent is placed in its environment. But the very term ‘emerges’ suggests that the
individual relationship between individual behaviours, environment, and overall behaviour is not understandable. This necessarily makes it very hard to "engineer" agents to fulfill specific tasks. (p. 71) Brooks's (1997) work describes systems literally "out of control"; reactive architectures elevate — by definition — the unprogrammed and the unexpected. It may not be the best way to engineer predictable results. Indeed, most scientists have rejected more catholic versions of the reactive paradigm for hybrid approaches combining subsumption architectures with evolutionary programming and other forms of computation. After all, not even novelty itself can be an expected outcome of a reactive architecture; not surprisingly, one of the critiques of the approach is the uncertainty of the outcome: when will higher-order cognitions emerge? The most trenchant critique for our purpose is that the sort of surprise hoped for in reactive architectures is not real surprise after all, but more like Ronald and Sipper's (2001) "unsurprising surprise." As Hayles (1999) writes: In a significant sense, however, AL researchers have not relinquished reductionism. In place of predictability, which is traditionally the test of whether a theory works, they emphasize emergence. Instead of starting with a complex phenomenal world and reasoning back from chains of inference to what the fundamental elements must be, they start with the elements, complicating the elements through appropriately nonlinear processes so that the complex phenomenal world appears on its own. (pp. 231-232) In other words, rather than assume the world through representational models, Brooks (1997) assumes the fundamental elements of that world and the teleologies by which that world might emerge. The surprise for Brooks would be not for wholly unfamiliar behaviors, but for recognizably familiar behaviors, insect-like or even anthropomorphic. In a way, this is symbolic AI through the backdoor. As Brooks (1997) writes of the possibility for modeling human cognition: While in principle it might be possible to build an adult-level intelligence fully formed, another approach is to build the baby-like levels, and then recapitulate human development in order to gain adult human-like understanding of the world and self. (p. 298) Instead of the algorithmic models representing the world, Brooks proposes to grow those same models through the structural coupling of human-like systems
to an environment. This is not, a la Varela, emergence all the way down, but the product of an observer under erasure, the measure of the initial conditions, the directionality of emergence, and the success of the product. Just because the initial conditions seem elemental (obstacle avoidance, light seeking), there has still been a process of selection, as well as one of rejection, as an infinite variety of other putatively elemental drives must be deliberately not chosen for a given, reactive agent.

We can see how this symbolic AI from below works itself out in AL simulations, those models of human and animal behavior popularized — most famously — at the Santa Fe Institute for the Sciences of Complexity (SFI). In a 2002 article in the Atlantic Monthly, Rauch explores several AL simulations of human behavior, among them models of racial segregation and genocide. He begins his article with a hagiographic portrait of Thomas C. Schelling:

One day in the late 1960s, on a flight from Chicago to Boston, he found himself with nothing to read and began doodling with pencil and paper. He drew a straight line and then “populated” it with Xs and Os. Then he decreed that each X and O wanted at least two of its six nearest neighbors to be of its own kind, and he began moving them around in ways that would make more of them content with their neighborhood [...] In the first frame blues and reds are randomly distributed. But they do not stay that way for long, because each agent, each simulated person, is ethnocentric. That is, the agent is happy only if its four nearest neighbors (one at each point of the compass) include at least a certain number of agents of its own color. (p. 19)

In no time at all, Schelling’s simulation generates levels of racial segregation comparable to Chicago, Boston, or Detroit. In a more complex example, Rauch (2002) looks to the work of Brookings Institution analyst Joshua Epstein, who has attempted to model genocide using similar stacks of cellular automata. As usual in Epstein’s models:

Each agent has its own personality — the relevant traits being, in this case, the agent’s degree of privation or discontent, his level of ethnic hostility, and his willingness to risk arrest when the police are around. [...] The agents’ environment is one of tension between blues and greens; the higher the tension, the more likely it is that agents will, in Epstein’s term, “go active” — which in real life could mean looting a neighbor’s store or seizing his house, but which in the current instance will mean killing him. When an
agent turns red, his discontent or hatred has overcome his fear of arrest, and he has killed one randomly selected neighbor of the other color. Those are the rules. They are very simple rules. (p. 19)

But of course, they are not simple rules. There are countless — and very complex — assumptions in Epstein’s genocide simulation, including the historical construction of ethnicity, ‘ethnic hostility,’ the colonialism implicit in the police, the power of the state, and the ‘actions’ (i.e., killing) appropriate to a surfeit of agent discontent. And it is similar for Schelling’s model of segregation. Here we have race, the perception of race, and the complex linkage between race and the political economy of the city (where contiguous ethnicity is equated with happiness). Here, genocide and segregation imbricate the observer all the way down, from the initial units of interaction (races, ethnicities, housing) to the interpretations of stochastic patterns as representing (modeling) de facto genocide and de facto segregation. The outcome of these multi-agent systems is what the engineers have termed unsurprising surprise, a controlled, rather self-fulfilling emergence where the programmer can sit back and gasp in wonder only if he/she conveniently forgets the assumptions he/she has already programmed into the simulation. Thus, the only outcomes in the simulations previously mentioned have been assumed in advance: integration or segregation in the first, peaceful cohabitation or genocide in the second. The observer, here, is the American subject — completely acculturated to the striking, government- and capital-supported segregation of cities, inured (from afar) to newspaper stories of genocides in Africa, the Balkans, the Holocaust, the Armenian genocide, and so forth.
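The mechanics Rauch describes are easy to state operationally. The sketch below is a minimal, hypothetical rendition of a Schelling-style segregation model on a grid; it is not Schelling’s or Epstein’s actual code, and the grid size, satisfaction threshold, vacancy ratio, and toroidal neighborhood are illustrative assumptions.

```python
import random

SIZE, THRESHOLD, EMPTY_RATIO = 20, 0.5, 0.1  # illustrative parameters, not Schelling's

def make_grid():
    # Each cell holds 'X', 'O', or None (vacant).
    cells = [None if random.random() < EMPTY_RATIO else random.choice("XO")
             for _ in range(SIZE * SIZE)]
    return [cells[r * SIZE:(r + 1) * SIZE] for r in range(SIZE)]

def satisfied(grid, r, c):
    """An agent is content if at least THRESHOLD of its occupied neighbors share its type
    (neighborhood wraps around the edges for simplicity)."""
    me = grid[r][c]
    neighbors = [grid[(r + dr) % SIZE][(c + dc) % SIZE]
                 for dr in (-1, 0, 1) for dc in (-1, 0, 1) if (dr, dc) != (0, 0)]
    occupied = [n for n in neighbors if n is not None]
    if not occupied:
        return True
    return sum(n == me for n in occupied) / len(occupied) >= THRESHOLD

def step(grid):
    """Relocate one unhappy agent to a random vacant cell; return False when everyone is content."""
    unhappy = [(r, c) for r in range(SIZE) for c in range(SIZE)
               if grid[r][c] is not None and not satisfied(grid, r, c)]
    vacant = [(r, c) for r in range(SIZE) for c in range(SIZE) if grid[r][c] is None]
    if not unhappy or not vacant:
        return False
    (r, c), (nr, nc) = random.choice(unhappy), random.choice(vacant)
    grid[nr][nc], grid[r][c] = grid[r][c], None
    return True

grid = make_grid()
steps = 0
while step(grid) and steps < 10000:
    steps += 1
print(f"settled after {steps} relocations")
```

Runs of a sketch like this tend to end in strongly clustered neighborhoods even under such spare rules, which is precisely the unsurprising surprise at issue: clustering or mixing are the only outcomes the model is able to express.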
The Emergence of the New Economy

We can look at the 1990s-era intrusion of emergence into the new economy in the same way. Borne on the vertiginous profits of Silicon Valley start-ups and the volatility of venture capital, management theorists looked to multi-agent systems and emergence for inspiration, if not for models. Although there was a definite business link with SFI, multi-agent systems research was probably more generative of slogans than of simulations and cellular automata. Yet, for a while, borne on the latest, overblown treatises by management gurus, emergence was it.

Taylorism and governmentality are thus both rejected as unsuited to this new turbulent, but also hugely productive terrain. At the same time, however,
cybernetic controls defined by the first wave of cybernetics, that associated with the work of Norbert Wiener, are also rejected. It is no longer sufficient to neutralize all positive feedback, that is all new variations and mutations, by bringing the system back to a state of equilibrium (negative control is acknowledged as ultimately ineffective in staving off the forces of chaos). (Terranova, 2004, p. 122)

After Norman Packard, this meant, for new economy theorists, that businesses could be operated on the edge of chaos (Lewin, 1999) and, in successive waves of influential best-sellers (from Kevin Kelly, Nicholas Negroponte, and others), CEOs and managers were chided for trying to control their organizations at all. The only way to survive the tumultuous dot-com boom was to make room for the emergence of creativity and entrepreneurial novelty.

Where managers once operated with a machine model of their world, which was predicated on linear thinking, control, and predictability, they now find themselves struggling with something more organic and nonlinear, where limited control and a restricted ability to predict are the norm. (Terranova, 2004, p. 197)

But what counts as limited control? If the goals of business are still inviolate — that is, profit, productivity, growth — and the units (corporations represented by a variety of artifacts including flowcharts and spreadsheets) are specified in advance, then the only area out of control would be some unforeseen way of realizing those goals. Similar to Brooks’ subsumption architecture, this is really an unsurprising surprise; the only outcomes of the out-of-control organization can be more or less profit, more or less productivity, more or less growth. The observer here is the executive — CEO, CFO, manager — in a position to perceive companies as autonomous agents interacting in an environment composed of market forces. As Lewin (1999) writes:

It is possible in principle to think of any business ecosystem in terms of a network of companies, each occupying a place in its own landscape of possibilities, and each landscape being coupled to many others: those of competitors, collaborators, and complementors [...] As the landscape of one company changes — perhaps through increased fitness as a result of powerful innovation — the landscapes of those connected to it will also change: some fitness peaks will increase in size, while others get smaller, or even disappear. (p. 207)
It is possible to think of companies this way, but only from the interested perspective of the managerial observer. Would it look like a complex ecosystem from the perspective of a downsized worker? From a government lobbyist? From an anti-globalization activist? Far from an elemental unit isomorphic to Brooks’ obstacle avoidance, corporations are the result of centuries of developments in political economy, giving corporations a legal, agential coherency — they stand at the end of a process, rather than at its beginning. This makes a big difference in what can emerge. Consider the case of Razorfish, the dot-com start-up company whose rise and fall Ross (2003) chronicles in his book, No-Collar: The Humane Workplace and Its Hidden Costs. A product of nineties managerial tracts, Razorfish expressly set out to maximize emergence through a working atmosphere designed to foster creativity and discord.

In the firm’s heyday, all new recruits were issued a book titled, “Razorfish Creative Mission.” [...] “Creativity” in all its forms was the core value, and it was celebrated on every page, though mostly in an unchewed lingo that aped Business English: “We are working toward an improved efficiency where new kinds of thinking expand the limits of intelligent thought. Unprecedented thinking often comes into being in a nonlinear manner. Opposites, inconsistencies, and ambiguities produce creative sparks.” (p. 100)

Inspired by von Neumann-style cellular automata, where initial simplicity produces sublime complexity, Razorfish hired the eccentric and the individual (including a part-time stripper as a public relations person) to forge a company that could stay at the edge of chaos in the frisson of the new economy. For several years, this worked. By 2000, however, the meaning of such iconoclasm changed considerably, as investment quickly dried up for venture capital and as Silicon Valley firms began to fall one by one. Razorfish accordingly changed its tactics and strove to normalize its image for a much more conservative — and much more stingy — set of clients and investors.

In the period of time I had been interviewing Habacker, she seemed to have accepted that “Value” always had a flip side in the business world. Many of the qualities that she and her cohort had once cherished had become liabilities as the company’s direction shifted. Now she was learning that the profile she had favored in recruiting others at Razorfish — eccentric, self-motivated and self-disciplined individuals — was just as likely to be a handicap elsewhere. (Ross, 2003, p. 228)
If we consider the unpredictability of emergence, then this is just one of the possible outcomes. That is, from the perspective of our manager-observer, either profit and growth emerge from this multi-agent system, or they do not. By 2000, it was becoming clear that this experiment on the edge of chaos was sliding into a chaos of downsizing and devaluation. But in the midst of this Schumpeterian change, what remained inviolate? The market, of course. As Ross (2003) writes of this period in the company’s history:

Despite these criticisms, there was a more general acceptance among fish that no one really had control over the company’s financial destiny. The market was perceived as an unchallenged authority that was somehow denying the company permission to succeed, and very little could be done about it. (p. 210)

Despite the end of the dot-com era and the end to experiments in creativity like Razorfish (which nevertheless continues today in a more restrained form), employees believed that, eventually, the “market would finally be cleansed of its imperfections and periodic dementia” and that “the business cycle of boom and bust to which capitalism was prone would be smoothed out” (Ross, 2003, p. 210). That is, just as the elemental units of the system are literally naturalized (as an ecosystem), the ultimate teleology is never in doubt, gradually building towards a perfect market where the promise of emergence could be realized. Again, with a known starting point and a known end, the new economy is another example of unsurprising surprise, that is, surprise that confirms one’s predictions and, in general, one’s Weltanschauung. This is exactly Best and Kellner’s (2000) point in their critique of Kevin Kelly, where they ask “how far contemporary economies would go without economic regulation and management, the intervention of the state in times of crisis, state expenditures for military and welfare to generate economic growth and provide a safety net for those who fall through, or suffer disadvantage” (p. 384). According to new economy theorists, then, we can only construe the new economy as a biologically isomorphic, multi-agent system subject to emergence by limiting the field of possible emergences: new constellations of production, state intervention, and labor politics cannot intrude onto this nature in order to vouchsafe creative developments. But this is not, strictly speaking, a new economy at all.
Conclusion

The late 1990s and the early years of the new millennium gave many theorists new hope that novelty could emerge. For example, Terranova (2004), echoing Hardt and Negri, waxes optimistic about the salubrious effects of emergence on politics.

A multitude, for example, is quite foreign to sequentiality, whether it is the linear and closed sequentiality of the assembly line or the one-dimensional flow of broadcasting. When segments were connected together in a single line, they become immediately bound to each other and to the overall structure and hence geared towards reproduction rather than becoming. Similarly, a transversely-connected multitude is quite alien to the logic of mass societies, in as much as the solidity and boundedness of the mass tend towards the production of homeostasis, that is an increasing homogenization, while a multitude tends to engender, multiply and spread mutations.

These are the promises of emergence, and yet, in the foregoing examples, what is striking is the absence of emergence. All of this echoes in very familiar ways the assumptions and elisions of a cultural evolutionism, especially that practiced at the end of the 19th century. Much of the scaffolding of what has been called in retrospect unilinear evolutionary theory relied on elementary forms — family, social life, religion, individualism — that formed the irreducible building blocks resulting in Western civilization. Taking non-Western peoples, especially in Melanesia, Polynesia, and Africa, as the simple elements making up cultural systems, anthropologists from E.B. Tylor to Marcel Mauss showed how complex societies were built from these simpler forms. Much of the critique in anthropology over the past century has been targeted at exactly these assumptions, showing that the simple rules are, in fact, the products, rather than the precedents, of the West. Thus, the nuclear family is not the unit of kinship, but the product of the 19th-century middle class. Magic and witchcraft are not simpler forms of science and religion, qua the West, but are themselves products of modernity coeval with science and rationality (Meyer & Pels, 2003); magic forming the Derridean supplement to the growth of science. As Strathern writes:

Children were, in the English view, genetic hybrids by nature: they were regarded as constellations of elements derived from each parent but mixed in such a way as to make them unique entities with a future that repeated the past of neither mother nor father. The uniqueness of the person as an individual was thus replicated in the supposed genetic constitution. (p. 60)
That is, the novelty of kinship is that children confirm the truth of the parents by developing an individuality different from each. But this is not, in fact, novelty at all, simply the realization of one of a number of possibilities. Writing at the beginning of the 20th century, Bergson evokes a different order of novelty altogether:

Our ordinary logic is a logic of retrospection. It cannot help throwing present realities, reduced to possibilities or virtualities, back into the past, so that what is compounded now must, in its eyes, always have been so. It does not admit that a simple state can, in remaining what it is, become a compound state solely because evolution will have created new viewpoints from which to consider it. [...] It sees in a new form or quality only a rearrangement of the old and nothing absolutely new. For it, all multiplicity resolves itself into a definite number of unities. It does not accept the idea of an indistinct and even undivided multiplicity, purely intensive or qualitative, which, while remaining what it is, will comprise an indefinite number of elements, as the new points of view for considering it appear in the world.

Are the sorts of emergence lionized in the new economy an indication of a hyperbolic rate of change, or of Hyperborean stasis? With the bending of time around the event horizon of advanced capitalism, does change itself disappear? Is this the confirmation of Fukuyama’s end of history?

If the products of evolution are given in advance, in the form of pre-existent possibles, then the actual process of evolution is being treated as a pure mechanism that simply adds existence to something that already had being in the form of a possible. (Ansell-Pearson, 2002, p. 72)

This kind of emergence — increasing rates of profit, faster and faster financial transactions, futures markets, hedge funds — does not represent, qua Bergson, real change. Delimited to variables on a spreadsheet, the future devolves to a question of quantity. This is the kind of change that Kevin Kelly and others lionize, where the excitement of the new economy is a matter of degree: profits are higher, risk is greater, markets are more volatile, and so forth. From the perspective of, say, a worker getting her pink slip: more of the same. What would real change be like? What would emerge from real emergence? It is comforting to consider that real change will be thoroughly unexpected, even incommensurable from the perspective of the present. Explaining his idea of the utopian, Jameson (2004) writes:
Its function lies not in helping us to imagine a better future but rather in demonstrating our utter incapacity to imagine such a future — our imprisonment in a non-utopian present without historicity or futurity — so as to reveal the ideological closure of the system in which we are somehow trapped and confined. (p. 46)

If we participated in a bona fide new economy, would corporate profits still increase while wages stagnate? Would governments suspend human rights in order to woo corporate investment? Would skin color and national origin still be the best predictors of quality of life? What would, in Ronald and Sipper’s (2001) term, “surprising surprise” — “where we are totally and utterly taken aback” (p. 22) — look like? For one thing, all of the assumptions of the observer — whether the organization of flocking or the mechanics of the free market — would have to be thrown into disarray and reordered in such a way as to completely shake the position of the observer. This would constitute true emergence — behaviors and phenomena that not only come unexpectedly, but that in the process also restructure the perspective of the observer as an agent embedded in the same systems he/she observes. So, to take this back to the engineer’s model, the movement from L1 to L2 ultimately spawns an L3, L4, ..., Ln (to infinity). That is, the movement from program to surprise reframes the system, composed of observer and agents, which results in new understanding and new emergence, and so forth, ad infinitum.
References

Ansell-Pearson, K. (2002). Philosophy and the adventure of the virtual. New York: Routledge.
Asimov, I. (1950). I, Robot. New York: Doubleday.
Best, S., & Kellner, D. (2000). Kevin Kelly’s complexity theory. Democracy & Nature, 6(3), 375-399.
Brooks, R. (1997). From earwigs to humans. Robotics and Autonomous Systems, 20(2-4), 291-304.
Brooks, R., & Flynn, A. (1989). Fast, cheap and out of control (AI Memo 1182). Cambridge, MA: MIT AI Laboratory.
Clark, A. (2001). Mindware. New York: Oxford University Press.
Clarke, R. (1994, January). Asimov’s laws of robotics. Computer, 57-66.
Clifford, J. (1989). The predicament of culture. Madison: University of Wisconsin Press.
Clough, P. (2004). Future matters. Social Text, 22(3), 1-23.
Collins, S. (2003). Sail on! Sail on!: Anthropology, science fiction and the enticing future. Science Fiction Studies, 30, 180-198.
Fischer, M. (2003). Emergent forms of life and the anthropological voice. Durham, NC: Duke University Press.
Harvey, D. (1989). Conditions of postmodernity. New York: Blackwell.
Hayles, N. K. (1999). How we became posthuman. Chicago: University of Chicago Press.
Helmreich, S. (1998). Silicon second nature. Berkeley: University of California Press.
Helmreich, S. (2003). Trees and seas of information. American Ethnologist, 30(3), 340-358.
Johnston, J. (2002). A future for autonomous agents. Configurations, 10(3), 473-516.
Latour, B., & Woolgar, S. (1979). Laboratory life. Beverly Hills, CA: Sage Publications.
Lewin, R. (1999). Complexity: Life at the edge of chaos. Chicago: University of Chicago Press.
Maturana, H., & Varela, F. (1980). Autopoiesis and cognition. Dordrecht, The Netherlands: Reidel.
Meyer, B., & Pels, P. (Eds.). (2003). Magic and modernity. Palo Alto, CA: Stanford University Press.
Murphy, R. (2000). Introduction to AI robotics. Cambridge: MIT Press.
Parisi, L. (2004). Information trading and symbiotic micropolitics. Social Text, 22(3), 25-49.
Rabinow, P. (1996). Making PCR. Chicago: University of Chicago Press.
Rabinow, P. (1998). French DNA in trouble. Chicago: University of Chicago Press.
Rauch, J. (2002). Seeing around corners. Atlantic Monthly, 289(4), 35-47.
Redfield, P. (1998). Space in the tropics. Berkeley: University of California Press.
Ronald, E., & Sipper, M. (2001). Surprise versus unsurprise. Robotics and Autonomous Systems, 37, 19-24.
Ross, A. (1996). Introduction. Social Text, 46/47(1-2), 1-13.
Ross, A. (2003). No-Collar. New York: Basic Books.
Sawyer, K. (2001). Emergence in sociology. American Journal of Sociology, 107(3), 551-586.
Terranova, T. (2004). Network culture. Ann Arbor, MI: Pluto Press.
Traweek, S. (1988). Beamtimes and lifetimes. Cambridge, MA: Harvard University Press.
Varela, F. (1999). Ethical know-how. Palo Alto, CA: Stanford University Press.
Varela, F., Thompson, E., & Rosch, E. (1991). The embodied mind. Cambridge: MIT Press.
Wooldridge, M. (2002). An introduction to multiagent systems. West Sussex: John Wiley & Sons.
Section II Cases
Chapter X
On MASim:
A Gallery of Behaviors in Small Societies
Abstract

In this chapter we give an overview of the simulation environment, Multi-Agent Systems Simulations (MASim), that we developed with the intention of studying behaviors in smaller societies of agents. We give a gallery of selected recorded behaviors, with brief comments on each.
Introduction

In order to give agents the ability to make decisions, each agent shall start with an inborn movement schema, which is a series of movements used to form an agent’s default path or directional pattern. In addition, for the purposes of creating an atmosphere where the agents learn and adapt to their environment, all agents are randomly placed within the environment. To represent an agent’s decision-making ability, each agent shall utilize two types of memory: exploratory and associative memory. An agent’s exploratory
memory can be thought of as a basic system of sensors used to map the agent’s immediate surroundings, while an agent’s associative memory can be compared to a set of unique environmental snapshots ascertained through the agent’s sensory system. An agent’s associative memory is its decision-making apparatus. It creates images of the environment and places significance on those images in an attempt to aid the agent’s efforts in finding food. An agent’s exploratory memory deals more with the agent’s relative positioning, steering the agent clear of cycles and traps. An agent shall utilize its exploratory memory until food is found, at which point its exploratory memory is ported to the agent’s associative memory.

Each agent will navigate through a randomly generated environment consisting of colored squares, obstacle squares, food squares, and other agents. The colored squares serve as fuzzy map elements for each agent, meaning the agent will see the colors of the squares as pieces of a whole, rather than storing specific paths. Once food is found, square colors and the agent’s directions are stored in the agent’s associative memory, to be referred to and acted upon on a recognition-scale basis: the higher the agent’s recognition of a square type, the greater the chance that the agent will attempt to move onto that square type. For example, if an agent has several nodes in its associative memory where the move is defined as north, the agent will always choose the move that offers the highest or most recognition. This is what is defined as the agent’s emotional context.

The goal of the MASim project is to determine whether the use of fuzzy logic benefits the agents in their efforts to coordinate and adapt to the random environment they are placed in. Therefore, in terms of applying the above statement to the actual simulation, the purpose behind the simulation is to determine what parameters or settings, applied through the agents and the environment, work best in demonstrating an upward trend in agent learning ability as it pertains to agent motivation, which in this case is finding food. Thus, the ultimate measure of a successful simulation shall be agent confidence, which reflects the number of correct moves toward food made using associative memory. For example, if an agent moves north onto a red square and that move exists in its associative memory, the agent will execute the next move as determined by its associative memory, which is, let’s say for this example, west onto a yellow square. If the agent then moves west onto a yellow square its confidence will increase; otherwise its confidence decreases, meaning that when the agent moved west it did not encounter a yellow square.
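The confidence bookkeeping in the worked example above can be summarized in a few lines. The following is a minimal sketch under stated assumptions; the names Move and update_confidence are hypothetical (the chapter does not prescribe an implementation language), and the sketch only mirrors the example, not the MASim code itself.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Move:
    direction: str      # e.g., "north"
    square_color: str   # e.g., "red"

def update_confidence(confidence, expected_next, actual_next):
    """+1 when the square reached matches the associative-memory prediction,
    -1 otherwise (a surprise), as in the worked example in the text."""
    return confidence + 1 if actual_next == expected_next else confidence - 1

# The example from the text: after moving north onto red, associative memory
# predicts the next move is west onto a yellow square.
expected = Move("west", "yellow")
print(update_confidence(0, expected, Move("west", "yellow")))  # 1: prediction confirmed
print(update_confidence(0, expected, Move("west", "blue")))    # -1: a surprise
```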
Requirements and Design Specifications: Agents

All agents will be objects spawned from an agent class. Each agent will be identical in composition; therefore, one agent class will spawn all four agent objects. Each agent object shall capture the following statistics: confidence, pain, surprises, square types, and emotional context of each move. All agents will have an inborn movement schema determined by the user at the start of the simulation. Movement schemas shall range from one to five incremental moves per schema, each incremental move being one of five directions: north, east, south, west, or stay (no movement). Agent attributes defined:

•	Confused/Confident Level is incremented and decremented by the number of surprises versus correct moves.
•	Pain Level is the number of times an agent collides with an obstacle.
•	Surprise Level is the number of times an agent chooses the wrong move from associative memory.
•	Current Path is the total number of steps needed to find food; it includes all moves.
•	Steps to Food is the number of steps needed to find food; it does not include duplicate moves.
•	Memory is the size of the agent’s associative memory.
Each agent will be equipped with two memory types: exploratory memory and associative memory. At the start of the simulation each agent’s exploratory memory and associative memory will exist as a blank slate. An agent’s exploratory memory shall store all current path moves, while an agent’s associative memory shall store steps to food. The exploratory memory will have each move appended to it until the agent satisfies its drive, such as by finding food, at which time the path is added to the agent’s associative memory and its exploratory memory is cleared. When adding a path to an agent’s associative memory, the agent’s emotional context for each move within its associative memory is updated, as explained later. Finally, after an agent satisfies its drive it will be randomly respawned at a new position within the simulation environment; this allows for a true learning simulation, as the agent shall always start its path to food in a lost state.
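A minimal sketch of this exploratory-to-associative hand-off, assuming a simple list-based representation (the class and method names are illustrative, not the ones used in MASim):

```python
import random

class AgentMemory:
    """Illustrative two-memory bookkeeping: exploratory moves accumulate until the
    drive (food) is satisfied, then the path is ported to associative memory."""

    def __init__(self):
        self.exploratory = []   # moves on the current path (blank slate at start)
        self.associative = []   # (move, emotional_context) pairs kept across paths

    def record_move(self, move):
        self.exploratory.append(move)

    def on_food_found(self):
        # Port the current path into associative memory, then clear exploratory memory.
        path = list(self.exploratory)
        self.associative.extend((m, 0) for m in path)  # emotional context is assigned later
        self.exploratory.clear()
        return path

def respawn(free_squares):
    # After satisfying its drive, the agent restarts from a random free square, i.e., "lost".
    return random.choice(free_squares)
```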
Figure 1. Select schema for agent screen
At the start of the simulation, the user is prompted to create each agent’s inborn schema, as displayed in Figure 1. An agent’s inborn schema is the default path an agent shall follow. For example, if an agent’s inborn schema is set to five moves in the direction of north, that agent shall move north until either colliding with an obstacle (in which case a random move is generated in order to evade the obstacle) or invoking its associative memory. An agent’s inborn schema can be set to between one and five moves heading north, south, east, west, or stay.

To fully understand the concept of an agent’s inborn schema and how it works, take for example the following schema: north, east, north, east. At the start of the agent’s turn, it moves as indicated by its inborn schema: north. After a successful move north, the agent tries to move east. In this example, moving east results in a collision, meaning the square is either occupied by an obstacle or by another agent. Since a collision occurred, the agent will try its next move in the schema, which is north, and so on. If an agent cannot execute any moves within its schema it is trapped and a random move will be generated; a sketch of this fallback logic follows the list below.

As an agent moves through the environment, it will capture the following data in its exploratory memory in the form of a structure for each incremental move:

•	Direction of move,
•	Type/color of square, and
•	Relative position.
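The schema-walking behavior described above (try the schema moves in order, fall back to a random move when trapped) might look as follows. This is a hedged simplification that checks the schema in order each turn; can_move and the move encoding are assumptions rather than MASim's actual interfaces.

```python
import random

DIRECTIONS = ["north", "east", "south", "west", "stay"]

def next_move(schema, can_move):
    """Return the first executable move from the inborn schema, or a random move
    if every schema move is blocked (the agent is trapped).
    `can_move(direction) -> bool` is assumed to test for obstacles and other agents."""
    for direction in schema:
        if can_move(direction):
            return direction
    # Trapped: no schema move is possible, so generate a random move instead.
    options = [d for d in DIRECTIONS if can_move(d)]
    return random.choice(options) if options else "stay"

# The example from the text: schema north, east, north, east, with east blocked.
blocked = {"east"}
print(next_move(["north", "east", "north", "east"],
                lambda d: d not in blocked))  # -> "north"
```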
Figure 2. An example of an agent’s exploratory memory, which consists of incremental moves derived from the inborn schema, random moves, and associative moves (R.P. stands for relative position). R.P. is kept for housekeeping purposes and is used only by the designer, not the agent.
Each structure, which represents a move, is then placed into an exploratory linked list node. For example, if an agent moves north, the direction of the move, the color of the square the agent has just moved onto, and the square’s relative position from the agent’s starting point are all placed into a move node that is appended to the exploratory linked list, as displayed in Figure 2.

All exploratory moves will first be checked for cyclic activity (cyclical verification) before being appended to the exploratory linked list. To do this, once an agent enters a square (a To move), the agent will verify whether it has entered a new square by comparing the square’s relative position with all relative positions already captured in that agent’s exploratory memory. If any of the stored relative positions match the agent’s current relative position, a cycle flag is set to true, meaning that if the agent returns to that relative position again the cycle flag, being true, will activate a random move. Random moves will be generated until the agent’s next relative position is unique among all relative positions stored in the agent’s exploratory memory. Therefore, an agent’s relative position is responsible for tracking already-identified squares as potential cycle or trap points. An example of a trap point would be when an agent is surrounded by three obstacles. Agents will not move in the direction opposite to the previous move except as a last resort, meaning that if an agent moves north, its next move cannot be south unless it is trapped. So if an agent is surrounded by three obstacles, its exploratory memory shall determine the relative position of each obstacle, ultimately notifying the agent that it is trapped and must move back to where it came from. Thus, an agent’s relative position can be compared to a basic sensory system capable of detecting its immediate surroundings.

Upon an agent’s discovery of food, the agent’s exploratory memory will be ported into the agent’s associative memory. An agent’s associative memory will store each move as a sequence of From and To. Each associative memory move will consist of two incremental moves: where the agent came from and the destination it moved to. Therefore, an associative memory move cannot span more or less than two adjacent squares.
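The cyclical verification step can be sketched as follows, assuming relative positions are tracked as (x, y) offsets from the agent's starting square; the dictionary-based node layout is illustrative, whereas the original design uses a linked list of move nodes.

```python
def is_new_square(relative_position, exploratory_memory):
    """Cyclical verification: a To move is appended only if its relative position has
    not yet been visited on the current path; otherwise a cycle is flagged and random
    moves are generated until an unvisited relative position is reached."""
    visited = {node["relative_position"] for node in exploratory_memory}
    return relative_position not in visited

def append_move(exploratory_memory, direction, square_color, relative_position):
    if is_new_square(relative_position, exploratory_memory):
        exploratory_memory.append({
            "direction": direction,                   # e.g., "north"
            "square_color": square_color,             # e.g., "red"
            "relative_position": relative_position,   # e.g., (0, 1) relative to the start
        })
        return True
    return False  # cycle detected; the caller falls back to a random move
```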
Figure 3. Structure of an agent’s associative memory. The boxes stand for the (B, S) pairs.
Figure 3 displays the From and To concept of an agent’s associative memory. A vector will be used to store all of an agent’s associative memory moves. A vector offers the benefit of dynamic expansion; in other words, a vector can more than double its size on the fly. Therefore, if a path being ported to associative memory is 50 nodes in size and the agent’s associative memory is only 25 nodes in size, the vector will automatically expand to 150 nodes, allowing the inclusion of the 50-node path. Each new From and To move will be added to the vector through an associative memory move node, as displayed in Figure 3. Each move node in an agent’s associative memory will contain the following data:

•	Current direction,
•	Current square type,
•	Previous direction,
•	Previous square type, and
•	Emotional context — updated when the agent finds food.
At the start of an agent’s movement process, the first node of the associative memory vector will not capture the initial From stats, which are attributed to the square the agent starts from. An agent’s emotional context is a quantitative value stored with each move in its associative memory; it is based on the move’s location in relation to food.
The closer a move is to a food square, the higher the emotional context. For example, if an agent moves to a square containing food, the highest emotional context of the path being added to associative memory shall be applied to the previous move. An agent’s emotional context shall correspond to the number of moves within the steps to food. For example, Figure 4 depicts the emotional context for each move at the time an agent finds food. If an agent finds food, only the unique exploratory memory nodes (nodes not already existing in associative memory) shall be added to the agent’s associative memory. If a node defined in exploratory memory matches a node within the agent’s associative memory, only the emotional context is updated: the emotional context of the exploratory node is added to the emotional context of the associative node. Once the emotional context is calculated and the path-to-food nodes are added to the agent’s associative memory, the associative memory is then sorted by emotional context. The sorting mechanism utilized shall be the QuickSort algorithm.

As agents move through the environment and gather food, emotional context will be used by the agent to select the best available moves from its associative memory. For example, if an agent makes a move that is recognized, that is, exists within its associative memory, the agent will select the next move based on the prior move by matching the previous move’s To move with the next move’s From move that offers the highest emotional context. When an agent finds food, that agent’s confused/confident level shall be incremented by 1 and the agent shall then be respawned in a randomly generated location within the environment. Obstacles, agent-occupied squares, and food-occupied squares will be exempt from the agent’s placement. Therefore, an agent can only be placed in an empty or free square.
Figure 4. Emotional context for each move; this occurs at the time an agent finds food
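Putting the porting rules together, a hedged sketch of the emotional-context assignment and merge step might look like this. A plain list sort stands in for the QuickSort named in the specification, and the key layout of the move nodes (including carrying the previous square's data on each exploratory node) is an assumption, not the MASim data structure itself.

```python
def port_to_associative(exploratory_path, associative):
    """Assign emotional context (higher the closer the move is to the food square),
    merge duplicate nodes by adding their contexts, then sort by emotional context."""
    for i, move in enumerate(exploratory_path):
        context = i + 1  # the final move, adjacent to food, gets the highest value
        key = (move["previous_direction"], move["previous_square"],
               move["direction"], move["square_color"])
        if key in associative:
            associative[key] += context   # duplicate node: contexts are added together
        else:
            associative[key] = context    # unique node: appended with its context
    # The specification calls for QuickSort; Python's built-in sort is used as a stand-in.
    return sorted(associative.items(), key=lambda kv: kv[1], reverse=True)
```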
If an agent is traversing a path and recognizes a location (meaning there has been a positive comparison in that agent’s associative memory), then, as stated earlier, that agent shall follow the direction defined within its associative memory. Once the associative memory node is identified and executed, the agent expects to move to an already identified square. If the square the agent moves to is not identical to the associative memory node executed, the agent’s confused/confident level is decremented by 1, which is known as a surprise. An agent’s confused/confident level will start at zero. The confused/confident level shall be displayed for each agent within the simulation interface.

An agent’s pain level is based on collisions made with obstacles. If an agent collides with an obstacle, that agent’s pain level will be incremented by 1. An agent’s pain level cannot be decremented. Each agent’s pain level will start at zero at the beginning of the simulation and shall be displayed within the simulation interface.

When agents collide, each agent shall receive and incorporate the other agent’s associative memory. For example, if agent A and agent B collide, agent A shall add agent B’s associative memory to its own associative memory and vice versa. Once an agent incorporates another agent’s associative memory into its own, the associative memory vector is sorted by emotional context. In addition, during a collision/negotiation, each agent shall reflect the higher confident level of the two agents. For example, if agent A collides with agent B and agent A’s confident level is higher than agent B’s confident level, agent B’s confident level shall equal agent A’s confident level, and vice versa.

Agents shall utilize a turn-based movement system. Each agent shall move its full move, a maximum of five moves, before the next agent may move. For example, agent B shall sit idle until agent A finishes its full move, and so on.
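Before moving on to the environment, here is a hedged sketch of the collision/negotiation rule described above, reusing the dictionary-style associative memory from the previous sketch; the assumption that contexts of duplicate nodes are added during an exchange mirrors the porting rule rather than an explicitly stated exchange rule.

```python
def negotiate(agent_a, agent_b):
    """On collision, both agents incorporate each other's associative memories
    (duplicates merged by adding emotional contexts) and adopt the higher confidence."""
    merged = dict(agent_a["associative"])
    for key, context in agent_b["associative"].items():
        merged[key] = merged.get(key, 0) + context
    agent_a["associative"] = dict(merged)
    agent_b["associative"] = dict(merged)
    # Both agents end up with the confident level of the more confident one.
    top = max(agent_a["confidence"], agent_b["confidence"])
    agent_a["confidence"] = agent_b["confidence"] = top

a = {"associative": {("n", "red", "w", "yellow"): 3}, "confidence": 5}
b = {"associative": {("n", "red", "w", "yellow"): 2}, "confidence": 1}
negotiate(a, b)
print(a["confidence"], b["confidence"], a["associative"])  # 5 5 {... : 5}
```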
Environment

The agent environment will consist of a grid of squares, each containing one of five colors or a food element, where one of the colors is black and is considered an obstacle. Figure 5 displays the square colors.
Square types will be randomly generated by the agent environment class at simulation start-up. The environment grid shall be a fixed dimension, 12x18 squares in size. After the environmental grid colors have been established, food items will be randomly placed. No more than three food items will be placed on the grid. Once a food item is placed it is fixed to that position and cannot be moved for the duration of the simulation. Food items cannot be placed on obstacles.
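A hedged sketch of this start-up procedure follows; the document fixes only the grid size, the obstacle color, and the at-most-three food items, so the non-black color names are illustrative assumptions, and exactly three food items are placed for simplicity.

```python
import random

ROWS, COLS = 12, 18
COLORS = ["black", "red", "yellow", "blue", "green"]  # black = obstacle; other names assumed
MAX_FOOD = 3

def create_environment():
    # Randomly color every square, then place up to three fixed food items on non-obstacle squares.
    grid = [[random.choice(COLORS) for _ in range(COLS)] for _ in range(ROWS)]
    food = set()
    while len(food) < MAX_FOOD:
        r, c = random.randrange(ROWS), random.randrange(COLS)
        if grid[r][c] != "black" and (r, c) not in food:
            food.add((r, c))   # food is fixed for the duration of the simulation
    return grid, food

grid, food_squares = create_environment()
print(len(food_squares), "food items placed")
```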
Figure 5. Elements of agent environment
Classes

Listed below are the main classes that make up the multi-agent simulation program (a sketch of how they might be wired together follows the list):

•	Agent // creates agent object
•	AssocMem // creates associative memory object for agent
•	CreateEnvironment // creates environment object
•	ExploreMem // creates exploratory memory object for agent
•	LogUtility // generates data based on agent confidence, pain, surprises, and associative memory
•	MoveNode // creates move objects — sets To and From moves
•	PathNode // creates path object, which is made up of MoveNode objects
•	RelPosTableNode // tracks agent’s position for cycles
•	SimControl // controller class
•	StartSim // launches SimControl — controller class
•	StopWatch // creates timer object — tracks total running time
•	UserInterface // creates and updates graphical user interface (GUI)
•	XYCoord // creates x and y coordinate object
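The chapter does not show the internals of these classes, so the following is only a speculative, minimal rendering of how a controller in the spirit of SimControl might drive the other pieces in a turn-based loop; every attribute and method here is an assumption and the class bodies are deliberately thin.

```python
import random

class CreateEnvironment:
    def __init__(self, rows=12, cols=18):
        # One character per square color; 'K' stands in for black/obstacle.
        self.grid = [[random.choice("KRYBG") for _ in range(cols)] for _ in range(rows)]

class Agent:
    def __init__(self, schema):
        self.schema, self.confidence, self.pain = schema, 0, 0
        self.exploratory, self.associative = [], {}

class SimControl:
    """Turn-based driver: each agent takes its full move (up to five incremental
    moves) before the next agent moves."""
    def __init__(self, agents, environment):
        self.agents, self.environment = agents, environment

    def run(self, turns):
        for _ in range(turns):
            for agent in self.agents:
                for direction in agent.schema:
                    agent.exploratory.append(direction)  # placeholder for a real move

if __name__ == "__main__":
    env = CreateEnvironment()
    agents = [Agent(["north"]), Agent(["north", "east"])]
    SimControl(agents, env).run(turns=10)
    print([len(a.exploratory) for a in agents])
```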
Figure 6. Depiction of the initial design of the multi-agent simulator. See text for details.
Graphical User Interface

Figure 6 displays the GUI for the MASim. The notations used in the figure are as follows: Cn = confidence level; Pn = pain level; memory = amount of associative memory; schema = inborn schema; STF = current path steps to food; CP = current path.
Agents Lost, Agents Found

In situations where the agents’ inborn schemas are relatively small compared to the size of the environment, interesting phenomena can be observed. Perceptual aliasing then becomes a significant problem. The parameters that we observe, although not perfect, are indicative of these problems. In this section we show a selection of cases produced under such circumstances. Interagent communication generally does not improve the success of the individual agents, as all agents are more or less equally incompetent in the environment, due to their inborn constraints. The importance of this study is to understand the relationship between the parameters measured in the agents — pain, surprise, confidence, and the number of entries in the associative memory.
Figure 7. The confidence, pain, surprise, and associative memory size statistics of a type 1/1 case simulation in MASim (panels: Confidence Chart, Pain Chart, Surprise Chart, and Associative Memory Chart; x-axis: moves, approximately 350 in total; series: Agent1)
The cases in this section will be denoted by N/M, where N denotes the number of agents and M the length of the inborn schema used.

For the 1/1 case simulation (Figure 7), during the course of approximately 350 moves the single agent accumulated a maximum confidence level of 22 and finished with a confidence level of 9. The agent spiked in confidence between move increments 222 and 274 (as represented in Figure 7), which represents the agent using its associative memory to follow a path already traveled. The single agent steadily increased its level of pain. This is mostly attributed to the random environment generation and, upon finding food, random agent placement. In addition, the agent was isolated, without the influence of additional agents. The Surprise Chart displays the agent’s level of confusion. The agent’s surprise level increases heavily due to the random environment generation and,
once food is found, the random agent placement procedures. In addition, the agent is not provided the influence of other agents. The Associative Memory Chart displays the number of times the agent found food (since no other agents were active in this simulation, there were no agent exchanges of memory). In this case, the agent found food three times, increasing its associative memory to approximately 105 nodes.

During the course of approximately 700 moves, the single agent in the 1/2 simulation (Figure 8) accumulated a maximum confidence level of 6 and finished with a confidence level of 1. For most of the simulation the agent had a negative confidence level, with a minimum of -15. The chart also displays the agent gaining more confidence by the end of the simulation. During the course of approximately 700 moves the single agent steadily increased its level of pain. This is mostly attributed to the random environment generation and, upon finding food, random agent replacement. In addition, the agent was isolated, without the influence of additional agents. The pain ratio is approximately one pain per every 5.8 moves. The Surprise Chart displays the agent’s level of confusion. The agent’s surprise level, as displayed by the chart, increases heavily due to the random environment generation and, once food is found, random agent replacement. In addition, the agent is not provided the influence of other agents. The agent’s surprise level ratio is about one surprise per every 8.2 moves. The Associative Memory Chart displays the number of times the agent found food (since no other agents were active in this simulation, there were no agent exchanges of memory). In this case, the agent found food two times, increasing the agent’s associative memory to approximately 148 nodes.

During the course of approximately 1,095 moves (case 1/3, Figure 9), the single agent accumulated a maximum confidence level of 82 and finished with a confidence level of -12. For most of the simulation the agent had a positive
Figure 8. Some of the case 1/2 statistics (panels: Confidence Chart and Associative Memory Chart; x-axis: moves, 2 moves per increment, approximately 700 in total; series: Agent1)
Figure 9. Case 1/3 statistics (panels: Confidence Chart, Pain Chart, Surprise Chart, and Associative Memory Chart; x-axis: moves, 3 moves per increment, approximately 1,095 in total; series: Agent1)
confidence level, with a minimum of -12. During the course of approximately 1,095 moves the single agent steadily increased its level of pain. This is mostly attributed to the random environment generation and, upon finding food, random agent replacement. In addition, the agent was isolated, without the influence of additional agents. The pain ratio is approximately one pain per every 6.4 moves. The Surprise Chart displays the agent’s level of confusion. The agent’s surprise level increases heavily due to the random environment generation and, once food is found, random agent replacement. In addition, the agent is not provided the influence of other agents. The agent’s surprise level ratio is about one surprise per every 4.7 moves. The agent’s surprise level can be broken down into two sections: before move increment 260 (ratio = 9.4 moves per surprise) and after move increment 260 (ratio = 2 moves per surprise). The Associative Memory Chart displays the number of times the agent found food (since no other agents were active in this simulation, there were no agent exchanges of memory). In this case, the agent found food 13 times, increasing the agent’s associative memory to approximately 225 nodes.
Figure 10. Case 1/4 statistics (panels: Confidence Chart and Associative Memory Chart; x-axis: moves, 4 moves per increment, approximately 1,460 in total; series: Agent1)
For the 1/4 simulation statistics, given in Figure 10, during the course of approximately 1,460 moves the single agent accumulated a maximum confidence level of about 5 and finished with a confidence level of -190. For most of the simulation the agent had a negative confidence level, with a minimum of -190. The single agent steadily increased its level of pain. This is mostly attributed to the random environment generation and, upon finding food, random agent replacement. In addition, the agent was isolated, without the influence of additional agents. The pain ratio is approximately one pain per every 5.6 moves. The Surprise Chart displays the agent’s level of confusion. The agent’s surprise level increases heavily due to the random environment generation and, once food is found, random agent replacement. In addition, the agent is not provided the influence of other agents. The agent’s surprise level ratio is about one surprise per every 3.3 moves. The Associative Memory Chart displays the number of times the agent found food (since no other agents were active in this simulation, there were no agent exchanges of memory). In this case, the agent found food 18 times, increasing the agent’s associative memory to approximately 260 nodes.

In the 1/5 case (Figure 11), during the course of approximately 1,825 moves the single agent accumulated a maximum confidence level of about 25 and finished with a confidence level of -245. For most of the simulation the agent had a negative confidence level, with a minimum of -245. The pain ratio is approximately one pain per every 5.8 moves. The agent’s surprise level ratio is about one surprise per every 3.3 moves. The Associative Memory Chart displays the number of times the agent found food. In this case, the agent found food 19 times, increasing the agent’s associative memory to approximately 260 nodes.
Figure 11. Selected 1/5 statistics: confidence and associative table size charts (x-axis: moves, 5 moves per increment, approximately 1,825 in total; series: Agent1)
During the course of approximately 300 moves in the 2/1 simulation (Figure 12), Agent 1 accumulated a maximum confidence level of 12 and finished with a confidence level of 12; Agent 2 accumulated a maximum confidence level of 24 and finished with a confidence level of 23. Agent 2 spiked in confidence between move increments 250 and 300 (as represented in Figure 12), which represents the agent using its associative memory to follow a path already traveled. In addition, the chart displays a memory exchange between Agent 1 and Agent 2 at move increment 220, where Agent 1 shared its memory with Agent 2. After the memory exchange, Agent 1’s confidence stayed fairly steady, while Agent 2’s confidence rose quite steadily. Both agents steadily increased their level of pain, although Agent 1 experienced a significantly smaller amount of pain than did Agent 2. As displayed by the Pain Chart, Agent 1’s pain level seems to level off a bit as its confidence level increases (as displayed in the Confidence Chart in Figure 12). The pain ratio for Agent 1 is approximately one pain per every 7.3 moves. The pain ratio for Agent 2 is approximately one pain per every 4.7 moves. The Surprise Chart displays an agent’s level of confusion. Agent 1’s surprise level stays fairly low throughout the simulation, as opposed to Agent 2’s surprise level. Agent 1’s low surprise level corresponds with a steadily rising confidence. Agent 2’s surprise level increases considerably until the agent’s spike in confidence, which is a result of the memory exchange made from Agent 1 to Agent 2. Agent 1’s surprise level ratio is about one surprise per every 30 moves, which is exceptional; this may be due to fortunate agent placement and relocations or the lack of overall associative memory. Agent 2’s surprise level ratio is about one surprise per every 8.6 moves. The Associative Memory Chart displays the number of times each agent found food or benefited from a memory
Figure 12. Statistics in the 2/1 case (panels: Confidence Chart, Pain Chart, Surprise Chart, and Associative Memory Chart; x-axis: moves, 1 move per increment, approximately 300 in total; series: Agent1 and Agent2)
exchange. In this case, Agent 1 found food or experienced a memory exchange two times, increasing its associative memory to approximately 60 nodes. Agent 2 found food or experienced a memory exchange five times, increasing its associative memory to approximately 122 nodes.

For the 2/5 case (Figure 13), during the course of approximately 880 moves, Agent 1 accumulated a maximum confidence level of 1 and finished with a confidence level of -40; Agent 2 accumulated a maximum confidence level of 1 and finished with a confidence level of -35. Both agents struggled heavily with accumulating confidence. As depicted by the chart, Agent 1 made two major memory exchanges with Agent 2 to no avail; Agent 2’s confidence dropped sharply after both exchanges. Agent 1 was able to raise its confidence, almost entering a positive level, until sharply dropping at the end. This is most likely due to a very challenging environment for the agents. During the course of approximately 880 moves both agents steadily increased their level of pain, both finishing with the same amount. The pain ratio for both agents is approximately one pain per every 5.8 moves. The Surprise Chart displays an agent’s level of confusion. Agent 1’s surprise level is more than 50 points less than Agent 2’s. This should
Figure 13. Statistics on the 2/5 case: Confidence, Pain, Surprise, and Associative Memory charts for Agents 1 and 2 (5 moves per increment, approximately 880 moves)
Agent 1's surprise ratio is about one surprise per every 5.9 moves; Agent 2's is about one surprise per every 4.3 moves. The Associative Memory Chart displays the number of times each agent found food or benefited from a memory exchange. In this case, Agent 1 found food or experienced a memory exchange seven times, increasing its associative memory to approximately 245 nodes, and Agent 2 found food or experienced a memory exchange nine times, increasing its associative memory to approximately 240 nodes. During the course of approximately 363 moves in a 3/3 scenario (Figure 14), Agent 1 accumulated a maximum confidence level of 23 and finished with a confidence level of 4; Agent 2 accumulated a maximum confidence level of 28 and finished with a confidence level of 25; Agent 3 accumulated a maximum confidence level of 26 and finished with a confidence level of -1. What is interesting about this simulation is that all the agents seem to find food or experience a memory exchange fairly early in the simulation, yet they all have very different levels of confidence by the end of it.
Figure 14. Statistics on a 3/3 scenario simulation: Confidence, Pain, Surprise, and Associative Memory charts for Agents 1, 2, and 3 (3 moves per increment, approximately 363 moves)
Agent 1's confidence spikes until halfway into the simulation, then begins to drop steadily into negative territory, only to finish at a positive 4. Agent 2's confidence stays strong the entire simulation, constantly rising. Agent 3's confidence, like Agent 2's, rises constantly for three fourths of the simulation, only to finish at -1. During the course of approximately 363 moves, all agents steadily increased their level of pain, although Agent 3's level finished considerably lower than Agents 1 and 2. The pain ratio for Agent 1 is approximately one pain per every 4.5 moves; for Agent 2, approximately one pain per every five moves; and for Agent 3, approximately one pain per every 6.8 moves, a good ratio considering that Agent 3's associative memory was only initiated by move increment 20. The Surprise Chart displays an agent's level of confusion. Here, Agent 2's surprise level is significantly lower than Agent 1's and Agent 3's. Agent 1's surprise ratio is about one surprise per every 5.4 moves, Agent 2's is about one surprise per every 10.1 moves, and Agent 3's is about one surprise per every 5.3 moves. The Associative Memory Chart displays the number of times each agent found food or benefited from a memory exchange. In this case, Agent 1 found food or experienced a memory exchange six times, increasing its associative memory to approximately 189 nodes.
Figure 15. Case 4/5 statistics: Confidence, Pain, Surprise, and Associative Memory charts for Agents 1 through 4 (5 moves per increment, approximately 475 moves)
Agent 2 found food or experienced a memory exchange eight times, increasing its associative memory to approximately 125 nodes; this represents Agent 2 making heavy use of its associative memory to find food, given the low amount of associative memory accumulated (the associative memory does not contain duplicate move nodes). Agent 3 found food or experienced a memory exchange nine times, increasing its associative memory to approximately 154 nodes. In the 4/5 case (Figure 15), during the course of approximately 475 moves, Agent 1 accumulated a maximum confidence level of 32 and finished with a confidence level of -8; Agent 2 accumulated a maximum confidence level of 35 and finished with a confidence level of 1; Agent 3 accumulated a maximum confidence level of 35 and finished with a confidence level of 2; and Agent 4 accumulated a maximum confidence level of 35 and finished with a confidence level of -3. As the chart displays, Agents 1, 3, and 4, who were all in negative territory at the time, greatly benefited from Agent 2's high-confidence memory exchange halfway through the simulation, although all of the agents dropped severely in confidence afterwards.
During the course of approximately 475 moves, all agents steadily increased their level of pain, although they ended the simulation spread out from one another. The pain ratio for Agent 1 is approximately one pain per every 6.4 moves; for Agent 2, approximately one pain per every 7.7 moves; for Agent 3, approximately one pain per every 5.6 moves; and for Agent 4, approximately one pain per every 10.3 moves. The Surprise Chart displays an agent's level of confusion. Again, as with the Pain Chart, all of the agents end the simulation spread out from one another. Agent 1's surprise ratio is about one surprise per every 2.9 moves, which is very poor; Agent 1 was constantly confused throughout the simulation. This could be attributed to a challenging environment and/or challenging placement and relocation. Agent 2's surprise ratio is about one surprise per every 5.3 moves, Agent 3's is about one surprise per every 4.4 moves, and Agent 4's is about one surprise per every 3.3 moves. Since all the agents have fairly poor surprise ratios, one could surmise that they were placed in a fairly challenging environment. The Associative Memory Chart displays the number of times each agent found food or benefited from a memory exchange. In this case, Agent 1 found food or experienced a memory exchange 13 times, increasing its associative memory to approximately 219 nodes; Agent 2 found food or experienced a memory exchange nine times, increasing its associative memory to approximately 182 nodes; Agent 3 found food or experienced a memory exchange eight times, increasing its associative memory to approximately 234 nodes; and Agent 4 found food or experienced a memory exchange 12 times, increasing its associative memory to approximately 219 nodes.
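The per-agent ratios quoted throughout this chapter appear to be simple quotients of the total number of moves over the number of pain or surprise events; a minimal sketch of that arithmetic, with illustrative names only:

class RatioSketch {
    // e.g., roughly 300 moves at one pain per 7.3 moves implies about 41 pain events
    static double movesPerEvent(int totalMoves, int eventCount) {
        return eventCount == 0 ? Double.POSITIVE_INFINITY : (double) totalMoves / eventCount;
    }
}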
Chapter XI
On a Software Platform for MASIVE Simulations
with Adam Conover, Towson University, USA
Abstract

The Pattern-Aided Simulated Interaction Context Learning Experiment (POPSICLE) Agent Simulator began as a sample project in object-oriented agent programming, but quickly grew into a complete framework for the simulation of agent behavior based upon an associative memory model. The system began its implementation as a Java 2 (J2SE 1.4) application, but was later migrated to a Java 5 (J2SE 1.5) application to utilize the type-safe collections and enumerated types that became available in that Java version. Various design patterns were employed in the development, the most predominant being the Model/View/Controller (MVC) architecture. As we will see later, the system also relies heavily on delegation and observers.
Introduction

In this section, we overview the key notions of the agent model as implemented in this project, followed by a description of the simulator. The system simulates the behavior of autonomous agents in a two-dimensional world (grid) of cells, which may include cells the agent is free to move into or walls that block movement. The goal of an agent is to satisfy specific user-defined drives, such as hunger, thirst, and so forth. An agent may have any number of drives, but only one is active at any given point in time. Additionally, the user populates select world cells with drive satisfiers that are used to satisfy the active drive of any agent entering the cell. In other words, when an agent moves into a cell containing a drive satisfier, the drive is only satisfied if it is the agent's active drive. In the beginning, the agents navigate around the world using a user-defined inherent schema, which is a short series of moves the agent will make by default. Gradually, an agent builds up an internal associative memory table as it explores the environment in search of drive satisfiers. As the agent moves in search of a drive satisfier, observations are made and recorded in the emotional context of the active drive. Once a drive has been satisfied, the recorded observations (leading to drive satisfaction) are committed to the agent's associative memory table. As the agent continues to explore the world, other drives may become active, leading to new observations in new contexts. As the agent continues to build a model of the world in relationship to its drives, it will begin to use its associative memory to plan a route to the drive satisfier. The agent uses current observations to derive expectations from the associative memory table. When a matching observation is found in the table, the agent temporarily abandons its inherent schema and uses the expectation to execute the next series of moves. If the observations made during this next series of moves match another observation in the table, the process continues until the drive satisfier is reached. If, at any time, a subsequent observation does not match the expectation, the agent records a surprise and returns to its inherent schema to continue exploration, all the while continuing to make new observations. Additionally, if the agent cannot make a move because a path is blocked by a wall or world boundary, the agent registers this as pain, skips the move, and continues execution of the schema or expectation.
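A minimal, self-contained sketch of this observe/expect/surprise cycle may help fix the ideas; the class, field, and method names below are illustrative assumptions and not the simulator's actual API, which is described later in the chapter.

import java.util.HashMap;
import java.util.LinkedList;
import java.util.List;
import java.util.Map;

public class StepCycleSketch {

    /** One remembered step: when the agent sees expectedCell, it should make move. */
    static class Step {
        final String move;
        final String expectedCell;
        Step(String move, String expectedCell) {
            this.move = move;
            this.expectedCell = expectedCell;
        }
    }

    // associative memory: observed cell color -> remembered route toward a satisfier
    private final Map<String, List<Step>> memory = new HashMap<String, List<Step>>();
    // user-defined inherent schema, cycled as the default behavior
    private final LinkedList<String> schema =
            new LinkedList<String>(java.util.Arrays.asList("N", "E", "S", "W"));
    private final LinkedList<Step> expectation = new LinkedList<Step>();
    private int surprises = 0;
    private int pains = 0;

    /** Decide the next move given the color of the currently occupied cell. */
    public String nextMove(String observedCell) {
        if (expectation.isEmpty() && memory.containsKey(observedCell)) {
            expectation.addAll(memory.get(observedCell));   // recall a remembered route
        }
        if (!expectation.isEmpty()) {
            Step planned = expectation.removeFirst();
            if (planned.expectedCell.equals(observedCell)) {
                return planned.move;           // expectation confirmed, keep following it
            }
            surprises++;                       // mismatch: record a surprise and
            expectation.clear();               // fall back to the inherent schema
        }
        String move = schema.removeFirst();    // default behavior: cycle the inherent schema
        schema.addLast(move);
        return move;
    }

    /** A move blocked by a wall or world boundary registers pain and is skipped. */
    public void reportBlockedMove() {
        pains++;
    }
}

In this toy version, memory keys a single observation to a stored path of (observation, move) steps; the actual simulator keys its table on richer observation/expectation pairs with per-drive emotional contexts, as described in the sections that follow.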
Interface

Before delving too deeply into the internals of the system, let us look at the system from a user's point of view. Figure 1 shows the four main interface areas in one main window.
Across the top is a small menu bar that holds the File, Edit, and Help menus, which provide the expected options and commands. For example, the File menu contains the Load and Save commands for loading and saving the simulation, under the Edit menu are the standard Cut and Paste options, and under the Help menu are the options for the About dialog and other help-related materials. Under the menu is the top toolbar, which holds the buttons for the primary interaction with the simulation. Each of these buttons will be explained in further detail in the remainder of this section. Also note that, as a feature of Java Swing toolbars, any toolbar can be torn off the interface into its own window if desired by the user. On the left is the master World Tree for displaying any object that exists in the simulation. When the application is first launched, there are no agents and the world is empty, but a single hunger satisfier is created (which can later be renamed), since the simulation must always have at least one drive satisfier in it. The right side of the screen shows the initially empty World View that must be populated by the user. These left and right panes are divided by a movable/collapsible slider, so the tree view can be moved out of the way if desired. A series of buttons along the top launch the dialog boxes that are used to configure the simulation. Along the bottom of the interface, a customizable Cell Color Palette (comprising 10 colored tiles) is used for choosing new free-cell colors when constructing or editing the world grid. The interface imposes a 10-color limit on the palette, but right-clicking on any color tile will display a color selection dialog where the color can be customized. Since it is not possible to edit the world while the simulation is executing, this bottom toolbar is hidden once the simulation begins. The user may configure the components of the system in any order. For example, if the user elects to build the world first, the dialog shown in Figure 2 would be displayed. The user has a number of choices for setting up the initial world. The first consideration is the height and width of the world grid. Though there is no artificial limitation on the world size, a practical upper limit for experimentation seems to be about a 10x10 grid. The graphical user interface (GUI) will attempt to scale the grid to fit within the world view area, but scrollbars will appear if the grid exceeds what can be displayed comfortably. Once the height and width are specified, the user may choose from several creation modes. The All Walls mode simply creates a world that is all wall cells, while All Blank Cells creates a world with all free cells of the default color. The default cell creation color can be chosen by selecting any of the color tiles from the Cell Color Palette prior to generating the world grid. A free cell is defined as any cell that an agent can move into or that can be populated by an entity, such as a drive satisfier. Internally, the creation modes are all implemented as modules that extend an abstract CellGenerator class, so adding new creation modules is a relatively simple task.
Figure 1. Initial user interface
For example, the Random Corridors cell generator first generates a world of solid wall cells and then carves the desired number of connected, random-length corridors (rows or columns of free cells) into the world. Random Corridors generation is done by generating a first corridor in the center of the world and then randomly picking an existing free cell as the starting location for the next corridor. This process continues through the number of iterations designated by the user in the New World dialog. In the random modes, all cell colors are assigned randomly from the Cell Color Palette. The Palette Limit represents the total number of unique cell colors used in the construction of the world and only applies to the random modes. Since the colors used in the random generation modes are chosen from the color palette, the upper limit to this value is 10. In the example dialog box in Figure 2, we have chosen to generate a world of 50% free cells, which will be placed randomly within the world and assigned a random color from the Cell Color Palette. An important thing to note about the purely random cell generation is that there is no guarantee that all cells will be reachable by an agent. For example, it is entirely possible to generate a world with a cell that is surrounded by walls. It is therefore occasionally necessary to edit the world manually in these modes. This may also be the case if too many like colors are grouped together, or if the user simply dislikes the random arrangement.
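A rough sketch of the Random Corridors idea, under the simplifying assumption of a boolean grid (true marks a free cell); the real CellGenerator modules operate on Cell objects, so everything here, including the class name and signature, is illustrative only.

import java.util.ArrayList;
import java.util.List;
import java.util.Random;

public class RandomCorridorsSketch {

    private final Random rng = new Random();

    public boolean[][] generate(int height, int width, int corridors) {
        boolean[][] free = new boolean[height][width];        // start with solid walls
        List<int[]> freeCells = new ArrayList<int[]>();

        carve(free, freeCells, height / 2, width / 2);        // first corridor in the center
        for (int i = 1; i < corridors; i++) {
            int[] start = freeCells.get(rng.nextInt(freeCells.size()));
            carve(free, freeCells, start[0], start[1]);       // next corridor starts on an existing free cell
        }
        return free;
    }

    // Carves a random-length row or column of free cells starting at (row, col).
    private void carve(boolean[][] free, List<int[]> freeCells, int row, int col) {
        boolean horizontal = rng.nextBoolean();
        int length = 2 + rng.nextInt(Math.max(free.length, free[0].length));
        for (int k = 0; k < length; k++) {
            int r = horizontal ? row : Math.min(row + k, free.length - 1);
            int c = horizontal ? Math.min(col + k, free[0].length - 1) : col;
            if (!free[r][c]) {
                free[r][c] = true;
                freeCells.add(new int[] { r, c });
            }
        }
    }
}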
Figure 2. Create new world dialog
The Random Corridors cell generator helps alleviate the problem of unreachable cells, but the user may still wish to perform edits or even completely regenerate the world. To edit the world manually, the user selects any of the cell color tiles (or the wall tile) at the bottom of the screen and then clicks in the desired world cell. Editing or world regeneration can continue until the user is satisfied with the world representation. Figure 3 shows the result of a sample world generation, after a few minor manual tweaks to adjust colors.
Figure 3. Result of world build
Before the simulation can begin, agents and drive satisfiers must be added to the world. As mentioned, the simulation defaults to a single hunger drive satisfier, but we can add as many as we like. Pressing the Conf. Drives button on the top toolbar will display the drive configuration dialog shown in Figure 4. Here we have added a Thirst drive to the list of available drive satisfiers in the simulation. Each drive may also have its own color for representation in the world grid. When a new drive is created, it is assigned a random color, but the color can be changed by a simple right-click on the displayed color, which presents the user with a color chooser dialog. Note that the distinction between drives and satisfiers in the simulation is mostly dependent on the context. A drive is a motivator of agent behavior, whereas a satisfier is an object in the world that is capable of satisfying the agents' drives. Due to the tight (generally one-to-one) correlation between drives and satisfiers, the terms are often used in conjunction. Once the drives have been created to the user's satisfaction, the agents may then be configured. As mentioned, it does not really matter in what order configuration takes place, but newly created agents will only see the drives that exist in the simulation when the agent is created. This was an intentional design decision for two reasons: (1) propagating retroactive drive changes to the existing agents introduces unnecessary programmatic complexities that do not benefit the simulation, and (2) new drives can be created after the creation of one agent but before the creation of the next, allowing for experiments where certain drive satisfiers remain unknown to certain agents. This functionality may change in the future by propagating master drive list updates to all agents and setting the default emotional context to 0 for that particular drive in the agent. Creating a new agent is just a matter of pressing the Add Agent button. Like the drive satisfiers, the agent's initial color will be chosen randomly, but it can easily be changed by clicking on the Agent Color button in the Agent Configuration dialog. Figure 5 shows all of the dialogs involved in adding a new agent and changing its default color. In this figure, we have already added Agent 1 and are now adding a second agent, Agent 2, and changing its default (random) color.
Figure 4. Drive configuration dialog
Figure 5. Adding a new agent
The Agent Configuration dialog itself requires a bit of explanation. There are numerous fields on the dialog for configuring the various user-definable attributes of the agent (a sketch gathering these attributes into a single configuration object follows the list):

• Agent Name determines how the name will be displayed in association with the agent.
• The Context field determines the number of surrounding cells that are examined by the sensors when making an observation. (The current version has only been implemented for context 0, meaning that only the current cell is examined with each observation. The ability to observe beyond the current cell will be added in a later version of the simulator.)
• Schema represents the agent's default inherent schema.
• Size determines the number of moves in a randomly generated schema and is only relevant when the Randomize button is pressed. The user may manually enter a schema of arbitrary length.
• The Randomize button builds a randomized schema of the length entered in the Size field.
• Agent Color displays a button showing the color of the newly generated agent. Pressing this button displays the Choose Color for Agent Color dialog, where a new user-defined color can be chosen.
• Memory Size is the maximum number of entries in the agent's associative memory table. This effectively allows configuration of how much the agent can remember.
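As a rough illustration, these attributes could be gathered into a single configuration object, much like the AgentConfiguration class described later in this chapter; the field names and types below are assumptions, not the simulator's actual definitions.

import java.awt.Color;
import java.util.List;

class AgentConfigurationSketch {
    String agentName;        // display name of the agent
    int context;             // observation radius (0 = only the current cell is examined)
    List<String> schema;     // the inherent schema, e.g. "N", "E", "S", "W"
    int schemaSize;          // length used when a schema is randomly generated
    Color agentColor;        // rendering color in the world grid
    int memorySize;          // maximum number of associative memory entries
}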
The table at the bottom of the Agent Configuration dialog lists all the drives available to the agent. The user may weight each drive in terms of how important that drive is to the agent. In the sample screen, the drives are weighted as percentages adding up to one. However, this is just done for clarity and is not strictly necessary: all values are simply totaled, and the internal weight of each individual drive is calculated relative to the sum of all drives. Once the agent is configured and the OK button is pressed, the agent is added to the simulation. A representation of the agent will be added to the left tree view of the world, and an Agent Tab will be created where real-time agent stats can be viewed. Once an agent is created, there is currently no way for it to be deleted without starting a new simulation, although right-clicking on any object in the World Tree allows its properties to be altered at any time before the simulation actually begins.
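The drive weighting just described amounts to a simple normalization; a hedged sketch (names are assumptions):

import java.util.LinkedHashMap;
import java.util.Map;

class DriveWeightSketch {
    // Entered values need not sum to one; each weight is taken relative to the total,
    // so entering Hunger = 3 and Thirst = 1 yields weights of 0.75 and 0.25.
    static Map<String, Double> normalize(Map<String, Double> entered) {
        double total = 0.0;
        for (double v : entered.values()) {
            total += v;
        }
        Map<String, Double> weights = new LinkedHashMap<String, Double>();
        for (Map.Entry<String, Double> e : entered.entrySet()) {
            weights.put(e.getKey(), e.getValue() / total);
        }
        return weights;
    }
}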
Running the Simulation

Once the environment (world, agents, and drives) has been customized to the user's satisfaction, the world objects must actually be added to the world. Any object from the World Tree can be dragged onto the world to place it in a cell. If the user wishes to remove the contents of a cell, the cell must be replaced by clicking on a color tile and then clicking on the desired world cell; this replaces the cell with a fresh blank cell. The system will prohibit agents or satisfiers from being dropped on walls and will prevent more than one instance of an agent from being added to the world. Any number of satisfier instances may be added to the simulation, and multiple satisfiers may occupy the same cell. However, only one instance of a specific satisfier may be added to a cell, as it does not make sense to have two thirst satisfiers in the same cell, for example. Figure 6 shows an example world after complete setup.
Figure 6. Completed simulation setup
The simulation may be run in Single Step mode or in a continuous mode by pressing the Start button. The Delay field in the top toolbar determines the delay in milliseconds between movements in the continuous run mode. While the World View tab is active, we can watch the agents wander around their environment, but we see the inner workings of the mind of an agent by clicking on one of the tabs associated with a particular agent. Figure 7 shows the World Tree view collapsed out of the way and the Agent View tab for Agent 2. This screen presents the details of the inner workings of the agent. The left side of the screen presents the Agent Statistics area, which shows a user-sortable table displaying the various activity counters within the agent, while the Buffer Stack shows the current set of moves in the executing schema in real time. The bottom element of the stack shows the color of the cell that the agent currently inhabits. The right-hand side of the screen shows the Associative Memory Buffer and Associative Memory tabs for viewing the associative memory states in real time. Each entry displayed in the Associative Memory Buffer represents the history of move sets as progress is made toward a satisfier; the entries in the buffer are a history of Buffer Stacks. In other words, each movement set from the Buffer Stack becomes an entry in the buffer.
Figure 7. Associative memory buffer in action
To aid comprehension, each item in the buffer table displays is colorized to correspond to the observed color of the cell. In Figure 7, the Buffer Stack contains the observations North → magenta, South → cyan, South → green, West → yellow. The last line in the Associative Memory Buffer display shows the sequence using the same colorizations. As the agent navigates through the world and eventually locates a drive satisfier for an active drive, the Associative Memory Buffer is committed to the agent's actual associative memory in the form of movements and expectations. The Associative Memory tab view is the heart of the simulation. In this view, all mappings between movements (sets of observations) and expectations (again, sets of observations) can be observed, along with the associated emotional context for all drives possessed by the agent. Figure 8 shows a snapshot of the associative memory for Agent 1, in addition to another handy GUI feature: the ability to tear off the world tab for viewing in a separate window. The ability to view both the world and the internal elements of the agents proves to be invaluable for observing the complete behavior of the system, in either step mode or a continuous run. It should also be noted that while viewing the agent details, the most relevant aspects of the agent's configuration are displayed across the bottom of the display.
Figure 8. Associative memory and tear-off
Shown here are the schema for the agent, the default observation context, and the size of the associative memory.
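The relationship between the Buffer Stack, the Associative Memory Buffer, and the associative memory described above might be sketched roughly as follows; the data structures and method names are assumptions, not the simulator's classes.

import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;

class MemoryBufferSketch {
    private final LinkedList<String> bufferStack = new LinkedList<String>();     // current move set
    private final List<List<String>> buffer = new ArrayList<List<String>>();     // history of Buffer Stacks
    private final List<List<String>> associativeMemory = new ArrayList<List<String>>();

    void record(String moveAndObservation) {
        bufferStack.add(moveAndObservation);              // e.g. "North -> magenta"
    }

    void moveSetCompleted() {
        buffer.add(new ArrayList<String>(bufferStack));   // the stack becomes one buffer entry
        bufferStack.clear();
    }

    void driveSatisfied() {
        associativeMemory.addAll(buffer);                 // commit the whole history to memory
        buffer.clear();
    }
}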
System Architecture

Here we address the implementation aspects of the project. The system consists of 77 Java source objects, which are divided into packages as shown in Figure 9. The top-level package contains several classes that are relevant to the whole system and three subpackages: agent, world, and GUI. The world package contains all classes necessary for world representation, while the GUI package contains all classes necessary to visually render and interact with the environment. The agent package contains several subpackages for dealing with various aspects of agent management: sensors, motors, communicator, drives, inherentSchema, and associativeMemory. Table 1 briefly describes the contents of each of the packages shown in Figure 9.
Figure 9. Package hierarchy: edu.towson.popsicle, with subpackages agent (sensors, motors, communicator, drives, inherentSchema, associativeMemory), world, and gui
Control of the system is broken down into four primary controllers, each dealing with a corresponding subsystem, as shown in Table 2. Each controller is responsible for interaction with its corresponding subsystem (GUI, Simulation, World, or Agent). Each controller also implements event listeners to respond to various events produced by other parts of the system. For example, when an agent moves, an AgentMoved event is generated by the AgentController, which is listened for by the SimulationController. The SimulationController processes the event by querying the WorldController for information about the cell the agent has moved into and firing an update event to the GuiController to allow the user interface to reflect the movement made. Figure 10 shows the UML sequence diagram of the startup sequence of the simulation. When the AgentSimulation is run, the GuiController is created first, followed by the SimulationController, which in turn launches the WorldController and AgentController. Only one instance of any controller exists in the simulation. The world consists of a grid of abstract Cell objects that may be realized as a concrete instance of either a FreeCell or a Wall. (To keep the agent within the bounds of the experiment area, the entire visible grid is implicitly bounded by nonvisible Wall objects.) Each Cell contains a CellColor attribute that determines the rendering color of the cell. The color of a cell provides the agents' sensory input, which they use to build their associative memory tables as they explore the environment. In the current implementation of the simulator, the color of a wall is not used, and the wall cell is rendered with a simple brick graphic. However, the decision was made to allow walls to have color (since the color attribute is declared in their superclass, Cell) because later versions of the agent artificial intelligence (AI) may wish to consider other attributes of boundaries aside from simply their existence.
Table 1. Package summary

edu.towson.popsicle: Classes related to the system as a whole; this includes the main AgentSimulation class as well as the classes for the internal coordinate system, direction mapping, logging, and so forth.

edu.towson.popsicle.agent: The top-level package for the agent. The primary agent controller, the agent configuration objects, agent statistics data structures, and so forth, are all maintained in this package.

edu.towson.popsicle.agent.associativeMemory: This large package contains the classes necessary to implement the agent's associative memory system, including the associated buffers and renderers.

edu.towson.popsicle.agent.communicator: Provides a communication infrastructure for interagent communication.

edu.towson.popsicle.agent.drives: Manages agent drives and drive collections.

edu.towson.popsicle.agent.inherentSchema: The data structures for the representation of an agent's inherent schema.

edu.towson.popsicle.agent.motors: Agent extension classes for agent motors, which define how the agent is able to move in the environment. This package was added to simulate the actual movement behavior of a physical robot. The current implementation simply moves the agent north, south, east, or west; additional motors could be added, such as Move Forward, Turn Left, and so forth.

edu.towson.popsicle.agent.sensors: Agent extension classes for agent sensors. The sensors allow the agent to sense different types of objects in the environment: cell color, satisfiers, other agents, and so forth. Any type of object added to the simulation should have a corresponding sensor.

edu.towson.popsicle.gui: Code for rendering the user interface.

edu.towson.popsicle.gui.images: All image textures used in the GUI.

edu.towson.popsicle.world: World representation, cells, entities, and so forth.
In Figure 5 of the previous section, we looked at the AgentConfiguration dialog. Each agent has an associated AgentConfiguration class that is configured by this dialog before the simulation begins. The class maintains the agent attributes, such as its name, inherent schema, and other static attributes. Once the simulation begins, no agent's configuration object should change. An AgentStats class exists to track the dynamic elements of the agents, such as the number of surprises, and so forth. Each agent also maintains a DriveManager that coordinates the active drives and goal direction.
Table 2. Controller overview

GuiController:
• Controls all interaction with the user interface
• Responds to notification events when the internal state of the simulation is updated
• When constructing a new world, generates the events necessary to populate the world with agents and entities

SimulationController:
• Oversees the coordination of the autonomous agents and the entities within the world

WorldController:
• Responsible for maintaining all static entities in the world, such as walls, cell colors, and drive satisfiers

AgentController:
• Responsible for moving the agents within the actual world
• Coordinates all agent activity, such as environment sensing and communication
• All agents share a single controller
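The AgentMoved example described earlier can be sketched as a small listener chain; the interfaces and method names here are illustrative assumptions, not the actual controller API.

interface AgentMovedListener {
    void agentMoved(String agentName);
}

interface SimulationUpdateListener {
    void cellUpdated(String agentName, String cellColor);
}

class WorldControllerSketch {
    String colorOfCellOccupiedBy(String agentName) {
        return "green";   // placeholder: the real controller looks this up in the world grid
    }
}

class SimulationControllerSketch implements AgentMovedListener {
    private final WorldControllerSketch world;
    private final SimulationUpdateListener gui;   // typically the GUI controller

    SimulationControllerSketch(WorldControllerSketch world, SimulationUpdateListener gui) {
        this.world = world;
        this.gui = gui;
    }

    // Called by the agent controller when an agent has moved.
    public void agentMoved(String agentName) {
        String color = world.colorOfCellOccupiedBy(agentName);   // query the world state
        gui.cellUpdated(agentName, color);                       // let the view redraw the cell
    }
}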
Figure 10. Startup UML sequence diagram
The DriveManager is responsible for determining which drive is active at any given moment. The agent's AssociativeMemory subsystem contains several classes implementing the agent's memory system. Figure 11 shows a UML class diagram of the essential class associations. Though many more supporting classes exist, these are the core of the system. The AgentSimulation is the main entry point into the program, and aside from some initial startup housekeeping, it does little more than instantiate the GuiController and the SimulationController.
Figure 11. High-level UML class diagram: AgentSimulation, GuiController, SimulationController, WorldController, AgentController, AgentEventDispatcher, the AssociativeMemory subsystem, World, Cell, FreeCell, Wall, CellColor, CellGenerator, CellMapper, Agent, AgentConfig, AgentStats, DriveManager, DriveInstance, InherentSchema, Entity, and the associated table models
The GuiController is responsible for creating and updating the actual Java Swing interface and responding to user events, such as agent creation, world population, and so forth. The SimulationController is responsible for managing everything related to the simulation itself. The GuiController also implements a SimulationEventListener interface, which allows for a clean separation between the GUI and the rest of the system. Since the GUI needs to remain synchronized with the state of the running simulation, the SimulationController acts as an event source, and the GuiController acts as an event sink, updating the GUI in response to simulation events. The only direct communication (i.e., not through an interface) between the GuiController and the SimulationController is during initial world construction, simply to minimize the complexity of the one-time construction of the initial environment. As can be seen in Figure 11, the World is an aggregation of abstract Cell objects. Each cell is either a Wall or a FreeCell. Each FreeCell maintains a list of the Agents and Entities contained within. The association between Agent and FreeCell is actually bidirectional; in other words, the agent always holds a
reference to the cell that it is currently occupying. Otherwise, every time an agent wished to move, all the world cells would have to be searched for the agent. Maintaining a bidirectional mapping allows the agent’s sensors and motors to query the current cell quite easily. When the world grid is first generated, the SimulationController instructs the WorldController to generate the environment using the CellGenerator discussed previously. The cells that comprise the agents’ world are stored as a graph; each cell possessing links to its four neighbors, which can be viewed as a four-way linked list. Therefore, any given Cell object in the world can be said to know all of its neighbors. No absolute coordinates are ever used to reference the individual cells. One reason for this is that no agent should ever have knowledge of its absolute position in the world — all positions should be inferred by observation/expectation pairs without regard to absolute coordinates. The other reason deals with the implementation of movement and observation. Representing the world as a graph structure allows for simple object queries to find adjacent cells. For example, currentCell.getNorth() returns the cell directly to the north of the currently occupied cell, while currentCell.getNorth().getNorth() returns the cell two cells north of the current cell. Recursive queries then become trivial when implementing more elaborate sensors or motors. Additionally, the graph representation actually allows for interesting potential future extensions to the simulation. For example, the world could extend into multiple dimensions or even be linked to other worlds running in other simulations. The exploration of the potential value of these more esoteric possibilities may come at a later date.
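A minimal sketch of the four-way linked cell graph and the kind of neighbor query mentioned above; only getNorth() comes from the text, and the remaining names are assumptions.

class CellSketch {
    private CellSketch north, south, east, west;
    private final String color;

    CellSketch(String color) {
        this.color = color;
    }

    void linkNorth(CellSketch neighbor) {    // wire the bidirectional north/south link
        this.north = neighbor;
        neighbor.south = this;
    }

    CellSketch getNorth() { return north; }
    CellSketch getSouth() { return south; }
    CellSketch getEast()  { return east; }
    CellSketch getWest()  { return west; }
    String getColor()     { return color; }

    // A recursive neighbor query: the cell n steps to the north, with no
    // absolute coordinates involved anywhere.
    CellSketch northBy(int n) {
        if (n <= 0) {
            return this;
        }
        return (north == null) ? null : north.northBy(n - 1);
    }
}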
Agent Control

As mentioned, each agent is controlled by the AgentController object, which maintains a reference to all agents within the simulation. All agent movement is synchronized by an AgentMovementManager nested class within the agent controller. AgentMovementManager (which extends java.util.TimerTask) is used to coordinate the movements of the agents and instructs each agent to process its environment after each movement. The movement delay is dynamic and can be altered while the simulation is running by changing the Delay field in the interface. At each movement step, the agent is moved into the next cell, based upon the inherent schema or associative memory, as discussed previously. At each processing step, the agent's sensors are checked against the active Drive, and the AgentStats and associative memory system are updated accordingly. As shown in Figure 11, the SimulationController acts as a listener for events generated by the agents by implementing the AgentEventListener interface and registering itself with every agent.
The AgentEventDispatcher is responsible for routing events to their proper destination, and each agent maintains its own event dispatcher. Any object wishing to receive events from the agents passes an AgentEventListener instance, along with the desired event class token, to the dispatcher's addEventHandler(…) method. All objects implementing AgentEventListener must implement the method handleEvent(AgentEvent event), which handles the AgentEvent object of interest. The current implementation only concerns itself with a few events and one listener, but the dispatch model was used to allow for easy extensibility. Extensibility and class tokens will be discussed in the next section. Not all events require an event dispatcher, however. Each table in the AssociativeMemory object, the AssociativeMemoryBuffer object, and the AgentStats object has a corresponding model object that implements javax.swing.table.TableModel. The Java Swing architecture allows views of the models to exist in the GUI, which update themselves automatically with changes to the underlying data structures. For example, when the associative memory buffer is rendered, the drive columns are pulled directly from the agent's available drive list, making the table display completely dynamic. The use of models and model listeners, as defined in the Java Swing API, allows for the dynamic update of the GUI simply by creating the appropriate GUI components and registering the corresponding models. When the internal state of an agent changes, the table models fire the appropriate javax.swing.event.TableModelEvents, causing the GUI to update automatically, displaying the new values in the table.
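The registration just described, keyed by an event class token, might look roughly like this; only addEventHandler(…) and handleEvent(AgentEvent event) are named in the text, and the rest is an illustrative assumption.

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

class AgentEvent { }
class AgentMovedEvent extends AgentEvent { }

interface AgentEventListener {
    void handleEvent(AgentEvent event);
}

class AgentEventDispatcherSketch {
    private final Map<Class<? extends AgentEvent>, List<AgentEventListener>> handlers =
            new HashMap<Class<? extends AgentEvent>, List<AgentEventListener>>();

    void addEventHandler(Class<? extends AgentEvent> eventType, AgentEventListener listener) {
        List<AgentEventListener> list = handlers.get(eventType);
        if (list == null) {
            list = new ArrayList<AgentEventListener>();
            handlers.put(eventType, list);
        }
        list.add(listener);                // e.g. register for AgentMovedEvent.class
    }

    void dispatch(AgentEvent event) {
        List<AgentEventListener> list = handlers.get(event.getClass());
        if (list != null) {
            for (AgentEventListener l : list) {
                l.handleEvent(event);      // route the event to every registered listener
            }
        }
    }
}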
World Rendering, Representation, and Generation

When the user (or CellGenerator) indicates whether a wall or a free cell should be constructed, the request is passed to the SimulationController, which in turn delegates the request to the WorldController. This was done to allow for maximum separation between the GUI and the simulation engine. For example, in a loosely coupled design, the GUI should not be responsible for creating new objects that belong to the world. The GUI merely instructs the simulation to construct an object of the desired type and indicates where it should be placed in the world. One way of accomplishing this would be to have unique methods for the construction of each type of object, in which calls would be made to createFreeCell(…) or createWall(…). However, this in itself introduces a degree of coupling that we did not want.
If a new cell type is ever created, then new methods would have to be added to accommodate its construction. The other option for cell creation is the use of a special CellType enumeration class that contains the tokens for the cell types. When a new cell is to be constructed, a call to a single method, createCell(CellType type,…), is made. For example, the GUI might make a call like this: createCell(CellType.WALL,…). The addition of a new cell type would just require a new entry in the CellType enumeration and the appropriate handlers added to the WorldController. This was the technique used in the first (J2SE 1.4) iteration, but it still had its limitations. In the migration to J2SE 1.5, an even better solution presents itself, which will be discussed in the next section. Due to the loose coupling between the GUI and the core engine, the GUI rendering of the world cells requires the use of a CellMapper object. This object maps all world Cell objects to visually rendered CellComponent objects that extend javax.swing.JButton. The GUI representation of the grid of CellComponents is handled by creating a javax.swing.JPanel with a java.awt.GridLayout layout manager of the user-specified dimensions. Each CellComponent is then added to the panel, and each is responsible for drawing the contents of its cell as the simulation executes. When the GuiController receives an AgentMoved event via the SimulationEventListener, the relevant CellComponents are redrawn. The separation between the GUI's concept of a cell and the agent/world concept allows for much greater flexibility in executing the simulation. Eventually, it will be possible to script the creation of the environment and run the simulation completely detached from the local GUI. For example, the client could run as an applet on a client machine while the simulation itself runs on a separate server, or the simulation could run while simply logging data to a file or database.
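A rough sketch of the CellMapper/GridLayout arrangement described above; the class names follow the text, but the fields and methods are assumptions.

import java.awt.GridLayout;
import java.util.HashMap;
import java.util.Map;
import javax.swing.JButton;
import javax.swing.JPanel;

class CellComponentSketch extends JButton {
    // The real CellComponent paints the cell color, walls, agents, and satisfiers.
}

class CellMapperSketch {
    private final Map<Object, CellComponentSketch> mapping =
            new HashMap<Object, CellComponentSketch>();

    JPanel buildView(Object[][] worldCells) {
        int rows = worldCells.length;
        int cols = worldCells[0].length;
        JPanel panel = new JPanel(new GridLayout(rows, cols));
        for (int r = 0; r < rows; r++) {
            for (int c = 0; c < cols; c++) {
                CellComponentSketch comp = new CellComponentSketch();
                mapping.put(worldCells[r][c], comp);   // world cell -> rendered component
                panel.add(comp);
            }
        }
        return panel;
    }

    void redraw(Object worldCell) {
        CellComponentSketch comp = mapping.get(worldCell);
        if (comp != null) {
            comp.repaint();   // e.g., after an AgentMoved event touches this cell
        }
    }
}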
Development Environment and Java 1.5 Migration

This section focuses on the technical details and challenges of the implementation and the migration of the product to Java 1.5. When the Agent Simulation project began, J2SE 1.4 (Java 1.4.2) was the current stable release, and NetBeans was the Integrated Development Environment (IDE) of choice. Figure 12 shows an example of the project workspace. Throughout the development life cycle, the project has been maintained under the popular GPL-licensed versioning system, the Concurrent Versions System (CVS)
Figure 12. Development environment
(CVS, n.d.). CVS was chosen due to its maturity, its high level of integration into most modern IDEs, and its sophisticated team development and change documentation features. For example, CVS has the capability to review any changes to any source file throughout the development cycle, noting who made the change, when it was made, why it was made, and so forth. An example of NetBeans/CVS integration is shown in Figure 13. On this screen, a partial history of the agent.java class is represented. Clicking on a revision displays the comments associated with the file commit. Additionally, file versions can be visually compared to trace back through any code alterations. The CVS server was also running StatCVS to provide near real-time metrics of ongoing development. StatCVS is a Web-based metrics system that displays various informative charts, graphs, and summaries of the development activity. Figure 14 shows a collage of some of the various information graphs provided by StatCVS. Moving clockwise from the upper-left image, these show the number of lines of code versus time, the ratio of code changes per package for a given user, average activity based upon the time of day, average activity based upon the day of the week, the ratio of code modifications to code additions, and (center) an aggregated scatter plot of changes per user over time. The need for type-safe collections and enumerations in many areas of the program led to the use of many of the traditional Java design pattern hacks to emulate these language features, often leading to overly verbose code.
Figure 13. Project management with CVS
Figure 14. StatCVS collage
The code in Figure 15 is an excerpt from the Direction class, which represents North, South, East, and West for the world and the agents. Converting the Direction class to J2SE 1.5 (Java 1.5.0) semantics greatly simplified the code, and the necessary refactoring throughout the rest of the program took less than 15 minutes. The addition of a true enum data type also eliminates the need for a custom comparator, since enums are inherently comparable.
Figure 15. J2SE1.4 Direction class implementation
public class Direction implements Comparable {

    String humanString;

    public static final Direction NORTH = new Direction.NORTH();
    public static final Direction SOUTH = new Direction.SOUTH();
    public static final Direction EAST = new Direction.EAST();
    public static final Direction WEST = new Direction.WEST();

    public String toString() {
        return (this.humanString);
    }

    public String shortString() {
        return (this.humanString.substring(0, 1));
    }

    /************ "Enumerated Types" **************/
    private Direction() { }                  // no-arg constructor used by the nested subclasses

    private Direction(String humanString) {
        this.humanString = humanString;
    }

    private static class NORTH extends Direction {
        public NORTH() { this.humanString = "North"; }
    }
    private static class SOUTH extends Direction {
        public SOUTH() { this.humanString = "South"; }
    }
    private static class EAST extends Direction {
        public EAST() { this.humanString = "East"; }
    }
    private static class WEST extends Direction {
        public WEST() { this.humanString = "West"; }
    }
    private static class DOWN extends Direction {
        public DOWN() { this.humanString = "Down"; }
    }

    public int compareTo(Object o) throws ClassCastException {
        // Omitted comparison code ...
        return 0;
    }
}
Figure 16 is an example of the new implementation. Java 1.5 also eliminated the need to keep track of the different cell types in an awkward and difficult-to-maintain way. As discussed previously, when new cells are created, they must be one of two types: a FreeCell or a Wall. Previously, when the user (or CellGenerator) indicated which type of cell was being created, the cell had to be one of two enumerated types to help ensure type safety. Employing the traditional pre-J2SE 1.5 enumeration pattern, a portion of the CellType code looked like Figure 17.
Figure 16. J2SE1.5 direction enumeration

public enum Direction {
    NORTH("North"), SOUTH("South"), EAST("East"), WEST("West");

    String humanString;

    Direction(String humanString) {
        this.humanString = humanString;
    }

    public String toString() {
        return (this.humanString);
    }

    public String shortString() {
        return (this.humanString.substring(0, 1));
    }
}
In Java 1.5, Class can be parameterized, so Cell subtypes can be treated as type tokens. Type tokens are essentially parameters passed to methods that represent the type of an object, instead of a normal object reference. In Java, .class evaluates to the class from which the object is instantiated, thus representing the type of the object. Since both FreeCell and Wall extend from Cell, we can now create a method that takes a type token as a parameter and eliminates the need for a special enumeration of valid types. For example, in J2SE 1.4, if we wished to refer to an arbitrary object of type Wall, we would need to invoke a method such as the one in Figure 18. Using Java 1.5, we can eliminate the enumeration and change the structure to that of Figure 19.
Figure 17. J2SE1.4 CellType object implementation

public static class CellType {

    public static final CellType WALL = new CellType.WALL();
    public static final CellType FREE = new CellType.FREE();

    /************ "Enumerated Types" **************/
    private static class WALL extends CellType { }
    private static class FREE extends CellType { }
}
The new style uses a parameterized type to restrict the parameter in the callee method to any class that inherits from Cell. Though the total amount of code changes very little in this case, this kind of structure allows for much greater maintainability and code clarity, since an additional enumerated type does not have to be maintained just to designate valid object types; we simply let the compiler handle it for us. Other features of Java 1.5 provide for substantially more significant code reductions and simplifications. The code in Figure 20 is a sample of the original Java 1.4 implementation of the DriveList data structure class, which simply maintains a list of Drives in the system. For the sake of example, this class has been edited for conciseness by eliminating several utility methods. The class extends ArrayList, but to reduce the need for explicit casting of values being read from the ArrayList, a custom nested Iterator class was implemented. This proves to be a less than ideal solution, since the extension of Iterator requires the implementation of a next() method with an Object return type. Java 1.5 now supports covariant return types that allow for narrowing of the return type of an overridden method, but type-safe iterators are a superior solution in this case. If we wish to enforce some degree of type safety and minimize the need for explicit casting (returning only Drive objects), we must call a special version of next() called nextDrive(), as illustrated in Figure 20. An example of the usage
Figure 18. J2SE1.4 CellType usage example

void callingMethod() {
    createNewCell(CellType.WALL, …);
}

void createNewCell(CellType cellType, …) {
    // create the cell …
}
Figure 19. J2SE1.5 usage of type tokens

void callingMethod() {
    createNewCell(Wall.class, …);
}

void createNewCell(Class<? extends Cell> cellType, …) {
    // create the cell …
}