http://journals.wiley.com/cplx
Editor-in-Chief PETER SCHUSTER Institut für Theoretische Chemie, Universität Wien Währingerstrasse 17, A-1090 Wien Austria e-mail /
[email protected] phone / 43-1-4277-527-36 fax / 43-1-4277-527-93
Executive Editor ALFRED W. HÜBLER University of Illinois, Department of Physics 1110 West Green Street Urbana, IL 61801-3080 e-mail /
[email protected] phone / (217) 244-5892 fax / (217) 244-8371
Aims and Scope Complexity is a bi-monthly, cross-disciplinary journal focusing on the rapidly expanding science of complex adaptive systems. The purpose of the journal is to advance the science of complexity. Articles may deal with such methodological themes as chaos, genetic algorithms, cellular automata, neural networks, and evolutionary game theory. Papers treating applications in any area of natural science or human endeavor are welcome, and especially encouraged are papers integrating conceptual themes and applications that cross traditional disciplinary boundaries. Complexity is not meant to serve as a forum for speculation and vague analogies between words like ‘‘chaos,’’ ‘‘self-organization,’’ and ‘‘emergence’’ that are often used in completely different ways in science and in daily life.
Editorial Coordinator TAYLOR BOWEN P.O. Box 263 Charlottesville, VA 22902 e-mail /
[email protected] phone / 434-977-5494
Associate Editors MARCUS FELDMAN Stanford University ATLEE JACKSON University of Illinois MARTIN SHUBIK Yale University JOSEPH TRAUB Columbia University WOJCIECH ZUREK Los Alamos National Laboratory
Complexity at Large Editor CARLOS GERSHENSON Universidad Nacional Autónoma de México
Editorial Board PHILIP W. ANDERSON Princeton University KENNETH J. ARROW Stanford University W. BRIAN ARTHUR Santa Fe Institute GREGORY CHAITIN IBM Research Division GEORGE COWAN Santa Fe Institute JIM CRUTCHFIELD University of California Davis MANFRED EIGEN Max Planck Institute JOSHUA EPSTEIN The Brookings Institution WALTER FONTANA Harvard University MURRAY GELL-MANN Santa Fe Institute ELLEN H. GOLDBERG Santa Fe Institute PETER GRASSBERGER University of Wuppertal GEORGE GUMERMAN School for Advanced Research, Santa Fe W. DANIEL HILLIS Consultant JOHN HOLLAND University of Michigan C.S. HOLLING University of Florida ERICA JEN Santa Fe Institute KUNIHIKO KANEKO University of Tokyo STUART KAUFFMAN University of Calgary DAVID LANE Università degli Studi di Modena SETH LLOYD Massachusetts Institute of Technology HAROLD MOROWITZ George Mason University RICHARD G. PALMER Duke University ALAN PERELSON Los Alamos National Lab DAVID PINES University of Illinois L.M. SIMMONS, JR. MIGUEL VIRASORO The Abdus Salam ICTP GÉRARD WEISBUCH École Normale Supérieure
Copyright and Photocopying
Copyright © 2011 Wiley Periodicals, Inc. All rights reserved. No part of this publication may be reproduced, stored or transmitted, in any form or by any means without the prior permission in writing of the copyright holder. Authorization to photocopy items for internal and personal use is granted by the copyright holder for libraries and other users registered with their local Reproduction Rights Organisation (RRO), e.g. Copyright Clearance Center (CCC), 222 Rosewood Drive, Danvers, MA 01923, USA (www.copyright.com), provided the appropriate fee is paid directly to the RRO. This consent does not extend to other kinds of copying such as copying for general distribution, for advertising or promotional purposes, for creating new collective works or for resale. Special requests should be addressed to [email protected].
Disclaimer The Publisher and Editors cannot be held responsible for errors or any consequences arising from the use of information contained in this journal; the views and opinions expressed do not necessarily reflect those of the Publisher and Editors, neither does the publication of advertisements constitute any endorsements by the Publisher and Editors of the products advertised.
Edited by Carlos Gershenson
NEWS ITEMS NEUROSCIENCE: FROM THE CONNECTOME TO THE SYNAPTOME The following news item is taken in part from the November 26, 2010 issue of Science titled "From the Connectome to the Synaptome: An Epic Love Story," by Javier DeFelipe. A major challenge in neuroscience is to decipher the structural layout of the brain. The term "connectome" has recently been proposed to refer to the highly organized connection matrix of the human brain. However, defining how information flows through such a complex system represents [an extremely] difficult (...) task (...). Circuit diagrams of the nervous system can be considered at different levels, although they are surely impossible to complete at the synaptic level. Nevertheless, advances in our capacity to marry macroscopic and microscopic data may help establish a realistic statistical model that could describe connectivity at the ultrastructural level, the "synaptome," giving us cause for optimism. A link to this article can be found at http://dx.doi.org/10.1126/science.1193378.
COLLECTIVE INTELLIGENCE OR INDIVIDUAL EXPERTISE? The following news item is taken in part from the October, 2010 issue of PLoS ONE titled ‘‘Swarm Intelligence in Animal Groups: When Can a Collective Out-Perform an Expert?,’’ by Konstantinos V. Katsikopoulos and Andrew J. King. Using a set of simple models, we present theoretical conditions (involving group size and diversity of individual information) under which groups should aggregate information, or follow an expert, when faced with a binary choice. We found that, in single-shot decisions, experts are almost always more accurate than the collective across a range of conditions. However, for repeated decisions—where individuals are able to consider the success of previous decision outcomes—the collective’s aggregated information is almost always superior. A link to this article can be found at http://dx.doi.org/10.1371/journal.pone.0015505.
EPIDEMIOLOGY ON NETWORKS The following news item is taken in part from the November 27, 2010 issue of arXiv titled ‘‘Networks and the Epidemiology of Infectious Disease,’’ by Leon Danon, Ashley P. Ford, Thomas House, Chris P. Jewell, Matt J. Keeling, Gareth O. Roberts, Joshua V. Ross, and Matthew C. Vernon. The science of networks has revolutionized research into the dynamics of interacting elements. It could be argued that epidemiology in particular has embraced the potential of network theory more than any other discipline. Here, we review the growing body of research concerning the spread of infectious diseases on networks, focusing on the interplay between network theory and epidemiology. The review is split into four main sections, which examine: the types of network relevant to epidemiology; the multitude of ways these networks can be characterized; the statistical methods that can be applied to infer the epidemiological parameters on a realized network; and finally simulation and analytical methods to determine epidemic dynamics on a given network. A link to this article can be found at http://arXiv.org/abs/1011.5950.
© 2011 Wiley Periodicals, Inc., Vol. 16, No. 5 DOI 10.1002/cplx.20376 Published online 27 April 2011 in Wiley Online Library (wileyonlinelibrary.com)
SOCIAL CONTAGION IN NETWORKS The following news item is taken in part from the November 4, 2010 issue of PLoS Comput Biol titled ‘‘Infectious Disease Modeling of Social Contagion in Networks,’’ by Alison L. Hill, David G. Rand, Martin A. Nowak, and Nicholas A. Christakis. Information, trends, behaviors, and even health states may spread between contacts in a social network, similar to disease transmission. However, a major difference is that as well as being spread infectiously, it is possible to acquire this state spontaneously. For example, you can gain knowledge of a particular piece of information either by being told about it, or by discovering it yourself. In this article, we introduce a mathematical modeling framework that allows us to compare the dynamics of these social contagions to traditional infectious diseases. As an example, we study the spread of obesity. A link to this article can be found at http://dx.doi.org/10.1371/journal.pcbi.1000968.
NETWORK ANALYSIS OF GLOBAL INFLUENZA SPREAD The following news item is taken in part from the November 18, 2010 issue of PLoS Comput Biol titled ‘‘Network Analysis of Global Influenza Spread,’’ by Joseph Chan, Antony Holmes, and Raul Rabadan. As evidenced by several historic vaccine failures, the design and implementation of the influenza vaccine remains an imperfect science. On a local scale, our technique can output the most likely origins of a virus circulating in a given location. On a global scale, we can pinpoint regions of the world that would maximally disrupt viral transmission with an increase in vaccine implementation. We demonstrate our method on seasonal H3N2 and H1N1 and foresee similar application to other seasonal viruses, including swine-origin H1N1, once more seasonal data are collected. A link to this article can be found at http://dx.doi.org/10.1371/journal.pcbi.1001005.
CYCLES IN HISTORY The following news item is taken in part from the first issue of Cliodynamics: The Journal of Theoretical and Mathematical History titled "Cycling in the Complexity of Early Societies," by Sergey Gavrilets, David G. Anderson, and Peter Turchin. Warfare is commonly viewed as a driving force of the process of aggregation of initially independent villages into larger and more complex political units that started several thousand years ago and quickly led to the appearance of chiefdoms, states, and empires. Here, we build on extensions and generalizations of Carneiro's (1970) argument to develop a spatially explicit agent-based model of the emergence of early complex societies via warfare. A general prediction of our model is continuous stochastic cycling in which the growth of individual polities in size, wealth/power, and complexity is interrupted by their quick collapse. A link to this article can be found at http://escholarship.org/uc/item/5536t55r.
HIERARCHY AND INFORMATION IN NETWORKS The following news item is taken in part from the November 19, 2010 issue of arXiv titled "Hierarchy and information in feedforward networks," by Bernat Corominas-Murtra, Joaquín Goñi, Carlos Rodríguez-Caso, and Ricard Solé. In this article, we define a hierarchical index for feedforward structures taking, as the starting point, three fundamental concepts underlying hierarchy: order, predictability, and pyramidal structure. Our definition applies to the so-called causal graphs, that is, connected, directed acyclic graphs in which the arrows depict a direct causal relation between two elements defining the nodes. The estimator of hierarchy is obtained by evaluating the complexity of causal paths against the uncertainty in recovering them from a given end point. This naturally leads us to a definition of mutual information which, properly normalized and weighted through the layered structure of the graph, results in a suitable index of hierarchy with strong theoretical grounds. A link to this article can be found at http://arXiv.org/abs/1011.4394.
THERE’S PLENTY OF TIME FOR EVOLUTION The following news item is taken in part from the December 13, 2010 issue of PNAS titled ‘‘There’s Plenty of Time for Evolution,’’ by Herbert S. Wilf and Warren J. Ewens.
Objections to Darwinian evolution are often based on the time required to carry out the necessary mutations. Seemingly, exponential numbers of mutations are needed. We show that such estimates ignore the effects of natural selection, and that the numbers of necessary mutations are thereby reduced to about K log L, rather than K^L, where L is the length of the genomic "word," and K is the number of possible "letters" that can occupy any position in the word. The required theory makes contact with the theory of radix-exchange sorting in theoretical computer science and the asymptotic analysis of certain sums that occur there. A link to this article can be found at http://dx.doi.org/10.1073/pnas.1016207107.
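The K log L scaling is easy to see in a toy simulation. The sketch below is a minimal illustration, not the paper's exact model: the alphabet size K, word length L, and the per-round "retain correct guesses" scheme are our own illustrative assumptions.

```python
import math
import random

K, L = 26, 1000   # assumed alphabet size and word length (illustrative only)
target = [random.randrange(K) for _ in range(L)]

def rounds_with_selection(target, K):
    """Each round, re-guess every unresolved position; keep correct letters."""
    unresolved = set(range(len(target)))
    rounds = 0
    while unresolved:
        rounds += 1
        unresolved = {i for i in unresolved if random.randrange(K) != target[i]}
    return rounds

trials = [rounds_with_selection(target, K) for _ in range(50)]
print("mean rounds with selection:", sum(trials) / len(trials))  # ~ K log L
print("K * log(L) =", round(K * math.log(L)))                    # same order
print("blind search would need on the order of K**L guesses, i.e., 26^1000")
```

With selection, the number of rounds is governed by the slowest of L independent geometric waiting times, which grows like K log L; without selection, the whole word must be guessed at once, which takes on the order of K^L attempts.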
MATHEMATICAL MODELING OF EVOLUTION The following news item is taken in part from the Online First articles of Theory in Biosciences titled "Mathematical Modeling of Evolution. Solved and Open Problems," by Peter Schuster. Evolution is a highly complex multilevel process, and mathematical modeling of evolutionary phenomena requires proper abstraction and radical reduction to essential features. Examples are natural selection, Mendel's laws of inheritance, optimization by mutation and selection, and neutral evolution. An attempt is made to describe the roots of evolutionary theory in mathematical terms. A link to this article can be found at http://dx.doi.org/10.1007/s12064-010-0110-z.
CRITICALITY IN BIOLOGICAL SYSTEMS The following news item is taken in part from the December 10, 2010 issue of arXiv titled "Are biological systems poised at criticality?," by Thierry Mora and William Bialek. Many of life's most fascinating phenomena emerge from interactions among many elements—many amino acids determine the structure of a single protein, many genes determine the fate of a cell, many neurons are involved in shaping our thoughts and memories. Physicists have long hoped that these collective behaviors could be described using the ideas and methods of statistical mechanics. In the past few years, new, larger scale experiments have made it possible to construct statistical mechanics models of biological systems directly from real data. We review the surprising successes of this "inverse" approach, using examples from families of proteins, networks of neurons, and flocks of birds. Remarkably, in all these cases the models that emerge from the data are poised at a very special point in their parameter space—a critical point. This suggests there may be some deeper theoretical principle behind the behavior of these diverse systems. A link to this article can be found at http://arXiv.org/abs/1012.2242.
CULTUROMICS WITH GOOGLE BOOKS The following news item is taken in part from the January 14, 2011 issue of Science titled ‘‘Quantitative Analysis of Culture Using Millions of Digitized Books,’’ by Jean-Baptiste Michel, Yuan Kui Shen, Aviva Presser Aiden, Adrian Veres, Matthew K. Gray, The Google Books Team, Joseph P. Pickett, Dale Hoiberg, Dan Clancy, Peter Norvig, Jon Orwant, Steven Pinker, Martin A. Nowak, and Erez Lieberman Aiden. We constructed a corpus of digitized texts containing about 4% of all books ever printed. Analysis of this corpus enables us to investigate cultural trends quantitatively. We survey the vast terrain of ‘‘culturomics,’’ focusing on linguistic and cultural phenomena that were reflected in the English language between 1800 and 2000. We show how this approach can provide insights about fields as diverse as lexicography, the evolution of grammar, collective memory, the adoption of technology, the pursuit of fame, censorship, and historical epidemiology. Culturomics extends the boundaries of rigorous quantitative inquiry to a wide array of new phenomena spanning the social sciences and the humanities. A link to this article can be found at http://dx.doi.org/10.1126/science.1199644.
BIOLOGISTICS The following news item is taken in part from the December 19, 2010 issue of arXiv titled ‘‘BioLogistics and the Struggle for Efficiency: Concepts and Perspectives,’’ by Dirk Helbing, Andreas Deutsch, Stefan Diez, Karsten Peters, Yannis Kalaidzidis, Kathrin Padberg, Stefan Lammer, Anders Johansson, Georg Breier, Frank Schulze, and Marino Zerial.
The growth of world population, limitation of resources, economic problems, and environmental issues force engineers to develop increasingly efficient solutions for logistic systems. Pure optimization for efficiency, however, has often led to technical solutions that are vulnerable to variations in supply and demand, and to perturbations. In contrast, nature already provides a large variety of efficient, flexible, and robust logistic solutions. Can we utilize biological principles to design systems that can flexibly adapt to hardly predictable, fluctuating conditions? We propose a bioinspired "BioLogistics" approach to deduce dynamic organization processes and principles of adaptive self-control from biological systems, and to transfer them to man-made logistics (including nanologistics), using principles of modularity, self-assembly, self-organization, and decentralized coordination. Conversely, logistic models can help reveal the logic of biological processes at the systems level. A link to this article can be found at http://arXiv.org/abs/1012.4189.
A MODEL OF STRUCTURAL BALANCE The following news item is taken in part from the February 1, 2011 issue of PNAS titled "Continuous-time Model of Structural Balance," by Seth A. Marvel, Jon Kleinberg, Robert D. Kleinberg, and Steven H. Strogatz. It is not uncommon for certain social networks to divide into two opposing camps in response to stress. This happens, for example, in networks of political parties during winner-takes-all elections, in networks of companies competing to establish technical standards, and in networks of nations faced with mounting threats of war. A simple model for these two-sided separations is the dynamical system dX/dt = X², where X is a matrix of the friendliness or unfriendliness between pairs of nodes in the network. A link to this article can be found at http://dx.doi.org/10.1073/pnas.1013213108.
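The quoted model is compact enough to experiment with directly. A minimal sketch (our own parameter choices and a simple forward-Euler integrator, not the authors' code) integrates dX/dt = X² for a random symmetric friendliness matrix and checks the two-camp split that typically appears as the solution blows up:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10
X = rng.normal(size=(n, n))
X = (X + X.T) / 2                     # symmetric friendliness matrix

dt = 1e-3
for _ in range(100_000):              # integrate dX/dt = X^2 toward blow-up
    X = X + dt * (X @ X)
    if np.abs(X).max() > 1e6:
        break

# Near blow-up, X is dominated by a rank-one term, so sign(X) typically
# factors into two camps: +1 within a camp, -1 between camps.
camps = np.sign(X[:, 0])
print("camps:", camps)
print("balanced:", bool(np.all(np.sign(X) == np.outer(camps, camps))))
```

For generic symmetric initial conditions with a positive leading eigenvalue, the solution diverges in finite time, and the sign pattern of X near blow-up encodes the two opposing factions.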
MODULAR RANDOM BOOLEAN NETWORKS The following news item is taken in part from the January 10, 2011 issue of arXiv titled ‘‘Modular Random Boolean Networks,’’ by Rodrigo Poblanno-Balp and Carlos Gershenson. Random Boolean networks (RBNs) have been a popular model of genetic regulatory networks for more than four decades. However, most RBN studies have been made with regular topologies, while real regulatory networks have been found to be modular. In this work, we extend classical RBNs to define modular RBNs. Statistical experiments and analytical results show that modularity has a strong effect on the properties of RBNs. In particular, modular RBNs are closer to criticality than regular RBNs. A link to this article can be found at http://arXiv.org/abs/1101.1893.
FINANCE ECOLOGY The following news item is taken in part from the January 19, 2011 issue of Nature titled ‘‘Systemic risk in banking ecosystems,’’ by Andrew G. Haldane and Robert M. May. In the run-up to the recent financial crisis, an increasingly elaborate set of financial instruments emerged, intended to optimize returns to individual institutions with seemingly minimal risk. Essentially no attention was given to their possible effects on the stability of the system as a whole. Drawing analogies with the dynamics of ecological food webs and with networks within which infectious diseases spread, we explore the interplay between complexity and stability in deliberately simplified models of financial networks. We suggest some policy lessons that can be drawn from such models, with the explicit aim of minimizing systemic risk. A link to this article can be found at http://dx.doi.org/10.1038/nature09659.
ECOEVOLUTIONARY DYNAMICS The following news item is taken in part from the January 28, 2011 issue of Science titled ‘‘The Newest Synthesis: Understanding the Interplay of Evolutionary and Ecological Dynamics,’’ by Thomas W. Schoener. The effect of ecological change on evolution has long been a focus of scientific research. The reverse—how evolutionary dynamics affect ecological traits—has only recently captured our attention, however, with the realization that evolution can occur over ecological time scales. This newly highlighted causal direction and the implied feedback loop—ecoevolutionary dynamics—is invigorating both ecologists and evolutionists and blurring the distinction between them. A link to this article can be found at http://dx.doi.org/10.1126/science.1193954.
SWARM INTELLIGENCE IN PLANT ROOTS The following news item is taken in part from the December, 2010 issue of Trends in Ecology & Evolution titled "Swarm intelligence in plant roots," by František Baluška, Simcha Lev-Yadun, and Stefano Mancuso. Swarm intelligence occurs when two or more individuals independently, or at least partly independently, acquire information that is processed through social interactions and is used to solve a cognitive problem in a way that would be impossible for isolated individuals. We propose at least one example of swarm intelligence in plants: coordination of individual roots in complex root systems. A link to this article can be found at http://dx.doi.org/10.1016/j.tree.2010.09.003.
HARVESTING AMOEBA The following news item is taken in part from the January 19, 2011 issue of Nature titled ‘‘Primitive agriculture in a social amoeba,’’ by Debra A. Brock, Tracy E. Douglas, David C. Queller, and Joan E. Strassmann. Here, we show that the social amoeba Dictyostelium discoideum has a primitive farming symbiosis that includes dispersal and prudent harvesting of the crop. About one-third of wild-collected clones engage in husbandry of bacteria. Instead of consuming all bacteria in their patch, they stop feeding early and incorporate bacteria into their fruiting bodies. They then carry bacteria during spore dispersal and can seed a new food crop, which is a major advantage if edible bacteria are lacking at the new site. A link to this article can be found at http://dx.doi.org/10.1038/nature09668.
EVOLUTIONARY MECHANICS The following news item is taken in part from the January 21, 2011 issue of arXiv titled "Evolutionary Mechanics: New Engineering Principles for the Emergence of Flexibility in a Dynamic and Uncertain World," by James M. Whitacre, Philipp Rohlfshagen, and Axel Bender. Engineered systems are designed to deftly operate under predetermined conditions yet are notoriously fragile when unexpected perturbations arise. In contrast, biological systems operate in a highly flexible manner, quickly learn adequate responses to novel conditions, and evolve new routines/traits to remain competitive under persistent environmental change. A recent theory on the origins of biological flexibility has proposed that degeneracy—the existence of multifunctional components with partially overlapping functions—is a primary determinant of the robustness and adaptability found in evolved systems. While degeneracy's contribution to biological flexibility is well documented, there has been little investigation of degeneracy design principles for achieving flexibility in systems engineering. A link to this article can be found at http://arXiv.org/abs/1101.4103.
MORPHOLOGICAL CHANGE AND EVOLUTION OF BEHAVIOR The following news item is taken in part from the January 25, 2011 issue of PNAS titled ‘‘Morphological Change in Machines Accelerates the Evolution of Robust Behavior,’’ by Josh Bongard. Most animals exhibit significant neurological and morphological change throughout their lifetime. No robots to date, however, grow new morphological structure while behaving. This is due to technological limitations but also because it is unclear that morphological change provides a benefit to the acquisition of robust behavior in machines. Here, I show that in evolving populations of simulated robots, if robots grow from anguilliform into legged robots during their lifetime in the early stages of evolution, and the anguilliform body plan is gradually lost during later stages of evolution, gaits are evolved for the final, legged form of the robot more rapidly—and the evolved gaits are more robust—compared to evolving populations of legged robots that do not transition through the anguilliform body plan. A link to this article can be found at http://dx.doi.org/10.1073/pnas.1015390108.
COMPLEXITY THROUGH RECOMBINATION The following news item is taken in part from the January, 2011 issue of Entropy titled "Complexity through Recombination: From Chemistry to Biology," by Niles Lehman, Carolina Díaz Arenas, Wesley A. White, and Francis J. Schmidt. Recombination is a common event in nature, with examples in physics, chemistry, and biology. This process is characterized by the spontaneous reorganization of structural units to form new entities. On reorganization, the complexity of the overall system can change. In particular, the components of the system can now experience a new response to externally applied selection criteria, such that the evolutionary trajectory of the system is altered. The link between chemical and biological forms of recombination is explored. The results underscore the importance of recombination in the origins of life on the Earth and its subsequent evolutionary divergence. A link to this article can be found at http://www.mdpi.com/1099-4300/13/1/17/.
GROSS DOMESTIC HAPPINESS The following news item is taken in part from the January 19, 2011 issue of Knowledge@Wharton, titled ‘‘Gross Domestic Happiness: What Is the Relationship between Money and Well-being?’’ What exactly is the relationship between money and happiness? It is a difficult question to pin down, experts say. While more money may make us happier, other considerations—such as whether you live in an economically advanced country and how you think about your time—also play into the equation. An increasing number of economists, sociologists, and psychologists are now working in the field, and most agree that there is a strong link between a country’s level of economic development and the happiness of its people. A link to this article can be found at http://knowledge.wharton.upenn.edu/article/2675.cfm.
CONFERENCE ANNOUNCEMENTS
International Conference on Swarm Intelligence (ICSI 2011), Cergy, France, 2011/06/14-15, http://icsi11.eisti.fr/
International Conference on Complex Systems (ICCS 2011), Boston, MA, 2011/06/26-07/01, http://www.necsi.edu/events/iccs2011/
GECCO 2011: Genetic and Evolutionary Computation Conference, Dublin, Ireland, 2011/07/12-16, http://www.sigevo.org/gecco-2011/
IJCAI 2011, The 22nd International Joint Conference on Artificial Intelligence, Barcelona, Spain, 2011/07/16-22, http://ijcai-11.iiia.csic.es/
Third International Workshop on Nonlinear Dynamics and Synchronization (INDS'11) and Sixteenth International Symposium on Theoretical Electrical Engineering (ISTET'11), Klagenfurt am Wörthersee, Austria, 2011/07/25-27, http://inds11.uni-klu.ac.at/
ECAL 11: European Conference on Artificial Life, Paris, France, 2011/08/08-12, http://www.ecal11.org/
The 2011 International Conference on Adaptive & Intelligent Systems (ICAIS'11), Klagenfurt, Austria, 2011/09/06-08, http://icais.uni-klu.ac.at/
European Conference on Complex Systems 2011, Vienna, Austria, 2011/09/12-16, http://eccs2011.eu/
Fabrication and Programming of Large Physically Evolving Networks
Physically Evolving Networks (PENs) were first proposed by Alan Turing in his 1948 paper "Intelligent Machinery" [1]. PENs capture some important features of the information processing of biological neural systems, such as mimicking animal and human error patterns, and implement massively parallel, non-algorithmic processes, where the sequence and concurrency of operations are determined at run time. PEN simulations on conventional digital computers have been successfully used for speech recognition, image analysis, and adaptive control. PEN implementations on computers with von Neumann architecture are slow compared with other algorithms, which are optimized for the sequential processing of explicit instructions. In the following, we discuss hardware implementations of PENs. Would it be possible to fabricate a PEN of the size of a human brain with a billion times more neurons? And if so, how could such a PEN be programmed or trained? Metal particles in oil are a potential PEN hardware implementation. The metal particles form wires if a voltage is applied and a current starts to flow [2]. Figure 1 shows that these wire networks often form ramified structures like the branches of a tree. Therefore, they are called arbortrons [3]. Arbortrons are not perfect conductors, because there tend to be small gaps between the particles, but if they are used as electrical conductors their conductivity increases [4]. This behavior is similar to the neural plasticity of human neurons introduced by Hebb [5]. Human neurons strengthen and become more conductive if they are used frequently and weaken or decay if they are unused. Unused arbortrons decay as well, i.e., their particles separate and drift away; they repair only if the applied current exceeds a threshold, because of static friction and gravity. For that reason, arbortrons tend to have a high resistance for small currents and become good conductors if the current exceeds a threshold. Therefore, arbortrons are nonlinear conductors with a threshold. Neural plasticity and conductance with thresholds are the key features of PENs. Therefore, arbortrons may be considered hardware implementations of PENs. Hardware implementations of PENs may have some features that exceed those of biological neural nets. Human axons and dendrites in the central nervous system are about 1 μm thick, whereas nanoparticle arbortrons have a diameter of a few nanometers. Because nanoparticle arbortrons are a factor of 1000 thinner, 1000 × 1000 × 1000 = 1 billion nanoparticle arbortrons occupy the same volume as one human neuron. A PEN hardware implementation of the size of a human head with nanoparticle arbortrons could theoretically have a billion times more neurons than the human brain.
© 2011 Wiley Periodicals, Inc., Vol. 16, No. 5 DOI 10.1002/cplx.20378 Published online 27 April 2011 in Wiley Online Library (wileyonlinelibrary.com)
ALFRED HÜBLER, CORY STEPHENSON, DAVE LYON, AND RYAN SWINDEMAN
Alfred Hübler is the director of the Center for Complex Systems Research at the University of Illinois at Urbana-Champaign, Urbana, Illinois (e-mail: [email protected])
Cory Stephenson and Dave Lyon are PhD students of Alfred Hübler in the Physics Department at the University of Illinois at Urbana-Champaign, Urbana, Illinois.
Ryan Swindeman is an undergraduate student majoring in Physics at the University of Illinois at Urbana-Champaign, Urbana, Illinois.
FIGURE 1
Steel spheres (diameter 1 mm) in a 1 mm horizontal layer of castor oil agglomerate and form arbortrons under the influence of an electric field. There are two input electrodes (black) and one output electrode (red). The right input electrode is activated. When the particles are close but do not touch, there is arcing between them.
The power consumption of such a large PEN is probably less than the power consumption of the human brain (20 W), because nanoparticle arbortrons are better conductors than human neurons. Training a large PEN would be a challenge. It takes about 20 years to train human brains before they become productive. Therefore, one might conclude that it might take 20 billion years to train a large PEN with a billion times more neurons. However, on human neurons pulses travel with a speed of 1 m/s to 100 m/s, whereas electrical pulses travel along arbortrons at the speed of light (0.3 billion
m/s). Because the pulse speed is roughly a factor of a billion larger, it might be possible to train a large PEN within a couple of dozen years. But how can a large PEN be trained? How could it learn to recognize speech or act like a word processor? Experiments with small arbortron PENs suggest that they form patterns which minimize their resistance, i.e., learn to extract energy from complex environments. For instance, the system depicted in Figure 1 forms wires to the electrodes. If the location of the electrodes is changed, then the wires disintegrate
and new wires form and connect to the electrode at the new location. If an electrode is never charged, no wires connect to it. Recent experiments suggest that arbortron PENs can learn to play simple computer games, such as Tetris. Initially, the arbortron PEN is random. The current configuration of the game activates the corresponding input electrode. In response to the activated input electrode, the arbortron output electrodes trigger a move in the game. If the move is "incorrect," the input electrode is turned off immediately. If the move is "correct," then the input current continues for a certain period of time, which strengthens the arbortron branch that produced the "correct" move. The extra current can be considered an energy reward for the arbortron network. This experiment offers some insights into how a large PEN could be trained by a human or a complex environment: with energy rewards. If the PEN hardware implementation responds "correctly," it is rewarded a small amount of energy. In summary, training a large PEN is not like programming a digital computer; it is more like training a pet: if its behavior matches the expectations, it is rewarded with a treat. The treat is energy.
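The reward loop just described can be caricatured in a few lines of code. The sketch below is our own toy abstraction, not a model of the actual experiment: the number of competing branches and the growth/decay rates are assumptions. Each branch's conductance grows while current flows through it and decays while unused, and a "correct" move earns a longer current pulse:

```python
import random

N = 3                       # competing arbortron branches (assumed)
g = [0.1] * N               # branch conductances
correct = 1                 # branch whose move the trainer rewards
GROW, DECAY = 0.2, 0.02     # assumed plasticity rates

for step in range(200):
    # Current tends to flow through the most conductive branch.
    branch = random.choices(range(N), weights=g)[0]
    pulse = 1.0 if branch == correct else 0.05   # energy reward vs. cutoff
    g = [gi * (1 - DECAY) for gi in g]           # unused branches decay
    g[branch] += GROW * pulse                    # the used branch strengthens

print("final conductances:", [round(gi, 2) for gi in g])
# The rewarded branch typically ends up far more conductive than the rest.
```

The rich-get-richer dynamics mirror the Hebbian picture above: current preferentially flows through strong branches, and the energy reward for correct moves makes the correct branch the strongest.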
ACKNOWLEDGMENTS The authors gratefully acknowledge the support for this work by Defense Advanced Research Projects Agency (DARPA) Physical Intelligence subcontract (HRL9060-000706).
REFERENCES
1. Turing, A.M. Intelligent machinery. In: Machine Intelligence, Vol. 5; Meltzer, B.; Michie, D., Eds.; Edinburgh University Press: Edinburgh, 1969; pp 3–23.
2. Jun, J.; Hübler, A. Formation and structure of ramified charge transportation networks in an electromechanical system. Proc Natl Acad Sci 2005, 102, 536–540.
3. Hübler, A.; Crutchfield, J. Order and disorder in open systems. Complexity 2010, 16, 6–9.
4. Sperl, M.; Chang, A.; Weber, N.; Hübler, A. Hebbian learning in the agglomeration of conducting particles. Phys Rev E 1999, 59, 3165–3168.
5. Hebb, D.O. The Organization of Behavior; Wiley: New York, 1949.
The Brazilian Nut Effect by Void Filling: An Analytic Model
JUNIUS ANDRÉ F. BALISTA, DRANREB EARL O. JUANICO, AND CAESAR A. SALOMA
National Institute of Physics, University of the Philippines, Quezon City, Diliman 1101, Philippines
Received January 11, 2010; revised July 12, 2010; accepted July 21, 2010
We propose an analytic model of the Brazilian nut effect (BNE) that utilizes void filling as the primary mechanism behind the rise to the surface of one large buried intruder particle in an externally driven confined mixture with many small particles. When the intruder rises upward, it creates a void underneath it that is immediately filled by small particles, preventing the intruder from sinking back to its previous position. Even though the external driving is only along the vertical direction, the small particles are able to move transversely into the void due to the Janssen effect, which postulates that the magnitudes of the vertically directed and transversely directed forces in a confined granular system are directly proportional to each other. The Janssen effect allows us to calculate the transverse speed distribution with which the small particles fill up the void and the temporal dynamics of the intruder particle in the vertically shaken container. We determine the time-dependent behavior of the intruder vertical position h, its rise velocity dh/dt, and the phase (dh/dt vs. h) as a function of particle size ratio Φ, container diameter Dc, kinetic friction coefficient μ, and packing fraction c. Finally, we show that the predictions of our BNE model agree well with published experimental and simulation results. © 2010 Wiley Periodicals, Inc. Complexity 16: 9–16, 2011
Key Words: Brazil nut effect; Janssen effect
Corresponding author: Caesar A. Saloma, National Institute of Physics, University of the Philippines, Diliman, Quezon City, Philippines 1101 (e-mail: caesar.saloma@gmail.com)
© 2010 Wiley Periodicals, Inc., Vol. 16, No. 5 DOI 10.1002/cplx.20345 Published online 31 August 2010 in Wiley Online Library (wileyonlinelibrary.com)
1. INTRODUCTION
Void filling (VF) has been used to explain qualitatively the Brazilian nut effect (BNE), the phenomenon associated with externally driven segregation in a confined mixture of large and small particles, with the large particles ascending to the top and the small ones descending to the bottom of the container [1–3]. The idea of VF as the primary mechanism behind the BNE is simple and intuitive—the rising of the large particle (often referred to as the "intruder" if there is only one in the mixture) creates a void underneath it that is immediately filled up by the small particles, preventing it from returning to its initial position. There is no available analytic formulation that utilizes VF as the primary mechanism of BNE, as VF is a local geometric, short time-scale process, and therefore the kinetic and hydrodynamic theories are not plausible starting points. Both theories assume the prevalence of binary collisions and a spatiotemporal dynamics that slowly varies in time—conditions that are not fulfilled in the BNE
[3–5]. The filling of the void necessitates the transverse movements of small particles that are difficult to justify in a confined mixture that is shaken only along the vertical direction. Here, we study the simplest case of BNE—the one that happens in a mixture of one large intruder particle together with many other small particles inside a cylindrical container that is shaken vertically (along the direction of the gravitational force). The shaking and the VF process cause the intruder particle to move toward the surface, and we utilize the Janssen effect to calculate the mean transverse velocity of the small particles that are filling up the void that is created by the rising intruder. The Janssen effect postulates that the pressure at the bottom of a granular material in a container becomes independent of the amount of material after a certain depth [6]. The independence of the pressure with depth is due to the balance between the weight of the granular material and the frictional force that the material experiences with the wall. Janssen conjectured that the transverse and vertical components of the forces in the granular material are directly proportional to each other. The Janssen effect plays a crucial role in our goal to develop an analytic model of BNE via the VF mechanism. It provides us with a phenomenological basis for relating the magnitudes of the vertical and transverse forces that are acting on the moving particles inside the vertically shaken container. We assume that the Janssen effect remains valid even for an externally driven container—an assumption that is likely to hold when the driving is weak (low amplitude) and slow (low modulation frequency). The validity of our assumption is discussed further in Section 4. The Janssen effect was originally developed for static granular systems. It is the subject of a number of critical concerns [7] that remain relevant for the periodically perturbed container. We do not address these concerns here since, in the final analysis, they could be resolved only by additional experimental evidence that remains lacking at the moment. It is also worth mentioning that aside from VF, other mechanisms have also been proposed to explain BNE. The most notable is convection, a phenomenon that involves the vertical and transverse movements of small particles along and near the walls and base of the container [8, 9]. Its distinct feature is the existence of friction between the walls and the particles. Current models utilize either VF or convection, but not both, as the main mechanism for the occurrence of BNE [9, 10]. We do not attempt to resolve the issue regarding the true mechanism behind the BNE given the inadequate number of accurate and precise measurements done on the phenomenon itself and the incomplete state of scientific understanding of the dynamics of granular systems. We can only say that the basic difference between a VF-based model and one that utilizes
convection as the primary mechanism of BNE is in the location of the momentary void to be filled up. In the former, the void is directly underneath the rising intruder while in the latter, it is near the container walls. Our presentation proceeds with a quantitative description of VF and the use of the Janssen effect to calculate the transverse velocities of the small particles that fill the void in Sections 2.1 and 2.2, respectively. The model for the simplest BNE case is formulated in Section 2.3, and its predictions are compared with previously published results in Section 3. The outcome of the comparison is discussed in Section 4.
2. THEORY
2.1. Void Filling
Let us consider a large spherical particle of diameter D that is initially at the bottom of a cylindrical container that is also filled with small spherical particles (each of mass m and diameter d) up to height H [see Figure 1(a)]. The container is vertically shaken, causing the large intruder particle to rise to a height h (h < H). Following Janssen's conjecture that the transverse and vertical force components in the granular material are directly proportional, the transverse force F_r on a small particle is obtained from the vertical force F_z that the granular pressure p exerts on the particle cross section πd²/4:

$$F_r = \kappa F_z = \kappa p\,\frac{\pi d^2}{4} \qquad (5)$$
The negative sign is dropped from the third expression of Eq. (5) as we are only comparing the magnitudes of Fz and Fr and not their respective directions.
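For reference, the pressure p entering Eq. (5) has the classic Janssen profile. The following is a brief sketch of the standard argument [6], written here in the notation used in Eqs. (6)–(12) below (the packing fraction c multiplies the bulk density ρ): balancing the weight of a thin horizontal slab of material against the wall friction it experiences gives

$$\frac{dp}{d\zeta} = \rho g c - \frac{4\mu\kappa}{D_c}\,p \quad\Longrightarrow\quad p(\zeta) = \rho g c\,\lambda\left(1 - e^{-\zeta/\lambda}\right), \qquad \lambda = \frac{D_c}{4\mu\kappa},$$

where ζ = H − z is the depth below the free surface, μ is the wall friction coefficient, and κ is Janssen's proportionality constant between transverse and vertical stresses. For ζ ≫ λ, the pressure saturates at ρgcλ, which is the depth independence described in the Introduction. Substituting p(H − z) into Eq. (5) and multiplying the resulting transverse force by δt/m yields the transverse velocity of Eq. (6).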
FIGURE 2
(a) dh/dt versus h (log plot) for different void lifetimes δt and (b) intruder trajectory h(t) (semilog plot) for different void lifetimes δt, to illustrate the eventual rise of the intruder to the top regardless of the chosen δt value. Other parameter values: d = 1 mm, Dc = 12.25 mm, H = 43 mm, Φ = 3.16, f = 0.107 Hz, g = 10, h0 = d, κ = 0.8, and c = 0.64.
Substituting the equivalent F_r expression into Eq. (3) and integrating, we obtain the general expression for the transverse velocity:
$$v = \frac{\kappa \pi d^2 \lambda \rho g c}{4m}\,\delta t\left[1 - \exp\!\left(\frac{z - H}{\lambda}\right)\right] \qquad (6)$$
Equation (6) shows that v decreases exponentially with height. The transverse velocity is fastest for a particle at the base (z = 0) and zero (v = 0) for a particle located at the top of the mixture (z = H). Clearly, no transverse motion is possible for the particles in the absence (δt = 0) of a void. In the next section, we use Eq. (6) to derive the specific expression for the mean transverse velocity v_t of the small particles that are filling up the void that is created in the wake of an upward-moving intruder particle that is momentarily located at an arbitrary height z = h < H [see Figure 1(a)].
2.3. Analytic Model of BNE
The region of the z = h plane that is within the imaginary cylindrical void will be filled when the net transverse velocity of the moving small particles, v_t, is directed toward (positive) the void and not away (negative) from it. In general, a number of small particles will transversely move out of the abovementioned void plane with mean exit speed, v_o, while another number enters the same plane from the outside of the cylindrical void with mean entry speed, v_i. The mean exit speed is given by:
$$v_o = \frac{\kappa \pi d^2 \lambda \rho g c}{4m}\,\delta t\left[1 - \exp\!\left(\frac{z' - h}{\lambda}\right)\right] \qquad (7)$$
The other small particles that enter the void come from outside the void, i.e., the rest of the z = h plane that is not within the void. Their mean entry speed, v_i, is given by:
$$v_i = \frac{\kappa \pi d^2 \lambda \rho g c}{4m}\,\delta t\left[1 - \exp\!\left(\frac{z' - H}{\lambda}\right)\right] \qquad (8)$$
Note that Eq. (7) is just Eq. (6) but with H replaced by h, as the exiting particles are underneath the large intruder particle with no other small particles above them. On the other hand, Eq. (8) takes into account that the entering small particles come from outside the void, and therefore their mean entry velocity is affected by the presence of the other small particles that are located above them in the mixture of height H. In Eqs. (7) and (8), we have replaced the variable z, which is a dummy variable, with z' in preparation for the integration that will be performed later. The directions of v_i and v_o are opposite each other, and their sum yields the net transverse velocity of the small particles in the void, v_t = v_i − v_o. The filling (emptying) of the imaginary void is designated as a positive (negative) process, and the negative sign is therefore attached to the exit velocity, v_o. Note that v_t ≥ 0 since v_i ≥ v_o according to Eqs. (7) and (8): because h ≤ H, the exponential term subtracted in Eq. (7) is never smaller than that in Eq. (8).
During transverse flow, we assume that no direct interaction happens between the entering and exiting small particles, as well as between the small particles that are above the intruder, allowing us to neglect the complicated effect of "heaping" or "bulging" that is likely when the intruder rises up rapidly. Equations (7) and (8) permit us to express the net velocity, v_t, of the small particles that are entering the void as:
$$v_t = \frac{\kappa \pi d^2 \lambda \rho g c}{4m}\,\delta t\left[\exp\!\left(\frac{z' - h}{\lambda}\right) - \exp\!\left(\frac{z' - H}{\lambda}\right)\right] \qquad (9)$$
Figure 1(c) plots the value of the net velocity, v_t, as a function of the distance z' from the container base. Note that v_t attains its maximum value at the location of the void. Equation (9) provides a specific expression for v_t that permits us to derive an analytic expression for the intruder rise velocity dh/dt via Eq. (2):
$$\frac{dh}{dt} = \frac{4}{D}\int_0^h v_t\,dz' = \frac{\kappa\pi d^2\lambda\rho g c\,\delta t}{4m}\,\frac{4}{D}\int_0^h\left[e^{(z'-h)/\lambda} - e^{(z'-H)/\lambda}\right]dz' = \frac{3D_c^2 g c\,\delta t}{8Dd\kappa\mu^2}\left[\left(1 - e^{-h/\lambda}\right) - e^{-H/\lambda}\left(e^{h/\lambda} - 1\right)\right] \qquad (10)$$

where λ = Dc/(4μκ) and m = πd³ρ/6. The distinctive feature of BNE is the upward displacement of the intruder particle (of diameter D) relative to the other small particles (of diameter d) in a cylindrical container (of diameter Dc). The following constraint sets the relation between the void cross section, A_void, and the cross section, A_sp, of the small particle:
$$\frac{A_{\mathrm{void}}}{A_{\mathrm{sp}}} - 1 = \frac{\pi D^2/4}{\pi d^2/4} - 1 = \frac{D^2}{d^2} - 1 > 0 \qquad (11)$$

Note that v_t and dh/dt are both zero when (D/d)² − 1 ≤ 0. The particle size ratio, Φ = D/d, is used as a primary parameter of interest in past BNE models. The product of Eqs. (10) and (11) leads to an expression for dh/dt that contains the term (Φ² − 1):
$$\frac{dh}{dt} = \frac{3D_c^2 g c\,\delta t}{8 d^2\mu^2\kappa\,\Phi}\left(\Phi^2 - 1\right)\left[\left(1 - e^{-h/\lambda}\right) - e^{-H/\lambda}\left(e^{h/\lambda} - 1\right)\right] \qquad (12)$$

The intruder moves upward (dh/dt > 0) if it is initially immersed in a sea of small particles, i.e., h < H. On the other hand, no vertical motion (dh/dt = 0) is possible in the absence of a void (δt = 0).
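As a quick consistency check on the integration leading from Eq. (9) to Eq. (10), and hence to Eq. (12), the bracketed factor can be reproduced symbolically. A minimal sympy sketch (symbol names are ours):

```python
import sympy as sp

zp, h, H, lam = sp.symbols("zp h H lam", positive=True)

# Integrand of Eq. (10): the difference of exponentials from Eq. (9),
# with the common prefactor kappa*pi*d^2*lam*rho*g*c*delta_t/(4m) omitted.
integrand = sp.exp((zp - h) / lam) - sp.exp((zp - H) / lam)
bracket = sp.integrate(integrand, (zp, 0, h)) / lam   # one factor lam is absorbed

expected = (1 - sp.exp(-h / lam)) - sp.exp(-H / lam) * (sp.exp(h / lam) - 1)
print(sp.simplify(bracket - expected))   # prints 0, confirming the bracket in Eq. (10)
```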
Equation (12) reveals that the intruder does not rise (dh/dt ≤ 0) when Φ ≤ 1—a condition that is not approximated well by published experimental results [11], where it was found that dh/dt ≈ 0 even with Φ > 1. We surmise that in such cases, VF, which is a local geometric effect that depends on the size difference between the intruder and the small particles, is no longer a dominant mechanism behind the BNE, and the contributions of other processes like convection need to be taken into account. We note that an expression for h may be obtained from Eq. (12) by separation of variables, yielding dh/dt as an explicit function of t. In principle, the void lifetime δt is an observable quantity. In practice, its measurement in the vertically shaken cylindrical container is not straightforward because the intruder particle is surrounded by many small particles at all times. The rising of the intruder particle is difficult to track in time even for a see-through container with one marked intruder surrounded by many small transparent particles. To our knowledge, only Ellenberger et al. [12] reported performing such measurements using a high-speed camera, and they observed a void that lasted for 20 ms. To understand the nature of the void more accurately, additional precise measurements are still needed. We surmise that the δt value is dependent on the type of particle materials, the mixture medium, the driving amplitude strength and frequency, as well as the particle geometrical shapes. Figure 2(a) presents the phase plot (dh/dt vs. h) for different possible δt values. It reveals that the slope of the phase curves remains unchanged with δt. Figure 2(b) plots the corresponding trajectory (h vs. t) of the intruder particle for the same set of δt values as in Figure 2(a). Figure 2(a,b) illustrates that BNE happens sooner or later, i.e., the intruder particle would eventually rise to the top independent of the chosen δt value. The BNE is completed more quickly for voids that last longer (larger δt values).
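Trajectories like those in Figure 2 follow from integrating Eq. (12) directly. A minimal numerical sketch (forward Euler, using the Figure 2 parameter values; μ and δt are chosen by us, since Figure 2 shows several δt values and its caption does not list μ):

```python
import math

# Parameters as in the Figure 2 caption, plus assumed values for mu and delta_t.
d, Dc, H = 1.0, 12.25, 43.0
Phi, g, kappa, c = 3.16, 10.0, 0.8, 0.64
mu = 0.97            # assumed; not listed in the Figure 2 caption
delta_t = 1e-5       # assumed void lifetime (Figure 2 varies this)
lam = Dc / (4 * mu * kappa)

def rise_velocity(h):
    """Right-hand side of Eq. (12)."""
    pref = 3 * Dc**2 * g * c * delta_t * (Phi**2 - 1) / (8 * d**2 * mu**2 * kappa * Phi)
    return pref * ((1 - math.exp(-h / lam))
                   - math.exp(-H / lam) * (math.exp(h / lam) - 1))

h, t, step = d, 0.0, 1.0          # start the intruder at h0 = d
while h < 0.99 * H:
    h += rise_velocity(h) * step  # forward Euler
    t += step
print(f"intruder reaches ~0.99 H at t = {t:.0f} time units")
```

Since dh/dt vanishes as h approaches H, the trajectory saturates at the surface, reproducing the knee-and-plateau shape discussed above; larger δt simply rescales the prefactor and completes the rise sooner.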
3. COMPARISON OF MODEL PREDICTIONS WITH PUBLISHED EXPERIMENTAL OR SIMULATION RESULTS
We calculate the time-dependent behavior of the intruder position h, the intruder rise velocity dh/dt, and the phase plot (dh/dt vs. h) for different values of the size ratio Φ, container diameter Dc, kinetic friction coefficient μ, and packing fraction c. We then compare our results with published experimental values. First, we plot the intruder trajectory h(t) for a set of Φ values that are chosen because they are similar to those used in the experiments of Duran et al. [11]. Figure 3(a) compares the predicted h(t) values (solid lines) with the experimental results of Duran et al. To avoid visual confusion, we only present sampled values of the experimental plots.
FIGURE 3
(a) Time-dependent intruder trajectory h(t) and (b) mean intruder rise velocity dh/dt for different particle size ratios Φ, where d = 0.15 cm, Dc = 15 cm, and H = 10 cm (same values as in Ref. [11]), κ = 0.8, g = 1000 cm/s², h0 = 0.015 mm, δt = 10⁻⁶ s, c = 0.64, and μ = 0.97 (for oxidized aluminum used in Ref. [15]). Sampled data points of h(t) and dh/dt represent experimental values reported in Ref. [11].
The analytic h(t) curves that are generated via our BNE model share the same nonlinear characteristics (presence of a knee, finite saturation level, rightward shift of the knee location with decreasing Φ) as the experimental results. The discrepancies (different saturation levels and rise velocities) between theory and experiment may be attributed to the difficulty of maintaining all the other parameters constant while Φ was being varied. Duran et al. measured the rise velocity from the slope at the upper part of the trajectory, as it depends nonlinearly on time. Assigning a single-valued ascent velocity is therefore only a first-order approximation because the velocity is not constant in time. We also determined the slope of the middle (longest) portion of the trajectory for various size ratios, Φ, and compared our results with those of Duran et al. Figure 3(b) illustrates that the theoretical predictions are consistent with the experimental results—the intruder rise velocity is directly proportional to Φ in both cases. Figure 4(a) plots the trajectory h(t) for different Dc values that are similar to the ones used in the experiments of Nahmad-Molinar et al. [13]. Consistent behavior is seen between the analytic curves and the experimental results of Nahmad-Molinar et al. Figure 4(b) presents a set of h(t) plots for μ values that are the same as the ones utilized by Sun et al. [14]. Theoretical predictions agree well with the simulation results of Sun et al. Figure 5(a) plots h(t) for a set of c values that are similar to those used by Saez et al. in their simulation results [15]. The theoretical h(t) plots and the simulation results of Saez et al. share similar nonlinear characteristics (presence of
knee, rapid rise after threshold, and finite saturation level). However, there are differences in the details—for c = 0.64, the theory predicted a quick rise to a saturation level of h = 100 mm. In the simulations, however, the intruder initially had a slow start before finally rising quickly (at the t ≈ 7.2-min mark) to the top. For c = 0.74 (hexagonal packing), it is the other way around—the intruder was observed to actually rise to the top much more quickly than predicted. The discrepancy may be due to unavoidable variations in the μ and mass values of the small particles and slight deviations from a pure vertical rise of the intruder. Figure 5(b) plots the phase (dh/dt vs. h) for c = 0.64 and 0.74. Theory predicts that dh/dt reaches a finite maximum value with h at 48 mm < h < 52 mm for both c values. Such behavior was also observed in the simulations at a different range of 72 mm < h < 76 mm. Theory predicts a unimodal phase curve, but a more complicated bimodal curve was observed, with two peak velocities occurring at 48 mm < h < 52 mm and at 72 mm < h < 76 mm. Our model assumes that the intruder particle rises to the top in a straight line and an efficient VF process that does not allow the intruder particle to sink in a "two-step forward, one-step backward" manner. In practice, the said assumptions may not strictly hold throughout the entire BNE process due to imperfect packing of the small particles, slight variations in the small particle sizes and masses, as well as the possible dependence of the various parameter values on each other (e.g., ρ with c and μ) given the constraint of the container size and geometry.
FIGURE 4
(a) Intruder trajectory h(t) for container diameter Dc = 2.5, 4.4, and 5.3 cm. Other parameter values: d = 0.2 cm, H = 12 cm, Φ = 3.16, f = 7.5 Hz (same as in Ref. [13]); g = 1000 cm/s², h0 = d, κ = 0.5, δt = 3 × 10⁻⁶ s, c = 0.64, and μ = 0.2. Sample points represent experimental results reported in Ref. [13]. (b) h(t) for μ = 0.4, 0.5, and 0.6. Other parameter values: d = d0 = 1, t0 = 1, Dc = 12.25 d0, H = 43 d0, Φ = 3.16, and f = 0.107/t0 (same as in Ref. [14]), g = 10, h0 = d, κ = 0.8, δt = 5 × 10⁻⁵ t0, and c = 0.64. In Ref. [14], all lengths are expressed in units of the diameter d0 of the small particle, and time is expressed in units of t0 = √(d0/g), where g is the acceleration due to gravity. The assumption that other variables scale with d0 and t0 seems justified by the qualitative agreement between their results and those of Ref. [13].
4. DISCUSSION
Our model has produced families of h(t) and dh/dt curves with nonlinear properties that are consistent with results previously published by four different groups [11, 13–15]. We also noticed higher-order discrepancies
between our theoretical predictions and the experimental findings. We point out that the predicted values represent averaged quantities that are calculated using constant values for the packing fraction c and kinetic friction coefficient μ.
FIGURE 5
Plots of (a) intruder trajectory h(t) and (b) dh/dt versus h for packing fraction c = 0.64 and 0.74. Other parameter values: d = 1.5 mm, Dc = 50 d, D = 19.5 mm, H = 66 d, Φ = 13, f = 15 Hz, and μ = 0.97 (same as in Ref. [15]), g = 3.6 × 10⁷ cm/min², κ = 0.8, δt = 10⁻⁸ min, and h0 = 0.15 mm. Sampled data points represent experimental results reported in Ref. [15].
The said predictions are best compared with experimental results that are ensemble-averaged values. Unfortunately, the available experimental results did not benefit from a large number of trials. The discrepancy between theory and experiments may also be due to the contributions of other driving mechanisms (e.g., convection) that were neglected in our model. The Janssen effect was originally developed for static granular materials. Its validity in perturbed confined granular mixtures is still an open issue, but the findings of Bertho et al. [16] provide a possible justification for our use of the Janssen effect to explain BNE. Bertho et al. have determined that the Janssen assumption remains valid for granular materials contained inside a moving cylinder in the weak excitation range from 10⁻⁵ to 10⁻² m/s. We also mention that the VF mechanism was found to be significant in BNE cases within the weak excitation regime [3, 5]. The BNE is a complex phenomenon that could be caused by two or more driving mechanisms [3, 5]. While the VF mechanism is easy to visualize, its corresponding mathematical formulation is not straightforward due to the difficulty of finding a physical basis for the transverse motion of small particles that move in and out of the void created by the vertical displacement of the intruder particle. The Janssen effect has provided us with a way out of the difficulty. It arises from the frictional interaction between the granular material and the container wall, and the rise of the intruder particle in the BNE was hypothesized to be due to the long-range interaction between the intruder and the wall with the help of the smaller particles [11, 14]. The efficacy of Janssen's assumption in explaining the BNE also proves its robustness against prevailing criticisms [7]. For example, Janssen's assumption of a fully mobilized frictional interaction is more suitable for a vibrated granular material than for a static granular material, as vibration creates more chances of changing contacts and
configurations. There remains a need for a more accurate theory for BNE that can make better sense of existing discrepancies between theory and experiment. The task will be made easier if we can improve our current understanding of the physical meaning of the phenomenological Janssen proportionality constant j. Exploring the possible link between the Janssen effect and BNE is likely to advance our understanding of granular materials.
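For reference, the stress saturation that defines the Janssen effect can be stated in its standard textbook form for a static cylindrical column (see, e.g., Nedderman [7]). This is the classic silo result quoted only for orientation, not a formula from our model; we write the wall friction coefficient as μ_w and the Janssen constant as K (the quantity denoted j above), with ρ the bulk density and Dc the container diameter:

$$ \sigma_v(z) \;=\; \frac{\rho\, g\, D_c}{4\,\mu_w K}\left(1 - e^{-4\mu_w K z / D_c}\right). $$

The vertical stress thus saturates over a characteristic depth λ = Dc/(4 μ_w K) instead of growing hydrostatically; it is this screening of weight by wall friction that the discussion above invokes.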
5. CONCLUSIONS We have developed an analytic model of the BNE that utilizes VF as the sole mechanism responsible for the eventual rise of an intruder particle to the top of a confined mixture. The Janssen effect provides the physical basis for VF to occur in the BNE: it allows the small particles to move transversely toward the void that is created by the vertical motion of the large intruder particle. We have calculated the time-dependent behavior of the intruder vertical position h, the rise velocity dh/dt, as well as the phase portrait (dh/dt vs. h), and compared the results with previously published results for different values of the size ratio F, container diameter Dc, kinetic friction coefficient l, and packing fraction c. We found general agreement between the predictions of our BNE model and published simulation and experimental results. Higher-order discrepancies between theory and experimental or simulation results remain; these could be resolved with more precise measurements, which would also serve to validate a more sophisticated BNE theory.
Acknowledgments The authors benefited from valuable discussions with M. Lim and R. Sarmago and from the comments of the two reviewers. J.A.F. Balista was supported by a PCASTRD-DOST scholarship.
REFERENCES
1. Rosato, A.; Strandburg, K.J.; Prinz, F.; Swendsen, R.H. Phys Rev Lett 1987, 58, 1038.
2. Rosato, A.; Prinz, F.; Strandburg, K.J.; Swendsen, R.H. Powder Technol 1986, 49, 59.
3. Kudrolli, A. Rep Prog Phys 2004, 67, 209.
4. Kadanoff, L.P. Rev Mod Phys 1999, 71, 435.
5. Schröter, M.; Ulrich, S.; Kreft, J.; Swift, J.B.; Swinney, H.L. Phys Rev E 2006, 74, 011307.
6. Janssen, H.A. Zeitschr d Vereines deutscher Ingenieure 1895, 39, 1045; translation: Sperl, M. Granular Matter 2006, 8, 59.
7. Nedderman, R.M. Statics and Kinematics of Granular Materials; Cambridge University Press: Cambridge, England, 1992.
8. Knight, J.B.; Ehrichs, E.E.; Kuperman, V.Y.; Flint, J.K.; Jaeger, H.M.; Nagel, S.R. Phys Rev E 1996, 54, 5726.
9. Knight, J.B.; Jaeger, H.M.; Nagel, S.R. Phys Rev Lett 1993, 70, 3728.
10. Rosato, A.D.; Blackmore, D.L.; Zhang, N.; Lan, Y. Chem Eng Sci 2002, 57, 265.
11. Duran, J.; Mazozi, T.; Clement, E.; Rajchenbach, J. Phys Rev E 1994, 50, 5138.
12. Ellenberger, J.; Vandu, C.O.; Krishna, R. Powder Technol 2006, 164, 168.
13. Nahmad-Molinari, Y.; Canul-Chay, G.; Ruiz-Suarez, J.C. Phys Rev E 2003, 68, 041301.
14. Sun, J.; Battaglia, F.; Subramaniam, S. Phys Rev E 2006, 74, 061307.
15. Saez, A.; Vivanco, F.; Melo, F. Phys Rev E 2005, 72, 021307.
16. Bertho, Y.; Giorgiutti-Dauphine, F.; Hulin, J.P. Phys Rev Lett 2003, 90, 144301.
Optimization in "Self-Modeling" Complex Adaptive Systems RICHARD A. WATSON,¹ C. L. BUCKLEY,² AND ROB MILLS¹
¹Natural Systems Group, Electronics and Computer Science, University of Southampton, SO17 1BJ, United Kingdom; and ²School of Informatics, Sussex University, BN1 9RH, United Kingdom
Received March 3, 2010; accepted July 23, 2010
When a dynamical system with multiple point attractors is released from an arbitrary initial condition, it will relax into a configuration that locally resolves the constraints or opposing forces between interdependent state variables. However, when there are many conflicting interdependencies between variables, finding a configuration that globally optimizes these constraints by this method is unlikely or may take many attempts. Here, we show that a simple distributed mechanism can incrementally alter a dynamical system such that it finds lower energy configurations, more reliably and more quickly. Specifically, when Hebbian learning is applied to the connections of a simple dynamical system undergoing repeated relaxation, the system will develop an associative memory that amplifies a subset of its own attractor states. This modifies the dynamics of the system such that its ability to find configurations that minimize total system energy, and globally resolve conflicts between interdependent variables, is enhanced. Moreover, we show that the system is not merely "recalling" low energy states that have been previously visited but "predicting" their location by generalizing over local attractor states that have already been visited. This "self-modeling" framework, i.e., a system that augments its behavior with an associative memory of its own attractors, helps us better understand the conditions under which a simple locally mediated mechanism of self-organization can promote significantly enhanced global resolution of conflicts between the components of a complex adaptive system. We illustrate this process in random and modular network constraint problems equivalent to graph coloring and distributed task allocation problems. © 2010 Wiley Periodicals, Inc. Complexity 16: 17–26, 2011 Key Words: complex adaptive systems; Hopfield networks; self-organization; optimization; associative memory
LOCAL CONSTRAINT SATISFACTION AND ASSOCIATIVE MEMORY Corresponding author: Richard A. Watson, Natural Systems Group, Electronics and Computer Science, University of Southampton, SO17 1BJ, United Kingdom (e-mail: raw@ecs.soton.ac.uk)
Many natural dynamical systems have behaviors that can be understood as the local minimization of an energy or potential function [1]. The Hopfield network [2] is a well-understood exemplar of such
dynamical systems, exhibiting only point attractors, that has provided a vehicle for studying dynamical systems across many disciplines. In this article, we investigate the interaction of two well-known properties of complex systems that have each been independently well studied in the Hopfield network: (i) the energy minimization behavior of dynamical systems [2], which can be interpreted as a local optimization of constraints [3, 4], and (ii) Hebbian learning [5], with its capacity to implement associative memory [2, 6]. The former is analogous in some circumstances to the behavior of multiple autonomous agents in a complex system, such as servers in a grid computing system or people in a social network, attempting to maximize productivity or consensus using local rules, given intrinsic pairwise constraints. The latter is generally assumed to be relevant only to neural networks and cognitive systems, but in this article we introduce the idea of implementing associative memory in a distributed complex adaptive system and discuss its effects on system behavior. The dynamics of a Hopfield network, consisting of N discrete states s_i = ±1, can be described by updates to individual states:
" si ðt þ 1Þ ¼ u
N X
# xij sj ðtÞ
(1)
j
where x_ij are elements of the connection matrix X, and θ is the Heaviside threshold function (taking values -1 and +1 for negative and positive arguments, respectively). The Hopfield network is run by repeatedly choosing a unit, i, at random and setting its state according to the above formula. Hopfield showed that if the connection matrix is symmetric, x_ij = x_ji, and under suitable constraint on the self-weights (here x_ii = 0), all trajectories described by Eq. (1) converge on point attractors, which are minima of the energy function given by:

$$ E_S = H(S(t), X) = -\sum_{ij}^{N} x_{ij}\, s_i(t)\, s_j(t) \qquad (2) $$

Consequently, one can describe the asymptotic behavior of such a network in terms of a process that locally minimizes this function. The energy function [Eq. (2)] intuitively corresponds to the sum of "tensions" in all state variables, or the degree to which influences from other state variables act to oppose the current state. A state change under Eq. (1) necessarily resolves more constraints than it violates (under the Hopfield conditions), i.e., creates a net reduction in tensions and reduces the total energy of the system. Minima in this function thus correspond to attractors in the network dynamics that are locally optimal resolutions of the opposing influences between variables, or of the system's constraints [3, 4] (see also the stochastic counterpart, the Boltzmann machine [7, 8]). However, networks with interactions that are difficult to resolve simultaneously create a dynamical system that has a large number of local attractors. In this case, when the state configuration of the system is set to an arbitrary initial condition and allowed to relax to an attractor, it will generally not result in a configuration that is globally minimal in energy or a globally optimal resolution of constraints [9].

In a quite unrelated scenario, training a dynamical system to have a particular energy function may be interpreted as a model induction process, which takes as input a set of points in configuration space, "training patterns," and returns a network which "models" those points by exhibiting point attractors that correspond to those configuration patterns. The system may then act as an associative or content-addressable memory [2], which takes as input a partially specified or corrupted input pattern and "recalls" the training pattern that is most representative of that input pattern. A Hopfield network may be trained to implement such a dynamical system via Hebbian learning [5], i.e., the distributed application of Hebb's rule to all connections in the system (i.e., the change in weight, Δx_ij = δ s_i s_j, δ > 0). That is, for all x_ij, i ≠ j:

$$ x_{ij}(t+1) = x_{ij}(t) + \delta\, s_i(t)\, s_j(t) \qquad (3) $$

where δ > 0 is a constant controlling the learning rate. During training, s_i and s_j represent the states of a given training pattern, and each pattern in the training set is presented in turn repeatedly. Although the energy minimization behavior of the Hopfield network and its interpretation as a local optimization process is well known [1, 3, 4], and similarly, the ability of Hebbian learning to implement an associative memory of a set of training patterns and "recall" them or "recognize" them from noisy or partial examples is also well known [2], the idea of combining these behaviors in the same network may seem unnatural. In the former, the weights of the network take fixed values that represent the constraints between variables in a combinatorial optimization problem, the objective is to discover configurations that optimize the satisfaction of these constraints, and local optima in this function are a hindrance to global optimization. In contrast, in training an associative memory, the weights of the system are initially neutral (x_ij = 0) but change over time such that local optima are created that represent the patterns to be stored. These seem like incompatible objectives [10]. For example, if a network is being used to find solutions to a combinatorial optimization problem, then it does not obviously make sense to change the weights that represent the problem. Can it be
useful for a network to store patterns at the same time as recalling patterns? In this article, we show that under certain conditions the application of associative memory and repeated energy minimization in the same network on different timescales creates a positive feedback process that significantly improves the constraint satisfaction ability of the system, i.e., enhances its ability to find configurations that minimize constraints between system variables and globally minimize energy. We refer to this as a "self-modeling" complex adaptive system. The concurrent evolution of state dynamics and changes in weights has been studied in the Hopfield network previously [11], and more generally, the notion that network topology affects behaviors on a network and, vice versa, that behaviors on a network may affect network topology, is gaining increasing attention [12]. This article illustrates conditions where self-organization of topology in an adaptive network alters its ability to minimize energy and hence its optimization capabilities.
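As a concrete illustration of Eqs. (1)–(3), the following minimal Python sketch implements asynchronous state updates, the energy function, and Hebb's rule. The function names and structure are ours, for illustration only, and are not taken from the authors' code; the tie-breaking convention at zero local field is likewise our own choice.

```python
import numpy as np

def relax(x, s, rng, steps):
    """Asynchronous updates, Eq. (1): pick a unit at random and set
    s_i = theta(sum_j x_ij s_j), theta the Heaviside threshold."""
    n = len(s)
    for _ in range(steps):
        i = rng.integers(n)
        s[i] = 1 if x[i] @ s >= 0 else -1   # zero field mapped to +1 (our convention)
    return s

def energy(x, s):
    """Hopfield energy, Eq. (2): E = -sum_ij x_ij s_i s_j."""
    return -(s @ x @ s)

def hebbian_update(x, s, delta):
    """Hebb's rule, Eq. (3): x_ij += delta * s_i * s_j for all i != j."""
    dx = delta * np.outer(s, s)
    np.fill_diagonal(dx, 0.0)   # keep self-weights x_ii = 0
    return x + dx
```

Running `relax` alone performs the local constraint optimization of Eq. (1); calling `hebbian_update` between relaxations is what turns the same network into an associative memory of its own behavior.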
A "Self-Modeling" Dynamical System We consider systems with the following conditions: (1) the initial dynamics of the system (given the initial connections between variables) exhibit multiple point attractors; (2) the system configurations are repeatedly relaxed from different random initial conditions such that the system samples many different attractors on a timescale where connections change slowly; and (3) the system spends most of its time at attractors. The first condition is consistent with scenarios where the initial network represents a difficult optimization problem [3, 4]. To satisfy condition (2), the system takes a random state configuration, R ∈ {-1, 1}^N, every τ time steps (state updates). We refer to the behavior of the network between these perturbations as a relaxation of the network. This effects multiple attempts at solving the optimization problem. The second condition also asserts that δ is sufficiently small that the distribution of attractor states visited changes slowly; we show examples of this below. The third condition asserts that τ ≫ t*, where t* is the number of time steps required to reach a local attractor state. Under these conditions, the state configurations that are experienced most often will be the attractor states of the system's own dynamics, and as changes to connections are slowly accumulated, these modifications to the network constitute an associative memory of its own attractor states. What does it mean for a system to "learn" its own attractor states? From a neural network learning point of view, a network that forms a memory of its own attractors is a peculiar idea (indeed, the converse is more familiar [13]). Forming an associative memory means that a system
forms attractors that represent particular patterns or state configurations. For a network to form an associative memory of its own attractors therefore seems redundant; it will be forming attractors that correspond with attractors that it already has. However, in accumulating weight changes that constitute an associative memory of the original attractors, the system will nonetheless alter its attractors; it does not alter their positions in state configuration space, but it does alter the size of their basins of attraction (i.e., the set of initial conditions that can lead to a given attractor state via local energy minimization).¹ Specifically, the more often a particular state configuration is visited, the more its basin of attraction will be enlarged, and the more it will be visited in future, and so on. Because attractor basins jointly cover the entire configuration space, it must be the case that some attractor basins are enlarged at the expense of others. Accordingly, attractors that have initially small basins of attraction will be visited infrequently, and as the basins of other attractors increase, these small basins will shrink. Eventually, with continued positive feedback, one attractor will out-compete all others and there will be only one attractor remaining in the system. However, what has this got to do with resolving the constraints that were defined in the original weights of the system? To understand the relationship between the original constraints of the system and the new dynamics of the system given its learned connections, it is informative to define the "true" or original energy, E⁰_S, which quantifies the degree to which a configuration of the system successfully resolves the original constraints between problem variables, using a_ij ≡ x_ij(t = 0), as follows:
$$ E_S^0 \;\equiv\; H(S, X(t{=}0)) \;=\; -\sum_{ij}^{N} a_{ij}\, s_i\, s_j \qquad (4) $$
It is this original energy function (solid curve, Figure 1) that we are interested in minimizing to resolve the true constraints between problem variables. In contrast, x_ij and Eq. (2) define a modified or augmented energy function (dashed curves, Figure 1), which determines the behavior of the system at any given time, as per Eq. (1). The behavior of the system given the modified connections will, in general, be different from that given the original connections. Thus, the true energy of configurations that the system
¹The application of Hebbian learning does not "form" a memory de novo, since the "learned" attractors were already present in the system's initial dynamics. Nonetheless, the modifications that accumulate to these weights constitute a memory in the proper sense that they represent state configurations that the system has visited in the past (and generalizations thereof).
FIGURE 1
Schematic overview of the relationship between the original energy function and the modified energy function. (a) The original connections in the system determine the distribution of points (dots) in state space (here represented one-dimensionally) that the system is likely to visit. (b) As the connections of the network are slowly modified, forming an associative memory of the distribution in (a), the energy function and hence the distribution of points visited is altered. The new energy function (dotted) is a simplified and generalized model created by the associative memory, which may remove small basins and merge or enlarge others. (c) As attractors in the modified energy function compete with one another, the learned model of the system’s own dynamics becomes an increasingly simplified caricature of its original behavior and eventually all local optima are removed. The final distribution of points found at the single attractor of the new energy function corresponds to a state configuration that has especially low energy in the original energy function (solid curve).
reaches may change over time, not because the true energy of any given configuration is different, but because the distribution of configurations that are visited is different. In particular, if only one attractor state remains, we want to know what the energy of this state configuration is under Eq. (4). One might expect, given naïve positive-feedback principles, that it would have the mean or perhaps modal energy of the attractor states in the original system, but this is not the case. To understand whether the competition between attractors in a self-modeling system enlarges attractors with especially low true energy or not, we need to understand the relationship between attractor basin size and the energy of their attractor states. At first glance, it might appear that there is no special reason why the largest
attractor should be the "best" attractor; after all, it is not generally true in optimization problems that the basin of attraction for a locally optimal solution is proportional to its quality. However, in fact, for systems that are additively composed of many low-order interactions, existing theory tells us that this is highly probable. Specifically, in systems that are built from the superposition of many symmetric pairwise interactions, the depth (with respect to energy) of an attractor basin is positively related to its width (the size of the basin of attraction). A robust relationship between minima depth and basin size [14] is complicated by the possibility of correlations between minima [15], but minima depth and basin size are, in general, strongly correlated on average, as evidenced by recent numerical work [16–18]. Accordingly, the global minimum is likely to have the biggest basin of attraction. One must not conflate, however, the idea that the global optimum has the largest basin with the idea that it is easy to find the global optimum: in particular, the global optimum may be unique, whereas there will generally be many more attractors that lead to inferior solutions. The basins of these suboptimal attractors will collectively occupy much more of the configuration space than the basin of the global optimum. Given that low-energy attractors have larger basins than high-energy attractors, they are visited more frequently and therefore out-compete high-energy attractors in a self-modeling system. Thus (in the limit of low learning rates, such that the system can visit a sufficient sample of attractors), we expect that when a dynamical system augments its dynamics with an associative memory of its own energy minimization behavior, it will produce a dynamics with ultimately only one attractor, and this attractor will correspond to a minimization of constraints between variables in the original system that is likely to be near globally optimal. This is depicted schematically in Figure 1, but basins of attraction in real systems may be much more complex. Although this behavior will be reliable on average, it should be clear that the system is sensitive to initial random events. Accordingly, when the learning rate is too high, an arbitrary local optimum will be reinforced.
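Putting the pieces together, the self-modeling protocol described above amounts to repeated relaxation from random restarts with a slow Hebbian update at each sampled attractor, while the true energy is always scored against the original weights [Eq. (4)]. A hedged sketch reusing the helpers defined earlier (parameter names are ours):

```python
def self_model(a, relaxations, tau, delta, seed=0):
    """Repeated relaxation with slow learning. `a` holds the original
    constraints; `x` starts as a copy and slowly accumulates Hebbian
    changes, so the basins of low-energy attractors grow over time."""
    rng = np.random.default_rng(seed)
    n = a.shape[0]
    x = a.astype(float).copy()
    true_energies = []
    for _ in range(relaxations):
        s = rng.choice([-1, 1], size=n)    # random restart R, condition (2)
        s = relax(x, s, rng, tau)          # tau >> t*: reach an attractor, condition (3)
        x = hebbian_update(x, s, delta)    # learn the visited attractor
        true_energies.append(-(s @ a @ s)) # E^0_S under the ORIGINAL weights, Eq. (4)
    return x, true_energies
```

Keeping `delta` small relative to `1/relaxations` corresponds to the low-learning-rate limit discussed above; making it large reproduces the failure mode in which an arbitrary early attractor is reinforced.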
AN ILLUSTRATION OF A SELF-MODELING DYNAMICAL SYSTEM Here, we illustrate the effects of this self-modeling process in a simple dynamical system. First, we show that in general cases self-modeling can cause a system to find low-energy configurations more reliably over time, and then we show that it can find low-energy solutions faster in some scenarios. The state dynamics and connection dynamics are defined above [Eqs. (1)–(3) and the previous section]. We examine a system where each initial connection of the system, a_ij ∈ {-1, 1}, takes the value -1 or 1 with equal probability. Figure 2 illustrates how the dynamics of
FIGURE 2
Self-modeling in a random problem structure. (a) Ten example trajectories of system behavior before learning (N = 100, relaxation length 10N), (b) attractor states visited [end points of curves in (a)] without learning (relaxations 1–1000) and with learning (relaxations 1001–2000, δ = 0.001/10N), (c) example trajectories after learning (note that energy minimization in the modified energy function can result in transient increases in true energy), and (d) histogram of attractor energies before and after learning, showing that after learning the system finds one of the lowest-energy configurations reliably, i.e., from any initial condition.
the system are changed by the application of a self-modeling associative memory. The original distribution of attractor energies [Figure 2(c), relaxations 1–1000] shows that this problem, built with random constraints, has many locally optimal solutions when each node acts to minimize constraint violations independently. However, Figure 2 also shows that good solutions to this problem can be found more reliably with this self-modeling approach. In an optimization framework, the initial weights of the system represent a weighted Max-2-SAT problem. Each a_ij s_i s_j term represents a clause that is satisfied when a_ij s_i s_j > 0 and unsatisfied when a_ij s_i s_j < 0. Thus, each weight dictates whether the two problem variables it connects should have the same sign (a_ij > 0) or different signs (a_ij < 0), and the magnitude of a_ij denotes the importance of satisfying this constraint. The objective is to find an assignment to the state variables that maximizes the number of satisfied clauses, weighted by their importance. Weighted Max-SAT includes Max-SAT, and the maximization problem Max-2-
SAT is NP-hard [19] and equivalent to other well-known problems. For example, the system can be interpreted as a distributed constraint optimization problem such as graph coloring [19]. That is, each node in the network represents an area to be colored with one of two colors (-1/+1), and the edges in the network represent constraints with other areas, determining whether a connected area should be colored the same (a_ij > 0) or differently (a_ij < 0). The objective is to minimize the number of constraint violations. Finally, to take a slightly more applied example, the system is also equivalent to a simple distributed resource allocation problem such as grid or cloud computing [20]. Suppose each processing node in a network receives jobs of two types at an equal rate from a user. At any one time, however, a processor can only service jobs of one type, and jobs of the other type are sent to other connected processors of that node in proportion to a fixed weighting (a_ij) for that pair of nodes (e.g., inversely related to distance, to
minimize communication costs). To keep things simple, jobs are only redirected in this manner once, i.e., one "hop." Each processor seeks to service the most jobs possible and adopts the servicing mode that services the majority of jobs it receives. The mode it adopts is therefore sensitive to the weighting with each other processor and the modes currently adopted by those processors (in our implementation, all jobs are of the same size, requiring 100% of a processor for one time step, and all processors have the same capabilities, but in principle such symmetries may be relaxed). In general, this type of system may exhibit many locally optimal configurations where no processor wants to change mode, but the number of jobs arriving at processors that are in the incorrect mode is not globally minimal. This maps to a distributed graph-coloring problem where connected nodes will ideally adopt complementary services/colors (a_ij < 0) and, in each case where this is not achieved, the cost is proportional to the magnitude of a_ij for each job that is forwarded (but remains unserviced).
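Under this reading, scoring a candidate configuration is just a weighted count of satisfied and violated pairwise clauses. The helper below (our naming, illustrative only) makes the Max-2-SAT/graph-coloring interpretation explicit:

```python
def weighted_satisfaction(a, s):
    """Score a configuration: a_ij*s_i*s_j > 0 means clause (i, j) is
    satisfied (e.g., no coloring clash); |a_ij| is the clause weight."""
    n = a.shape[0]
    satisfied = violated = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            term = a[i, j] * s[i] * s[j]
            if term > 0:
                satisfied += abs(a[i, j])
            elif term < 0:
                violated += abs(a[i, j])   # e.g., cost of a forwarded, unserviced job
    return satisfied, violated
```

Maximizing the weighted satisfied count is equivalent to minimizing the true energy of Eq. (4), since E⁰_S is (up to a constant) the violated weight minus the satisfied weight.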
RECALL VERSUS OPTIMIZATION In other instantiations of the system, a more surprising result can be observed. We examine the number of trials in which a system with learning finds a lower-energy configuration than the same system run under the same conditions without learning. When this happens, it indicates that the learning process is enabling the network to find better solutions faster, even for the first time, and not merely to recall good solutions that have already been visited. Two properties are necessary to observe this. First, the problem must admit good solutions that are difficult for local search (i.e., a single relaxation of the system) to find. In this case, we can create a problem that is difficult for local search by introducing modularity. Specifically, we define a nearly decomposable [10, 21, 22] modular structure (or subdivided network [23]) (30 modules of five variables each, N = 150) where intramodule connections are much stronger than intermodule connections (i.e., |a_ij| = 1 if ⌈i/5⌉ = ⌈j/5⌉, i ≠ j; |a_ij| = 0.01 otherwise). Strong local connections and weak intermodule connections have the effect of producing many local optima that are distant in Hamming space [22, 24], and the balance of these weights can be used to control the relative size of the basin of attraction for the global optima and local optima [10]. Second, local optima that are found by local search must reveal some regularity that associative learning can exploit. This is straightforwardly introduced by biasing the initial connections/constraints. Specifically, here we use a_ij > 0 with probability 0.8 (rather than 0.5 as in the previous case), creating some consistency in the problem and increasing the likelihood that local optima contain partial solutions that coincide with one another.
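These nearly decomposable instances are easy to generate. The sketch below reproduces the stated construction (30 modules of five variables, unit-magnitude intramodule weights, weak intermodule weights, and a 0.8 bias toward positive signs); the function and parameter names are ours:

```python
def modular_problem(modules=30, size=5, weak=0.01, p_pos=0.8, seed=0):
    """Symmetric constraint matrix: |a_ij| = 1 within a module,
    |a_ij| = weak between modules; sign is +1 with probability p_pos."""
    rng = np.random.default_rng(seed)
    n = modules * size                        # N = 150 with the defaults
    signs = np.where(rng.random((n, n)) < p_pos, 1.0, -1.0)
    module = np.arange(n) // size             # module index of each variable
    mag = np.where(module[:, None] == module[None, :], 1.0, weak)
    a = np.triu(signs * mag, k=1)             # upper triangle, zero diagonal
    return a + a.T                            # symmetrize: a_ij = a_ji, a_ii = 0
```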
We conducted 100 trials on different random instances of this class of system. Each trial compares one run with learning and one run without, with 300 relaxations of each (the length of each relaxation is 10N state updates). The same 300 random initial conditions and the same random order of state updates are used for the learning and nonlearning runs of each trial; thus, differences between learning and nonlearning runs can only be observed if learning changes the basins of attraction (δ = 0.00025/10N). In 90 of the trials, the learning system finds a lower-energy configuration than the nonlearning case (see Figure 3), and there are no trials where the nonlearning system finds a superior solution to the learning system. Thus, the learning system not only finds low-energy configurations with greater reliability over time but, in a given number of relaxations, finds lower-energy configurations than the nonlearning case. Finding these low-energy configurations is therefore not simply a matter of recalling good configurations that have already been visited by chance. The significance of this modular problem structure warrants some discussion: consider a weighted Max-2-SAT problem where problem variables are clustered into subsets exhibiting strongly weighted clauses, with weakly weighted clauses between clusters. This corresponds, for example, to a system where subgroups of processor nodes have high-bandwidth connections between them, such that the majority of job redirection occurs within subgroups, and hence coordination of processors within a subgroup is more important than coordination between subgroups. The strongest constraints have a large impact on individual node energy, and state changes respond reliably to these constraints. However, this then makes it difficult/unlikely for the untrained system (i.e., before learning) to respond to/satisfy the remaining dependencies between clusters even though, collectively, their effect is still significant for the system as a whole. However, when locally optimal configurations have some consistency with one another that associative learning can exploit, initially weak and unreliable intergroup coordination can be gradually strengthened and reinforced. This means that locally optimal configurations tend to be "variants on a theme," exhibiting partially reliable subpatterns. For example, this might be created if there is a biased pattern of incoming job types across the network of processors, creating a consistency in the pattern of satisfied and unsatisfied constraints that occur in a sample of locally optimal solutions. In a similar manner, the bias we use in this example creates a weak consistency in the resolution of constraints between modules. Although this consistency is not revealed in any one relaxation, it can be recognized and exploited by generalizing over many relaxations [this bias toward states that agree does not make the solution trivial, however; the energy of the state con-
FIGURE 3
Self-modeling in a modular problem structure. Example trial of the nonlearning system (relaxations 1–300) and the learning system (relaxations 301–600) in a nearly decomposable system. The solid line is the best configuration found without learning. After 250 relaxations with learning, the system converges on a single attractor that is superior to all attractors found without learning over 300 relaxations [it is also superior to the energy of the all-1s state configuration (broken line), the energy attainable as if by ignoring the negative connections].
figuration where all states agree is not optimal (Figure 3, broken line)].
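In outline, the paired comparison can be reproduced by running the learning and nonlearning conditions from identical random seeds, so that any difference in the best configuration found is attributable to the learned changes in basin structure. A sketch under the same assumptions as the earlier helpers; setting the learning rate to zero recovers the nonlearning run with an identical random stream:

```python
def paired_trial(a, relaxations=300, delta=0.00025, seed=0):
    """Best true energy with and without learning, from shared seeds."""
    tau = 10 * a.shape[0]                  # relaxation length 10N state updates
    # the article's rate is delta = 0.00025/10N, i.e., delta/tau here
    _, with_learning = self_model(a, relaxations, tau, delta / tau, seed=seed)
    _, no_learning = self_model(a, relaxations, tau, 0.0, seed=seed)
    return min(with_learning), min(no_learning)
```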
DISCUSSION AND RELATED WORK Given that in a distributed complex adaptive system there is no central mechanism to store and reapply the best result of previous experience, a mechanism that causes a dynamical system to increase the probability or reliability of visiting good configurations that have been visited in the past is significant for many types of engineered complex adaptive systems [25–27]. Importantly, the mechanism demonstrated above is extremely simple and completely distributed. Updates to connections depend only on the states of the two variables they connect, and the method is therefore implementable in distributed complex adaptive systems where centralized optimization methods (calculating adjustments based on global information) are inapplicable. Its suitability also relies on only weak assumptions about the domain, i.e., systems built from the superposition of many pairwise constraints. Conceptually, the idea of using neural network mechanisms to enhance the performance of complex adaptive systems suggests a literally connectionist way of thinking about adaptation in complex systems, such as ad hoc communications networks or grid computing, that draws attention away from the intelligence of individual nodes (Watson et al., submitted, [48]).
Optimization Ability The initial results above illustrate an ability to recall good solutions that have already been visited, but the ability to find better solutions faster in the latter results is more intriguing. The illustrations with nearly decomposable systems show that the learning system is finding low-energy configurations on a timescale in which the nonlearning system has not yet visited them. It must therefore be enlarging the basin of attraction for these configurations before they are visited for the first time. We conclude that the system is, in effect, predicting the location of superior solutions by generalizing the patterns observed [47] in a stochastic sample of inferior local optima (Watson et al., submitted, [48]). In particular, "spurious attractors," although generally considered a nuisance in associative memory research [13, 28], in fact represent a simple form of generalization, producing new attractor states that are new combinations of features (subpatterns) observed in the training patterns [29]. This enables the globally optimal attractor to be enlarged even though it has not yet been visited/used as a training sample. However, this will not work well unless common subpatterns of local optima indicate the position of the global optima or other superior optima. This is not universally true in optimization problems, and this limitation is related to the observation that Hebbian learning can enhance classification [30] only when the intrinsic patterns of self-similarity in the data support the classification that needs to be learned. However, we suggest that for a problem that can be described in terms of weighted pairwise constraints, as per the premise of these investigations (and certainly systems with modular structure, as illustrated), this relationship between local and global optima is a reliable heuristic. The scalability of an associative optimization process based on related principles is shown to be algorithmically superior to local search in a formal sense [24, 31] (see also Ref. [32]). However, it is not the aim of the current investigations to suggest that this provides a robust or universal optimization method in general [33]; our aim is only to use an optimization framework to demonstrate how this subtle, and completely distributed, self-modeling process influences the self-organization and hence the future behavior of a complex adaptive system.
Spontaneous Self-Organization in Systems of Autonomous Agents Hebb’s rule is an extremely simple rule: it merely applies positive reinforcement to the current state correlations. In the above experiments, we have mandated that Hebb’s rule is applied to each connection, but if agents in a complex adaptive system were free to modify connections with other agents in the manner that suited their own self-interest, in which direction would they change them? In
fact, it is straightforward to show that Hebb's rule is equivalent to a rule that changes connections in the direction that reduces the strength of constraints opposing the current state and strengthens connections that support the current state; that is, changes to connections that reduce the energy of the current state configuration are necessarily Hebbian, i.e.,
$$ -\sum_{ij}^{N} \left(x_{ij} + \Delta x_{ij}\right) s_i s_j \;<\; -\sum_{ij}^{N} x_{ij}\, s_i s_j \;\Leftrightarrow\; \operatorname{sign}(\Delta x_{ij}) = \operatorname{sign}(s_i s_j) $$
This means that if an agent is motivated to choose its behavior to maximize a utility function that is a weighted sum of pairwise interactions with other agents (analogous to minimization of the energy function above), then if that agent also has the ability to slowly change connections with other agents [25, 34] in a manner that maximizes that same utility function, it will necessarily do so in a manner that is Hebbian. This is an observation that we are developing in related work on complex adaptive systems (Watson et al., submitted, [48]), social networks [35], and also in the context of coevolving species in an ecosystem, where species may evolve the coefficients of a Lotka–Volterra system [36] (see also Ref. [37]) or evolve symbiotic relationships [38]. This connects the current work with concepts we refer to as "social niche construction" [39–42]: the idea that organisms manipulate their social context in a manner that modifies selection on their own social traits. In a different domain, recent results [43] have investigated how the evolvability of a population changes over time when it is subjected to a fluctuating but structured environment (we achieve the same conditions using repeated relaxation from random initial conditions in a static but structured environment [44]). Parter et al. find that organisms develop a "memory" of their evolutionary history and observe that evolved networks "generalize to future environments, exhibiting high adaptability to novel goals." Wagner et al. [45] explain part of the mechanism that might be involved by referring to genetic loci that affect the correlation of phenotypic traits [46] as follows:
"Natural selection can act on [such loci] to either increase the correlation among traits or decrease it depending on whether the traits are simultaneously under directional selection or not . . . [resulting in] a reinforcement of pleiotropic effects among coselected traits and suppression of pleiotropic effects that are not selected together" [45]. This clearly describes a Hebbian modification of gene interactions and suggests intriguing parallels that we have developed elsewhere [44]. Further work is required to develop these observations, to examine the effects on systems with asymmetric connections/non-fixed-point attractors, and to examine the sensitivity to learning rate (see Ref. [10]). Also, here we have applied a very clear separation of timescales between changes to network connections and behaviors on the network (our conditions 2 and 3), a key feature that enables us to interpret the interaction of these two dynamics in a simple manner. Without some separation of these timescales, there is no useful "signal" in the configuration states that an associative memory can learn, but relaxation of these conditions deserves attention. In conclusion, the slow application of simple associative learning, given repeated relaxation, provides a fully distributed mechanism that enhances the ability of a dynamical system to resolve tensions between interdependent components of the system and to find globally optimal resolutions of constraints both more reliably and more quickly. We do not claim that this provides a strong optimization method, and existing theory already tells us a lot about when Hebbian learning can and cannot provide useful generalization. However, the consequences of turning a complex system to the task of modeling its own dynamical attractors, via a mechanism as simple as Hebbian learning, have been previously overlooked and provide a novel frame of reference for thinking about simple modes of self-organization in complex adaptive systems.
ACKNOWLEDGMENTS Thanks to Jason Noble, Seth Bullock, David Iclanzan, Adam Davies, and Ton Coolen.
REFERENCES
1. Strogatz, S.H. Nonlinear Dynamics and Chaos; Addison-Wesley: Reading, MA, 1994.
2. Hopfield, J.J. Neural networks and physical systems with emergent collective computational abilities. Proc Natl Acad Sci USA 1982, 79, 2554–2558.
3. Hopfield, J.J.; Tank, D.W. Neural computation of decisions in optimization problems. Biol Cybern 1985, 52, 141–152.
4. Hopfield, J.J.; Tank, D.W. Computing with neural circuits: A model. Science 1986, 233, 625–633.
5. Hebb, D.O. The Organization of Behavior; Wiley: New York, 1949.
6. Hinton, G.E.; Sejnowski, T.J. Analyzing cooperative computation. In: Proceedings of the 5th Annual Congress of the Cognitive Science Society, Rochester, NY, May 1983.
7. Hinton, G.E.; Sejnowski, T.J. Learning in Boltzmann machines. In: Proceedings of Cognitiva 85, Paris, France, 1985.
8. Ackley, D.H.; Hinton, G.E.; Sejnowski, T.J. A learning algorithm for Boltzmann machines. Cognit Sci 1985, 9, 147–169.
9. Tsirukis, A.G.; Reklaitis, G.V.; Tenorio, M.F. Nonlinear optimization using generalized Hopfield networks. Neural Comput 1989, 1, 511–521.
10. Watson, R.A.; Buckley, C.L.; Mills, R. The Effect of Hebbian Learning on Optimization in Hopfield Networks. Technical Report, ECS, University of Southampton: Southampton, UK, 2009.
11. Ito, J.; Kaneko, K. Spontaneous structure formation in a network of chaotic units with variable connection strengths. Phys Rev Lett 2002, 88, 028701.
12. Gross, T.; Sayama, H., Eds. Adaptive Networks; Springer: Berlin, 2009.
13. Hopfield, J.J.; Feinstein, D.; Palmer, R. Unlearning has a stabilizing effect in collective memories. Nature 1983, 304, 158–159.
14. Gardner, E. The space of interactions in neural network models. J Phys A: Math Gen 1988, 21, 257.
15. Coolen, A.C.C. On the relation between stability parameters and sizes of domains of attraction in attractor neural networks. Europhys Lett 1991, 16, 73.
16. Kryzhanovsky, B.; Kryzhanovsky, V. Binary optimization: On the probability of a local minimum detection in random search. In: Proceedings of the ICAISC, Zakopane, Poland, 2008; Springer: Berlin; pp 89–100.
17. Kryzhanovsky, B.; Magomedov, B.M.; Fonarev, A.B. On the probability of finding local minima in optimization problems. In: Proceedings of the IJCNN, Vancouver, BC, IEEE Press, 2006; pp 3243–3248.
18. Kryzhanovsky, B.; Kryzhanovsky, V.; Mikaelian, A.L. Binary optimization: A relation between the depth of a local minimum and the probability of its detection. In: Proceedings of the ICINCO-ICSO, Angers, France, INSTICC Press, 2007; pp 5–10.
19. Garey, M.R.; Johnson, D.S.; Stockmeyer, L. Some simplified NP-complete graph problems. Theor Comp Sci 1976, 1, 237–267.
20. Lim, H.C.; Babu, S.; Chase, J.S.; Parekh, S.S. Automated control in cloud computing: Challenges and opportunities. In: Proceedings of the 1st Workshop on Automated Control for Datacenters and Clouds, Barcelona, Spain, 2009; ACM Press: New York, NY; pp 13–18.
21. Simon, H.A. The Sciences of the Artificial; MIT Press: Cambridge, MA, 1969.
22. Watson, R.A.; Pollack, J.B. Modular interdependency in complex dynamical systems. Artificial Life 2005, 11, 445–457.
23. Bar-Yam, Y. Dynamics of Complex Systems; Addison-Wesley: Reading, MA, 1997.
24. Mills, R. How Micro-Evolution Can Guide Macro-Evolution: Multi-Scale Search via Evolved Modular Variation. Ph.D. Thesis, ECS, University of Southampton, Southampton, UK, 2010.
25. Pacheco, J.M.; Traulsen, A.; Nowak, M.A. Coevolution of strategy and structure in complex networks with dynamical linking. Phys Rev Lett 2006, 97, 258103.
26. Heylighen, F.; Gershenson, C.; Staab, S.; Flake, G.W.; Pennock, D.M.; Fain, D.C.; De Roure, D.; Aberer, K.; Shen, W.-M.; Dousse, O.; Thiran, P. Neurons, viscose fluids, freshwater polyp hydra-and self-organizing information systems. IEEE Intell Syst 2005, 18, 72–86.
27. Nettleton, R.W.; Schloemer, G.R. Self-organizing channel assignment for wireless systems. IEEE Commun Mag 1997, 35, 46–51.
28. Gascuel, J.-D.; Moobed, B.; Weinfeld, M. An internal mechanism for detecting parasite attractors in a Hopfield network. Neural Comput 1994, 6, 902–915.
29. Jang, J.-S.; Kim, M.W.; Lee, Y. A conceptual interpretation of spurious memories in the Hopfield-type neural network. In: Proceedings of the International Joint Conference on Neural Networks (IJCNN), Baltimore, MD, IEEE Press, Vol. 1, 1992; pp 21–26.
30. O'Reilly, R.C.; Munakata, Y. Computational Explorations in Cognitive Neuroscience: Understanding the Mind by Simulating the Brain; MIT Press: Cambridge, MA, 2000.
31. Mills, R.; Watson, R.A. Symbiosis enables the evolution of rare complexes in structured environments. In: Proceedings of the European Conference on Artificial Life, Budapest, Hungary, Springer: Berlin, in press.
32. Iclanzan, D.; Dumitrescu, D. Overcoming hierarchical difficulty by hill-climbing the building block structure. In: Proceedings of the GECCO, London, UK, 2007; ACM Press: New York, NY; pp 1256–1263.
33. Wolpert, D.H.; Macready, W.G. No free lunch theorems for optimization. IEEE Trans Evol Comput 1997, 1, 67–82.
34. Newman, M.E.J.; Barabasi, A.L.; Watts, D.J., Eds. The Structure and Dynamics of Networks; Princeton University Press: Princeton, NJ, 2006.
35. Davies, A.P.; Watson, R.A.; Mills, R.; Buckley, C.L.; Noble, J. If you can't be with the one you love, love the one you're with: How individual habituation of agent interactions improves global utility. In: Proceedings of ALIFE XII, Odense, Denmark, in press.
36. Lewis, M. An Investigation into the Evolution of Relationships Between Species in an Ecosystem. MSc Dissertation, ECS, University of Southampton, Southampton, UK, 2009.
37. Poderoso, F.C.; Fontanari, J.F. Model ecosystem with variable interspecies interactions. J Phys A: Math Theor 2007, 40, 8723–8738.
38. Watson, R.A.; Palmius, N.; Mills, R.; Powers, S.T.; Penn, A.S. Can selfish symbioses effect higher-level selection? In: Proceedings of the European Conference on Artificial Life, Budapest, Hungary, Springer: Berlin, in press.
39. Odling-Smee, F.J.; Laland, K.N.; Feldman, M.W. Niche Construction: The Neglected Process in Evolution. Monographs in Population Biology 37; Princeton University Press: Princeton, NJ, 2003.
40. Penn, A.S. Ecosystem Selection: Simulation, Experiment and Theory. Ph.D. Thesis, University of Sussex, Brighton, UK, 2006.
41. Powers, S.T.; Mills, R.; Penn, A.S.; Watson, R.A. Social niche construction provides an adaptive explanation for new levels of individuality (abstract). In: Proceedings of the Workshop on Levels of Selection and Individuality in Evolution, European Conference on Artificial Life, Budapest, Hungary, 2009.
42. Powers, S.T.; Penn, A.S.; Watson, R.A. Individual selection for cooperative group formation. In: Proceedings of the European Conference on Artificial Life, Lisbon, Portugal, Springer: Berlin, 2007; pp 585–594.
43. Parter, M.; Kashtan, N.; Alon, U. Facilitated variation: How evolution learns from past environments to generalize to new environments. PLoS Comput Biol 2008, 4, e1000206.
44. Watson, R.A.; Buckley, C.L.; Mills, R.; Davies, A.P. Associative memory in gene regulation networks. In: Proceedings of ALIFE XII, Odense, Denmark, in press.
45. Wagner, G.P.; Pavlicev, M.; Cheverud, J.M. The road to modularity. Nat Rev Genet 2007, 8, 921–931.
46. Pavlicev, M.; Kenney-Hunt, J.P.; Norgard, E.A.; Roseman, C.C.; Wolf, J.B.; Cheverud, J.M. Genetic variation in pleiotropy: Differential epistasis as a source of variation in the allometric relationship between long bone lengths and body weight. Evolution 2008, 62, 199–213.
47. Fontanari, J.F. Generalization in a Hopfield network. J Phys 1990, 51, 2421–2430.
48. Watson, R.A.; Buckley, C.L.; Mills, R. Global Adaptation in Networks of Selfish Components: Emergent Associative Memory at the System Scale. Technical Report, ECS, University of Southampton: Southampton, UK, 2009.
Static versus Dynamic Topology of Complex Communications Network during Organizational Crisis SHAHADAT UDDIN,¹ LIAQUAT HOSSAIN,¹ SHAHRIAR TANVIR MURSHED,¹ AND JOHN W. CRAWFORD²
¹Project Management Graduate Program; and ²Faculty of Agriculture, Food and Natural Resources, The University of Sydney, Redfern, New South Wales 2006, Australia
Received May 25, 2010; revised July 25, 2010; accepted August 8, 2010
The significance of temporal changes in the topology of organizational communication networks during a crisis is studied using static and dynamic social network analysis (SNA). In static SNA, the network of interactions made during an entire data collection period is studied. For dynamic SNA, shorter segments of network data are used in the analysis. Using measures of degree centrality and core-periphery analysis, the prominence of actors is characterized and compared in the aggregate network (i.e., using static topology) and in daily networks (i.e., using dynamic topology) of a complex email network in a large organization during crisis. We show that while static topology cannot capture the network behavior completely, there are particular situations where the additional description provided by dynamic analysis is not significant. The limitations of dynamic topological SNA are discussed, and we stress the importance of associating function with network structure in moving toward a more informative dynamical description. © 2010 Wiley Periodicals, Inc. Complexity 16: 27–36, 2011 Key Words: static topology; dynamic topology; social network analysis; organizational crisis; email network
1. INTRODUCTION
Our present understanding of structural analysis of social networks has been guided by the study of static topologies, which ignore the dynamic or time-evolving characteristics of the social processes that underlie these networks [1]. Although a static topology may be suf-
Correspondence to: Shahadat Uddin, Project Management Graduate Program, Room 318, PNR Building, The University of Sydney, Redfern, New South Wales 2006, Australia (e-mail:
[email protected])
ficient for analyzing a network of social interactions to investigate some scientific questions (e.g., "How does communication pattern affect the performance of a group?"), there are situations such as the spread of an epidemic or a computer virus where the static topology does not capture the dynamics of the system [2]. In such situations, the temporal behavior of social interactions between individual actors is likely to be important. However, even in this case, the temporal resolution of the sampling will result in an integration of interactions over time and will produce a series of static snapshots. The effect of temporal resolution on the conclusions of the analysis is clearly of central importance.
FIGURE 1
Illustration of static and dynamic topology of SNA, from the concepts described in Braha and Bar-Yam [17] and Clauset and Eagle [21].
Social network analysis (SNA) is the mapping and measuring of social relationships that are represented in terms of nodes and ties, where nodes represent the individual actors within the network and ties are the relationships between them [3]. The social relationships among actors may take many forms, such as an email network, a personal contact network, or global networks of organizations. SNA has been successfully applied to understand email networks and their participants [4] by evaluating the locations of actors in the network. From the perspective of the temporal resolution of network data used for SNA, there are two extreme topologies: static and dynamic. In static SNA, methods are applied to network data that are aggregated over the entire observation time [5]. In contrast, dynamic SNA is applied to a series of smaller intervals of data collected over the period, to study how network interactions change over time [2, 6]. For example, a dynamic SNA could be based on a network analysis of a university's email communications network, binned into daily, weekly, or monthly intervals, over a 5-year period. By comparison, a static SNA considers only one network: the network of links aggregated over the 5 years. Figure 1 illustrates schematically the difference between these two types of SNA. In static SNA, methods are applied to the aggregated network (i.e., the upper shaded network inside the square) at the end of day 3 (from Figure 1). In contrast, in dynamic SNA, the methods are applied to the networks for each day (i.e., the three lower shaded networks in Figure 1); there is no aggregated network in this case. In this research, we analyzed a complex email communication network of a large organization during a crisis event using the methods and measures of SNA. Organizations are often viewed as complex systems by complexity researchers because both organization and complex sys-
tem theories deal with similar issues such as adaptation and evolution over time, emergence of new forms, and naturally occurring patterns in systems [7]. The email networks of any large organization also exhibit basic properties of complex systems: (i) like a complex system, email networks consist of a large number of individuals who interact with each other; (ii) like complex systems, they can show emergent behavior in the pattern of communication that evolves through the collective interactions of a group of individuals working toward a common goal or in a project [7, 8]. Organizational crisis has been defined in many ways, including organizational mortality, bankruptcy, and a dramatic fall in market value [9, 10]. According to Weitzel and Johnson [11], an organizational crisis is a state in which a firm fails to anticipate, recognize, avoid, neutralize, or adapt to external or internal pressures that threaten its long-term survival. Sheppard [12] defined a crisis as "a critical and irreversible loss by the system" and also posited that an organization dies when it stops performing the functions that we would expect from it. A drastic form of critical loss occurs when a firm moves into bankruptcy, as in the case of the Enron Corporation in the final quarter of 2001. During the course of operations, many types of organizational communication network evolve, including the face-to-face network, the social network, inter- and intra-departmental networks, the hierarchical communication network, and email networks among different actors within and outside the organization. Among these different types of communication networks, recent research suggests that email networks constitute the most useful proxy for the underlying communication networks within organizations. Guimera et al. [13] argued that the email network provides an important insight into organi-
zational interactions amongst individuals. As described by Tyler et al. [14], an email network is a tantalizing medium for research and offers a promising resource for extracting and visualizing communication structure. The use of email networks to bring out hidden patterns of interactions has also been suggested by other researchers [15, 16]. In this article, first, we provide a systematic review of the study of email network topology. Second, we introduce and describe the source of data for the study and discuss the measures to be used. The research experiments are described in the following section, and, finally, we discuss the results and conclusions and suggest some priorities for future research.
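To make the static/dynamic distinction of Figure 1 concrete, one can bin timestamped email records by day and compute a centrality measure both per daily network and on the aggregate. The Python sketch below is illustrative only: it assumes a simple list of (sender, recipient, day) tuples, which is our own simplification rather than the actual corpus format analyzed later in this article.

```python
import networkx as nx
from collections import defaultdict

def daily_and_aggregate_centrality(records):
    """records: iterable of (sender, recipient, day) tuples."""
    daily = defaultdict(nx.DiGraph)
    aggregate = nx.DiGraph()
    for sender, recipient, day in records:
        daily[day].add_edge(sender, recipient)   # dynamic: one graph per day
        aggregate.add_edge(sender, recipient)    # static: one graph overall
    daily_rank = {day: nx.out_degree_centrality(g) for day, g in daily.items()}
    return daily_rank, nx.out_degree_centrality(aggregate)
```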
2. REVIEW OF TOPOLOGICAL STUDY OF EMAIL NETWORKS We have become increasingly reliant on information technology (IT), due in part to the unprecedented advancements and growth in the area. Organizational communication has become largely dependent on the exchange of electronic messages [17] compared with other forms of communication. For researchers, this IT-enabled dependency has also created a useful proxy (i.e., the email communication network) for the organizational communication network. Although there have been many studies relating to different perspectives on email network analysis, including the effect of the structural positions of individuals in the network on their performance [18, 19], very few investigate the email network from both a static and a dynamic topological perspective. Though there are numerous studies based on the static topology (i.e., considering only the aggregate network) that capture actors' dynamic behavior in a network [20–22], perhaps the first attempt to consider and compare both static and dynamic analysis of an email network was made by Braha and Bar-Yam [2]. As our study is originally inspired by their seminal work, we discuss it and its conclusions briefly in the following paragraph. Braha and Bar-Yam [2] studied an email network of 57,158 users based on data sampled over a period of 113 days from log files maintained by the email server at a large university. In total, 447,543 messages, filtered to exclude spam and bulk email, were considered to reflect the flow of valuable information in their study. They found that all the daily networks are weakly correlated and that networks obtained on different days are substantially different from each other; each of the daily networks has a degree distribution described by a power law, characteristic of a scale-free network. They measured actor prominence, quantified by degree centrality, for each of the 113 daily networks and compared actor prominence (of the top 1000 actors in the daily ranking list) in these daily networks with the actor prominences in the network aggregated over the whole period. It was shown that the
prominence of nodes or actors within the network fluctuates widely from day to day, and that a high degree in the aggregate network (i.e., static topology) does not predict a high degree for individual days (i.e., dynamic topology). In previous work, Braha and Bar-Yam [23] found similar results from the analysis of social network data on interactions recorded from the spatial proximity of personal Bluetooth wireless devices, capturing the interactions between pairs of 80 students who were socially related in some way (e.g., students in the same school or class) over a period of 31 days in October 2004. In summary, their studies demonstrate that static SNA does not capture the dynamics of social networks and that only dynamic analysis can capture them accurately and effectively. In these studies, the analysis was carried out on a particular kind of social network that lies outside of any imposed structure. Our study differs significantly in that it concerns the analysis of social networks in the context of a complex organizational structure in a large corporation. Thus, it serves both as a distinct analysis of this type of network and as a test of the universality of the previous results of Braha and Bar-Yam [2]. The following two questions, therefore, motivate our research: (a) How do the SNA results of static topology differ from those of dynamic topology in the context of organizational email networks during crisis? (b) Can we draw general conclusions about the appropriateness of static or dynamic SNA? We address these questions using both static and dynamic SNA, measuring out-degree centrality and applying core-periphery analysis. By applying the different kinds of SNA in this way, we also explore the evolution of actor-level prominence during a crisis.
3. EMAIL DATA SOURCE AND SNA MEASURES USED FOR RESEARCH EXPERIMENTS We analyzed the modified Enron corpus, which was corrected and cleaned by Shetty and Adibi [24]. This modified corpus contains 252,759 email messages from 151 top-management-level employees, while the original Enron corpus has 619,446 email messages from 158 users and was released by the Federal Energy Regulatory Commission in May 2002. We consider email messages in the 6 months running up to the collapse of Enron, from July 2001 to December 2001. This period covers the emergence of the organizational crisis and the subsequent collapse in December 2001 [25]. After excluding weekends and public holidays, we used an observation period of 131 days for analysis purposes. One of the primary uses of graph theory and network analysis is the identification of the most important or prominent actor(s) within a social network. The prominence of actors is described from two perspectives:
TABLE 1
Number of Times the Top 10 Actors Appear in the Daily Top-Rank List of Size 10 Over the Period of 131 Days. The last column reflects the presence or absence of each actor in the top-ranking list of the aggregated network.

Actor   Times in Daily Top-10 List   Percentage of Days   In Top 10 of Aggregated Network?
58      85                           64.88                Yes
21      74                           56.49                Yes
90      69                           52.67                Yes
52      62                           47.33                Yes
95      61                           46.57                Yes
81      50                           38.17                Yes
27      50                           38.17                Yes
80      49                           37.41                Yes
89      41                           31.30                Yes
142     34                           25.95                No
(a) based on direct network relations, which reflect the popularity and activity of an actor with respect to others in the network; (b) based on the structural positions of actors in the network. The social network literature has established measures for these two rationales of actor prominence: in-degree and out-degree centrality [3] for direct network relations, and core-periphery measures [26] for the structural positions of actors. In this article, we use degree centrality and core-periphery measures to compare the static and dynamic topology in SNA. Degree centrality is the total number of links that a particular node has with others in the network. In this research, we consider only out-degree centrality, that is, the number of emails sent by each actor. Core-periphery measures consider all actors to belong to a single group and classify each as either a core or a peripheral member of the group by calculating a numerical coreness value for every actor [26].
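To make these measures concrete, the following sketch (ours, not the authors' pipeline) builds each daily network from a message log and ranks actors by out-degree using pandas and networkx; the column names 'sender', 'recipient', and 'date' are hypothetical and would need to match the actual corpus schema.

import pandas as pd
import networkx as nx

def daily_out_degree_rankings(log, top_k=10):
    # log: DataFrame with hypothetical columns 'sender', 'recipient',
    # and a datetime column 'date'.
    rankings = {}
    for day, msgs in log.groupby(log["date"].dt.date):
        g = nx.MultiDiGraph()  # parallel edges: one per email sent
        g.add_edges_from(zip(msgs["sender"], msgs["recipient"]))
        # Out-degree of a MultiDiGraph counts emails sent, which is
        # the out-degree centrality measure used in this article.
        ranked = sorted(g.out_degree(), key=lambda kv: kv[1], reverse=True)
        rankings[day] = [actor for actor, _ in ranked[:top_k]]
    return rankings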
4. RESEARCH EXPERIMENT To begin, we compared the positions (or ranks) of the most prominent actors (i.e., the actors found most often in the 131 daily rank lists) in the daily networks with their positions in the network aggregated over 131 days. Of the top 10 actors most frequently found in the top-ranking lists of size 10 in the 131 daily networks, 9 (all except Actor 142) also emerge as most prominent in the top-ranking list of the aggregated network (Table 1). From the third column of Table 1, it is evident that most of the daily prominent actors (7 of 10) appear less than 50% of the time in the top-rank lists of the daily networks. The range in percentage for a top-rank list of size 10 is [25.95%, 64.88%]. The minimum value of this range (25.95%) is small; however, it is four times higher than the percentage expected for random encounters, as calculated in Table 2. It is also evident from Table 2 that as the top-rank list size increases, the upper value of the range increases. Furthermore, actors found most often in the daily top-rank lists also appear in the top-rank list of the aggregated network regardless of the size of the top-rank list (compare columns 1 and 4 of Table 2). Next, we calculate the percentage of actors that appear in both ranking lists for each pair of daily networks, using the 131 daily top-ranking lists of all actors based on out-degree centrality; in total, there are 8515 pairs (131 choose 2) of daily networks. For a given top-rank list size, if the same actors appear in the rank lists of all daily networks, then the value of the "normalization of mean centrality overlap" will be 1.
TABLE 2
Range of the Number of Times Actors Hold Positions in the Daily Top-Rank List, the Average Expected Value for Each Actor to be in the Top-Rank List of the Daily Networks, and the Number of Actors Found Mostly Both in the Daily Top-Rank Lists and in the Top-Rank List of the Aggregated Network, for Different Top-Rank List Sizes

Top-Rank List Size   Range of Times in Daily Top-Rank List (Percentage)   Expected Times per Actor (Percentage)   Actors Found Mostly in Daily Lists Also in Aggregated List
5                    [41, 74] (31.29-51.15%)                               4.34 (3.31%)                            5
10                   [34, 85] (25.95-64.88%)                               8.675 (6.62%)                           9
15                   [47, 92] (35.88-70.23%)                               13.01 (9.93%)                           14
20                   [48, 98] (36.64-74.81%)                               17.35 (13.24%)                          18
TABLE 3
Variation of the Top-Ranking List Size and the Corresponding Number of Overlaps, for All Pairs of Daily Networks (8515 Pairs) Over the Data Collection Period and Between the Top-Rank List of the Aggregated Network and the Top-Rank Lists of the 131 Daily Networks

(a) All pairs of daily networks (8515 pairs) over the data collection period

Size   No. of Overlaps   Percentage of Centrality Overlap   Mean Centrality Overlap   Normalized Mean Centrality Overlap [0, 1]
1      513               0.000399                           0.060247                  0.060247
2      1863              0.001449                           0.21879                   0.109395
3      3998              0.003109                           0.469524                  0.156508
4      6825              0.005308                           0.801527                  0.200382
5      10110             0.007863                           1.187317                  0.237463
6      13816             0.010745                           1.622548                  0.270425
7      17844             0.013878                           2.095596                  0.299371
8      22140             0.017219                           2.600117                  0.325015
9      26583             0.020675                           3.121903                  0.346878
10     31083             0.024175                           3.650382                  0.365038
11     37651             0.029283                           4.421726                  0.401975
12     44345             0.034489                           5.207868                  0.433989
13     50899             0.039587                           5.977569                  0.459813
14     57587             0.044788                           6.763006                  0.483072
15     64697             0.050318                           7.598004                  0.506534
16     71498             0.055607                           8.396712                  0.524795
17     78243             0.060853                           9.188843                  0.54052
18     84392             0.065636                           9.910981                  0.55061
19     90154             0.070117                           10.58767                  0.557246
20     95818             0.074522                           11.25285                  0.562643

(b) Between the top-rank list of the aggregated network and the top-rank lists of the 131 daily networks

Size   No. of Overlaps   Percentage of Centrality Overlap   Mean Centrality Overlap   Normalized Mean Centrality Overlap [0, 1]
1      19                0.000961                           0.145038                  0.145038
2      46                0.002325                           0.351145                  0.175573
3      86                0.004348                           0.656489                  0.21883
4      144               0.00728                            1.099237                  0.274809
5      166               0.008392                           1.267176                  0.253435
6      238               0.012032                           1.816794                  0.302799
7      273               0.013801                           2.083969                  0.29771
8      336               0.016986                           2.564885                  0.320611
9      413               0.020879                           3.152672                  0.350297
10     490               0.024771                           3.740458                  0.374046
11     540               0.027299                           4.122137                  0.37474
12     635               0.032102                           4.847328                  0.403944
13     712               0.035994                           5.435115                  0.418086
14     818               0.041353                           6.244275                  0.44602
15     897               0.045347                           6.847328                  0.456489
16     1034              0.052272                           7.89313                   0.493321
17     1161              0.058693                           8.862595                  0.521329
18     1196              0.060462                           9.129771                  0.507209
19     1242              0.062788                           9.480916                  0.498996
20     1317              0.066579                           10.05344                  0.502672
As the top-rank list size increases, we find that this value also increases (see the last column of Table 3). It is also evident from Table 3(a) that the number of overlaps and the percentage of centrality overlap between any two daily networks increase with the size of the top-ranking list. The relationship between the "top-rank list size" (in percentage) and the "percentage of centrality overlap" is approximately linear, as shown in Figure 2(a). This indicates that
a few actors (having high centrality) appear repeatedly in the daily top-rank lists. We also compared the top-rank list of the aggregated network with the 131 daily top-rank lists of different sizes, making 131 such comparisons (131 daily networks × 1 aggregated network). The results, presented in Table 3(b) and Figure 2(b), are similar to those obtained when considering all 8515 pairs of daily networks [see Table 3(a) and Figure 2(a)].
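The pairwise overlap statistics summarized in Table 3(a) can be sketched as follows; this is an illustration under our own naming, where daily_top_lists holds the 131 daily rankings of actors by out-degree.

from itertools import combinations

def normalized_mean_overlap(daily_top_lists, k):
    # k is the top-rank list size (1..20 in Table 3).
    overlaps = [len(set(a[:k]) & set(b[:k]))
                for a, b in combinations(daily_top_lists, 2)]
    # Mean overlap divided by k equals 1 only if the same k actors
    # head every daily list (the "normalization of mean centrality
    # overlap" described in the text).
    return sum(overlaps) / (len(overlaps) * k)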
FIGURE 2
(a) Linear relationship between the top-rank list size (in percentage) and the percentage of centrality overlap for 8515 pairs of daily networks. (b) Linear relationship between the top-rank list size (in percentage) and the percentage of centrality overlap for 131 pairs of networks (between the 131 daily networks and the aggregated network).
An analysis of the percentage of emails sent by the top 5, 10, 15, and 20 actors of the aggregated network over the data collection period of 131 days (Figure 3) shows the distribution of emails sent by those actors. The average percentage of emails sent by these actors ranges from 40.47% for the top 5 actors (3.29% of the total number of actors) to 79.40% for the top 20 actors (13.16% of the total number of actors). The percentages of emails sent from September 19, 2001 onward, to December 8, 2001, fluctuate widely compared with the rest of the data collection period. However, if the percentage of actors sending those emails during that time is taken into consideration, we find that a small percentage of actors sent at least four times more than the expected number of emails for any daily network (comparing the percentage of actors in the top list with the minimum percentage of emails sent by
them; see the legend of Figure 3). This indicates that during a period of organizational disintegration, a few actors become more prominent or central in the organizational communication network. We also test the correlation between the degree centrality of all actors for daily networks separated by 2, 3, and 4 days. Figure 4 shows the corresponding correlation coefficients as a function of time during the observation period. The average values of these correlation coefficients are 0.55, 0.57, and 0.54 for daily networks separated by 2, 3, and 4 days, respectively. With few exceptions, there is a strong and statistically significant correlation between the degree values of actors for daily networks separated by 2, 3, and 4 days. The high correlation coefficients imply that a high centrality value for an actor on a particular day makes it more likely that the same actor will have a high centrality value on
FIGURE 3
Percentage of emails sent by top 5, 10, 15, and 20 actors of the aggregated network in each of 131 daily networks.
subsequent days (i.e., the second, third, and fourth day). Therefore, there is a degree of predictability in the dynamics of the actor network. Finally, applying core-periphery analysis, we calculated the coreness values of the top 10 actors in the aggregated network. We then checked for the presence or absence of these top 10 actors in each of the six monthly top-10 lists based on the coreness values of the monthly networks (see Table 4).
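The article computes coreness with the core-periphery model of [26] (as implemented, e.g., in UCINET); the sketch below is only a common first approximation, using the principal eigenvector of the adjacency matrix as a continuous coreness score.

import numpy as np

def coreness(adj):
    # adj: symmetric adjacency matrix of a monthly or aggregated network.
    vals, vecs = np.linalg.eigh(adj)
    v = np.abs(vecs[:, np.argmax(vals)])  # principal eigenvector
    return v / v.max()  # scale so the most core actor has coreness 1

# Monthly top-10 core lists, as compared in Table 4:
# top10 = np.argsort(coreness(adj_month))[::-1][:10]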
As shown in Table 4, the top 10 actors from the aggregated network are found frequently (49 times out of 60) in the monthly top-10 lists based on coreness value. The minimum number of appearances is 4, the maximum is 6, and the average is 4.9. Other actors appear in only a few positions of the monthly top-10 lists. Notably, the average number of appearances of the aggregated network's top 10 actors in the monthly top-10 lists (4.9) is far larger than the expected value (60/151 ≈ 0.397).
FIGURE 4
Correlation coefficient values for the out-degrees of actors in daily networks separated by 1, 2, and 3 days.
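The lag-correlation experiment plotted in Figure 4 can be sketched as follows; degree_by_day is a hypothetical (days × actors) array of daily out-degrees, not part of the original analysis code.

import numpy as np
from scipy.stats import pearsonr

def lagged_degree_correlations(degree_by_day, lags=(1, 2, 3)):
    # Mean Pearson correlation between the out-degree vectors of
    # daily networks separated by each lag.
    means = {}
    for lag in lags:
        rs = [pearsonr(degree_by_day[t], degree_by_day[t + lag])[0]
              for t in range(len(degree_by_day) - lag)]
        means[lag] = float(np.mean(rs))
    return means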
TABLE 4
Static and Dynamic Topological Analysis of Core-Periphery Measures for the Aggregated Network of 131 Days and Each of the Six Monthly Networks (July 2001 to December 2001). For each of the top 10 actors by coreness value in the aggregated network, the monthly columns indicate whether the actor also appears in the top-10 actor list by coreness value of that monthly network.

Actor No   Coreness Value   July   August   September   October   November   December
117        0.410            Yes    Yes      Yes         Yes       Yes        No
21         0.400            No     Yes      Yes         Yes       Yes        Yes
58         0.288            Yes    Yes      Yes         Yes       Yes        No
50         0.186            Yes    Yes      Yes         Yes       Yes        Yes
89         0.163            Yes    Yes      No          Yes       Yes        Yes
7          0.141            Yes    Yes      Yes         Yes       Yes        No
94         0.094            Yes    No       Yes         Yes       Yes        Yes
6          0.076            Yes    Yes      Yes         No        No         Yes
142        0.058            No     Yes      No          Yes       Yes        Yes
34         0.018            Yes    No       Yes         Yes       Yes        Yes
5. DISCUSSION We studied both the static and the dynamic topology of Enron's email communication data during its crisis period. Our first experiment on actor prominence shows that static analysis cannot completely capture the dynamics of the network: a small percentage of the prominent actors of the aggregated network do not appear in the list of actors most frequently found in the 131 daily networks. The normalized mean centrality value from our second experiment likewise indicates that the dynamic behavior of the network cannot be fully captured by analysis of static networks; as the top-rank list size increases, this value (the normalized mean centrality overlap) also increases, and there is a linear relationship between the top-rank list size and the percentage of centrality overlap. The high fluctuation in the percentage of emails sent by top-ranked actors of the aggregated network in our third experiment indicates that analysis of static networks cannot capture all of the features of dynamic networks. In our fourth experiment, where correlations between daily networks separated by two, three, and four days were calculated, we found that the average correlation coefficient is statistically significantly smaller than 1. This also indicates that the behavior of individual actors changes over time, which an analysis of static networks cannot capture. Our final experiment, using core-periphery analysis, shows that some of the top-ranked actors of the aggregated list do not appear (11 times out of 60) in the six monthly top-ranked lists, which again indicates that static analysis cannot completely capture the dynamic behavior of the network.
As evidenced in all our experiments, analysis of the static network cannot capture all of the behavior observed in the analysis of dynamic networks. In this regard, our experiments show outcomes similar to the work of Braha and Bar-Yam [27]. However, in contrast to their results, analysis of the static network in our study can explain most of the behavior of the dynamic network. Moreover, we found from the analysis of both static and dynamic networks that, as the organization (i.e., Enron) went into crisis, a few actors became prominent (as measured by degree centrality) and central (as measured by core-periphery analysis) in the email communication network. The difference between the results of Braha and Bar-Yam [27] and our research on the email network of Enron may be explained by considering the nature of the network. Their network consisted of 57,158 users at a large university, who are likely to exhibit a high degree of dynamic and unstructured behavior in email communication. In contrast, the Enron email network concerned only 151 employees, who were confined within the space of the organization and were part of a hierarchical organizational structure. Furthermore, since most of the employees in the network were from the senior management of Enron, there is a degree of predictability in the structure of their communication networks that is not present in the students' email dataset used by Braha and Bar-Yam [27].
6. CONCLUSION AND LIMITATIONS OF THE STUDY In conclusion, the application of both static and dynamic network analysis to email
communication data for Enron results in similar conclusions regarding actor network prominence and core-periphery measures. This is in contrast to a similar study of a student email network and a spatially proximate personal network, where analyses of the static and dynamic networks showed different outcomes relating to actor prominence. We also found that a few actors become prominent in the communication network during a crisis period. This research is not without its limitations. First, it was conducted using email communication data from a single organization during a period of crisis. Similar data (i.e., from the same context of organizational crisis) from other organizations may or may not produce similar results. However, our results do show that the value of analyzing dynamic networks rather than static ones depends on context, and they are general in that sense. Second, we consider only two SNA measures (out-degree centrality and core-periphery analysis) to calculate actor prominence in the network. Other possible SNA measures, which we did not consider, could produce different results.
Finally, while it is important to understand the structure of the networks and how this changes over time, the ultimate goal is to understand the consequences of the observed structure. In SNA, this is usually achieved by correlating structure to measures of outcome in situations such as crisis responses. However, the application of dynamic analysis in these situations is less well developed, and so the conclusions of our work apply only to metrics of network structure. In other areas of network analysis, such as in systems biology, the consequences of network structure for function are the primary focus of the research. In these cases, links are associated with specific functions (e.g., fluxes) and models for the resultant dynamics show the fundamental importance of the temporal dimension in understanding the link between structure and network function [28]. Clearly in SNA where information flow is an emergent function, the full significance of the dynamics of the network can be appreciated only when suitable proxies for dynamics on the network are employed.
REFERENCES
1. Clauset, A.; Eagle, N. Persistence and periodicity in a dynamic proximity network. In: DIMACS Workshop on Computational Methods for Dynamic Interaction Networks, 2007.
2. Braha, D.; Bar-Yam, Y. From centrality to temporary fame: Dynamic centrality in complex networks. Complexity 2006, 12, 59–63.
3. Wasserman, S.; Faust, K. Social Network Analysis: Methods and Applications; Cambridge University Press: New York, 1994.
4. Diesner, J.; Carley, K.M. Exploration of communication networks from the Enron email corpus. Citeseer, 2005; pp 21–23.
5. Strogatz, S.H. Exploring complex networks. Nature 2001, 410, 268–276.
6. Barabasi, A.L.; Jeong, H.; Neda, Z.; Ravasz, E.; Schubert, A.; Vicsek, T. Evolution of the social network of scientific collaborations. Phys A: Stat Mech Appl 2002, 311, 590–614.
7. Anderson, P. Complexity theory and organization science. Organ Sci 1999, 10, 216–232.
8. Morel, B.; Ramanujam, R. Through the looking glass of complexity: The dynamics of organizations as adaptive and evolving systems. Organ Sci 1999, 10, 278–293.
9. Mellahi, K.; Wilkinson, A. Organizational failure: A critique of recent research and a proposed integrative framework. Int J Manage Rev 2004, 5, 21–41.
10. Probst, G.; Raisch, S. Organizational crisis: The logic of failure. Acad Manage Exec 2005, 19, 90–105.
11. Weitzel, W.; Jonsson, E. Decline in organizations: A literature integration and extension. Admin Sci Q 1989, 34, 91–109.
12. Sheppard, J.P. Strategy and bankruptcy: An exploration into organizational death. J Manage 1994, 20, 795–833.
13. Guimera, R.; Danon, L.; Diaz-Guilera, A.; Giralt, F.; Arenas, A. Self-similar community structure in a network of human interactions. Phys Rev E 2003, 68, 65103.
14. Tyler, J.R.; Wilkinson, D.M.; Huberman, B.A. E-mail as spectroscopy: Automated discovery of community structure within organizations. Inf Soc 2005, 21, 143–153.
15. Quan-Haase, A.; Wellman, B. How does the Internet affect social capital? Soc Cap Inf Technol 2004, 113–135.
16. Gloor, P.; Laubacher, R.; Dynes, S.; Zhao, Y. Visualization of communication patterns in collaborative innovation networks - analysis of some w3c working groups. ACM, 2003; p 60.
17. Byron, K. Carrying too heavy a load? The communication and miscommunication of emotion by email. Acad Manage Rev 2008, 33, 309–327.
18. Ebel, H.; Mielsch, L.; Bornholdt, S. Scale-free topology of e-mail networks. Phys Rev E 2002, 66, 35103.
19. Jeong, H.; Tombor, B.; Albert, R.; Oltvai, Z.; Barabási, A. The large-scale organization of metabolic networks. Nature 2000, 407, 651–654.
20. Milo, R.; Shen-Orr, S.; Itzkovitz, S.; Kashtan, N.; Chklovskii, D.; Alon, U. Network motifs: Simple building blocks of complex networks. Science 2002, 298, 824.
21. Jeong, H.; Mason, S.; Barabási, A.; Oltvai, Z. Lethality and centrality in protein networks. Nature 2001, 411, 41–42.
22. Bar-Yam, Y.; Epstein, I. Response of complex networks to stimuli. Proc Natl Acad Sci USA 2004, 101, 4341.
23. Braha, D.; Bar-Yam, Y. The spatial proximity raw data was collected by the MIT Media Lab. NECSI Technical Report 2005-February-01; 2005.
24. Shetty, J.; Adibi, J. The Enron email dataset database schema and brief statistical report. Information Sciences Institute Technical Report, University of Southern California, 2004.
25. Healy, P.; Palepu, K. The fall of Enron. J Econ Perspect 2003, 17, 3–26.
26. Carrington, P.J.; Scott, J.; Wasserman, S. Models and Methods in Social Network Analysis; Cambridge University Press, 2005.
27. Braha, D.; Bar-Yam, Y. Time-dependent complex networks: Dynamic centrality, dynamic motifs, and cycles of social interactions. Adapt Networks 2006, 39–50.
28. Faratian, D.; Clyde, R.G.; Crawford, J.W.; Harrison, D.J. Systems pathology—Taking molecular pathology into a new dimension. Nat Rev Clin Oncol 2009, 6, 455–464.
The Sigma Profile: A Formal Tool to Study Organization and Its Evolution at Multiple Scales CARLOS GERSHENSON1,2 1 Computer Sciences Department, Instituto de Investigaciones en Matemáticas Aplicadas y en Sistemas,
Universidad Nacional Autónoma de México; New England Complex Systems Institute, Cambridge, Massachusetts 02142; and 2 Centrum Leo Apostel, Vrije Universiteit Brussel, Krijgskundestraat 33, B-1160 Brussel, Belgium
Received April 14, 2010; revised August 12, 2010; accepted August 25, 2010
The σ profile is presented as a tool to analyze the organization of systems at different scales, and how this organization changes in time. Describing structures at different scales as goal-oriented agents, one can define σ ∈ [0, 1] (satisfaction) as the degree to which the goals of each agent at each scale have been met. σ reflects the organization degree at that scale. The σ profile of a system shows the satisfaction at different scales, with the possibility to study their dependencies and evolution. It can also be used to extend game theoretic models. The description of a general tendency on the evolution of complexity and cooperation naturally follows from the σ profile. Experiments on a virtual ecosystem are used as illustration. © 2010 Wiley Periodicals, Inc. Complexity 16: 37–44, 2011 Key Words: evolution; complexity; cooperation; organization
1. INTRODUCTION
We use metaphors, models, and languages to describe our world. Different descriptions may be more suitable than others. We tend to select from a pool of different descriptions those that fit with a particular purpose. Thus, it is natural that there will be several useful, overlapping
Correspondence to: Carlos Gershenson, Instituto de Investigaciones en Matemáticas Aplicadas y en Sistemas, Universidad Nacional Autónoma de México, Ciudad Universitaria, A.P. 20-726, 01000 México D.F. México (e-mail:
[email protected])
© 2010 Wiley Periodicals, Inc., Vol. 16, No. 5 DOI 10.1002/cplx.20350 Published online 10 November 2010 in Wiley Online Library (wileyonlinelibrary.com)
descriptions of the same phenomena, useful for different purposes. In this article, the σ profile is introduced to describe the organization of systems at multiple scales. Some concepts were originally developed for engineering [1]. Here, they are extended with the purpose of scientific description, in particular to study the evolution of complexity. This article is organized as follows. In the following sections, concepts from multiagent systems, game theory, and multiscale analysis are introduced. These are then used to describe natural systems and discuss the evolution of complexity. Then, a simple simulation
and experiments are presented to illustrate the σ profile. Conclusions close this article.
2. AGENTS
Any phenomenon can be described as an agent. An agent is a description of an entity that acts on its environment [1, p. 39]. Thus, the terminology of multiagent systems [2–5] can be used to describe complex systems and their elements. An electron acts on its surroundings with its electromagnetic field, a herd acts on an ecosystem, a car acts on city traffic, and a company acts on a market. Moreover, an observer can ascribe goals to an agent. An electron tries to reach a state of minimum energy, a herd tries to survive, a car tries to get to its destination as fast as possible, and a company tries to make money. We can define a variable σ to represent satisfaction, i.e., the degree to which the goals of an agent have been reached. This will also reflect the organization of the agent [1, 6, 7]. As agents act on their environment, they can affect positively, negatively, or neutrally the satisfaction of other agents. We can define the friction φ between agents A and B as

    φ_{A,B} = (−Δσ_A − Δσ_B) / 2.    (1)

This implies that when the decrease in σ (satisfaction reduced, negative Δσ) for one agent is greater than the increase in σ (satisfaction increased, positive Δσ) for the other agent, φ_{A,B} will be positive. In other words, the satisfaction gain for one agent is less than the loss of satisfaction in the other agent. The opposite situation, i.e., a negative φ_{A,B}, implies an overall increase in satisfaction, i.e., synergy. Δσ represents change in satisfaction. Equation (1) represents whether the overall effect of changes in satisfaction is positive or negative. Equation (1) is linear, but the dynamics of the σs are not necessarily linear. Generalizing, the friction within a group of n agents will be

    φ_{A1…An} = −(Σᵢⁿ Δσ_{Ai}) / n.    (2)

Satisfactions at different scales can also be compared. This can be used to study how satisfaction changes of elements (lower level agents) affect satisfaction changes of the system they compose (higher level agents):

    φ_{A1…An,sys} = (φ_{A1…An} − Δσ_sys) / 2.    (3)

Note that when satisfaction at one level or for some agent increases, it does not imply that satisfaction at another level or for other agents will not decrease. The precise relations of how agents affect each other's satisfaction will depend on the particular system under study and can be obtained through experimentation.

3. GAMES
Game theory [8, 9] and in particular the prisoner's dilemma [10, 11] have been used to study mathematically the evolution of cooperation [12]. It will be used here to exemplify the concepts of the σ profile. A well-studied abstraction is given when players (agents) choose between cooperation and defection. A cooperator pays a cost c for another to receive a benefit b, whereas a defector neither pays a cost nor deals benefits [13] (b > c is assumed). The possible interactions can be arranged in the two-by-two matrix (4), where the payoff refers to the "row player" A. When both cooperate, A pays a cost (−c) but receives a benefit b from B. When B defects, A receives no benefit, so it loses −c. This might tempt A to defect, because it will gain b > b − c if B cooperates and will not lose if B also defects, as 0 > −c.

    A\B    C          D
    C      b − c      −c
    D      b          0          (4)

We can use the payoff of an agent to measure its satisfaction σ. Moreover, we can calculate the friction between agents A and B with φ_{A,B} (1), as shown in (5):

    φ_{A,B}    C             D
    C          −(b − c)      −(b − c)/2
    D          −(b − c)/2    0          (5)

If we assume that A and B form a system, we can naively define its satisfaction as the sum of the satisfactions of the elements (for other systems, the satisfaction of the system need not be a linear function). Therefore, the satisfaction of the system σ_{A,B} would be the negative of (5) times two:

    σ_{A,B}    C           D
    C          2(b − c)    b − c
    D          b − c       0          (6)

Thus, we can study satisfactions at two different scales. At the lower scale, agents are better off defecting, given the conditions of (4). However, at the higher scale, the system will have a higher satisfaction if agents cooperate, as b − c > (b − c)/2 > 0. Here, we can see that reducing the friction φ at the lower scale increases the satisfaction σ_sys at the higher scale. Note that reducing friction is very different from the naïve view that attempts to increase global satisfaction by maximizing local satisfaction. Friction reduction notices the relationships (interactions) between elements, not only the states of elements. This principle has been shown to be valid also for the more general case, when σ_sys = Σᵢⁿ σᵢ, and has been used as a design principle to engineer self-organizing systems [1] (the problem of the engineer then lies in finding suitable relations between the σs at different levels).
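A minimal numerical sketch of Eqs. (1)–(6) for the prisoner's dilemma (illustrative Python, with b = 2 and c = 1 as in the figures):

def friction(d_sigmas):
    # Friction of a group of n agents, Eq. (2); n = 2 gives Eq. (1).
    return -sum(d_sigmas) / len(d_sigmas)

def pd_payoffs(b=2.0, c=1.0):
    # Payoffs (to A, to B) of matrix (4); b > c is assumed.
    return {("C", "C"): (b - c, b - c), ("C", "D"): (-c, b),
            ("D", "C"): (b, -c), ("D", "D"): (0.0, 0.0)}

for play, (sa, sb) in pd_payoffs().items():
    phi = friction([sa, sb])                                  # matrix (5)
    print(play, "friction:", phi, "system sigma:", -2 * phi)  # matrix (6)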
FIGURE 1
Complexity profile for three systems, adapted from Ref. 14. See text for details.
4. MULTISCALE ANALYSIS Bar-Yam proposed multiscale analysis [14] to study the complexity of systems as scale varies. In particular, one can visualize this with the "complexity profile" of a system, i.e., the complexity of a system depending on the scale at which it is described. Here, complexity is understood as the amount of information required to describe a system [15], which is consistent with the Kolmogorov-Chaitin-Solomonoff measure of information. As an example, Figure 1 shows the complexity profile of three systems. Curve a represents 1 mm³ of a gas. At a low (atomic) scale, its complexity (the amount of information required to describe the system) is high, as one needs to specify the positions and momenta of each atom. However, at higher scales, these properties are averaged into properties such as temperature and pressure, so the complexity is low. Curve b represents 1 mm³ of a comet. As the atoms are stable, little information is required to describe the system at low scales, as these are relatively regular. Still, as the comet travels large distances, information is relevant at very high scales. Curve c represents 1 mm³ of an animal. Its atoms are more ordered than those of a, but less ordered than those of b, so its complexity at that scale is intermediate. Given the organization of living systems, the complexity required to describe c at the mesoscale is high. At high scales, however, the complexity of b is higher, as the information of c is averaged with that of the rest of the planet. Generalizing Ashby's law of requisite variety [16], multiscale analysis can be used to match the complexity of a system at different scales with an appropriate control method. This is because systems are doomed to fail when their complexity does not match the complexity of their environment. In other words, solutions need to match the complexity of the problem they are trying to solve at a particular scale. Inspired by the complexity profile, the σ profile is the comparison of satisfaction according to scale. This is in contrast to
traditional approaches, where only one scale is studied, or a linear dependency between scales is assumed. Figure 2 shows the σ profile for the prisoner’s dilemma example described above. There are two scales on the x axis: individual and system. The y axis indicates the satisfaction at different levels, for different combinations of two players choosing between cooperation (C) and defection (D). For the individual scale, the play combination that gives the highest satisfaction is DC, i.e., defect when other cooperates. However, at the system level it is clear that the best combination is CC. The σ profile provides a visualization of information that is already present in payoff matrixes. However, the outcomes of different actions at different scales are clearer with the σ profile, complementing the analysis traditionally carried out in game theory. Moreover, it is easy to include different payoff matrixes at different scales, i.e., when the relationship between satisfaction between scales is nontrivial. Furthermore, the σ profile can be used to study not only different actions or strategies but also how changes in the payoff matrixes affect the satisfaction at different scales. This is relevant because, e.g., in the complex dynamics of an ecosystem, the behavior of some animals or species can change the payoff of other animals or species. These changes are difficult to follow if only matrixes are used. Note that the scales mentioned so far are spatial, but these can also be temporal, e.g., short-term payoffs can be different from long-term ones. An example can be given with iterated games: single games are a faster temporal scale, whereas iterations between the same players can constitute slower temporal scales.
FIGURE 2
σ profile for the prisoner’s dilemma. At the individual scale, the best play is DC, whereas at the system level it is CC. For graphical purposes, b = 2c.
FIGURE 3
σ profile for an iterated prisoner’s dilemma. At the single game scale, the best play is DC, whereas after more than two games, it is CC. b = 2c.
Figure 3 shows the temporal σ profile for an iterated prisoner's dilemma where players choose between always defecting and always cooperating. The following assumption is made: players are able to give a benefit b at a cost c only if their satisfaction is not negative, i.e., if they have enough resources. Thus, the combination DC cannot achieve more than b for agent A, as B is left with nothing to give after one game, i.e., −c. In time, CC is clearly the best combination even at the individual scale on the slow time scale, as the benefit of defection applies only at the fast time scale.
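A minimal simulation of the iterated game behind Figure 3, under the stated assumption that a player can only pay the cost c while its satisfaction is non-negative (function and variable names are ours):

def iterated_pd(move_a, move_b, rounds, b=2.0, c=1.0):
    sig_a = sig_b = 0.0
    for _ in range(rounds):
        a_gives = move_a == "C" and sig_a >= 0
        b_gives = move_b == "C" and sig_b >= 0
        sig_a += (b if b_gives else 0.0) - (c if a_gives else 0.0)
        sig_b += (b if a_gives else 0.0) - (c if b_gives else 0.0)
    return sig_a, sig_b

print(iterated_pd("D", "C", 5))  # (2.0, -1.0): B is exhausted after one game
print(iterated_pd("C", "C", 5))  # (5.0, 5.0): CC accumulates b - c per game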
5. NATURE Remembering that satisfactions and goals ascribed to agents partially depend on the observer, the σ profile can be used to compare satisfactions across scales in nature. Figure 4 shows the σ profile at five scales, from atomic to social. At the lowest scale, isolated atoms have the highest satisfaction, as they are able to fulfill their goal of reaching a state of minimum energy and highest entropy. When molecules organize atoms, the atoms cannot reach as satisfactory a state as when they are free, so their satisfaction is reduced at the molecular scale. Naturally, isolated molecules have the highest satisfaction at this scale, as they can "enslave" atoms to reach their goal (minimizing chemical potential) and are free from agents of higher scales. Molecules in turn are enslaved by cells, as the latter organize the former to maximize their own satisfaction. The main goals of cells are to survive and reproduce. In multicellular organisms, cells are constrained in their reproduction and survival (via apoptosis) to the benefit of the organism. Cancer cells can be seen as "rebels" against the goals and satisfaction of the organism.
FIGURE 4
Groups and societies also constrain and organize individuals to reach their own goals and increase their satisfaction. The prisoner's dilemma is an example of this last case. In order for a higher scale structure to maintain itself, its satisfaction has to be greater than or equal to that of its components. This leads to an "enslaving" of the lower scale agents [17] by the higher scale system, as their satisfaction will in some cases be decreased. However, their survivability will be enhanced, as the system will mediate conflicts between agents to reduce their friction for its own "selfish" satisfaction. The concept of a mediator [18–21] is useful, as it identifies the mechanism(s) by which friction is reduced and synergy promoted. Thus, even when the satisfaction of animals is lower when they are social (rules need to be followed), they have better chances of survival, as they benefit from the social organization and are able to cope with more environmental complexity as a group. The same applies to cells: they are less "free" in a multicellular organism, as they obey intracellular signals that can imply their own destruction. Nevertheless, cells within a multicellular organism have better chances of survival than similar ones competing against each other for resources. In a similar way, many molecules would not be able to maintain themselves if it were not for the organization provided by a cell [22]. Agents at each scale will try to maximize their own satisfaction, so the only way for higher scale agents to emerge is to mediate or enslave the lower scale agents. The scale dominating the σ profile, i.e., the one with the highest satisfaction, reflects
σ profile across five scales for agents characteristic of each scale. The σ values are only illustrative, thus there is no label on the y axis. Moreover, the change of σ s at different scales is not necessarily linear.
the degree of organization and complexity of the system. In Figure 4, agents characteristic of a scale have the highest satisfaction at that scale.
6. EVOLUTION OF COMPLEXITY There has been much work discussing the evolution of complexity [23–25]. Our universe has seen an increase of complexity throughout its history, from the Big Bang to the information age. Some have described this as an "arrow" in evolution [26, 27]. Others have seen this increase as a natural drift [28, 29]: starting with only simple elements, random variation can only produce something more complex. However, this explanation does not account for the increasing speed at which complexity has evolved. The σ profile can be used to gradually measure metasystem transitions [30, 31] (a metasystem transition occurs when the σ at a higher scale dominates the profile), which clearly indicate an increase of complexity. Thus, the σ profile can be used to better understand the evolution of complexity. Each agent at its own scale tries to maximize its satisfaction. However, a high satisfaction does not always imply a higher evolutionary fitness. Some systems will have high σ values at higher or lower scales. But those with high values at high scales will have a higher fitness in comparison, as an agent at a high scale needs to ensure the sustainability and cooperation of all agents at its lower scales to maintain itself. On the other hand, independent agents at lower scales will not be able to do much beyond their own scale to ensure their survivability. The above scenario does not imply that complexity will always increase. As with any evolutionary process, a source of variation is needed. Once there is competition between two different systems, the one with a higher organizational scale will tend to win the evolutionary race. Because it is beneficial to have a high σ at high scales, systems that can evolve higher scales of organization will tend to evolve. And those that can evolve faster will prevail. There are many ways in which cooperation can evolve [13], but this seems to be a general evolutionary tendency, not only in biological systems but also in economic, technological, and informational systems, where an accelerating increase of complexity is also observed [22, 32]. Whether there is an upper bound for complexity increase is still an open question.
7. SIMULATION A simple multiagent simulation was developed in NetLogo [33]. Agents with real spatial coordinates and orientations move randomly in a toroidal virtual world with a certain step size at a certain energy cost. Resources grow randomly at a certain growth rate. Each resource occupies one square of a grid. One square can contain only one resource. Agents feed on resources, increasing their energy by a certain resource
energy. If the agent's internal energy ∈ [0, 100] reaches zero, it dies. If the energy reaches 100, the agent reproduces by splitting, which reduces the energy of the parent to half and creates an offspring with the other half of the energy. In this simple simulated ecological system, typical behaviors can be observed depending on the parameters. If the resource growth rate or resource energy is low, or the agent's energy cost is high or step size is small, the agents become extinct. As these variables change, larger agent populations can be maintained. Depending on the precise variable values, the population sizes of agents and resources can be roughly constant or oscillate around a mean, as is typical of population dynamics models. To study the evolutionary advantage of aggregating, i.e., acquiring a higher level of organization via a metasystem transition, a variable group advantage is introduced. If an agent is not in an aggregate, its energy is reduced every time step by a certain energy cost, as mentioned above. However, if the agent is part of a group, its energy is reduced by energy cost/group advantage. Thus, the larger the value of group advantage, the less energy an agent in an aggregate loses. It should be noted, however, that if many agents are aggregated, there will be fewer resources left for them, as each agent consumes available resources and these do not regrow immediately. To measure the evolutionary advantage of aggregating, the probabilities of joining a group or splitting from it are evolved with a simple evolutionary algorithm. When an agent is born, probabilities pjoin and psplit are assigned, varying randomly by ±0.05 from the parent's probabilities. pjoin is the probability of joining other agents while next to them, i.e., within a neighborhood of one grid square. Likewise, psplit is the probability that an agent will cut links made with or by another agent at any given time step. Depending on the parameter values, agents with different pjoin and psplit probabilities will have greater advantages, and these will be selected after several generations. Satisfaction can be measured at three levels: the resource level, the agent level, and the system level. The resource σ is defined as the percentage of resources available in the environment: it is 1 if the environment is covered by resources and 0 if there are no resources at all. The agent σ is measured as energy/100: it is 1 if an agent is about to reproduce and 0 if it is dead. The system σ is defined as the proportion of agents that are joined in the largest group: it is 1 if all the agents are joined in one group and 0 if no agent is joined. A screenshot of the simulation can be seen in Figure 5. The reader is invited to try the simulation at http://turing.iimas.unam.mx/~cgg/NetLogo/4.1/MO.html.
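For readers without NetLogo, the energy bookkeeping of one time step can be sketched as follows; this is a drastically simplified, aspatial Python rendering (not the NetLogo implementation), in which movement and resource regrowth are collapsed into a fixed probability p_find of finding a resource.

import random

def step(energies, in_group, p_find=0.3, resource_energy=10,
         energy_cost=2, group_advantage=2):
    survivors = []
    for e, grouped in zip(energies, in_group):
        # Grouped agents pay a discounted energy cost.
        e -= energy_cost / group_advantage if grouped else energy_cost
        if random.random() < p_find:
            e += resource_energy
        if e <= 0:
            continue                     # death at zero energy
        elif e >= 100:
            survivors += [e / 2, e / 2]  # reproduction by splitting
        else:
            survivors.append(e)
    return survivors  # caller reassigns group membership via pjoin/psplit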
8. EXPERIMENTS
Figure 6 shows results of 100 simulation runs of 10,000 time steps for different values of group advantage. All simulations
FIGURE 5
Screenshot of the simulation. Green patches contain resources, and dark brown ones are empty. Lighter agents have more energy than darker ones. Independent agents are represented by spheres. In groups, the movements of a cube agent are followed by cone agents. Areas near agent groups are scarcer in resources.
start with an initial population of 100 agents with pjoin = psplit = 0.5 and random energies. Table 1 lists the parameter values used. We can see that as group advantage is increased, the system σ also increases [Figure 6(c)]. On the other hand, the resource σ and agent σ are reduced [Figure 6(a, b)]. However, the survivability of the individual agents is increased, as indicated by the population size [Figure 6(d)]. For higher values of group advantage, there is a selective pressure toward joining groups, so the mean pjoin increases [Figure 6(e)]. This is not favored for low values of group advantage, as close agents need to share local resources, leading to friction between neighbors. In that case, it is more advantageous for the agents to spread as much as possible in their environment, to avoid the friction. On the other hand, high values of group advantage reduce this friction and promote agent aggregation. As a high pjoin will make agents join groups even if they split constantly from them, there is no pressure on the value of psplit [Figure 6(f)]. Note that reproduction and mutation take place at the agent level, so there is no direct selection of systems. However, the properties of a high system σ give agents better chances of survival, even if their own σ is lower compared with the case when the system σ is low. Note also that the metasystem transition depends crucially on the value of group advantage. If this is too low, agents will not aggregate, and in evolutionary time there will be no increase of organization or complexity. However, if we assume that group advantage can take different values in different contexts, those values that increase the survivability of individuals will have a higher probability of propagating. Therefore, if a system eventually finds a mediator to increase the value of group advantage, this system will have a higher
evolutionary fitness than a population of isolated agents with no organization. These experiments are intended to illustrate the concepts presented in this article; they are not intended as a proof. Concepts can only prove their usefulness and suitability with time. The simulations showed that higher levels of organization can be expected in the evolution of complexity under certain assumptions. Even when agents within an organization may have a reduced satisfaction, they have an increased survivability, which increases the system's satisfaction. In this way, there will be a natural tendency toward increasing the satisfaction of higher scales by constraining the behaviors of lower scales. This implies a tendency toward increasing complexity, whenever this is possible.
9. CONCLUSIONS The σ profile has several potential uses for describing and comparing systems at multiple scales within the same framework, a major goal in studies of complex systems. One such potential use was explored here: the difference between satisfaction (payoff) and survivability. A high satisfaction does not imply survivability. This can seem to be a problem for some game theoretical formalizations (as has been addressed by advocates of multiple levels of selection). However, as we observe the satisfactions of agents at different scales (spatial and temporal), it can be argued that the survivability of a system is related to the satisfaction at the highest scale. Lower scale gains (spatial or temporal) will eventually be overturned by organizations that manage to constrain the behavior at lower scales. For the "selfish" benefit of the higher scales, the integrity of the lower scales will be maintained, increasing their survivability.
FIGURE 6
Simulation results as the group advantage is increased: (a) resource σ, (b) agent σ, (c) system σ, (d) agent population, (e) mean pjoin, and (f) mean psplit.
TABLE 1
Parameter Values Used in Simulation Experiments

Variable                Value
Resource energy         10
Resource growth rate    0.1
Energy cost             2
Step size               0.2
Systems can achieve high satisfaction with mediators [18–21] that reduce friction between agents. Friction reduction can be seen as a generalization of cooperation, which is essential in the emergence of new levels of organization [13; 20, p. 1563]. As future work, it would be interesting to study the role of friction reduction in evolution and the way in which different mechanisms (mediators) achieve it from the perspective presented here. Also, the application of the σ profile to particular problems in evolutionary game theory and in economics would be of extreme interest.

Acknowledgments
The author thanks all the people who have contributed to this work in recent years and an anonymous referee for useful comments. This work was partially supported by SNI membership 47907 of CONACyT, Mexico, and by the FWO, Belgium.
REFERENCES
1. Gershenson, C. Design and Control of Self-organizing Systems; CopIt Arxives: Mexico, 2007. Available at: http://tinyurl.com/DCSOS2007.
2. Maes, P. Modeling adaptive autonomous agents. Artif Life 1994, 1, 135–162.
3. Wooldridge, M.; Jennings, N.R. Intelligent agents: Theory and practice. Knowledge Eng Rev 1995, 10, 115–152.
4. Wooldridge, M. An Introduction to MultiAgent Systems; Wiley: Chichester, England, 2002.
5. Schweitzer, F. Brownian Agents and Active Particles. Collective Dynamics in the Natural and Social Sciences; Springer Series in Synergetics; Springer: Berlin, 2003.
6. Ashby, W.R. Principles of the self-organizing system. In: Principles of Self-Organization; Von Foerster, H.; Zopf, G.W., Jr., Eds.; Pergamon: Oxford, 1962; pp 255–278.
7. Gershenson, C.; Heylighen, F. When can we call a system self-organizing? In: Advances in Artificial Life, 7th European Conference, ECAL 2003, LNAI 2801; Banzhaf, W.; Christaller, T.; Dittrich, P.; Kim, J.T.; Ziegler, J., Eds.; Springer: Berlin, 2003; pp 606–614.
8. von Neumann, J.; Morgenstern, O. Theory of Games and Economic Behavior; Princeton University Press: Princeton, USA, 1944.
9. Smith, J.M. Evolution and the Theory of Games; Cambridge University Press: Cambridge, UK, 1982.
10. Tucker, A. A two-person dilemma. UMAP J 1980, 1, 101.
11. Poundstone, W. Prisoner's Dilemma: John Von Neumann, Game Theory and the Puzzle of the Bomb; Doubleday: New York, NY, 1992.
12. Axelrod, R.M. The Evolution of Cooperation; Basic Books: New York, 1984.
13. Nowak, M.A. Five rules for the evolution of cooperation. Science 2006, 314, 1560–1563.
14. Bar-Yam, Y. Multiscale variety in complex systems. Complexity 2004, 9, 37–45.
15. Prokopenko, M.; Boschetti, F.; Ryan, A. An information-theoretic primer on complexity, self-organisation and emergence. Complexity 2009, 15, 11–28.
16. Ashby, W.R. An Introduction to Cybernetics; Chapman & Hall: London, 1956.
17. Haken, H. Information and Self-organization: A Macroscopic Approach to Complex Systems; Springer-Verlag: Berlin, 1988.
18. Michod, R.E. Cooperation and conflict in the evolution of individuality. I. Multi-level selection of the organism. Am Naturalist 1997, 149, 607–645.
19. Michod, R.E. Darwinian Dynamics: Evolutionary Transitions in Fitness and Individuality; Princeton University Press: Princeton, NJ, 2000.
20. Michod, R.E. Cooperation and conflict mediation during the origin of multicellularity. In: Genetic and Cultural Evolution of Cooperation; Hammerstein, P., Ed.; MIT Press: Cambridge, MA, 2003; Chapter 16, pp 261–307.
21. Heylighen, F. Mediator evolution: a general scenario for the origin of dynamical hierarchies. In: Worldviews, Science and Us; Aerts, D.; D'Hooghe, B.; Note, N., Eds.; World Scientific: Singapore, 2006.
22. Kauffman, S.A. Reinventing the Sacred: A New View of Science, Reason, and Religion; Basic Books: New York, USA, 2008.
23. Bonner, J.T. The Evolution of Complexity, by Means of Natural Selection; Princeton University Press: Princeton, USA, 1988.
24. Bedau, M.; McCaskill, J.; Packard, P.; Rasmussen, S.; Green, D.; Ikegami, T.; Kaneko, K.; Ray, T. Open problems in artificial life. Artificial Life 2000, 6, 363–376.
25. Gershenson, C.; Lenaerts, T. Evolution of complexity. Artif Life 2008, 14, 1–3. Special Issue on the Evolution of Complexity.
26. Bedau, M.A. Four puzzles about life. Artif Life 1998, 4, 125–140.
27. Stewart, J. Evolution's Arrow: The Direction of Evolution and the Future of Humanity; Chapman Press: Canberra, Australia, 2000.
28. McShea, D. Mechanisms of large-scale evolutionary trends. Evolution 1994, 48, 1747–1763.
29. Miconi, T. Evolution and complexity: The double-edged sword. Artif Life 2008, 14, 325–344. Special Issue on the Evolution of Complexity.
30. Turchin, V. The Phenomenon of Science. A Cybernetic Approach to Human Evolution; Columbia University Press: New York, 1977.
31. Smith, J.M.; Szathmáry, E. The Major Transitions in Evolution; Oxford University Press: Oxford, UK, 1995.
32. Gershenson, C. The world as evolving information. In: Proceedings of International Conference on Complex Systems ICCS2007; Bar-Yam, Y., Ed., 2007.
33. Wilensky, U. NetLogo; Center for Connected Learning and Computer-Based Modeling, Northwestern University: Evanston, IL, 1999; http://ccl.northwestern.edu/netlogo/.
Individual and Collective Behavior of Vibrating Motors Interacting Through a Resonant Plate DAVID MERTENS AND RICHARD WEAVER Department of Physics, University of Illinois, Urbana-Champaign, Illinois
Received May 3, 2010; accepted September 20, 2010
We report on experiments of many small motors—cell phone vibrators—glued to and interacting through a resonant elastic plate. We find that the motors tend to avoid frequencies that are just higher than the resonances of a plate, preferring instead frequencies just below those resonances. As a result, motors interacting through a resonant plate exhibit hysteresis in their frequency versus driving voltage. We also find that the stability of a single motor near a resonance is different from the stability of a group of motors near a resonance. When the driving voltage is constant and the transient behavior of the system has passed, we find that the average frequency of all the motors is constant. © 2010 Wiley Periodicals, Inc. Complexity 16: 45–53, 2011 Key Words: synchronization; resonance; hysteresis
1. INTRODUCTION
Ensembles of oscillators that spontaneously synchronize have been studied for decades. Biological examples abound [1–4], but synchronization occurs in many other systems, including coupled metronomes [5], laser arrays [6], chemical oscillators [7], arrays of convective cells [8], arrays of Josephson junctions [9], transport networks [10], and perhaps most notoriously, pedestrians crossing the Millennium Bridge in London when it first opened [11]. These systems are all examples of populations of similar but not identical oscillators that exhibit the same basic patterns of behavior, namely
Correspondence to: David Mertens, 1110 West Green Street, Urbana, Illinois 61801, USA (e-mail:
[email protected])
© 2010 Wiley Periodicals, Inc., Vol. 16, No. 5 DOI 10.1002/cplx.20352 Published online 22 December 2010 in Wiley Online Library (wileyonlinelibrary.com)
that (1) they synchronize spontaneously, without the need for any external driving, and (2) as the oscillators’ coupling increases, their synchronization strengthens. For an overview of the topic, see the review by Acebron et al. [12] and the popular book Sync, by Strogatz [13]. The topic of synchronization is much broader than the study of many coupled oscillators. In an effort to better understand radio tuning, Adler [14] studied the synchronization of locking circuits, in which a phase-oscillator synchronizes to a periodic forcing. Burykin and Buchman [2] discussed the possibly lethal outcome of the lack of synchronization among organ systems when taking a patient off of a mechanical respirator. Gintautus and Hubler [15] found synchronization in mixed-reality states, in which virtual and real systems are coupled and interact in real time. All of these systems exhibit
synchronization in some sense. Although we find these systems interesting, the work presented here is motivated by the many examples listed in the first paragraph: spontaneous collective behavior of many coupled oscillators in the absence of external forcing.

In this article, we present yet another system that exhibits synchronization: small mechanical vibrators coupled through a resonant plate. In addition to being inexpensive and easy to study, this system provides a unique twist on the standard coupled-oscillator problem in that the coupling between the oscillators depends on frequency and exhibits a simple resonance structure. How does frequency-dependent coupling affect the dynamics of coupled oscillators? Unlike most other globally coupled oscillator systems, we find history-dependent behavior due to characteristic interactions with the resonances of the plate. The frequencies of individual motors tend to level off just below plate resonances, and motors tend to avoid frequencies just above resonances. Groups of motors show similar features but have wider hysteresis loops, because the leveling-off of the frequencies below a resonance extends to higher driving voltages than for individual motors. Finally, nontransient systems operating at a fixed voltage appear to maintain a constant average motor frequency.

A system that can take one of two frequencies depending on the history of the system is sometimes called birhythmic, and the study of birhythmic phenomena in diffusively coupled oscillators has received a surge of interest in recent years [16]. Although many of our results are similar to those found in the study of diffusively coupled oscillators, our system differs from those studied elsewhere in that the interaction strength shows a strong dependence on the interaction frequency and no appreciable dependence on position.
2. EXPERIMENTAL SETUP
In this work, we study 16 small motors with eccentrically massed rotors. The motors (All Electronics Corporation, catalog number DCM-2041¹) are small DC motors, the sort used as vibrators in mobile phones. Each motor has a mass of 3 g and is 2 cm long, 1 cm high, and 1 cm wide. Each motor's rotor has a center of mass that is offset from the axis of rotation, with a first moment of 0.74 g mm. Vibrations arise from the rotation of this off-center mass.

¹This item is no longer available in the catalog, but similar motors can be found by searching their catalog for "motor vibrator."

FIGURE 1. A photo and diagram of the experimental setup, as described in the second section.

To cause the motors to interact, we attach them to a mechanically compliant and resonant aluminum plate held by clamps as shown in Figure 1. The plate is L = 115 cm long, b = 15 cm wide, and 5 mm thick. We adjust the linear response of the system by adjusting the location where the clamps hold the plate, parametrized by the length a. Although we considered various clamp positions, all of the results reported here are based on a length of a = 12.5 cm, for which the system has resonances at 68 Hz and 100 Hz.

We measure the plate's vertical acceleration a(t) using an accelerometer attached to the plate, a PCB 353B33. In the diagram shown in Figure 1, the accelerometer is depicted by the canister underneath the motors. A few typical time series of acceleration data due to a single motor are shown in Figure 2. The sampling rate for these and all other data we discuss is r = 1000 samples per second.

The plate is a linear medium, so we attribute any observed vibrations either to the motors or to background sources, such as building vibrations. To reduce spurious frequencies from the environment, we place the entire setup on a foam pad. Although some background noise still perturbs the system, these vibrations do not dominate the signal reported by the accelerometer and have frequencies much lower than the motors' primary frequencies.

Apart from quantitatively measuring the system, we also listen to and watch the system. The plate creates a great deal of noise, especially when many motors operate near a resonance, making certain transitions immediately apparent just
by listening. We observe the system visually using a stroboscope, which allows us to identify the motors' primary frequencies and observe variations of those frequencies. We can also examine the mode-shapes of the plate using the stroboscope. We find that both resonances, near f = 68 and 100 Hz, have no nodes along the array of motors and that the displacements of all the motors have about equal magnitude. As such, the couplings between the motors and the plate have no appreciable position dependence.

All of the motors operate from a common power supply, which has important implications for our experimental design. First, small variations among the motors mean that, despite operating at the same voltage, the motors have different natural frequencies. We do not attempt to characterize the distribution of motor speeds in any rigorous way, since we are working with only 16 motors, but stroboscopic observations indicate that the frequency distribution is approximately unimodal. Second, a common power supply couples the motors electrically, which could lead to synchronization independent of the mechanical coupling. We find that our motors do not show synchronization when run on a massive support (which provides minimal mechanical coupling), so we attribute the synchronization results in the fourth section to mechanical interactions mediated by the plate. Third, because we cannot independently change the power delivered to an individual motor, we cannot precisely control the distribution of natural frequencies for a given experimental run. However, the distribution of the motors' natural frequencies at different voltages remains roughly unimodal.

FIGURE 2. Typical time series of a single motor on the plate for different voltages. From top to bottom, the data correspond to driving voltages of 0.65, 0.84, and 1.05 V.

3. BEHAVIOR OF A SINGLE MOTOR
To discuss how multiple motors interact, we must first characterize how a single motor behaves. In this section, we discuss how an individual motor's primary frequency depends on the driving voltage, and we measure how the plate's response magnitude depends on the motor's primary frequency. We conclude by explaining how we modify raw power spectra to obtain a representation of motor density as a function of frequency.

Figures 2 and 3 demonstrate typical single-motor data at voltages V = 0.65, 0.84, and 1.05 V. Both figures show stable periodic behavior. From our two-second-long data sets a(t), sampled at 1000 samples per second, we construct spectra $\tilde{a}(f)$, shown in Figure 3, using a short-time Fourier transform,

$$\tilde{a}(f) = \int_T a(t)\, e^{i 2\pi f t}\, dt, \qquad (1)$$

as implemented with an FFT.

FIGURE 3. Typical Fourier transforms of a single motor on the plate for different voltages. The driving voltages are 0.65 (–), 0.84 (· · ·), and 1.05 V (- -).

Using spectroscopic observations and time-frequency plots, we manually determine the minimum and maximum operating frequencies of the motor for a given collection of samples, and we only examine Fourier transform data within those extrema. Usually the motors operate between f = 40 and 100 Hz, but our fastest motors at the highest voltages could achieve frequencies up to f = 170 Hz. For a rough estimate of the motor's primary frequency $\hat{f}$, we find the frequency corresponding to the maximum amplitude, $\max |\tilde{a}|$, within the predetermined extrema. A plate resonance could be responding to a harmonic of the motor's primary frequency, so if the amplitude corresponding to half that frequency shows a strong peak (a peak with magnitude at least 1/10 the magnitude of the identified peak), we select that as our rough estimate of the motor's primary frequency for that sample and call its index $i_\mathrm{peak}$.

FIGURE 4. Frequency response of a single motor versus voltage, both (a) on a resonant plate, and (b) for comparison, a different motor on a rigid support.

Having obtained a rough estimate of the motor's primary frequency, we obtain a precise measurement using a simple weighted average of the frequencies in the vicinity of $i_\mathrm{peak}$. Although we considered fitting the data in the vicinity of the peak to a Lorentzian, noise in the tails often caused the fits to mischaracterize the width of the peak. Instead, we compute a series of estimates of the primary frequency $\hat{f}$ by performing the weighted averages

$$\hat{f}_n = \frac{\sum_{i = i_\mathrm{peak}-n}^{i_\mathrm{peak}+n} |\tilde{a}_i|^2 f_i}{\sum_{i = i_\mathrm{peak}-n}^{i_\mathrm{peak}+n} |\tilde{a}_i|^2}. \qquad (2)$$
As n increases, the sum includes more data from the tails of the peak. Because the average is weighted using the squared amplitude, $\hat{f}_n$ reaches stable values once n takes the sum beyond the extent of the peak. We find that including all data points within 5 Hz of the peak is more than enough to
give good estimates of the motor's primary frequency, and all such frequency values agree with measurements taken with the stroboscope. For our data, which involve records with durations of 2 s, this amounts to assigning $\hat{f} = \hat{f}_{10}$.

Using this technique, the primary frequencies versus the driving voltage are plotted for the motor on a plate and on a block in Figure 4(a) and (b), respectively. The motor's primary frequency is relatively stable when the voltage is fixed, but Figure 4(a) shows that the motor's frequency versus voltage² is hysteretic. Shown in Figure 4(a) are the primary frequencies for two different sets of consecutive measurements, one in which we started at V = 2.4 V and slowly decreased the voltage to 0.6 V (indicated by triangles pointing downward), and another in which we started the motors at V = 0.6 V and slowly increased the voltage to 2.4 V (indicated by triangles pointing upward). Although the two measurements demonstrate relatively good agreement below V = 1 V and above V = 2 V, they exhibit hysteresis between V = 1 V and 2 V. A motor on the lower branch gets stuck near a resonance of the plate and cannot reach the upper branch unless the driving voltage exceeds V = 2 V. Once the motor reaches the upper branch, it does not drop to the lower branch unless the voltage drops below V ≈ 1.2 V, where it will remain unless the voltage is again increased above V = 2 V. In contrast, similar data taken from a separate motor on a rigid support are shown in Figure 4(b), showing that in the absence of resonances, a motor's frequency is nearly linear in the applied voltage. The marked difference indicates that the motor interacts strongly with the resonances of the plate, and these interactions lead to the hysteresis observed in Figure 4(a).

²Note that the primary frequency as a function of voltage in Figure 4(a) is uncommonly high because this happens to be our fastest motor. It is the same motor as the one operating between f = 80 Hz and f = 95 Hz between V = 1.2 V and 1.4 V in Figure 7(b).

We measure the magnitude of the plate's response using the root mean square (RMS) of the Fourier transform in the vicinity of the peak, $M_\mathrm{RMS}$. We compute a number of estimates of the RMS magnitude, similar to the estimates for the primary frequency, as

$$M_{\mathrm{RMS},n} = \sqrt{\sum_{i = i_\mathrm{peak}-n}^{i_\mathrm{peak}+n} \frac{|\tilde{a}_i|^2}{T}}, \qquad (3)$$

where T is the duration of the sample. For data in which the motor's frequency was indeed constant, the values obtained for $M_{\mathrm{RMS},n}$ are largely independent of n as long as the sum covers the extent of the peak. For a given sample, we assign $M_\mathrm{RMS} = M_{\mathrm{RMS},20}$.

The RMS magnitude can be plotted against the voltage V, but it is better understood as a function of the primary frequency $\hat{f}$, as shown in Figure 5. As Figure 5 shows, the magnitude of the plate's response to a single motor is not monotonic in frequency. We can understand this behavior by noting that the plate has resonances near 68 and 100 Hz, so the plate will have larger accelerations when driven by a motor near these frequencies than when the motor's frequency is far from the resonances. These data were obtained by powering different motors—one at a time—at various voltages and taking two-second data sets for each voltage. Although we could seek a relation between the RMS magnitude and the applied voltage, Figure 5 indicates that the RMS magnitude is a function of primary frequency. Despite overlaying data from motors at various locations on the plate, the magnitude as a function of primary frequency is remarkably consistent, confirming that the coupling between the plate and the motors for our geometry does not depend substantially on the motor's position. Apart from the gaps in the data for frequencies just above the two peaks, which we discuss in the next section, the magnitudes in this plot are equivalent to $f^4 |G|$, where G represents the passive frequency-dependent Green function of the system.

FIGURE 5. Accelerometer amplitude as a function of motor frequency when driven by a single motor. The peaks near 68 and 97 Hz correspond with peaks in the support's Green function at the same frequencies.

Spectroscopic observations and time-frequency plots indicate that a motor's primary frequency occasionally jumps quickly by one or two Hertz and then relaxes back to its prejump frequency over the next few seconds. The growth of $M_{\mathrm{RMS},n}$ as a function of n gives a simple criterion for identifying motor data in which the motor's frequency changes appreciably over the course of the sample. After examining many data samples, we decided to discard any data samples for which

$$\frac{M_{\mathrm{RMS},20}}{M_{\mathrm{RMS},1}} > 1.09. \qquad (4)$$

In practice this amounts to rejecting about 10% of the samples.

All of the discussion so far has focused on single motors. Since we use a single accelerometer to measure the behavior of multiple motors acting simultaneously, and since we wish to know when two motors synchronize, we must obtain a reasonable estimate of the number of motors at a given frequency. Such an estimate is not trivial: the resonant response of the plate means that one motor turning at 95 Hz will produce a much stronger signal than many synchronized motors with a primary frequency of 78 Hz. Our solution to this problem is to use Figure 5 as a normalization curve. We sample the RMS magnitude uniformly—interpolating where necessary—to obtain normalization amplitudes $\hat{M}(f)$. We then normalize a raw spectrum such as Figure 3 by dividing the amplitudes of the original spectrum by the normalization amplitudes:

$$N(f) = \frac{|\tilde{a}(f)|}{\hat{M}(f)}. \qquad (5)$$

The result of such a normalization scheme is shown in Figure 6 for the data presented in Figure 3. Except for the artifacts at f = 50 and 75 Hz associated with the signal at V = 1.05 V, the scheme appears to work quite well. Even with the artifacts, single motors can be easily distinguished and counted, providing us with a decent measure of the number of motors in the vicinity of a given frequency.
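To make the pipeline above concrete, the following Python/NumPy sketch implements Eqs. (1)–(5) for a single 2 s record sampled at 1000 samples per second. It is our illustration, not the authors' published code: the function names are ours, the explicit square root in rms_magnitude() follows our reading of Eq. (3), and the normalization curve M_hat must be built from single-motor data as described above.

```python
# A minimal sketch (ours) of the single-motor analysis pipeline, Eqs. (1)-(5),
# assuming 2 s records sampled at 1000 Hz.
import numpy as np

R = 1000      # sampling rate (samples/s)
T = 2.0       # record duration (s)

def spectrum(a):
    """Eq. (1): complex spectrum of an acceleration record a(t)."""
    a_tilde = np.fft.rfft(a)
    f = np.fft.rfftfreq(len(a), d=1.0 / R)
    return f, a_tilde

def primary_frequency(f, a_tilde, fmin=40.0, fmax=170.0, n=10):
    """Rough peak pick, harmonic check, and the weighted average of Eq. (2)."""
    idx = np.flatnonzero((f >= fmin) & (f <= fmax))
    i_peak = idx[np.argmax(np.abs(a_tilde[idx]))]
    # If half the peak frequency also shows a strong peak (at least 1/10 the
    # amplitude), the plate may be responding to a harmonic: use the subharmonic.
    if np.abs(a_tilde[i_peak // 2]) >= 0.1 * np.abs(a_tilde[i_peak]):
        i_peak //= 2
    sl = slice(i_peak - n, i_peak + n + 1)   # n = 10 bins spans +/- 5 Hz here
    w = np.abs(a_tilde[sl]) ** 2
    return np.sum(w * f[sl]) / np.sum(w), i_peak

def rms_magnitude(a_tilde, i_peak, n=20):
    """Eq. (3): RMS magnitude of the spectrum in the vicinity of the peak."""
    sl = slice(i_peak - n, i_peak + n + 1)
    return np.sqrt(np.sum(np.abs(a_tilde[sl]) ** 2) / T)

def keep_sample(a_tilde, i_peak):
    """Eq. (4): reject records whose frequency drifted during the sample."""
    ratio = rms_magnitude(a_tilde, i_peak, n=20) / rms_magnitude(a_tilde, i_peak, n=1)
    return ratio <= 1.09

def motor_density(f, a_tilde, M_hat):
    """Eq. (5): normalize by a single-motor curve M_hat(f) built from Figure 5."""
    return np.abs(a_tilde) / M_hat(f)
```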
FIGURE 6. Normalized plot of the data shown in Figure 3. The driving voltages are 0.65 (–), 0.84 (· · ·), and 1.05 V (- -).

4. MANY MOTORS ON A RESONANT PLATE
The behavior of multiple motors interacting on a plate is richer than the behavior of a single motor on a plate. In this section, we explore that richness, first by examining how the driving voltage affects the behavior of the motors, and then by considering the nontransient behavior of the system at fixed voltage over long times.

4.1. Behavior versus Voltage
The essential behavior of multiple motors interacting on the plate is given in Figure 7. These plots are consecutive minute-long measurements that have been Fourier transformed and normalized as discussed in the previous section, where we now identify the motor density ρ with the normalized amplitude:

$$\rho_V(f) = N(f, V). \qquad (6)$$

Instead of plotting individual spectra, like those in Figure 6, we plot consecutive spectra by creating gray-scale columns and laying them out sequentially in order of applied voltage. The difference between Figures 7(a) and 7(b) is that in the former we started the system at high voltage and stepped the voltage down with each consecutive measurement, whereas in the latter we started the system at low voltage and stepped the voltage up with each consecutive measurement, in a manner similar to the data shown in Figure 4.

FIGURE 7. Behavior of many motors on a plate as a function of voltage. (a) Behavior as we decrease the voltage starting from an initially high value. (b) Behavior as we increase the voltage from an initially low value. The gray-scale is logarithmic in motor density.

The motors exhibit a number of consistent and contrasting behaviors between Figures 7(a) and 7(b). Both figures indicate that for high voltages (V > 1.7 V) the motors show strong synchronization approaching f = 90 Hz, and ensembles of motors near f = 64 Hz between V = 1.2 and 1.4 V maintain nearly the same frequency. Both figures also show almost no motor activity between f = 65 and 75 Hz.

The motors' behavior in the vicinity of the resonance near f = 68 Hz is the key difference between the figures. As with the single-motor data shown in Figure 4, the motors show hysteretic behavior. Starting from low frequencies and moving upward—Figure 7(b)—most of the motors synchronize strongly just below the resonance. The transition out of this synchronized state occurs swiftly and can be observed without any special equipment: the noise of the plate drops many decibels in less than a second. Once the motors have jumped above the f = 68 Hz resonance at V = 1.5 V, many of them remain above that
resonance in a less synchronized state despite reductions in the voltage, as in Figure 7(a), until the driving voltage drops below V = 1.2 V.

That this resonance causes hysteresis is surprising and is due to interactions among multiple motors. Although the magnitude measurements in Figure 5 clearly show the resonance near f = 68 Hz, and seem to indicate a gap between f = 68 and 71 Hz, the individual motor's behavior shown in Figure 4(a) indicates that the resonance has no noticeable effect on the motor's frequency as a function of voltage. Yet the same resonance has a substantial impact on the dynamics of the multi-motor system. The pronounced effect of the resonance in Figures 7(a) and 7(b), and the lack of any effect in Figure 4(a), suggest that a resonance's effect on a motor's steady-state frequency depends on the number of motors near the resonance.

We have two additional observations that support this assertion. We originally planned to study how the motors negotiated the strong resonance near f = 100 Hz. The fastest motor, as reported in Figure 5, jumps over the resonance at about V = 2 V, but we do not observe any such transition for the same motor when operating all 16 motors, even up to V = 2.5 V. We do not wish to drive our system much higher than V = 2.5 V because we approach our power supply's maximum allowable current and because our motors begin to degrade at such high voltages. Although we do not know the voltage at which the fastest motor would have negotiated the resonance, we do know that the effect of the resonance on the motor's steady-state behavior is different with other motors present than with the motor interacting with the plate alone.

We are also able to strengthen or weaken the stability of the group of oscillators synchronized near f = 64 Hz by changing the behavior of a single motor. Note that in Figure 7(b) at V = 1.15 V, there is a motor turning with frequency f = 74 Hz. Before proceeding to 1.2 V, we forced this motor back down to the ensemble near 62 Hz, with which it remained synchronized until the transition at V = 1.5 V. Had we continued the measurements with that motor left unchecked, as we did in other measurements, the synchronization at 63 Hz would have dispersed at V = 1.35 V. Forcing the motor in question to operate at the lower frequency may have strengthened the synchronization of that group of motors, or alternatively, the presence of the motor operating at the higher frequency may have weakened the synchronization of that group. We cannot say which of these explanations is correct, but we can confirm that the interaction of the motors with the resonance can change substantially by changing the behavior of one of the motors.

The motors avoid frequencies between f = 65 and 75 Hz. We had expected the motors' frequencies to increase continuously through a resonance as we increased the driving voltage, but the empty region between f = 65 and 75 Hz in Figures 7(a) and 7(b), as well as the gaps above the resonances in Figure 5, indicate that the motors avoid those
frequencies when approaching from both below and above the resonance. Ongoing calculations not shown here indicate that when motors interact through an elastic structure with resonances, a range of motor frequencies above each resonance is unstable. Simulations indicate that the instability is present even for populations of identical oscillators, confirming our observations.

The RMS magnitude shown in Figure 5 measures how strongly the plate couples with the motor and, conversely, how strongly a motor couples with a vibrating plate. If two or more motors are running simultaneously on the plate, this should also give some indication of how strongly they will interact with each other, making it a proxy for the frequency-dependent coupling between motors due to the plate. A rudimentary prediction of standard models of coupled oscillators [4] is that the effective distribution of the oscillators' frequencies narrows as the coupling between them increases. If the RMS magnitude is a good proxy for the coupling strength, then the narrow frequency distributions in Figures 7(a) and 7(b) should correspond to frequencies with greater RMS magnitudes. The narrowest frequency distributions correspond to frequencies close to f = 63 Hz and close to f = 88 Hz or greater, and the most dispersed frequency distributions correspond to frequencies near f = 80 Hz or below f = 60 Hz. These frequencies respectively correspond to the greatest and least values of $M_\mathrm{RMS}$, as reported in Figure 5.
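The narrowing prediction quoted above is straightforward to reproduce in a toy model. The sketch below (ours, not the authors' model) Euler-integrates a mean-field Kuramoto ensemble of 16 oscillators and reports how the spread of effective frequencies shrinks as a global coupling gain K grows; for simplicity it drops the frequency dependence of the coupling, and every parameter value is an illustrative assumption.

```python
# A toy check (ours) that the effective frequency distribution narrows as the
# coupling grows; K is frequency independent here for simplicity.
import numpy as np

rng = np.random.default_rng(0)
N = 16
omega = rng.normal(0.0, 1.0, N)    # natural frequencies (rad/s), zero-centered

def frequency_spread(K, t_end=200.0, dt=1e-3):
    """Euler-integrate the mean-field Kuramoto model and return the standard
    deviation of the effective frequencies over the second half of the run."""
    theta = rng.uniform(0.0, 2.0 * np.pi, N)
    n_steps = int(t_end / dt)
    theta_mid = theta
    for k in range(n_steps):
        z = np.exp(1j * theta).mean()
        r, psi = np.abs(z), np.angle(z)
        theta = theta + dt * (omega + K * r * np.sin(psi - theta))
        if k == n_steps // 2:
            theta_mid = theta.copy()
    eff = (theta - theta_mid) / (t_end / 2.0)   # mean phase velocity (rad/s)
    return eff.std()

for K in (0.5, 1.5, 3.0):
    print(f"K = {K:.1f}: spread of effective frequencies = {frequency_spread(K):.3f} rad/s")
```

With the unit Gaussian spread of natural frequencies used here, locking sets in near K ≈ 1.6, so the three gains bracket the transition.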
4.2. Behavior versus Time
The spectrograms in Figure 8 give an alternate perspective on the motors' behavior. These figures show the nontransient dynamics of the motors at a fixed voltage over about 8 min. The plots have been prepared by dividing the associated time series into two-second intervals, Fourier transforming the data in each interval, normalizing the data as explained in the previous section, and plotting consecutive columns. Both systems were given at least 10 minutes to adjust to their stated voltages before these data were taken, so the results represent the nontransient behavior of the system. The difference between Figures 8(a) and 8(b) is that in the former the operating voltage is V = 1.49 V and the gray-scale is logarithmic in motor density, whereas in the latter the operating voltage is V = 1.06 V and the gray-scale is linear in motor density.

FIGURE 8. Normalized spectrograms of the dynamics of multiple motors on a resonant plate. (a) Behavior at 1.49 V, using a logarithmic gray-scale. (b) Behavior at 1.06 V, using a linear gray-scale. The arrows in (b) indicate synchronization or desynchronization events.

In Figure 8(a), 14 of the motors synchronize near f = 79 Hz while one motor turns near f = 65 Hz and another turns near f = 93 Hz. In Figure 8(b), all of the motors operate between f = 40 and 60 Hz, synchronizing in small groups and spontaneously desynchronizing.

A striking feature of Figure 8(a) is the apparent mirror symmetry. The fourteen motors synchronized at f = 79 Hz vary within less than 1 Hz, appearing essentially flat, while the much larger fluctuations of the two other motors are negatively correlated. The slower motor is roughly 14 Hz below the synchronized group, while the faster motor is equally far above the group. The magnitudes of the changes are nearly identical: for example, both motors' frequencies jump by 2 Hz at t = 780 s. The symmetric behavior of the two motors is reproducible³ for voltages in the range 1.48 V < V < 1.52 V. The mirror symmetry suggests that the overall average frequency of all 16 motors is a conserved quantity for nontransient behavior.

³In addition, the system must be prepared such that the slowest motor is below resonance and the fastest motor is not synchronized, which can be difficult since the system's state is not a function of the driving voltage.

Despite the stark contrast between the data plotted in Figure 8, the second figure also demonstrates behavior that supports our tentative hypothesis that the average frequency is constant. However, the evidence is more subtle and rests on details of synchronization and desynchronization events. Consider a subset of the motors which transition from two small synchronized groups to one larger synchronized group. If the other motors in the system maintain relatively constant frequencies, then the hypothesis of constant average frequency would imply that the average frequency of the subset is constant. Furthermore, the slopes of the small groups as they approach each other must satisfy

$$N_i \frac{df_i}{dt} + N_j \frac{df_j}{dt} = 0.$$

These criteria appear to be satisfied by many synchronization and desynchronization events in Figure 8(b), as indicated by the arrows in the figure. The simplest synchronization event occurs at the beginning of the time series, t = 600 s. Two motors are synchronized at $f_2$ = 61.5 Hz and a third turns at $f_1$ = 65.5 Hz, giving an average of $\bar{f}$ = 62.8 Hz; when these three motors synchronize briefly at t = 670 s, their synchronized frequency is between $f_\mathrm{sync}$ = 62.5 and 63 Hz, agreeing well with the prediction. For the pair of motors just before synchronization, df/dt = 0.0714 Hz/s, and for the top motor just before synchronization, df/dt = −0.125 Hz/s, nearly twice the magnitude and of opposite sign. Unfortunately, the data for the noted synchronization and desynchronization events are imprecise and do not definitively establish our hypothesis of average-frequency conservation.

We conclude this section by drawing attention to the many time scales exhibited in Figures 8(a) and 8(b). The motors occasionally exhibit long durations of stability, such as the slowest motor in Figure 8(a) from t = 840 to 880 s, and the fastest motors in Figure 8(b) from t = 950 to 990 s. Both figures exhibit jumps in motor frequency, and the magnitude of the jumps, as well as the decay-like response that follows, involve time scales whose origins are not apparent in the data. We do not presently have an explanation for these time scales, and a full analysis will have to wait for more detailed measurements.
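For readers who want to check the arithmetic of the event described above, the following snippet (ours) evaluates the quoted numbers against the constant-average-frequency criterion; the small residual in the slope balance is consistent with the stated imprecision of the data.

```python
# A quick arithmetic check (ours) of the quoted event: two motors at 61.5 Hz
# merging with one motor at 65.5 Hz.
N_i, f_i, slope_i = 2, 61.5, 0.0714    # synchronized pair (Hz, Hz/s)
N_j, f_j, slope_j = 1, 65.5, -0.125    # faster single motor (Hz, Hz/s)

f_bar = (N_i * f_i + N_j * f_j) / (N_i + N_j)
balance = N_i * slope_i + N_j * slope_j

print(f"predicted synchronized frequency: {f_bar:.1f} Hz")              # 62.8 Hz
print(f"slope balance N_i df_i/dt + N_j df_j/dt: {balance:+.4f} Hz/s")  # near zero
```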
5. CONCLUSION AND FUTURE WORK
We set out to answer this question: How does frequency-dependent coupling affect the synchronization dynamics of many coupled oscillators? Although we have learned much, there are still many avenues for future work.

In the work presented here, we studied the behavior of one motor on a rigid support and others on a resonant plate. In future experimental work, one could measure the same motor's angular velocity when it is on a rigid support and on a plate to get a direct comparison of frequency as a function of voltage. In this way, one could determine a relationship between the natural frequency and the modified frequency, which would pave the way for much more precise modeling. Another way to improve the precision of the experiment would be to control each motor's
power supply independently, so that precise distributions of natural frequencies could be specified. This experimental setup would enable precisely tuned tests of predictions that are not currently achievable.

We considered a specific geometry for the plate and motors so that all of the motors interacted with the plate in nearly the same way. How would the motors behave differently if some of them were placed on the nodes of a given resonance? What if we used a circular geometry or considered different boundary conditions?

The behavior of the motors in Figure 7 for voltages between V = 1.2 and 1.6 V resembles a bimodal frequency distribution, which has been thoroughly studied by Bonilla et al. [17]. Fairly recent work by Martens et al. [18] provides another theoretical framework for analyzing the behavior of bimodal frequency distributions. Can these models be extended to shed theoretical light on the behavior we see in this system?

Diffusively coupled oscillators appear to share many properties with our system. We suspect that the similarities are coincidental and arise from different dynamical roots, but a closer comparison of the two systems would be appropriate.

The long-time behavior of unsynchronized motors, where interactions with the plate are weak, appears to be very interesting. Is the behavior at weak coupling chaotic? Is it stochastic? What governs the time scales of merging and collapsing groups in Figure 8(b)? Why do the individual motors
in Figure 8(a) show such substantial variability compared with the stability of the synchronized group oscillating near f = 80 Hz?

We find that the behavior of individual motors and ensembles of motors interacting with a resonant plate shows a characteristic signature near the resonances of the plate. Motors interacting with a resonance tend to avoid frequencies just greater than the resonant frequency; operating frequencies level off just below a resonance; and the stability of a collection of motors near a resonance is not the same as the stability of a single motor near a resonance. These characteristics have the overall effect of creating a hysteresis in frequency versus voltage, both for a single motor on the plate and for a collection of motors. Once all of the transient behavior has passed out of the system, we find evidence that the average motor frequency is constant. All of these observations provide useful criteria for developing models to explain the motors' behavior, and we hope these criteria will lead both ourselves and others toward a more complete understanding of the effects of frequency-dependent coupling in systems of many coupled oscillators.
ACKNOWLEDGMENTS
We thank John Kolinski for initiating the motors project and Nick Wolff for useful discussions. Alfred Hubler also lent us his stroboscope, for which we are grateful. This work was supported in part by NSF grant 0528096.
REFERENCES
1. Buck, J. Synchronous rhythmic flashing of fireflies. Q Rev Biol 1938, 13, 301–314.
2. Burykin, A.; Buchman, T.G. Cardiorespiratory dynamics during transitions between mechanical and spontaneous ventilation in intensive care. Complexity 2008, 13, 40–59.
3. Bush, W.S.; Siegelman, H.T. Circadian synchrony in networks of protein rhythm driven neurons. Complexity 2006, 12, 67–72.
4. Winfree, A.T. Biological rhythms and the behavior of populations of coupled oscillators. J Theor Biol 1967, 16, 15–42.
5. Pantaleone, J. Synchronization of metronomes. Am J Phys 2002, 70, 992–1000.
6. Kourtchatov, S.Y.; Likhanskii, V.V.; Napartovich, A.P.; Arecchi, F.T.; Lapucci, A. Theory of phase locking of globally coupled laser arrays. Phys Rev A 1995, 52, 4089–4094.
7. Kiss, I.Z.; Zhai, Y.; Hudson, J.L. Emerging coherence in a population of chemical oscillators. Science 2002, 296, 1676–1678.
8. Miranda, M.A.; Burguete, J. Spatiotemporal phase synchronization in a large array of convective oscillators. Int J Bifurcation Chaos 2010, 20, 835–847.
9. Wiesenfeld, K.; Colet, P.; Strogatz, S.H. Frequency locking in Josephson arrays: Connection with the Kuramoto model. Phys Rev E 1998, 57, 1563–1569.
10. Kincaid, R.K.; Alexandrov, N.; Holroyd, M.J. An investigation of synchrony in transport networks. Complexity 2008, 14, 34–43.
11. Strogatz, S.; Abrams, D.; McRobie, A.; Eckhardt, B.; Ott, E. Theoretical mechanics: Crowd synchrony on the Millennium Bridge. Nature 2005, 438, 43–44.
12. Acebrón, J.A.; Bonilla, L.L.; Vicente, C.J.P.; Ritort, F.; Spigler, R. The Kuramoto model: A simple paradigm for synchronization phenomena. Rev Modern Phys 2005, 77, 137–185.
13. Strogatz, S. Sync: The Emerging Science of Spontaneous Order; Hyperion: New York, 2003.
14. Adler, R. A study of locking phenomena in oscillators. Proc IEEE 1946, 61, 351–357.
15. Gintautas, V.; Hubler, A. Experimental evidence for mixed reality states in an interreality system. Phys Rev E 2007, 75, 057201(1)–057201(4).
16. Casagrande, V.; Mikhailov, A.S. Birhythmicity, synchronization, and turbulence in an oscillatory system with nonlocal inertial coupling. Physica D 2005, 205, 154–169.
17. Bonilla, L.; Vicente, C.P.; Spigler, R. Time-periodic phases in populations of nonlinearly coupled oscillators with bimodal frequency distributions. Physica D 1998, 113, 79–97.
18. Martens, E.A.; Barreto, E.; Strogatz, S.H.; Ott, E.; So, P.; Antonsen, T.M. Exact results for the Kuramoto model with a bimodal frequency distribution. Phys Rev E 2009, 79, 026204-1–026204-11.
A Definition of Information, the Arrow of Information, and its Relationship to Life
STIRLING A. COLGATE¹ AND HANS ZIOCK²
¹Theoretical Division, Los Alamos National Laboratory, Los Alamos, New Mexico 87545; ²Earth and Environmental Sciences Division, Los Alamos National Laboratory, Los Alamos, New Mexico 87545
Received April 11, 2010; revised October 22, 2010; accepted October 23, 2010
A new definition of information is proposed that is minimalistic and incorporates a lifetime requirement for conditions (which we define here) applied to anything that can be considered to be information. The definition explicitly treats a state in thermodynamic equilibrium as an effectively zero information state. The definition includes the absolute requirement of selection for achieving information, the selection criterion being that the information directly or indirectly contributes to its own replication. The definition also explicitly incorporates the Laws of Thermodynamics, the Second Law leading to the requirement for replication. Finally, the definition explains the origin of information and predicts the monotonic increase of information with time. Published 2010 Wiley Periodicals, Inc. Complexity 16: 54–62, 2011
Key Words: definition of information; origin of life/information; selection; reproduction; artificial life
INTRODUCTION
A new definition of information is proposed that is minimalistic and incorporates a lifetime requirement for conditions (which we define here) applied to anything that can be considered to be information. The definition explicitly treats a state in thermodynamic equilibrium as an effectively zero information state. While it might be argued that the number of particles, the types of particles, and the temperature of the state can be regarded as "information," these are inherent properties of the state and not information that has been encoded into that state.
Correspondence to: Hans Ziock, Mail Stop D462, Los Alamos National Lab, Los Alamos, NM 87545 (e-mail: [email protected])
As such, we define a state in thermodynamic equilibrium as a zero information state in terms of its ability to contain anything other than its inherent physical properties. In contrast, ordering elements of that state, and thereby decreasing its entropy, is a means of introducing information into that state. Yet it is argued that the introduction of a randomly chosen ordering of elements of that state is by itself insufficient to yield information; rather, it yields only potential information. A selection process is required to transform such potential information into what is herein defined to be information. The definition proposed is suggested to be without exception and applies equally before or after the origin of life, life being defined here as the initial form of information with a net positive self-replication rate [1]. To exclude the noninformation (i.e., random) state, this
definition includes (1) the absolute requirement of selection for achieving information. While this requirement alone excludes most noninformation states, it is clearly insufficient to predict the existence of information, so three primary additional conditions are included: (a) memory or storage of information, including the ability to read the stored information; (b) that the selection must take place among actionable alternatives; and (c) that the selection criterion is that the information directly or indirectly contributes to its own replication. The combination of these four actions is suggested to be called an "information system" (for want of a better name). It is noted, as will be discussed later, that such a system does not depend upon the prior existence of information and furthermore establishes a minimum system that statistically increases its information content, thereby providing an "arrow" of information as a function of time. This increase excludes consideration of the consequences of a catastrophic change in environment, but includes the so-called destruction of information when life "eats" life. It also explicitly incorporates the Laws of Thermodynamics, the Second Law leading to the replication requirement.

The term information pervades modern society. Yet information is a very poorly defined term. When one asks for a definition of information, the definitions given are nearly always circular (i.e., they start with the assumption that the collection of bits being considered is indeed information, as opposed to a specific but randomly ordered set of bits). Furthermore, they do not deal with how information could have originated. Additional confusion results from attempts to differentiate "human" information from "common" information. This issue is then further aggravated by the injection of terms such as consciousness, and effective and physical complexity. Similar problems appear when one discusses life. The point is that life and information are intertwined from the very beginning; information is initiated before life, and life is identified by the transformation from a linear to a positive exponential self-replication growth rate.

To address these issues, a different definition of information is offered. A consequence of this definition is the suggestion of a natural means for originating information and a path for the transformation of information into life. Furthermore, unless prohibited by the physical laws of nature operating in the local conditions in which the originally formed information exists, the statistical existence of just one information system is sufficient to predict a statistically monotonic increase in information, including the transformation to life. In this sense, life becomes an instability in information space.

The definition of information is most often addressed from the viewpoint of human cognition, yet it is evident that life is universally accepted as an information system. This is because of the existence of the genetic code, which is known both to provide the instructions (i.e., information) for the construction, maintenance, operation, and the
reproduction of living organisms and for that of the associated instructions themselves. Therefore information must have existed long before human cognition. Yet human cognition has led to a vast literature concerning what is colloquially termed information, and to the recognition of the close connection of information with the laws of physics.

The dual nature of the human perceptions of information, from both physical and philosophical points of view, is discussed in depth in the review by Lars Qvortrup [2]. After an exhaustive and annotated review of many references, he concludes that information is best described by "a difference that makes a difference," first proposed by Bateson (1973, 1980) [3, 4]. Fundamentally we do not disagree with this description, as it is not inconsistent with the suggested underlying requirement of selection to create information. Unfortunately, Bateson leaves difference as a vague, unspecified quantity, and much of his discussion is from a human as opposed to a universal perspective. Instead, here it is suggested that Darwinian selection¹ alone is far more fundamental than the origin of diversity among life forms. In this new view, Darwinian selection exists from the outset because the existence of an information system leads to a statistically monotonic growth of information itself.

¹We do not differentiate between the point mutation changes typically associated with Darwinism and changes involving horizontal gene transfer or duplicate gene incorporation; we require only that the selection results in continued replication and that the long-term trend is on average toward better/more information.

In the scientific realm, the word or concept "information" is typically used to describe what is transmitted between entities that have memory (e.g., Shannon [5]), although "Shannon information" is more often associated with less or more entropy. This, however, does not define information, but it does at least provide some assistance by portraying the loss of information, and thereby begins to define what is not information. Similar situations exist in most other works dealing with information and, curiously enough, complexity. For instance, Gell-Mann and Lloyd [6], in their paper on complexity and total information, deal with the issue of quantifying how much information could be present in a given system by recognizing regularities, e.g., effective complexity, but not with what makes whatever has been stored information, particularly in the long term. Likewise, in their paper [7] and references therein, Prokopenko, Boschetti, and Ryan start out with Shannon's "definition" and thereby again largely deal with how much potential information and complexity a system can store or have, but not with how much it actually does have. However, in section 6 of their paper, and in particular with regard to an earlier work by Adami [8], the authors do begin a very limited discussion of some of the ideas raised in Adami's paper.

A direct examination of Adami's paper shows parallels to several of the points that we raise herein. In fact, what Adami refers to as physical complexity is very similar to our definition of information, and many of the same links are drawn. However, the critical issue of the underlying selection criterion being related to replication as a consequence of the Second Law is not raised, nor is the issue of the origin of information. Finally, we believe that the term physical complexity was invented due to a reluctance to cleanly distinguish between random selection and selection of consequence. We emphasize that random selection leads to a rapid approach to thermodynamic equilibrium and thus to zero information in the definition used herein.

An additional issue is the need to distinguish information from message passing and from data, both of which have potential information content yet by themselves are not what will be defined as information here. This is because both message passing and data may be nonspecified or randomly chosen states and hence, by the definition presented here, are not information. Only selection can cause the transformation from an unspecified state to a specified state and hence to information.

Information is also often quantified by entropy (as done, for example, by Szilard [9, 10]) as a useful measure of possible information content, or of a choice among the number of possibilities presented. Entropy is used to help answer questions in communication, statistics, complexity, and gambling [11]. Entropy is the measure of uncertainty of a given "variable," where the variable is typically a system of states or bits. Here it is suggested that entropy plays no intrinsic role in information content, only that changing information content will change the entropy of an associated heat bath, but not vice versa. (It is this irreversibility that defies a mathematical description when using only time-reversal-invariant mathematical expressions.) The new definition also conforms to the standard criterion that information represents a nonequilibrium state.

From the more universal physics and computation perspective, in 1929 Szilard [9, 10] associated entropy arguments with information in the context of energy. This was later expanded on by Leon Brillouin in 1956 in his Science and Information Theory [12]. The connection between energy, entropy, and information was then initially formalized in 1961 by Landauer [13], who associated the entropy and energy change with the erasure or destruction of information. In 1973 Bennett [14] fully formalized the connection, before summarizing the work in a major review of the field in 1988 [15]. From these considerations, both life and computers require energy to process, maintain, and grow information, producing heat as a consequence. The production of heat therefore increases the entropy of an associated heat bath, whereas the process of rejection of alternatives, i.e., selection,
reduces the entropy of the remaining information system. The entropy of an information system therefore decreases as its information content is used, actively maintained, or grown. However, the lack of entropy itself does not define information, nor does it explain how information originates.
THE DEFINITION
Because of the frequent consternation of many thinkers when asked to unambiguously define information, a different approach is to start with the question, "What is not information?" By inverting the answers, and repeating the question multiple times, a highly restrictive definition of information emerges. The definition also turns out to address the questions of how information originated and of information's connection to life. The definition of information that arises is simply: something that has been selected.² This definition is seemingly quite simple and nonmathematical in nature; indeed, due to the nature of the definition, it may not be possible to render it mathematically. Despite its simplicity, the definition has far-reaching consequences, such as establishing how information would originate, as well as identifying the parallels and relations between information and life (including both natural and artificial intelligence).

An important element of the definition is an examination and specification of the selection criterion. Taking into account the Second Law of Thermodynamics and noting that information is not a thermodynamic equilibrium state, one finds that any specific item of information must have a finite lifetime given by its thermal decay time to equilibrium. Also, for the reasons presented earlier, a thermodynamic equilibrium state is considered to have no ability to hold information other than its inherent physical properties; these properties are deemed not to represent information by the definition presented in this article. Although any single instance of a nonequilibrium state will decay back to equilibrium with some average lifetime, which is determined by the environment and the physical laws that constrain it, information cannot be allowed to decay if it is to be maintained long term, i.e., longer than its thermal decay time. Here it is critical to distinguish between the lifetime of a given copy of a piece of information (which, unless purposely destroyed, does indeed decay with its average thermodynamic lifetime as just discussed) and the information itself, which through replication can live far longer than the above average lifetime of a given copy.³ In fact, there appears to be no means of achieving a critically longer storage of a nonequilibrium state in its given environment other than the process of replication or duplication. Thus replication is seen to be the key selection criterion required to achieve the transition from noninformation into information, with any rate of replication extending the lifetime of information and thereby opening a path to the achievement of life.

²One must be careful not to confuse the generation of alternatives, which is random (at least in part, if the "answer" is not already known) and which precedes the selection process, with the actual selection process itself. It is only the selection (survival) of a subset of the alternatives that turns that subset into information.

Since, as already discussed, information is one manifestation of a nonequilibrium state, a free energy source is required to produce it. Let us assume that a given nonequilibrium state is produced in the local environment at a rate a, the value of a being determined by the same local environment and the free energy available. Further, let us assume that a remains constant provided that the local environment and free energy source themselves are relatively stable. As a result of the Second Law, a copy of the given nonequilibrium state has a thermal decay rate (taken to be b) back to equilibrium. If N(t) designates the number of copies of the given nonequilibrium state, then the rate of change of the number of copies of the given state is given by
$$\frac{dN(t)}{dt} = a - b\,N(t), \qquad (1)$$
whose solution, assuming N(0) = 0, is given by
$$N(t) = (a/b)\left(1 - e^{-bt}\right). \qquad (2)$$
In the limit $t \to \infty$, $N \to a/b$; namely, a stable population is established, one that is maintained by the repeated random regeneration of instances of the nonequilibrium state made possible by the free energy source. If one now assumes that the nonequilibrium state is capable of self-replication at a rate given by g, the situation changes and one has three different outcomes depending on whether b > g, b < g, or b = g. In the first two cases, Eq. (1) becomes
$$\frac{dN(t)}{dt} = a - (b - g)\,N(t), \qquad (3)$$
whose solution, again assuming N(0) = 0, is given by
$$N(t) = \left[a/(g - b)\right]\left[e^{(g-b)t} - 1\right]. \qquad (4)$$
In the first case (b > g), in the limit $t \to \infty$, $N \to a/(b - g)$, as the exponential term goes to zero at large times. Comparing this with the result from Eq. (2) at large times, one notes that although N still tends to a stable finite value, that value is larger than when there is no replication (g = 0). This is expected, as the replication mitigates some of the decay.

In the second case (g > b), a completely different behavior is found. Here one has exponential growth of the population with time, as the exponent is now greater than zero for all positive times. Exponential growth of a population is one of the properties of life. Furthermore, the resulting semi-infinite number of copies leads to a far more rapid evolution; that is, the large associated increase in the absolute number of both advantageous and disadvantageous random mutations gives a larger number of more advantageous ones per unit time, with their inevitable selection. This in turn results in an increase in the information content of the overall system.

Finally, for the last case, where b = g, namely when the replication rate is just canceled by the decay rate, Eq. (3) simplifies to dN(t)/dt = a. In this case one has linear growth of the population of the nonequilibrium state, the same as if there were no decay (and no replication).

To summarize, it is only when the nonequilibrium state leads to replication of itself at a rate faster than the average thermodynamic decay rate of a single copy of the given state that a phase change occurs, the change being an exponential growth of the population of that state, that state being an information state. The exponential growth gives the state, or more accurately the population of that state, an essentially infinite lifetime, one that is now independent of the continued random production of isolated instances of that state by the environmental conditions, which include a free energy source.⁴ In all other cases, the existence of a population of any given nonequilibrium state is maintained only through the continual random production of that state by the environment and not by the existence of the state itself.

It is also important to note that when there is a nonzero replication rate of a given copy of the nonequilibrium state, even if it is smaller than its decay rate, the population of the nonequilibrium state is enhanced over what it would be if there were no replication at all. As a result, this state would still be compatible with having information, as it does change the status of the quasi-equilibrium state to be further from the quasi-equilibrium point than it would be if no replication took place. The greater population in turn enhances the probability that in a fixed period of time one of the nonequilibrium states is further modified randomly to an even higher replication rate state. As such, the greater population increases the possible progression of the system to a situation where a nonequilibrium state is achieved whose replication rate is greater than its thermodynamic decay rate, and hence to the phase transition to exponential growth and life-like properties.

Since the information system with the longer decay time is preferentially selected for, even before the phase transition to exponential growth, information will statistically increase, leading to the concept of an arrow of information analogous to the arrow of time. The fact that such an imperative exists before the phase transition to exponential growth and life implies a quality of information analogous to an instability in information space. Hence life and information are inextricably linked from the outset. Furthermore, as a result of the Second Law and the lifetime requirement, one is led to conclude that natural selection and "positive" evolution are inevitable (given sufficient free energy and a suitable environment with physical laws that do not preclude life) for an information system to achieve self-replication. Finally, this requirement of selection-based replication of information clearly separates data and/or signal transmission from information. The former have potential information content, but until selected and then reproduced, they are definitely distinct from information itself.

From the preceding discussion we conclude that the fundamental criterion of selection is simply whatever directly (and/or indirectly) contributes to the ability to make and sustain more of "itself," namely the information, e.g., DNA or RNA in the case of life. Without reproduction, thermal noise will eventually degrade the nonequilibrium state that information is, thereby destroying the information.

Having established the selection criterion, and continuing to ask what is not information, several additional requirements and observations are found for something to be defined as information. These are:

1. The selection must take place among actionable alternatives that have consequences, and hence the information must be actionable. Anything that is not (directly or indirectly) expressed in some form that has consequences has no basis for selection other than a random one and hence cannot carry any predetermined meaning; i.e., it can have no information content.

2. To usefully select information, information must be stored (written); otherwise there is no way to decide what has been selected.⁵

³Although the average thermodynamic decay time of a nonequilibrium state can be as great as the age of the universe, practically we think of the infinitesimally shorter times of thermochemical processes.

⁴The exponential growth, however, still remains dependent upon the continuing availability of free energy that can be obtained from the environment.
5 The need for reading and writing of information in the context used here was discussed by Goran Goranovic and one of the authors (HZ) prior to the current work.
3. To make information actionable, as well as reproducible, the information must be read.

4. Although selection takes place at the individual level, its net result is truly seen only at the group/population level.

5. The selection is made by the environment, the environment containing whatever the potential information is directly or indirectly acting on. The environment includes any previously generated information and the consequences of that information, which often have changed the local environment [e.g., the chemical conditions in the interior of a cell (the local environment), such as chemical concentrations, are quite distinct from those of the global environment] or, in some cases, the global environment itself (e.g., the generation of an oxygen-rich global atmosphere).

The definition and the additional five points will be examined in more detail in the following section.
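The ratchet described before this list can be made concrete with a toy stochastic simulation; all rates below are assumptions chosen for illustration, not values from the text. Copies of a nonequilibrium state appear, decay with probability β per step, occasionally mutate their replication rate γ up or down, and replicate with probability γ. Selection alone then tends to push the surviving γ values upward, and if a lineage reaches γ > β the population switches to explosive growth; whether and when that transition occurs in a given run depends sensitively on the assumed rates.

```python
import random

random.seed(1)

BETA = 0.05       # per-step decay probability of a copy (assumed)
MUTATION = 0.05   # per-step per-copy probability of a random mutation
STEP = 0.02       # size of a mutation in the replication rate
STEPS = 5000

copies = []                                   # each copy = its replication rate
for step in range(STEPS):
    copies.append(0.0)                        # environmental production of a fresh copy
    next_gen = []
    for gamma in copies:
        if random.random() < BETA:            # thermodynamic decay of this copy
            continue
        if random.random() < MUTATION:        # random mutation, up or down;
            gamma = max(0.0, gamma + random.choice((-STEP, STEP)))
        next_gen.append(gamma)                # selection keeps what survives
        if random.random() < gamma:           # replication of this copy
            next_gen.append(gamma)
    copies = next_gen
    if step % 1000 == 0:
        print(f"step {step:5d}  population {len(copies):6d}  "
              f"max gamma {max(copies, default=0.0):.2f}")
    if len(copies) > 100_000:                 # growth has become explosive
        break

print(f"final: population {len(copies)}, "
      f"max gamma {max(copies, default=0.0):.2f} vs beta {BETA}")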
DISCUSSION

One notes that the preceding definition does not have the rigor of a typical mathematical definition. This, however, results from the nature of information itself, since information is effectively determined by the environment and by selection measures that are often ‘‘soft’’ in nature. This is especially true of the selection criterion, where ‘‘indirect’’ assistance in making more of itself is part of the definition. The simultaneously soft yet seemingly robust nature of this new definition is one of the major and somewhat surprising results of this work.

A second surprise is the inherent tie to natural selection and evolution. This results from the competition between the underlying selection requirement for replication and the Second Law of Thermodynamics; the latter, without the former, drives any system to a maximal-entropy and therefore zero-information state. As discussed, long-term preservation of information (relative to its natural thermodynamically driven decay time) is a fundamental requirement for anything defined here as information.

If the information content becomes sufficient for the reproduction rate to exceed the decay rate (γ > β), exponential growth of the population, and hence of the information, will occur, albeit in the form of copies of the original information. This growth will continue until limited by available resources or by self-quenching, both indicative of a change in the general nature of the environment in which the information exists.
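The growth-until-quenching behavior can be pictured with a logistic modification of the growth law, dN/dt = (γ − β)N(1 − N/K); the carrying capacity K is an assumption added here for illustration and is not part of the article's equations:

```python
# Logistic sketch: exponential growth of the population carrying the
# information, quenched as resources run out. K (carrying capacity) and
# the rates are assumptions made purely for illustration.
gamma, beta, K = 0.5, 0.3, 1_000_000.0
N, dt = 1.0, 0.01
for i in range(10001):
    if i % 2000 == 0:
        print(f"t = {i * dt:6.1f}   N = {N:14.1f}")
    N += (gamma - beta) * N * (1.0 - N / K) * dt   # explicit Euler step
```

Early on N grows as e^((γ−β)t); once N approaches K the growth self-quenches, i.e., the environment in which the information exists has changed.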
Errors in the reproduction caused by thermal noise will of course occur. The ‘‘bad’’ faulty copies will be weeded out through the requirement for the information to be actionable and thus selectable. ‘‘Faulty’’ copies that actually prove to be superior to the original information, in terms of achieving net higher replication rates (in the now perhaps modified environment), would be reinforced, yielding natural selection and evolution. These are of course underlying concepts in life.

The requirement for long-term preservation of information also immediately distinguishes information from signals, messages, or data, and/or the potential information content that they might carry. It is only the use, selection, and long-term preservation through replication of any information content of the messages or data that converts that content into actual information. Furthermore, the reproduction process can be an inherent characteristic of the information (self-reproduction), or the reproduction can be performed by an external ‘‘entity’’ because of the usefulness of the information to that entity. The former case we would call life, while the latter is nonliving information that requires life for its existence.

One might also ask, on a more fundamental level, what function the information actually performs. The proposed definition implies that once the problem of initially replicating itself has been solved, and thus the first life system exists, the subsequent ‘‘purpose’’ of information is to provide a map of the environment that is used (or can be put to use) to more accurately predict, and deal with, the future state of the environment (a toy simulation of such a predictive map appears at the end of this introductory discussion). The ability to predict the future is central to improving survivability and therefore replication. Initially this map is simply the existence of the elements of the information that give it the ability to reproduce in the environment that is present. However, with time and the growth in the amount and quality of the information, the ability to change the environment and/or adapt to ongoing changes in the environment becomes implemented through, and contained in, the information. Further growth in the information would provide the ability to sense impending environmental changes and respond accordingly.

In principle, if the environment remains completely stable, there is no need for an information increase. However, with population growth resulting from the requirement for replication, and with the inherent limit on the resources in the environment needed to make ever more copies of the information, the environment will never be fully stable over the long term. In fact, once the initial problem of reproduction/replication has been solved and the first true life system exists, it is the information itself that becomes the fastest-changing part of the environment, and hence the element that provides the major part of the selection basis. This inevitably leads to more and better information and to an open-ended possibility for the evolution of information itself. It is interesting to note that these consequences and properties closely parallel the properties of life and Darwinian evolution.

To gain deeper insight into the reasoning behind this definition of information and the associated requirements/observations (1)–(5) above, as well as their consequences, the definition and additional requirements are expanded on below.
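Before turning to that expansion, the ‘‘map of the environment’’ idea can be illustrated with a toy evolutionary sketch; the genome length, population size, error rate, and matching rule are all assumptions made for illustration, not the authors' model. Bit strings replicate in proportion to how many of their bits match an environmental pattern, copies suffer thermal-noise-like bit flips, and the population re-adapts when the environment itself changes:

```python
import random

random.seed(7)

L = 16          # bits in each stored "map" (assumed size)
POP = 200       # fixed population size; selection acts via weighted sampling
ERROR = 0.01    # per-bit copy error rate (a stand-in for thermal noise)

environment = [random.randint(0, 1) for _ in range(L)]
population = [[random.randint(0, 1) for _ in range(L)] for _ in range(POP)]

def matches(genome):
    """How many stored bits correctly 'map' the current environment."""
    return sum(g == e for g, e in zip(genome, environment))

def reproduce(genome):
    """Copy the genome with random bit-flip errors."""
    return [b ^ (random.random() < ERROR) for b in genome]

for gen in range(201):
    if gen == 100:                      # the environment itself changes mid-run
        environment = [1 - e for e in environment]
        print("-- environment inverted --")
    if gen % 25 == 0:
        avg = sum(matches(g) for g in population) / POP
        print(f"generation {gen:3d}: mean matching bits {avg:5.2f} of {L}")
    weights = [1 + matches(g) for g in population]   # replication propensity
    parents = random.choices(population, weights=weights, k=POP)
    population = [reproduce(p) for p in parents]
```

The mean number of matching bits climbs toward L, collapses when the environment is inverted, and then climbs again: the stored bits function only as a map of whatever environment is doing the selecting.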
In specifying that information is only that which has been selected, one notes that without selection anything (e.g., any sequence of ‘‘bits’’) is possible, and so without selection nothing is information. This is the fundamental basis for the presented definition, and one notes that through the associated rejection of alternatives that comes with selection, the local entropy of the system decreases; hence one of the inherent requirements of information is met: that it results in a local reduction of entropy (a small numerical illustration of this entropy reduction follows the expanded list below). Of course, as noted previously, the selection cannot be made on a random basis, as otherwise there is no reduction of entropy in the sense that any other choice would be equally ‘‘good.’’

We now turn to a more detailed examination of the above numbered set:

1. The need for the information to be actionable would appear to be self-evident, since if the information has no consequences, any random alternative would be equally ‘‘good’’ and hence would effectively represent thermal noise, i.e., noninformation. Arbitrary selections are also ‘‘selections’’ by thermal noise and hence do not represent information. Here it is particularly important to emphasize again that the generation of possible alternatives is in itself not a selection process if done randomly. Subsequent selection by the environment, as detailed below, turns the selected alternatives into information. The ‘‘intelligent’’ generation of alternatives, i.e., of alternatives put forward based on preexisting information, does contain some vestiges of information, as the list of possible choices is reduced in scope relative to a completely random reshuffling of all the bits. However, it is again not until one or more of the reduced number of prior choices has been more specifically selected for by the environment that they become what is defined here to be true new information. Also, a single, one-time-only ‘‘selection’’ does not qualify as a selection by this definition. The reason is that there is no way to distinguish a single selection from a completely arbitrary one unless the single selection is itself already based on preexisting information, in which case one already has that information.

2. Without storage, there is certainly no longer-term presence of any specific piece of information. Furthermore, by the required longevity attribute, anything that is not stored in some manner would not be classified as information. It is also readily apparent that without a record of the information selected by the environment there is no information, nor anything to reproduce.6
6 One notes that in theory the information storage could be dynamic in nature, as long as the dynamic information is being reproduced as required by selection based on the replication criterion. At the same time, static storage has many more appealing features than a dynamic approach.
a. Storage also implies the possibility of multiple copies and hence ‘‘more of itself.’’

b. Additional copies represent a more ordered state and hence a local reduction in entropy.

3. Any stored information must be readable so that it can be turned into an actionable form that can be selected for by the environment.

a. Note that DNA/RNA read themselves.

b. Computer memories/programs can also be implemented/written so as to read (and also write copies of) themselves.

4. The selection of the information is made by the environment, as noted in additional requirement (5), the environment determining what is and what is not useful. Furthermore, given that information has an inherent thermodynamic tendency to decay, the selection requirement of fundamental importance is that the information propagate itself (directly or indirectly). As discussed previously, when the reproduction rate is greater than the decay rate, exponential population growth results. The information itself could exist in any of a variety of forms [e.g., a collection of ‘‘bits,’’7 a shape, or an interaction ability resulting from orderings of atoms, amino acids, or DNA/RNA itself]. Any errors made during the information reproduction process will yield new potential information. These errors could be a modification of a piece of preexisting information (e.g., the flipping of one or several bits), an addition of more bits (possibly even an extra partial or full copy of the preexisting information), or the deletion of some preexisting bits.8 The environment then selects the ‘‘bit pattern’’ that is best able to reproduce itself in the long term. The selection process by the environment simply weeds out the less beneficial (or nonbeneficial) potential information, propagating only the information that is sufficiently successful at reproducing itself or at having itself be reproduced. Hence the selection eliminates the bad potential information, or preexisting inferior information, while allowing the existing or better information to be reproduced and stored at a higher rate, and hence propagated. The resulting reproduction of the better information shows up as an increase in the population count of the better information. The exponential nature of the population growth also implies that a given individual is not important, except as the potential origin of the better information. The exponential population growth also implies that resource limitations will be reached relatively quickly and hence will tend to drive faster evolution by means other than just a larger absolute number of mutations/possible choices.
7 The bits must of course be converted into an actionable form.

8 In order to grow the information in general over the long term, more bits must be added in some fashion. However, in the short term, better information can be gained by the removal of bad or useless bits.
a. The population of the selected alternative is increased owing to its ability to duplicate itself or to the usefulness of its being reproduced, whereas the alternatives that are not selected will eventually degrade due to thermal noise.

b. As a result of the (sometimes imperfect) duplication and selection, the population as a whole will be nonthermal, and its distance from local equilibrium conditions will be increased.

c. The initial (until restricted by resource limitations) exponential growth in the population of the information system is another attribute associated with ‘‘life.’’

d. It needs to be explicitly noted that any information previously created is itself part of the environment. Hence that preexisting information is part of the basis for determining what the new information can be.

5. The ability to reproduce and make additional and potentially better versions of the information can only be determined by the environment, which gives context to the potential or true information. This environment includes the physical characteristics and physical laws of nature, as well as the information (life) that already exists. As the information changes on a faster and more dynamic time scale than the physical laws of nature, environmental selection at present is dominated by the already existing information (constrained, of course, by the physical laws of nature). There are many environmental niches, so different information systems or life forms will be best adapted to their own environmental niche. If the niche is sufficiently small and constrained, a best information system may even exist for that niche. In contrast, in larger niches, more information-rich systems are likely to develop and, through natural selection, to continue to evolve into ever-richer information versions, as long as the information can be stored (and accessed) robustly enough. This leads to a continual push/selection for more and better information and hence to an open-endedness, another underlying ‘‘feature’’ of life.

a. Initially the selection was made by the purely ‘‘physical’’ environment.

b. The environment must be ‘‘stable’’ in the sense that if the environment is itself randomly changing on a fast time scale, the basis of selection will have changed and the selection basis itself becomes random, which contradicts the requirement for a nonrandom selection basis.

c. Life (information) itself has become the most important/challenging part of the environment [i.e., the issue of the physical environment was already largely addressed (and solved) by the information life stored early on].
d. This in turn makes life, which is manifested by the information it has stored, the key environmental feature today.

e. This leads to a continual trend toward the selection of more and better information and the positive evolution of life/information as a whole.

f. Provided the storage medium for the information is essentially unlimited and sufficiently stable, this creates open-endedness, an underlying ‘‘property’’ of life [additional information giving an advantage over and above the information already acquired].

g. The exponential growth in the number of copies of a piece of information (until limited by resource limitations) has a positive feedback effect on the development of more useful and richer information, as well as on the rate of new information growth and on changes to the environment itself.

In discussing life and information above, the issues of energy/metabolism (a directed free energy source) and containers have been sidestepped. These are, however, often raised when discussing the origin of life. The argument typically revolves around which one of these three (information, metabolism, or container) came first. This turns out to be an irrelevant point, for the following reasons. To grow and reproduce information, one requires available free energy and hence a metabolism, as one is locally reducing the entropy, which is known to require the expenditure of free energy. Furthermore, a selection means rejecting an alternative, the rejection being effectively to the external environment. This immediately implies that one must have a boundary/container (however loosely defined) that separates the living organism or information system from the environment. If this boundary does not exist, one cannot reject possible choices, i.e., one cannot make a selection. Thus for life to exist, one simultaneously needs an information system in addition to a metabolism/free energy source and a container.

It is also worth noting that a system that is unable to reject entropy to its exterior, namely a ‘‘sealed’’ system that has an input of free energy but no way to dissipate energy outward, will increase in temperature without limit (until equal in effective temperature to the free energy source itself), which will necessarily destroy any stored information. In other words, complete isolation defeats an information system in that the system is unable to reject entropy and becomes fully thermalized internally.
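The claim made at the start of this expanded discussion, that selection's rejection of alternatives lowers the local entropy, can be illustrated numerically. The bit-string ensemble and the particular selection rule below are assumptions made purely for illustration: the per-position Shannon entropy of a random population drops once a nonrandom selection criterion has filtered it.

```python
import math, random

random.seed(3)

def entropy_per_bit(pop):
    """Mean Shannon entropy (bits) per position across a population."""
    L = len(pop[0])
    total = 0.0
    for i in range(L):
        p = sum(g[i] for g in pop) / len(pop)   # frequency of a 1 at position i
        for q in (p, 1.0 - p):
            if 0.0 < q < 1.0:
                total -= q * math.log2(q)
    return total / L

L, POP = 16, 1000
pool = [[random.randint(0, 1) for _ in range(L)] for _ in range(POP)]
print(f"before selection: {entropy_per_bit(pool):.3f} bits/position")

# A nonrandom selection criterion, assumed for illustration: keep only
# strings whose first four bits match a fixed environmental pattern.
pattern = [1, 0, 1, 1]
selected = [g for g in pool if g[:4] == pattern]
print(f"kept {len(selected)} of {POP} strings")
print(f"after selection:  {entropy_per_bit(selected):.3f} bits/position")
```

The entropy drops from about 1 bit per position toward roughly 0.75, since the four selected positions are now fully determined; a random ‘‘selection,’’ by contrast, would leave the entropy unchanged.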
By additionally including the requirement that information be something that outlives the inherent thermal decay brought about by the Second Law of Thermodynamics, one is able to differentiate between information itself and life. At the same time, the requirement of a lifetime longer than the state's own inherent thermal decay time necessitates that the fundamental selection criterion be reproduction, either self-reproduction or reproduction because of deemed usefulness. The former represents life, while the latter represents information that more advanced life finds useful and therefore must reproduce, both to maintain it and to put it to use. The reproduction, when coupled with the selection process itself and with the random errors from thermal noise during reproduction, effectively leads to natural selection and evolution, and to the essential parallelism between life and information. In fact, when a rate of self-reproduction of the information is attained that is faster than the thermal decay rate, one also has life. Finally, the requirement of reproduction/replication allows one to cleanly separate information from the potential information contained in messages, data, or signals from the environment.

Furthermore, this definition would in no way seem to preclude artificial intelligence or living electronic machines. However, to date, the selection of computational algorithms by machines is performed by life-specified algorithms. Additionally, the computational machines do not yet reproduce themselves, although they are sufficiently useful to us that we make many copies of them. Finally, the machine systems are currently still far from being open-ended on their own; e.g., neural-net systems are always terminated or frozen upon sufficient optimization, and furthermore they are currently fundamentally limited by the memory space we provide them.

Finally, a comment is deserved on the similarity between general information and the information stored by humans, both individually and as a society, in forms ranging from memory to journals, books, and digital media. The digitally stored information we have produced certainly does not yet reproduce itself (except perhaps for computer viruses). On the other hand, digital information that is indeed useful to us is reproduced by us many times over. This reproduction will continue as long as the information proves useful to us, its immediate environmental selection agent. Thus human-acquired information has in fact ‘‘attained’’ an ability to be reproduced, albeit by us. Fortunately this is usually in a symbiotic as opposed to a parasitic mode.
CONCLUSION

A new definition of information is proposed that attempts to avoid the circular nature of most of the common definitions and that is universal in nature and thereby independent of the human element. Furthermore, making the definition revolve around the action of selection by the environment leads to a direct path to the origin of information itself.
Acknowledgments

The authors gratefully acknowledge the support of the U.S. Department of Energy through the LANL/LDRD Program for this work. They also thank the reviewers and their colleagues for the suggested and implemented improvements and clarifications of arguments discussed in this article.